Journal articles on the topic "Batches of task graphs"

Consult the 50 best journal articles for your research on the topic "Batches of task graphs".


1

Diakité, Sékou, Loris Marchal, Jean-Marc Nicod, and Laurent Philippe. "Practical Steady-State Scheduling for Tree-Shaped Task Graphs". Parallel Processing Letters 21, no. 04 (December 2011): 397–412. http://dx.doi.org/10.1142/s0129626411000291.

Abstract
In this paper, we focus on the problem of scheduling a collection of similar task graphs on a heterogeneous platform, when the task graph is an intree. We rely on steady-state scheduling techniques, and aim at optimizing the throughput of the system. Contrary to previous studies, we concentrate on practical aspects of steady-state scheduling, when dealing with a collection (or batch) of limited size. We focus here on two optimizations. The first one consists in reducing the processing time of each task graph, thus making steady-state scheduling applicable to smaller batches. The second one consists in slightly degrading the optimal-throughput solution to get a simpler solution that is more efficient on small batches. We present our optimizations in detail, and show that they both help to overcome the limitation of steady-state scheduling: our simulations show that we are able to reach better efficiency on small batches, to reduce the size of the buffers, and to significantly decrease the processing time of a single task graph (latency).

2

Wang, Xun, Chaogang Zhang, Ying Zhang, Xiangyu Meng, Zhiyuan Zhang, Xin Shi, and Tao Song. "IMGG: Integrating Multiple Single-Cell Datasets through Connected Graphs and Generative Adversarial Networks". International Journal of Molecular Sciences 23, no. 4 (14 February 2022): 2082. http://dx.doi.org/10.3390/ijms23042082.

Abstract
There is a strong need to eliminate batch-specific differences when integrating single-cell RNA-sequencing (scRNA-seq) datasets generated under different experimental conditions for downstream task analysis. Existing batch correction methods usually transform different batches of cells into one preselected “anchor” batch or a low-dimensional embedding space, and cannot take full advantage of useful information from multiple sources. We present a novel framework, called IMGG, i.e., integrating multiple single-cell datasets through connected graphs and generative adversarial networks (GAN) to eliminate nonbiological differences between different batches. Compared with current methods, IMGG shows excellent performance on a variety of evaluation metrics, and the IMGG-corrected gene expression data incorporate features from multiple batches, allowing for downstream tasks such as differential gene expression analysis.

3

Wang, Yue, Ruiqi Xu, Xun Jian, Alexander Zhou, and Lei Chen. "Towards distributed bitruss decomposition on bipartite graphs". Proceedings of the VLDB Endowment 15, no. 9 (May 2022): 1889–901. http://dx.doi.org/10.14778/3538598.3538610.

Abstract
Mining cohesive subgraphs on bipartite graphs is an important task. The k-bitruss, one popular cohesive subgraph model, is the maximal subgraph where each edge is contained in at least k butterflies. The bitruss decomposition problem is to find all k-bitrusses for k ≥ 0. Dealing with large graphs is often beyond the capability of a single machine due to its limited memory and computational power, leading to a need for efficiently processing large graphs in a distributed environment. However, all current solutions are for a single machine and a centralized environment, where processors can access the graph or auxiliary indexes randomly and globally. It is difficult to directly deploy such algorithms on a shared-nothing model. In this paper, we propose distributed algorithms for bitruss decomposition. We first propose SC-HBD as the baseline, which uses an H-function to define bitruss numbers and computes them iteratively to a fixed point in parallel. We then introduce a subgraph-centric peeling method SC-PBD, which peels edges in batches over different butterfly-complete subgraphs. We then introduce local indexes on each fragment, study the butterfly-aware edge partition problem including its hardness, and propose an effective partitioner. Finally, we present the bitruss butterfly-complete subgraph concept and a divide-and-conquer method DC-BD with optimization strategies. Extensive experiments show the proposed methods solve graphs with 30 trillion butterflies in 2.5 hours, while existing parallel methods under the shared-memory model fail to scale to such large graphs.

4

Jia, Haozhang. "Graph sampling based deep metric learning for cross-view geo-localization". Journal of Physics: Conference Series 2711, no. 1 (1 February 2024): 012004. http://dx.doi.org/10.1088/1742-6596/2711/1/012004.

Abstract
Cross-view geo-localization has emerged as a novel computer vision task that has garnered increasing attention. This is primarily attributed to its practical significance in the domains of drone navigation and drone-view localization. Moreover, the task is particularly demanding due to its inherent requirement for cross-domain matching. There are generally two ways to train a neural network to match similar satellite and drone-view images: representation learning with classifiers and identity loss, and metric learning with pairwise matching within mini-batches. The first incurs extra computing and memory costs in large-scale learning, so this paper follows a person re-identification method called QAConv-GS, and implements a graph sampler to mine the hardest data to form mini-batches, and a QAConv module with extra attention layers appended to compute similarity between image pairs. A batch-wise OHEM triplet loss is then used for model training. With these implementations and adaptations combined, this paper significantly improves the state of the art on the challenging University-1652 dataset.
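The batch-wise OHEM (hard example mining) triplet loss mentioned in this abstract is a standard metric-learning component. The following is only an illustrative sketch of a batch-hard triplet loss in Python, not the paper's implementation; the function name, margin value, and tensor shapes are assumptions.

```python
import torch

def batch_hard_triplet_loss(features, labels, margin=0.3):
    """Illustrative batch-hard (OHEM) triplet loss.
    features: (N, D) embeddings; labels: (N,) integer identity labels."""
    dist = torch.cdist(features, features)               # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)     # (N, N) same-identity mask
    hardest_pos = dist.masked_fill(~same, float('-inf')).amax(dim=1)  # farthest same-identity sample
    hardest_neg = dist.masked_fill(same, float('inf')).amin(dim=1)    # closest different-identity sample
    return torch.relu(hardest_pos - hardest_neg + margin).mean()
```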

5

Angriman, Eugenio, Michał Boroń, and Henning Meyerhenke. "A Batch-dynamic Suitor Algorithm for Approximating Maximum Weighted Matching". ACM Journal of Experimental Algorithmics 27 (31 December 2022): 1–41. http://dx.doi.org/10.1145/3529228.

Abstract
Matching is a popular combinatorial optimization problem with numerous applications in both commercial and scientific fields. Computing optimal matchings w.r.t. cardinality or weight can be done in polynomial time; still, this task can become infeasible for very large networks. Thus, several approximation algorithms that trade solution quality for a faster running time have been proposed. For networks that change over time, fully dynamic algorithms that efficiently maintain an approximation of the optimal matching after a graph update have been introduced as well. However, no semi- or fully dynamic algorithm for (approximate) maximum weighted matching has been implemented. In this article, we focus on the problem of maintaining a \( 1/2 \) -approximation of a maximum weighted matching (MWM) in fully dynamic graphs. Limitations of existing algorithms for this problem are (i) high constant factors in their time complexity, (ii) the fact that none of them supports batch updates, and (iii) the lack of a practical implementation, meaning that their actual performance on real-world graphs has not been investigated. We propose and implement a new batch-dynamic \( 1/2 \) -approximation algorithm for MWM based on the Suitor algorithm and its local edge domination strategy [Manne and Halappanavar, IPDPS 2014]. We provide a detailed analysis of our algorithm and prove its approximation guarantee. Despite having a worst-case running time of \( \mathcal {O}(n + m) \) for a single graph update, our extensive experimental evaluation shows that our algorithm is much faster in practice. For example, compared to a static recomputation with sequential Suitor , single-edge updates are handled up to \( 10^5\times \) to \( 10^6\times \) faster, while batches of \( 10^4 \) edge updates are handled up to \( 10^2\times \) to \( 10^3\times \) faster.
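For context, the local edge-domination idea behind the Suitor algorithm [Manne and Halappanavar, IPDPS 2014] can be illustrated with the plain static variant below. This is a sketch only (adjacency lists with positive weights assumed), not the batch-dynamic algorithm proposed in the paper.

```python
def suitor_matching(adj):
    """Static Suitor 1/2-approximation of maximum weighted matching (sketch).
    adj: dict mapping each vertex to a list of (neighbour, weight) pairs, weights > 0."""
    suitor = {v: None for v in adj}   # current best proposer of each vertex
    ws = {v: 0.0 for v in adj}        # weight offered by that proposer
    for start in adj:
        u = start
        while u is not None:
            # heaviest neighbour whose current offer u can beat
            best, best_w = None, 0.0
            for v, w in adj[u]:
                if w > ws[v] and w > best_w:
                    best, best_w = v, w
            displaced = None
            if best is not None:
                displaced = suitor[best]            # former proposer must re-propose
                suitor[best], ws[best] = u, best_w
            u = displaced
    # matched pairs are exactly the mutual proposals
    return {frozenset((u, v)) for v, u in suitor.items() if u is not None and suitor[u] == v}
```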

6

Zhang, Xin, Yanyan Shen, Yingxia Shao, and Lei Chen. "DUCATI: A Dual-Cache Training System for Graph Neural Networks on Giant Graphs with the GPU". Proceedings of the ACM on Management of Data 1, no. 2 (13 June 2023): 1–24. http://dx.doi.org/10.1145/3589311.

Abstract
Recently, Graph Neural Networks (GNNs) have achieved great success in many applications. Mini-batch training has become the de facto way to train GNNs on giant graphs. However, the mini-batch generation task is extremely expensive, which slows down the whole training process. Researchers have proposed several solutions to accelerate mini-batch generation; however, they (1) fail to exploit the locality of the adjacency matrix, (2) cannot fully utilize the GPU memory, and (3) suffer from poor adaptability to diverse workloads. In this work, we propose DUCATI, a Dual-Cache system to overcome these drawbacks. In addition to the traditional Nfeat-Cache, DUCATI introduces a new Adj-Cache to further accelerate mini-batch generation and better utilize GPU memory. DUCATI develops a workload-aware Dual-Cache Allocator which adaptively finds the best cache allocation plan under different settings. We compare DUCATI with various GNN training systems on four billion-scale graphs under diverse workload settings. The experimental results show that in terms of training time, DUCATI can achieve up to 3.33 times speedup (2.07 times on average) compared to DGL and up to 1.54 times speedup (1.32 times on average) compared to the state-of-the-art Single-Cache systems. We also analyze the time-accuracy trade-offs of DUCATI and four state-of-the-art GNN training systems. The analysis results offer users some guidelines on system selection regarding different input sizes and hardware resources.

7

Vo, Tham, and Phuc Do. "GOW-Stream: A novel approach of graph-of-words based mixture model for semantic-enhanced text stream clustering". Intelligent Data Analysis 25, no. 5 (15 September 2021): 1211–31. http://dx.doi.org/10.3233/ida-205443.

Abstract
Recently, rapid growth of social networks and online news resources from Internet have made text stream clustering become an insufficient application in multiple domains (e.g.: text retrieval diversification, social event detection, text summarization, etc.) Different from traditional static text clustering approach, text stream clustering task has specific key challenges related to the rapid change of topics/clusters and high-velocity of coming streaming document batches. Recent well-known model-based text stream clustering models, such as: DTM, DCT, MStream, etc. are considered as word-independent evaluation approach which means largely ignoring the relations between words while sampling clusters/topics. It definitely leads to the decrease of overall model accuracy performance, especially for short-length text documents such as comments, microblogs, etc. in social networks. To tackle these existing problems, in this paper we propose a novel approach of graph-of-words (GOWs) based text stream clustering, called GOW-Stream. The application of common GOWs which are generated from each document batch while sampling clusters/topics can support to overcome the word-independent evaluation challenge. Our proposed GOW-Stream is promising to significantly achieve better text stream clustering performance than recent state-of-the-art baselines. Extensive experiments on multiple benchmark real-world datasets demonstrate the effectiveness of our proposed model in both accuracy and time-consuming performances.

8

Da San Martino, Giovanni, Alessandro Sperduti, Fabio Aiolli, and Alessandro Moschitti. "Efficient Online Learning for Mapping Kernels on Linguistic Structures". Proceedings of the AAAI Conference on Artificial Intelligence 33 (17 July 2019): 3421–28. http://dx.doi.org/10.1609/aaai.v33i01.33013421.

Abstract
Kernel methods are popular and effective techniques for learning on structured data, such as trees and graphs. One of their major drawbacks is the computational cost related to making a prediction on an example, which manifests in the classification phase for batch kernel methods, and especially in online learning algorithms. In this paper, we analyze how to speed up the prediction when the kernel function is an instance of the Mapping Kernels, a general framework for specifying kernels for structured data which extends the popular convolution kernel framework. We theoretically study the general model, derive various optimization strategies and show how to apply them to popular kernels for structured data. Additionally, we derive a reliable empirical evidence on semantic role labeling task, which is a natural language classification task, highly dependent on syntactic trees. The results show that our faster approach can clearly improve on standard kernel-based SVMs, which cannot run on very large datasets.

9

Luo, Haoran, Haihong E, Yuhao Yang, Gengxian Zhou, Yikai Guo, Tianyu Yao, Zichen Tang, Xueyuan Lin, and Kaiyang Wan. "NQE: N-ary Query Embedding for Complex Query Answering over Hyper-Relational Knowledge Graphs". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (26 June 2023): 4543–51. http://dx.doi.org/10.1609/aaai.v37i4.25576.

Abstract
Complex query answering (CQA) is an essential task for multi-hop and logical reasoning on knowledge graphs (KGs). Currently, most approaches are limited to queries among binary relational facts and pay less attention to n-ary facts (n≥2) containing more than two entities, which are more prevalent in the real world. Moreover, previous CQA methods can only make predictions for a few given types of queries and cannot be flexibly extended to more complex logical queries, which significantly limits their applications. To overcome these challenges, in this work, we propose a novel N-ary Query Embedding (NQE) model for CQA over hyper-relational knowledge graphs (HKGs), which include massive n-ary facts. The NQE utilizes a dual-heterogeneous Transformer encoder and fuzzy logic theory to satisfy all n-ary FOL queries, including existential quantifiers (∃), conjunction (∧), disjunction (∨), and negation (¬). We also propose a parallel processing algorithm that can train or predict arbitrary n-ary FOL queries in a single batch, regardless of the kind of each query, with good flexibility and extensibility. In addition, we generate a new CQA dataset WD50K-NFOL, including diverse n-ary FOL queries over WD50K. Experimental results on WD50K-NFOL and other standard CQA datasets show that NQE is the state-of-the-art CQA method over HKGs with good generalization capability. Our code and dataset are publicly available.

10

Auerbach, Joshua, David F. Bacon, Rachid Guerraoui, Jesper Honig Spring, and Jan Vitek. "Flexible task graphs". ACM SIGPLAN Notices 43, no. 7 (27 June 2008): 1–11. http://dx.doi.org/10.1145/1379023.1375659.

11

Benten, Muhammad S. T., and Sadiq M. Sait. "Genetic scheduling of task graphs". International Journal of Electronics 77, no. 4 (October 1994): 401–15. http://dx.doi.org/10.1080/00207219408926072.

12

Ranade, Abhiram G. "Scheduling loosely connected task graphs". Journal of Computer and System Sciences 67, no. 1 (August 2003): 198–208. http://dx.doi.org/10.1016/s0022-0000(03)00071-0.

13

Fomina, Mariya, Vladimir Konovalov, Vyacheslav Teryushkov, and Aleksey Chupshev. "The Effect of Mixing Duration and the Proportion of Smaller Component for the Performance of the Paddle Mixer Running with Extra Blades". Bulletin Samara State Agricultural Academy 2, no. 3 (27 July 2017): 40–45. http://dx.doi.org/10.12737/17452.

Abstract
The purpose of research is justification of area efficiency of the proposed mixer with vertical shaft and paddle stirrer, the edges of the blades which are fixed sinusoidal blades. Research has shown that, by virtue of the kinetics of mixing all the mixers in the early period mixing significantly improve the quality of the mixture, after which stabilization of the quality indicators, and in some cases starts and segregation of the mixture. The nature of the change of the uniformity of the mixture is kind of exponential time mixing. In this connection there is the task of identifying areas of efficiency and opportunities the application of paddle mixer proposed design for the preparation of dry feed mixtures. Important manufacturing concentrate mixtures (compaund feeds, feed concentrates or forage medicinal mixtures) based on the purchase of BVD and your own forage. Purpose: the establishment of functional dependence between the technological parameters of the mixer (the proportion of the control component and the duration of mixing) and process performance (uneven mix and adjusted intensity of mixing taking into account the uniformity of the mixture); identifying rational values of technological parameters of the mixer, providing the desired quality mix and minimum energy intensity of mixing. It is provided the description and structural diagram mixer dry material batch. The technique is described and results of experimental studies of the mixer. It is presented the expressions describing the unevenness of the mix and the energy intensity of stirring, depending on the proportion of the control component and the duration of mixing; the required duration of mixing depending on the proportion of the control component. It is built two-dimensional section of the surface response in the studied parameters. Based on the analysis of the given graphs justifies the area efficiency of the mixer: the proportion of the control component is not less than 3%; when the portion of the control component 5% duration of mixing – 300 s, when the portion of the control component 10% the duration of the mixing – 200 s.

14

Ganesan, Ghurumuruhan. "Weighted Eulerian extensions of random graphs". Gulf Journal of Mathematics 16, no. 2 (12 April 2024): 1–11. http://dx.doi.org/10.56947/gjom.v16i2.1866.

Abstract
The Eulerian extension number of any graph H (i.e. the minimum number of edges needed to be added to make H Eulerian) is at least t(H), half the number of odd degree vertices of H. In this paper we consider weighted Eulerian extensions of a random graph G where we add edges of bounded weights and use an iterative probabilistic method to obtain sufficient conditions for the weighted Eulerian extension number of G to grow linearly with t(G). We derive our conditions in terms of the average edge probabilities and edge density and also show that bounded extensions are rare by estimating the skewness of a fixed weighted extension. Finally, we briefly describe a decomposition involving Eulerian extensions of G to convert a large dataset into small dissimilar batches.
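As a small point of reference, the lower bound t(H) quoted in this abstract is straightforward to compute. A minimal sketch (adjacency-list input assumed, connectivity requirements ignored; the function name is illustrative):

```python
def eulerian_extension_lower_bound(adj):
    """t(H): half the number of odd-degree vertices of H.
    adj: dict mapping each vertex to the list of its neighbours."""
    odd = sum(1 for neighbours in adj.values() if len(neighbours) % 2 == 1)
    return odd // 2  # the number of odd-degree vertices is always even
```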

15

Gupta, Apurv, Gilles Parmentier, and Denis Trystram. "Scheduling Precedence Task Graphs with Disturbances". RAIRO - Operations Research 37, no. 3 (July 2003): 145–56. http://dx.doi.org/10.1051/ro:2003018.

16

Lee, J., L. F. Lai, and W. T. Haung. "Task-based specifications through conceptual graphs". IEEE Expert 11, no. 4 (August 1996): 60–70. http://dx.doi.org/10.1109/64.511868.

17

Semar Shahul, Ahmed Zaki, and Oliver Sinnen. "Scheduling task graphs optimally with A*". Journal of Supercomputing 51, no. 3 (March 2010): 310–32. http://dx.doi.org/10.1007/s11227-010-0395-1.

18

Galkin, Alexander, Vladimir Istomin, and Vladimir Alekseev. "Automation of Assembly Batches Hot Rolled of Metallurgical Production". Mathematical Physics and Computer Simulation, no. 4 (December 2022): 66–77. http://dx.doi.org/10.15688/mpcm.jvolsu.2022.4.6.

Abstract
The paper considers the task of automating the process of forming assembly batches installation in a hot rolling mill. To solve this problem, a developed algorithm for the formation of optimal assembly batches at a hot rolling mill is proposed, taking into account the technological limitations imposed on the production process. The optimization of the set of assembly batches consists in the construction of a set with maximum productivity, which is achieved by reducing the time for the reconstruction of the equipment when switching to different width and thickness of rolling stock. A program for automatic formation of assembly batches at a hot-rolled steel mill has been implemented. It is now possible to save each batch included in the generated set to a separate file, as well as write general information about the entire set to a file. The algorithm was tested when forming assembly batches from a set of slabs available at the warehouse. Calculations on the formation of optimal assembly batches have been carried out. The presented results of the study show the increase of the formed assembly batches productivity and their compliance with all technological restrictions.

19

Johnson, Heather Lynn, Evan D. McClintock, and Amber Gardner. "Opportunities for Reasoning: Digital Task Design to Promote Students’ Conceptions of Graphs as Representing Relationships between Quantities". Digital Experiences in Mathematics Education 6, no. 3 (28 February 2020): 340–66. http://dx.doi.org/10.1007/s40751-020-00061-9.

Abstract
We posit a dual approach to digital task design: to engineer opportunities for students to conceive of graphs as representing relationships between quantities and to foreground students’ reasoning and exploration, rather than their answer-finding. Locally integrating Ference Marton’s variation theory and Patrick Thompson’s theory of quantitative reasoning, we designed digital task sequences, in which students were to create different graphs linked to the same video animations. We report results of a qualitative study of thirteen secondary students (aged 15–17), who participated in digital, task-based, individual interviews. We investigated two questions: (1) How do students conceive of what graphs represent when engaging with digital task sequences? (2) How do student conceptions of graphs shift when working within and across digital task sequences? Two conceptions were particularly stable – relationships between quantities and literal motion of an object. When students demonstrated conceptions of graphs as representing change in a single quantity, they shifted to conceptions of relationships between quantities. We explain how a critical aspect (what graphs should represent) intertwined with students’ graph-sketching. Finally, we discuss implications for digital task design to promote students’ conceptions of mathematical representations, such as graphs.

20

Feidengol'd, V. B. "Factors determining of forming grain batches at procurement elevators and grain receiving enterprises". Khleboproducty 31, no. 5 (2022): 26–33. http://dx.doi.org/10.32462/0235-2508-2022-31-5-26-33.

Abstract
Transformations related to the improvement of grain classification by quality, as well as the technical base of procurement granaries, ensuring the formation of grain batches that meet the requirements for their safety and intended purpose, are considered. A model of optimization of the process of forming grain batches for an elevator with a specific technical equipment is presented, which can be transformed into a task to justify the technical reequipment of the elevator.

21

Kotlerman, Lili, Ido Dagan, Bernardo Magnini, and Luisa Bentivogli. "Textual entailment graphs". Natural Language Engineering 21, no. 5 (23 June 2015): 699–724. http://dx.doi.org/10.1017/s1351324915000108.

Abstract
In this work, we present a novel type of graph for natural language processing (NLP), namely textual entailment graphs (TEGs). We describe the complete methodology we developed for the construction of such graphs and provide some baselines for this task by evaluating relevant state-of-the-art technology. We situate our research in the context of text exploration, since it was motivated by joint work with industrial partners in the text analytics area. Accordingly, we present our motivating scenario and the first gold-standard dataset of TEGs. However, while our own motivation and the dataset focus on the text exploration setting, we suggest that TEGs can have different usages and that automatic creation of such graphs is an interesting task for the community.

22

Ekenna, Ifeoma Cynthia, and Sunday Okorie Abali. "Comparison of the Use of Kinetic Model Plots and DD Solver Software to Evaluate the Drug Release from Griseofulvin Tablets". Journal of Drug Delivery and Therapeutics 12, no. 2-S (15 April 2022): 5–13. http://dx.doi.org/10.22270/jddt.v12i2-s.5402.

Abstract
Awareness of the release kinetics of active drugs is important in formulating drugs that have the desired delivery and in predicting the behaviour of the formulated drug in vivo. The study aims to determine the mechanism of drug release from griseofulvin tablets formulated with different surfactants using mathematical models and to compare the use of graphs and DD solver software in fitting dissolution profiles to kinetic models. Batches P1-P3 were composed of the surfactant PEG 4000 in different concentrations. A control batch without surfactant and a commercial brand (Mycoxyl 500) were used for comparison. Granule and tablet quality tests indicated quality formulations. Dissolution profiles showed that the surfactant improved drug release of griseofulvin, and the batches formulated with PEG 4000 (P1-P3) had the best release profiles, comparable with the commercial brand. The Excel add-in DD solver and kinetic plots were used to determine the kinetic model of best fit. The Higuchi model was the best fit for batches P1-P3. The first-order and Hixson-Crowell models also fit batches P2 and P3. The Korsmeyer model showed that batches P1-P3 exhibited anomalous diffusion. The tablets formulated with PEG were as good as the commercial brand, and they showed anomalous diffusion of the drug from the tablet, meaning that the drug diffused following Fickian law and also diffused through a swollen and porous matrix. Kinetic plots and the DD solver can be used for fitting dissolution profiles to kinetic models. Keywords: Griseofulvin, Kinetics, Models, Surfactants, Polyethylene glycol (PEG) 4000, DD solver, Dissolution profile, mathematical models

24

Lai, Zeqiang, Kaixuan Wei, Ying Fu, Philipp Härtel, and Felix Heide. "∇-Prox: Differentiable Proximal Algorithm Modeling for Large-Scale Optimization". ACM Transactions on Graphics 42, no. 4 (26 July 2023): 1–19. http://dx.doi.org/10.1145/3592144.

Abstract
Tasks across diverse application domains can be posed as large-scale optimization problems, these include graphics, vision, machine learning, imaging, health, scheduling, planning, and energy system forecasting. Independently of the application domain, proximal algorithms have emerged as a formal optimization method that successfully solves a wide array of existing problems, often exploiting problem-specific structures in the optimization. Although model-based formal optimization provides a principled approach to problem modeling with convergence guarantees, at first glance, this seems to be at odds with black-box deep learning methods. A recent line of work shows that, when combined with learning-based ingredients, model-based optimization methods are effective, interpretable, and allow for generalization to a wide spectrum of applications with little or no extra training data. However, experimenting with such hybrid approaches for different tasks by hand requires domain expertise in both proximal optimization and deep learning, which is often error-prone and time-consuming. Moreover, naively unrolling these iterative methods produces lengthy compute graphs, which when differentiated via autograd techniques results in exploding memory consumption, making batch-based training challenging. In this work, we introduce ∇-Prox, a domain-specific modeling language and compiler for large-scale optimization problems using differentiable proximal algorithms. ∇-Prox allows users to specify optimization objective functions of unknowns concisely at a high level, and intelligently compiles the problem into compute and memory-efficient differentiable solvers. One of the core features of ∇-Prox is its full differentiability, which supports hybrid model- and learning-based solvers integrating proximal optimization with neural network pipelines. Example applications of this methodology include learning-based priors and/or sample-dependent inner-loop optimization schedulers, learned with deep equilibrium learning or deep reinforcement learning. With a few lines of code, we show ∇-Prox can generate performant solvers for a range of image optimization problems, including end-to-end computational optics, image deraining, and compressive magnetic resonance imaging. We also demonstrate ∇-Prox can be used in a completely orthogonal application domain of energy system planning, an essential task in the energy crisis and the clean energy transition, where it outperforms state-of-the-art CVXPY and commercial Gurobi solvers.

25

Pradeep, B., and C. Siva Ram Murthy. "A Constant Time Algorithm for Redundancy Elimination in Task Graphs on Processor Arrays with Reconfigurable Bus Systems". Parallel Processing Letters 03, no. 02 (June 1993): 171–77. http://dx.doi.org/10.1142/s0129626493000216.

Abstract
The task or precedence graph formalism is a practical tool to study algorithm parallelization. Redundancy in such task graphs gives rise to numerous avoidable inter-task dependencies which invariably complicate the process of parallelization. In this paper we present an O(1)-time algorithm for the elimination of redundancy in such graphs on Processor Arrays with Reconfigurable Bus Systems, using O(n^4) processors. The previous parallel algorithm available in the literature for redundancy elimination in task graphs takes O(n^2) time using O(n) processors.

26

Sun, Jinghao, Tao Jin, Yekai Xue, Liwei Zhang, Jinrong Liu, Nan Guan, and Quan Zhou. "ompTG: From OpenMP Programs to Task Graphs". Journal of Systems Architecture 126 (May 2022): 102470. http://dx.doi.org/10.1016/j.sysarc.2022.102470.

27

Kari, Chadi, Alexander Russell, and Narasimha Shashidhar. "Work-Competitive Scheduling on Task Dependency Graphs". Parallel Processing Letters 25, no. 02 (June 2015): 1550001. http://dx.doi.org/10.1142/s0129626415500012.

Abstract
A fundamental problem in distributed computing is the task of cooperatively executing a given set of [Formula: see text] tasks by [Formula: see text] asynchronous processors where the communication medium is dynamic and subject to failures. Also known as do-all, this problem has been studied extensively in various distributed settings. In [2], the authors consider a partitionable network scenario and analyze the competitive performance of a randomized scheduling algorithm for the case where the tasks to be completed are independent of each other. In this paper, we study a natural extension of this problem where the tasks have dependencies among them. We present a simple randomized algorithm for [Formula: see text] processors cooperating to perform [Formula: see text] known tasks where the dependencies between them are defined by a [Formula: see text]-partite task dependency graph, and additionally these processors are subject to a dynamic communication medium. By virtue of the problem setting, we pursue competitive analysis where the performance of our algorithm is measured against that of the omniscient offline algorithm which has complete knowledge of the dynamics of the communication medium. We show that the competitive ratio of our algorithm is tight and depends on the dynamics of the communication medium, viz. the computational width defined in [2], and also on the number of partitions of the task dependency graph.

28

Gerasoulis, Apostolos, Sesh Venugopal, and Tao Yang. "Clustering task graphs for message passing architectures". ACM SIGARCH Computer Architecture News 18, no. 3b (September 1990): 447–56. http://dx.doi.org/10.1145/255129.255188.

29

Porat, Talya, Tal Oron-Gilad, and Joachim Meyer. "Task-dependent processing of tables and graphs". Behaviour & Information Technology 28, no. 3 (May 2009): 293–307. http://dx.doi.org/10.1080/01449290701803516.

30

Riabov, Anton, and Jay Sethuraman. "Scheduling periodic task graphs with communication delays". ACM SIGMETRICS Performance Evaluation Review 29, no. 3 (December 2001): 17–18. http://dx.doi.org/10.1145/507553.507559.

31

Lee, Jonathan, and Lein F. Lai. "Verifying task-based specifications in conceptual graphs". Information and Software Technology 39, no. 14-15 (January 1998): 913–23. http://dx.doi.org/10.1016/s0950-5849(97)00054-2.

32

Löwe, Welf, and Wolf Zimmermann. "Scheduling balanced task-graphs to LogP-machines". Parallel Computing 26, no. 9 (July 2000): 1083–108. http://dx.doi.org/10.1016/s0167-8191(00)00030-2.

33

Jain, Kamal Kumar, and V. Rajaraman. "Parallelism measures of task graphs for multiprocessors". Microprocessing and Microprogramming 40, no. 4 (May 1994): 249–59. http://dx.doi.org/10.1016/0165-6074(94)90133-3.

34

Lombardi, Michele, and Michela Milano. "Allocation and scheduling of Conditional Task Graphs". Artificial Intelligence 174, no. 7-8 (May 2010): 500–529. http://dx.doi.org/10.1016/j.artint.2010.02.004.

35

Liu, Zhen, and Rhonda Righter. "Optimal parallel processing of random task graphs". Journal of Scheduling 4, no. 3 (2001): 139–56. http://dx.doi.org/10.1002/jos.70.

36

Sheikh, Sophiya, Aitha Nagaraju, and Mohammad Shahid. "A Parallelized Dynamic Task Scheduling for Batch of Task in a computational grid". International Journal of Computers and Applications 41, no. 1 (9 August 2018): 39–53. http://dx.doi.org/10.1080/1206212x.2018.1505018.

37

Bnaya, Zahy, Ariel Felner, Dror Fried, Olga Maksin, and Solomon Shimony. "Repeated-Task Canadian Traveler Problem". Proceedings of the International Symposium on Combinatorial Search 2, no. 1 (19 August 2021): 24–30. http://dx.doi.org/10.1609/socs.v2i1.18197.

Abstract
In the Canadian Traveler Problem (CTP) a traveling agent is given a weighted graph, where some of the edges may be blocked, with a known probability. The agent needs to travel to a given goal. A solution for CTP is a policy, that has the smallest expected traversal cost. CTP is intractable. Previous work has focused on the case of a single agent. We generalize CTP to a repeated task version where a number of agents need to travel to the same goal, minimizing their combined travel cost. We provide optimal algorithms for the special case of disjoint path graphs. Based on a previous UCT-based approach for the single agent case, a framework is developed for the multi-agent case and four variants are given - two of which are based on the results for disjoint-path graphs. Empirical results show the benefits of the suggested framework and the resulting heuristics. For small graphs where we could compare to optimal policies, our approach achieves near optimal results at only a fraction of the computation cost.

38

Arcan, Mihael, Sampritha Manjunath, Cécile Robin, Ghanshyam Verma, Devishree Pillai, Simon Sarkar, Sourav Dutta, Haytham Assem, John P. McCrae, and Paul Buitelaar. "Intent Classification by the Use of Automatically Generated Knowledge Graphs". Information 14, no. 5 (12 May 2023): 288. http://dx.doi.org/10.3390/info14050288.

Abstract
Intent classification is an essential task for goal-oriented dialogue systems for automatically identifying customers’ goals. Although intent classification performs well in general settings, domain-specific user goals can still present a challenge for this task. To address this challenge, we automatically generate knowledge graphs for targeted data sets to capture domain-specific knowledge and leverage embeddings trained on these knowledge graphs for the intent classification task. As existing knowledge graphs might not be suitable for a targeted domain of interest, our automatic generation of knowledge graphs can extract the semantic information of any domain, which can be incorporated within the classification process. We compare our results with state-of-the-art pre-trained sentence embeddings and our evaluation of three data sets shows improvement in the intent classification task in terms of precision.

39

Eshaghian, Mary Mehrnoosh. "Mapping Arbitrary Heterogeneous Task Graphs onto Arbitrary Heterogeneous System Graph". International Journal of Foundations of Computer Science 12, no. 05 (October 2001): 599–628. http://dx.doi.org/10.1142/s0129054101000680.

Abstract
In this paper, a generic technique for mapping arbitrary heterogeneous task graphs onto arbitrary heterogeneous system graphs is presented. The heterogeneous task and system graphs studied in this paper have nonuniform computation and communication weights associated with the nodes and the edges. Two clustering algorithms have been proposed that can be used to obtain a multilayer clustered graph called a Spec graph from a given task graph and a multilayer clustered graph called a Rep graph from a given system graph. We present a mapping algorithm that produces a suboptimal matching of a given Spec graph containing M task modules onto a Rep graph of N processors, in O(M^2) time, where N ≤ M. Our experimental results indicate that our mapping algorithm is the fastest one and generates results that are better than, or similar to, those of other leading techniques, some of which work only for restricted task or system graphs.

40

Čyras, Vytautas. "Sąvokų „paskirtis", „tikslas", „uždavinys" ir „funkcija" modeliavimas teisėje: ar yra problema?" Lietuvos matematikos rinkinys 44 (17 December 2004): 250–55. http://dx.doi.org/10.15388/lmr.2004.31643.

Abstract
The formalisation of the concepts "purpose", "goal", "task", and "function" in law is examined. Conceptual graphs called "goal-task graphs" are proposed to model them. A goal is associated with a predicate, which characterises the goal. A node represents a state in which the predicate is true. An edge represents a task. A task e = (u, v) represents a means to implement the state v, which assumes u.

41

Walker, Bruce N., and Michael A. Nees. "Conceptual versus Perceptual Training for Auditory Graphs". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 17 (September 2005): 1598–601. http://dx.doi.org/10.1177/154193120504901721.

Abstract
A study examined different types of brief training for performance of a point estimation task with a sonified graph of quantitative data. For a given trial, participants estimated the price of a stock at a randomly selected hour of a 10-hour trading day as displayed by an auditory graph of the stock price. Sixty Georgia Tech undergraduate students completed a pre-test, an experimental training session, and a post-test for the point estimation task. In an extension of Smith and Walker (in press), a highly conceptual, task-analysis-derived method of training was examined along with training paradigms that used either practice alone, prompting of correct responses, or feedback for correct answers during the training session. A control group completed a filler task during training. Results indicated that practice with feedback during training produced better post-test scores than the control condition.

42

Rhodes, David L., and Wayne Wolf. "Two coNP-Complete Schedule Analysis Problems". International Journal of Foundations of Computer Science 12, no. 05 (October 2001): 565–80. http://dx.doi.org/10.1142/s0129054101000667.

Abstract
While many forms of schedule decision problems are known to be NP-complete, two forms of schedule analysis problems are shown to be coNP-complete in the strong sense. Each of these involves guaranteeing that all deadlines are met for a set of task-graphs in statically-mapped, priority-based, multiprocessor schedules given particular variabilities. Specifically, the first of these allows run-times which are bracketed, where the actual run-time of some tasks can take on any value within a given range. The second deals with task-graphs which arrive asynchronously, that is, where the release-time for each task-graph may take either any or a bracketed value. These variations correspond to task-graphs with either release-time or run-time jitter. The results are robust in the sense that they apply when schedules are either preemptive or non-preemptive as well as for several other problem variations.

43

Baron, Lorraine M. "An Authentic Task That Models Quadratics". Mathematics Teaching in the Middle School 20, no. 6 (February 2015): 334–40. http://dx.doi.org/10.5951/mathteacmiddscho.20.6.0334.

44

Difallah, Djellel, Michele Catasta, Gianluca Demartini, and Philippe Cudré-Mauroux. "Scaling-Up the Crowd: Micro-Task Pricing Schemes for Worker Retention and Latency Improvement". Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 2 (5 September 2014): 50–58. http://dx.doi.org/10.1609/hcomp.v2i1.13154.

Abstract
Retaining workers on micro-task crowdsourcing platforms is essential in order to guarantee the timely completion of batches of Human Intelligence Tasks (HITs). Worker retention is also a necessary condition for the introduction of SLAs on crowdsourcing platforms. In this paper, we introduce novel pricing schemes aimed at improving the retention rate of workers working on long batches of similar tasks. We show how increasing or decreasing the monetary reward over time influences the number of tasks a worker is willing to complete in a batch, as well as how it influences the overall latency. We compare our new pricing schemes against traditional pricing methods (e.g., constant reward for all the HITs in a batch) and empirically show how certain schemes effectively function as an incentive for workers to keep working longer on a given batch of HITs. Our experimental results show that the best pricing scheme in terms of worker retention is based on punctual bonuses paid whenever the workers reach predefined milestones.

45

Trisedya, Bayu Distiawan, Jianzhong Qi, and Rui Zhang. "Entity Alignment between Knowledge Graphs Using Attribute Embeddings". Proceedings of the AAAI Conference on Artificial Intelligence 33 (17 July 2019): 297–304. http://dx.doi.org/10.1609/aaai.v33i01.3301297.

Abstract
The task of entity alignment between knowledge graphs aims to find entities in two knowledge graphs that represent the same real-world entity. Recently, embedding-based models are proposed for this task. Such models are built on top of a knowledge graph embedding model that learns entity embeddings to capture the semantic similarity between entities in the same knowledge graph. We propose to learn embeddings that can capture the similarity between entities in different knowledge graphs. Our proposed model helps align entities from different knowledge graphs, and hence enables the integration of multiple knowledge graphs. Our model exploits large numbers of attribute triples existing in the knowledge graphs and generates attribute character embeddings. The attribute character embedding shifts the entity embeddings from two knowledge graphs into the same space by computing the similarity between entities based on their attributes. We use a transitivity rule to further enrich the number of attributes of an entity to enhance the attribute character embedding. Experiments using real-world knowledge bases show that our proposed model achieves consistent improvements over the baseline models by over 50% in terms of hits@1 on the entity alignment task.

46

Banerjee, Subhankar, and Shayok Chakraborty. "Deterministic Mini-batch Sequencing for Training Deep Neural Networks". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (18 May 2021): 6723–31. http://dx.doi.org/10.1609/aaai.v35i8.16831.

Abstract
Recent advancements in the field of deep learning have dramatically improved the performance of machine learning models in a variety of applications, including computer vision, text mining, speech processing and fraud detection among others. Mini-batch gradient descent is the standard algorithm to train deep models, where mini-batches of a fixed size are sampled randomly from the training data and passed through the network sequentially. In this paper, we present a novel algorithm to generate a deterministic sequence of mini-batches to train a deep neural network (rather than a random sequence). Our rationale is to select a mini-batch by minimizing the Maximum Mean Discrepancy (MMD) between the already selected mini-batches and the unselected training samples. We pose the mini-batch selection as a constrained optimization problem and derive a linear programming relaxation to determine the sequence of mini-batches. To the best of our knowledge, this is the first research effort that uses the MMD criterion to determine a sequence of mini-batches to train a deep neural network. The proposed mini-batch sequencing strategy is deterministic and independent of the underlying network architecture and prediction task. Our extensive empirical analyses on three challenging datasets corroborate the merit of our framework over competing baselines. We further study the performance of our framework on two other applications besides classification (regression and semantic segmentation) to validate its generalizability.
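As intuition for the selection criterion described here, a naive greedy variant of MMD-based mini-batch selection can be sketched as follows. This is an illustrative simplification (RBF kernel, greedy search, invented function names), not the constrained-optimization formulation and linear-programming relaxation used in the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF kernel matrix between row-vector sets A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy between X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())

def greedy_mmd_batch(pool, batch_size, gamma=1.0):
    """Greedily build one mini-batch whose distribution stays close, in MMD,
    to the still-unselected samples.
    pool: (N, d) NumPy array; assumes batch_size < N."""
    chosen, remaining = [], list(range(len(pool)))
    for _ in range(batch_size):
        best_i, best_val = None, np.inf
        for i in remaining:
            selected = pool[np.array(chosen + [i])]
            rest = pool[[j for j in remaining if j != i]]
            val = mmd2(selected, rest, gamma)
            if val < best_val:
                best_i, best_val = i, val
        chosen.append(best_i)
        remaining.remove(best_i)
    return chosen
```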

47

Wu, Huixin, Jianghua Huang, and Yingfeng Wang. "Algorithms for Numbering Communication Tasks in Task Graphs". International Journal of Advancements in Computing Technology 4, no. 16 (30 September 2012): 55–63. http://dx.doi.org/10.4156/ijact.vol4.issue16.7.

48

Yang, Tao, and Apostolos Gerasoulis. "Executing Scheduled Task Graphs on Message-Passing Architectures". International Journal of High Speed Computing 08, no. 03 (September 1996): 271–94. http://dx.doi.org/10.1142/s012905339600015x.

49

Ravindran, R. C., C. Mani Krishna, Israel Koren, and Zahava Koren. "Scheduling imprecise task graphs for real-time applications". International Journal of Embedded Systems 6, no. 1 (2014): 73. http://dx.doi.org/10.1504/ijes.2014.060919.

50

Hu, Menglan, Jun Luo, Yang Wang, and Bharadwaj Veeravalli. "Adaptive Scheduling of Task Graphs with Dynamic Resilience". IEEE Transactions on Computers 66, no. 1 (1 January 2017): 17–23. http://dx.doi.org/10.1109/tc.2016.2574349.