
Journal articles on the topic "Traversal queries"


Consult the top 50 journal articles on the topic "Traversal queries".


1

Bai, Samita, and Shakeel A. Khoja. "Hybrid Query Execution on Linked Data With Complete Results". International Journal on Semantic Web and Information Systems 17, no. 1 (January 2021): 25–49. http://dx.doi.org/10.4018/ijswis.2021010102.

Full text source
Abstract:
Link traversal strategies for querying Linked Data over the WWW can retrieve up-to-date results using a recursive URI lookup process in real time. The downside of this approach appears with query patterns that have an unbound subject (e.g., ?s rdf:type :Class). Such queries fail to start the traversal process because RDF pages are subject-centric in nature; thus, zero-knowledge link traversal yields empty results for these queries. In this paper, the authors analyze a large corpus of real-world SPARQL query logs and identify the Most Frequent Predicates (MFPs) occurring in these queries. Knowledge of these MFPs helps in finding and indexing a limited number of triples from the original data set. Additionally, the authors propose a Hybrid Query Execution (HQE) approach that executes queries over this index for initial data source selection, followed by a link traversal process to fetch complete results. An evaluation of HQE on the latest real-data benchmarks reveals that it retrieves at least five times more results than the existing approaches.
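The zero-knowledge traversal process described here can be sketched in a few lines. This is a hedged illustration only: `web` is a toy stand-in for dereferenceable RDF documents, and all identifiers are hypothetical.

```python
from collections import deque

def link_traversal(seed_uris, documents, max_hops=3):
    """Zero-knowledge link traversal: recursively dereference URIs found in
    fetched documents and accumulate their triples. `documents` maps a URI
    to the triples served when that URI is looked up."""
    triples, seen = [], set()
    frontier = deque((uri, 0) for uri in seed_uris)
    while frontier:
        uri, hops = frontier.popleft()
        if uri in seen or uri not in documents:
            continue
        seen.add(uri)
        for s, p, o in documents[uri]:
            triples.append((s, p, o))
            if hops < max_hops:  # follow URIs mentioned in the triple
                frontier.append((s, hops + 1))
                frontier.append((o, hops + 1))
    return triples

web = {
    "ex:alice": [("ex:alice", "rdf:type", "ex:Person"),
                 ("ex:alice", "ex:knows", "ex:bob")],
    "ex:bob":   [("ex:bob", "rdf:type", "ex:Person")],
}
print(len(link_traversal(["ex:alice"], web)))  # 3: every triple is reached
```

A pattern with an unbound subject such as `?s rdf:type :Class` supplies no seed URI, so the loop never starts and returns nothing; this is the failure mode the HQE index is meant to fix.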
2

Tang, Xian, Junfeng Zhou, Yunyu Shi, Xiang Liu, and Keng Lin. "Efficient Processing of k-Hop Reachability Queries on Directed Graphs". Applied Sciences 13, no. 6 (March 8, 2023): 3470. http://dx.doi.org/10.3390/app13063470.

Full text source
Abstract:
Given a directed graph, a k-hop reachability query, u →?k v, is used to check for the existence of a directed path from u to v that has a length of at most k. Addressing k-hop reachability queries is a fundamental task in graph theory and has been extensively investigated. However, existing algorithms can be inefficient when answering queries because they require costly graph traversal operations. To improve query performance, we propose an approach based on a vertex cover. We construct an index that covers all reachability information using a small set of vertices from the input graph. This allows us to answer k-hop reachability queries without performing graph traversal. We propose a linear-time algorithm to quickly compute a vertex cover, S, which we use to develop a novel labeling scheme and two algorithms for efficient query answering. The experimental results demonstrate that our approach significantly outperforms the existing approaches in terms of query response time.
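For contrast, the traversal-based baseline that such an index avoids can be sketched as a depth-bounded BFS; a minimal illustration, not the paper's labeling scheme.

```python
from collections import deque

def k_hop_reachable(adj, u, v, k):
    """Baseline answer to u ->?k v: breadth-first search from u,
    never expanding vertices more than k hops away."""
    if u == v:
        return True
    frontier, dist = deque([u]), {u: 0}
    while frontier:
        x = frontier.popleft()
        if dist[x] == k:          # budget exhausted on this branch
            continue
        for y in adj.get(x, ()):
            if y == v:
                return True
            if y not in dist:
                dist[y] = dist[x] + 1
                frontier.append(y)
    return False

adj = {0: [1], 1: [2], 2: [3]}
print(k_hop_reachable(adj, 0, 3, 3))  # True
print(k_hop_reachable(adj, 0, 3, 2))  # False
```

Every query pays a full traversal cost here, which is exactly what the vertex-cover index is designed to remove.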
3

Chen, Yangjun. "Graph traversal and top-down evaluation of logic queries". Journal of Computer Science and Technology 13, no. 4 (July 1998): 300–316. http://dx.doi.org/10.1007/bf02946620.

Full text source
4

Harth, Andreas, and Sebastian Speiser. "On Completeness Classes for Query Evaluation on Linked Data". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 613–19. http://dx.doi.org/10.1609/aaai.v26i1.8209.

Full text source
Abstract:
The advent of the Web of Data kindled interest in link-traversal (or lookup-based) query processing methods, with which queries are answered via dereferencing a potentially large number of small, interlinked sources. While several algorithms for query evaluation have been proposed, there exists no notion of completeness for results of so-evaluated queries. In this paper, we motivate the need for clearly-defined completeness classes and present several notions of completeness for queries over Linked Data, based on the idea of authoritativeness of sources, and show the relation between the different completeness classes.
5

Ilyas, Qazi Mudassar, Muneer Ahmad, Sonia Rauf, and Danish Irfan. "RDF Query Path Optimization Using Hybrid Genetic Algorithms". International Journal of Cloud Applications and Computing 12, no. 1 (January 2022): 1–16. http://dx.doi.org/10.4018/ijcac.2022010101.

Full text source
Abstract:
Resource Description Framework (RDF) inherently supports merging data from various sources into a single federated graph, which can become very large even for an application of modest size. This results in severe performance degradation in the execution of RDF queries. As every RDF query essentially traverses a graph to find its output, efficient path traversal reduces the execution time of RDF queries. Hence, query path optimization is required to reduce both the execution time and the cost of a query. Query path optimization is an NP-hard problem that cannot be solved in polynomial time. Genetic algorithms have proven very useful in optimization problems. We propose a hybrid genetic algorithm for query path optimization. The proposed algorithm selects an initial population using iterative improvement, thus reducing the initial solution space for the genetic algorithm, and makes significant improvements in overall performance. We show that the overall number of joins for complex queries is reduced considerably, resulting in reduced cost.
6

Nawaz, Waqas, Kifayat Ullah Khan, and Young-Koo Lee. "Shortest path analysis for efficient traversal queries in large networks". Contemporary Engineering Sciences 7 (2014): 811–16. http://dx.doi.org/10.12988/ces.2014.4696.

Full text source
7

Su, Li, Xiaoming Qin, Zichao Zhang, Rui Yang, Le Xu, Indranil Gupta, Wenyuan Yu, Kai Zeng, and Jingren Zhou. "Banyan". Proceedings of the VLDB Endowment 15, no. 10 (June 2022): 2045–57. http://dx.doi.org/10.14778/3547305.3547311.

Full text source
Abstract:
Graph query services (GQS) are widely used today to interactively answer graph traversal queries on large-scale graph data. Existing graph query engines focus largely on optimizing the latency of a single query. This ignores significant challenges posed by GQS, including fine-grained control and scheduling during query execution, as well as performance isolation and load balancing at various levels, from across users down to within a single query. To tackle these control and scheduling challenges, we propose a novel scoped dataflow for modeling graph traversal queries, which explicitly exposes concurrent execution and control of any subquery at the finest granularity. We implemented Banyan, an engine based on the scoped dataflow model for GQS. Banyan focuses on scaling up performance on a single machine and provides the ability to easily scale out. Extensive experiments on multiple benchmarks show that Banyan improves performance by up to three orders of magnitude over state-of-the-art graph query engines, while providing performance isolation and load balancing.
8

Sharma, Abhishek, and Kenneth Forbus. "Graph Traversal Methods for Reasoning in Large Knowledge-Based Systems". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 29, 2013): 1255–61. http://dx.doi.org/10.1609/aaai.v27i1.8473.

Full text source
Abstract:
Commonsense reasoning at scale is a core problem for cognitive systems. In this paper, we discuss two ways in which heuristic graph traversal methods can be used to generate plausible inference chains. First, we discuss how Cyc’s predicate-type hierarchy can be used to get reasonable answers to queries. Second, we explain how connection graph-based techniques can be used to identify script-like structures. Finally, we demonstrate through experiments that these methods lead to significant improvement in accuracy for both Q/A and script construction.
9

Zhang, Junhua, Wentao Li, Long Yuan, Lu Qin, Ying Zhang, and Lijun Chang. "Shortest-path queries on complex networks". Proceedings of the VLDB Endowment 15, no. 11 (July 2022): 2640–52. http://dx.doi.org/10.14778/3551793.3551820.

Full text source
Abstract:
The shortest-path query, which returns the shortest path between two vertices, is a basic operation on complex networks and has numerous applications. To handle shortest-path queries, one option is to use traversal-based methods (e.g., breadth-first search); another option is to use extension-based methods, i.e., extending existing methods that use indexes to handle shortest-distance queries to support shortest-path queries. These two types of methods make different trade-offs in query time and space cost, but comprehensive studies of their performance on real-world graphs are lacking. Moreover, extension-based methods usually use extra attributes to extend the indexes, resulting in high space costs. To address these issues, we thoroughly compare the two types of methods mentioned above. We also propose a new extension-based approach, Monotonic Landmark Labeling (MLL), to reduce the required space cost while still guaranteeing query time. We compare the performance of different methods on ten large real-world graphs with up to 5.5 billion edges. The experimental results reveal the characteristics of various methods, allowing practitioners to select the appropriate method for a specific application.
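The labeling idea that extension-based methods build on can be sketched as a 2-hop (hub) lookup. This is a generic illustration under assumed toy labels, not the paper's MLL scheme.

```python
def dist_via_labels(out_labels, in_labels, u, v):
    """2-hop labeling: every vertex stores distances to a few hub vertices;
    d(u, v) = min over common hubs h of d(u, h) + d(h, v)."""
    lv = in_labels[v]
    return min((d + lv[h] for h, d in out_labels[u].items() if h in lv),
               default=float("inf"))

# assumed toy labels for the path a - b - c, with b as the only shared hub
labels = {"a": {"a": 0, "b": 1}, "b": {"b": 0}, "c": {"b": 1, "c": 0}}
print(dist_via_labels(labels, labels, "a", "c"))  # 2
```

To answer shortest-path rather than shortest-distance queries, each label entry must additionally record a next hop toward its hub; those extra attributes are precisely the space overhead the abstract describes.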
10

Fafalios, Pavlos, and Yannis Tzitzikas. "Answering SPARQL queries on the web of data through zero-knowledge link traversal". ACM SIGAPP Applied Computing Review 19, no. 3 (November 8, 2019): 18–32. http://dx.doi.org/10.1145/3372001.3372003.

Full text source
11

Qin, Jiwei, Liangli Ma, and Qing Liu. "Pruning Optimization over Threshold-Based Historical Continuous Query". Algorithms 12, no. 5 (May 19, 2019): 107. http://dx.doi.org/10.3390/a12050107.

Full text source
Abstract:
With the increase in mobile location service applications, spatiotemporal queries over the trajectory data of moving objects have become a research hotspot, and continuous query is one of the key types of spatiotemporal queries. In this paper, we study a sub-domain of continuous queries over moving objects, namely threshold-based pruning optimization for historical continuous queries. Firstly, to address the problem that the processing cost of the Mindist-based pruning strategy is too large, a pruning strategy based on extended Minimum Bounding Rectangle (MBR) overlap is proposed to reduce the processing overhead. Secondly, a best-first traversal algorithm based on the E3DR-tree is proposed to ensure that an accurate pruning candidate set can be obtained while accessing as few index nodes as possible. Finally, experiments on real data sets show that our method significantly outperforms other similar methods.
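The rectangle-overlap pruning step can be illustrated with plain axis-aligned MBR tests; a sketch under assumed coordinates, which does not reproduce the paper's extended-MBR construction.

```python
def mbr_overlaps(a, b):
    """Axis-aligned Minimum Bounding Rectangle overlap test;
    rectangles are given as (xmin, ymin, xmax, ymax)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def prune(candidates, query_mbr):
    """Keep only candidates whose MBR can possibly intersect the query
    region; everything else is pruned without further inspection."""
    return [c for c in candidates if mbr_overlaps(c, query_mbr)]

print(prune([(0, 0, 1, 1), (5, 5, 6, 6)], (0.5, 0.5, 2, 2)))
```

The cheap overlap test filters the candidate set before any expensive trajectory-level processing, which is the role pruning plays in the paper's pipeline.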
12

Davoudian, Ali, Liu Chen, Hongwei Tu, and Mengchi Liu. "A Workload-Adaptive Streaming Partitioner for Distributed Graph Stores". Data Science and Engineering 6, no. 2 (April 15, 2021): 163–79. http://dx.doi.org/10.1007/s41019-021-00156-2.

Full text source
Abstract:
Abstract. Streaming graph partitioning methods have recently gained attention due to their ability to scale to very large graphs with limited resources. However, many such methods do not consider workload and graph characteristics. This may degrade the performance of queries by increasing inter-node communication and computational load imbalance. Moreover, existing workload-aware methods cannot consistently provide good performance as they do not consider dynamic workloads that keep emerging in graph applications. We address these issues by proposing a novel workload-adaptive streaming partitioner named WASP, that aims to achieve low-latency and high-throughput online graph queries. As each workload typically contains frequent query patterns, WASP exploits the existing workload to capture active vertices and edges which are frequently visited and traversed, respectively. This information is used to heuristically improve the quality of partitions either by avoiding the concentration of active vertices in a few partitions proportional to their visit frequencies or by reducing the probability of the cut of active edges proportional to their traversal frequencies. In order to assess the impact of WASP on a graph store and to show how easily the approach can be plugged on top of the system, we exploit it in a distributed graph-based RDF store. Our experiments over three synthetic and real-world graph datasets and the corresponding static and dynamic query workloads show that WASP achieves a better query performance against state-of-the-art graph partitioners, especially in dynamic query workloads.
13

TIAN, JIAN-WEI, WEN-HUI QI, and XIAO-XIAO LIU. "RETRIEVING DEEP WEB DATA THROUGH MULTI-ATTRIBUTES INTERFACES WITH STRUCTURED QUERIES". International Journal of Software Engineering and Knowledge Engineering 21, no. 04 (June 2011): 523–42. http://dx.doi.org/10.1142/s0218194011005396.

Full text source
Abstract:
A great deal of data on the Web lies in hidden databases, or the deep Web. Most deep Web data is not directly available and can only be accessed through query interfaces. Current research on deep Web search has focused on crawling deep Web data via Web interfaces with keyword queries. However, these keyword-based methods have inherent limitations because of the multi-attribute and top-k features of the deep Web. In this paper we propose a novel approach for siphoning structured data with structured queries. Firstly, in order to retrieve all the data in hidden databases without repetition, we model the hidden database as a hierarchy tree. Under this theoretical framework, data retrieval is transformed into a tree traversal problem. We also propose techniques to narrow the query space by using a heuristic rule, based on mutual information, to guide the traversal process. We conduct extensive experiments over real deep Web sites and controlled databases to illustrate the coverage and efficiency of our techniques.
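The hierarchy-tree view of a hidden database can be sketched as follows: each internal level refines one attribute, each root-to-leaf path corresponds to one structured query, and a depth-first traversal retrieves every record exactly once. Attribute names and data here are illustrative only.

```python
def crawl(tree, path=()):
    """DFS over the hierarchy tree of a hidden database: leaves hold the
    records returned by the structured query described by `path`."""
    if isinstance(tree, list):            # leaf: result set of one query
        return [(path, record) for record in tree]
    results = []
    for value, subtree in tree.items():   # internal node: refine an attribute
        results.extend(crawl(subtree, path + (value,)))
    return results

db = {"sedan": {"red": ["r1"], "blue": ["r2", "r3"]}, "suv": {"red": ["r4"]}}
rows = crawl(db)
print(len(rows))  # 4 records, each retrieved exactly once
```

Because sibling subtrees partition the data, no record appears under two different query paths, which is what makes the retrieval non-repeating.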
14

Abeywickrama, Tenindra, Muhammad Aamir Cheema, and Sabine Storandt. "Hierarchical Graph Traversal for Aggregate k Nearest Neighbors Search in Road Networks". Proceedings of the International Conference on Automated Planning and Scheduling 30 (June 1, 2020): 2–10. http://dx.doi.org/10.1609/icaps.v30i1.6639.

Full text source
Abstract:
Location-based services rely heavily on efficient methods that search for relevant points-of-interest (POIs) close to a given location. A k nearest neighbors (kNN) query is one such example that finds k closest POIs from an agent's location. While most existing techniques focus on finding nearby POIs for a single agent, many applications require POIs that are close to multiple agents. In this paper, we study a natural extension of the kNN query for multiple agents, namely, the Aggregate k Nearest Neighbors (AkNN) query. An AkNN query retrieves k POIs with the smallest aggregate distances where the aggregate distance of a POI is obtained by aggregating its distances from the multiple agents (e.g., sum of its distances from each agent). Existing search heuristics are designed for a single agent and do not work well for multiple agents. We propose a novel data structure COLT (Compacted Object-Landmark Tree) to address this gap by enabling efficient hierarchical graph traversal. We then utilize COLT for a wide range of aggregate functions to efficiently answer AkNN queries. In our experiments on real-world and synthetic data sets, our techniques significantly improve query performance, typically outperforming existing approaches by more than an order of magnitude in almost all settings.
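The aggregate-distance ranking at the core of an AkNN query can be sketched by brute force; no COLT index is involved, and the distance function and data below are hypothetical.

```python
import heapq

def aknn(pois, agents, dist, k, aggregate=sum):
    """Aggregate k nearest neighbors: rank POIs by an aggregate (e.g. sum)
    of their distances from all agents, returning (score, poi) pairs."""
    scored = ((aggregate(dist(p, a) for a in agents), p) for p in pois)
    return heapq.nsmallest(k, scored)

# toy 1-D example: POIs and agents as coordinates on a line
d = lambda p, a: abs(p - a)
print(aknn([0, 5, 10], [4, 6], d, 1))  # [(2, 5)]
```

Swapping `aggregate=sum` for `max` or `min` covers the other aggregate functions the paper supports; the point of COLT is to avoid evaluating every POI like this brute-force version does.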
15

Liu, Tiantian, Huan Li, Hua Lu, Muhammad Aamir Cheema, and Lidan Shou. "Towards crowd-aware indoor path planning". Proceedings of the VLDB Endowment 14, no. 8 (April 2021): 1365–77. http://dx.doi.org/10.14778/3457390.3457401.

Full text source
Abstract:
Indoor venues accommodate many people who collectively form crowds. Such crowds in turn influence people's routing choices, e.g., people may prefer to avoid crowded rooms when walking from A to B. This paper studies two types of crowd-aware indoor path planning queries. The Indoor Crowd-Aware Fastest Path Query (FPQ) finds a path with the shortest travel time in the presence of crowds, whereas the Indoor Least Crowded Path Query (LCPQ) finds a path encountering the least objects en route. To process the queries, we design a unified framework with three major components. First, an indoor crowd model organizes indoor topology and captures object flows between rooms. Second, a time-evolving population estimator derives room populations for a future timestamp to support crowd-aware routing cost computations in query processing. Third, two exact and two approximate query processing algorithms process each type of query. All algorithms are based on graph traversal over the indoor crowd model and use the same search framework with different strategies of updating the populations during the search process. All proposals are evaluated experimentally on synthetic and real data. The experimental results demonstrate the efficiency and scalability of our framework and query processing algorithms.
16

Huang, Y., and E. Stefanakis. "MULTI-RESOLUTION REPRESENTATION USING GRAPH DATABASE". ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-4-2022 (May 18, 2022): 173–80. http://dx.doi.org/10.5194/isprs-annals-v-4-2022-173-2022.

Full text source
Abstract:
Abstract. Multi-resolution representation has always been an important and popular data source for many research areas and applications, such as navigation, land cover, map generation, and media event forecasting. With one spatial object represented by distinct geometries at different resolutions, multi-resolution representation is highly complex. Most current approaches for storing and retrieving multi-resolution representations are either complicated in structure or time-consuming in traversal and query. In addition, support for direct navigation between different representations is still intricate in most paradigms, especially in topological map sets. To address this problem, we propose a novel approach for storing, querying, and extracting multi-resolution representations. The development of this approach is based on Neo4j, a graph database platform known for its powerful query capabilities and advanced flexibility. Benefiting from the intuitiveness of the proposed database structure, direct navigation between representations of one spatial object, and between groups of representations at adjacent resolutions, are both available. On top of this, in combination with a self-designed web-based interface, queries within the proposed approach truly embrace the concept of keyword search, which lowers the barrier between novice users and complicated queries. In all, the proposed system demonstrates the potential of managing multi-resolution representation data through a graph database and could be a time-saver for related processes.
17

BLANDFORD, DANIEL K., GUY E. BLELLOCH, DAVID E. CARDOZE, and CLEMENS KADOW. "COMPACT REPRESENTATIONS OF SIMPLICIAL MESHES IN TWO AND THREE DIMENSIONS". International Journal of Computational Geometry & Applications 15, no. 01 (February 2005): 3–24. http://dx.doi.org/10.1142/s0218195905001580.

Full text source
Abstract:
We describe data structures for representing simplicial meshes compactly while supporting online queries and updates efficiently. Our data structure requires about a factor of five less memory than the most efficient standard data structures for triangular or tetrahedral meshes, while efficiently supporting traversal among simplices, storing data on simplices, and insertion and deletion of simplices. Our implementation of the data structures uses about 5 bytes/triangle in two dimensions (2D) and 7.5 bytes/tetrahedron in three dimensions (3D). We use the data structures to implement 2D and 3D incremental algorithms for generating a Delaunay mesh. The 3D algorithm can generate 100 Million tetrahedra with 1 Gbyte of memory, including the space for the coordinates and all data used by the algorithm. The runtime of the algorithm is as fast as Shewchuk's Pyramid code, the most efficient we know of, and uses a factor of 3.5 less memory overall.
18

Huang, Qingrong, Xiaocong Lai, Qianxiang Su, and Ying Pan. "RDF Subgraph Query Based on Common Subgraph in Distributed Environment". Wireless Communications and Mobile Computing 2023 (January 13, 2023): 1–15. http://dx.doi.org/10.1155/2023/7148071.

Full text source
Abstract:
With the gradual development of the network, RDF graphs have become more and more complex as the scale of data increases; how to query massive RDF graphs more effectively is a topic of continuous research. Traditional methods of graph query and graph traversal produce great redundancy in intermediate results, and processing subgraph collection queries in stand-alone mode cannot perform efficient matching when the amount of data is extremely large. Moreover, when processing subgraph collection queries, it is necessary to iterate over the query graph multiple times in the query of the common subgraph, so execution efficiency is not high. In response to these problems, a distributed query strategy for RDF subgraph sets based on a composite relation tree is proposed. Firstly, a composite relation is established for the RDF subgraph set; then the composite relation graph is pruned, deleting its redundant nodes and edges to obtain the composite relation tree. Finally, using the composite relation tree, a MapReduce-based RDF subgraph set query method is proposed, which can exploit a parallel computing environment to perform distributed batch processing of queries over the RDF subgraph set; the query result is obtained by traversing the composite relation tree. The experimental results show that the proposed algorithm can improve the query efficiency of RDF subgraph sets.
19

Zhang, Fengquan, Jiaojiao Guo, Jianfei Wan, and Junli Qin. "Efficient Culling Criteria for Continues Collision Detection Using a Fast Statistical Analysis Method". Open Mechanical Engineering Journal 9, no. 1 (September 10, 2015): 569–73. http://dx.doi.org/10.2174/1874155x01509010569.

Full text source
Abstract:
Continuous Collision Detection (CCD) between deforming triangle mesh elements in 3D is significant in many computing and graphics applications, such as virtual surgery, simulation, and animation. Although CCD is more accurate than discrete methods, its application is limited mainly due to its time-consuming nature. To accelerate computation, we present an efficient CCD method to perform both inter-object and intra-object collision queries on triangle mesh models. Given a model set of different poses as training data, our method uses Statistical Analysis (SA) to perform regression on a deformation subspace and on collision detection conditions in a pre-processing stage, under a uniform framework. A data-driven training process selects a set of "key points" and produces a credible subspace representation, from which a plug-in type of collision culling certificate can then be obtained by regression. At runtime, our certificate can easily be added to the classic BVH traversal procedure as a sufficient condition for collision-free cases, providing efficient culling in overlap tests and reducing hierarchy update frequency. In the end, we describe the performance and quality of our method using different experiments.
20

Ares, Gonzalo, César Castañón Fernández, and Isidro Diego Álvarez. "Ultimate Pit-Limit Optimization Algorithm Enhancement Using Structured Query Language". Minerals 13, no. 7 (July 20, 2023): 966. http://dx.doi.org/10.3390/min13070966.

Full text source
Abstract:
Three-dimensional block models are the most widely used tool for the study and evaluation of ore deposits, the calculation and design of economical pits, mine production planning, and physical and numerical simulations of ore deposits. The way these algorithms and computational techniques are programmed is usually through complex C++, C# or Python libraries. Database programming languages such as SQL (Structured Query Language) have traditionally been restricted to drillhole sample data operation. However, major advances in the management and processing of large databases have opened up the possibility of changing the way in which block model calculations are related to the database. Thanks to programming languages designed to manage databases, such as SQL, the traditional recursive traversal of database records is replaced by a system of database queries. In this way, with a simple SQL, numerous lines of code are eliminated from the different loops, thus achieving a greater calculation speed. In this paper, a floating cone optimization algorithm is adapted to SQL, describing how economical cones can be generated, related and calculated, all in a simple way and with few lines of code. Finally, to test this methodology, a case study is developed and shown.
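The shift the paper describes, replacing a record-by-record loop with one set-based SQL query, can be sketched with Python's built-in sqlite3. The block-model table and the cone-membership predicate below are simplified assumptions, not the paper's schema.

```python
import sqlite3

# A block model as a table: one aggregate query replaces looping over
# records to total the economic value of blocks inside a candidate cone.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE blocks (x INT, y INT, z INT, value REAL)")
con.executemany("INSERT INTO blocks VALUES (?,?,?,?)",
                [(0, 0, 0, 5.0), (1, 0, 0, -2.0), (0, 0, 1, 3.0)])

# Hypothetical cone-membership condition pushed into the WHERE clause
# instead of being tested block by block in application code.
(total,) = con.execute(
    "SELECT SUM(value) FROM blocks WHERE z <= 0").fetchone()
print(total)  # 3.0
```

A realistic floating-cone predicate would constrain x and y as a function of depth and slope angle, but it would still be a single declarative query rather than a recursive traversal of records.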
21

Rani, Katari Pushpa, L. Lakshmi, Ch Sabitha, B. Dhana Lakshmi, and S. Sreeja. "Top-K search scheme on encrypted data in cloud". International Journal of Advances in Applied Sciences 9, no. 1 (March 1, 2020): 67. http://dx.doi.org/10.11591/ijaas.v9.i1.pp67-69.

Full text source
Abstract:
A secure and effective multi-keyword ranked search scheme on encrypted cloud data. Cloud computing gives people convenient access to the popular and relevant services they need in their daily life. For this, data owners must possess some knowledge of the cloud and should be given more information to make cloud maintenance and administration easy. The most important concern these days is privacy: sensitive data exposed in the cloud is subject to security issues, so sensitive information ought to be encrypted before the data is externalized, for confidentiality. This makes some keyword-based information retrieval methods obsolete: the data becomes harder to use, and the classic search algorithms are no longer efficient because of the encryption applied to protect the data from breaches. In this project, we investigate the multi-keyword top-k search problem for encrypted data against privacy breaches and try to establish an economical and secure solution to this problem. We construct a special tree-based index structure and design a random traversal algorithm, which makes even identical queries produce different visiting paths on the index, and can also keep the accuracy of queries unchanged under stronger privacy requirements. For this purpose, we use vector space models and TF-IDF. The kNN algorithm is used to develop this approach.
22

Quer, Stefano, and Andrea Calabrese. "Graph Reachability on Parallel Many-Core Architectures". Computation 8, no. 4 (December 2, 2020): 103. http://dx.doi.org/10.3390/computation8040103.

Full text source
Abstract:
Many modern applications are modeled using graphs of some kind. Given a graph, reachability, that is, discovering whether there is a path between two given nodes, is a fundamental problem as well as one of the most important steps of many other algorithms. The rapid accumulation of very large graphs (up to tens of millions of vertices and edges) from a diversity of disciplines demands efficient and scalable solutions to the reachability problem. General-purpose computing has been successfully used on Graphics Processing Units (GPUs) to parallelize algorithms that present a high degree of regularity. In this paper, we extend the applicability of GPU processing to graph-based manipulation by re-designing a simple but efficient state-of-the-art graph-labeling method, namely the GRAIL (Graph Reachability Indexing via RAndomized Interval) algorithm, for many-core CUDA-based GPUs. This algorithm first generates a label for each vertex of the graph, then exploits these labels to answer reachability queries. Unfortunately, the original algorithm executes a sequence of depth-first visits which are intrinsically recursive and cannot be efficiently implemented on parallel systems. For that reason, we design an alternative approach in which a sequence of breadth-first visits substitutes the original depth-first traversal to generate the labeling, and in which a high number of concurrent visits is exploited during query evaluation. The paper describes our strategy to re-design these steps, the difficulties we encountered in implementing them, and the solutions adopted to overcome the main inefficiencies. To prove the validity of our approach, we compare (in terms of time and memory requirements) our GPU-based approach with the original sequential CPU-based tool. Finally, we report some hints on how to conduct further research in the area.
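The sequential labeling being re-designed can be sketched as follows: one depth-first pass assigns each vertex an interval, and non-containment of intervals refutes reachability without any traversal. This is a single-label, unrandomized simplification of GRAIL, written with exactly the recursive DFS structure the GPU version must avoid.

```python
def interval_labels(adj, roots):
    """GRAIL-style labeling on a DAG: a post-order DFS assigns each vertex
    an interval (low, post); if v is reachable from u, v's interval is
    contained in u's, so non-containment proves non-reachability."""
    label, counter = {}, [0]

    def dfs(u):
        if u in label:
            return label[u][0]
        low = float("inf")
        for w in adj.get(u, ()):
            low = min(low, dfs(w))
        counter[0] += 1                 # post-order rank of u
        low = min(low, counter[0])
        label[u] = (low, counter[0])
        return low

    for r in roots:
        dfs(r)
    return label

def may_reach(label, u, v):
    """Containment check: False is definitive, True needs verification."""
    return label[u][0] <= label[v][0] and label[v][1] <= label[u][1]
```

With several randomized labels, as in full GRAIL, most positive answers are also resolved by containment alone; the remaining cases fall back to an actual search.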
23

Hume, Samuel, Surendra Sarnikar, and Cherie Noteboom. "Enhancing Traceability in Clinical Research Data through a Metadata Framework". Methods of Information in Medicine 59, no. 02/03 (May 2020): 075–85. http://dx.doi.org/10.1055/s-0040-1714393.

Full text source
Abstract:
Abstract Background The clinical research data lifecycle, from data collection to analysis results, functions in silos that restrict traceability. Traceability is a requirement for regulated clinical research studies and an important attribute of nonregulated studies. Current clinical research software tools provide limited metadata traceability capabilities and are unable to query variables across all phases of the data lifecycle. Objectives To develop a metadata traceability framework that can help query and visualize traceability metadata, identify traceability gaps, and validate metadata traceability to improve data lineage and reproducibility within clinical research studies. Methods This research follows the design science research paradigm where the objective is to create and evaluate an information technology (IT) artifact that explicitly addresses an organizational problem or opportunity. The implementation and evaluation of the IT artifact demonstrate the feasibility of both the design process and the final designed product. Results We present Trace-XML, a metadata traceability framework that extends standard clinical research metadata models and adapts graph traversal algorithms to provide clinical research study traceability queries, validation, and visualization. Trace-XML was evaluated using analytical and qualitative methods. The analytical methods show that Trace-XML accurately and completely assesses metadata traceability within a clinical research study. A qualitative study used thematic analysis of interview data to show that Trace-XML adds utility to a researcher's ability to evaluate metadata traceability within a study. Conclusion Trace-XML benefits include features that (1) identify traceability gaps in clinical study metadata, (2) validate metadata traceability within a clinical study, and (3) query and visualize traceability metadata. 
The key themes that emerged from the qualitative evaluation affirm that Trace-XML adds utility to the task of creating and assessing end-to-end clinical research study traceability.
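The gap-detection idea described above can be sketched as a plain reachability check over derivation edges. This is a hedged illustration only: the variable names and the edge-list representation are invented for the example and are not Trace-XML's actual metadata model.

```python
def traceability_gaps(edges, sources, targets):
    """Return the targets not reachable from any source via derivation edges.

    edges: list of (from_var, to_var) derivation links.
    Unreachable targets correspond to traceability gaps in the metadata.
    """
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
    seen, stack = set(sources), list(sources)
    while stack:
        u = stack.pop()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return {t for t in targets if t not in seen}
```

A target variable with no derivation path back to a collected source would be flagged as a gap.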
APA, Harvard, Vancouver, ISO, and other styles
24

Bonerath, Annika, Yu Dong, and Jan-Henrik Haunert. "A Data Structure for the Interactive Exploration of Public Transportation Networks". Advances in Cartography and GIScience of the ICA 4 (7.08.2023): 1–8. http://dx.doi.org/10.5194/ica-adv-4-1-2023.

Full text of the source
Abstract:
Abstract. To understand the quality of a public transportation network, map-based visualizations are the first choice. Since the network quality depends on the transportation schedule, the visualization interface should also incorporate time. In this work, we provide such an interface where users can filter the data for time windows. When a user queries for a certain time window, we display on the map all network segments that are traversed by at least θ means of public transportation (e.g., buses) in that time window. Since such transportation networks can contain a high number of network segments, testing each network segment at the moment when the user specifies the time-window query is not fast enough. We investigate an approach to provide the query result in real time. Our contribution is a data structure that answers such arbitrary time-window queries from the user interface. The data structure is based on a tree structure which is augmented with further information. To evaluate our data structure, we perform experiments on real-world data. With our data structure, we answer time-window queries in at most 25 milliseconds, whereas testing each network segment on demand takes at least 60 milliseconds for the data set that has 102,599 road segments.
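The query semantics can be sketched with the brute-force per-segment test that the paper's tree structure is designed to avoid. The segment ids, timestamps, and θ below are invented for illustration; only the counting semantics follow the abstract.

```python
from bisect import bisect_left, bisect_right

# Hypothetical traversal log: segment id -> sorted traversal times (minutes).
traversals = {
    "s1": [10, 20, 30, 45],
    "s2": [15],
    "s3": [5, 50, 55],
}

def segments_in_window(start, end, theta):
    """Return segments traversed by at least `theta` vehicles in [start, end].

    This is the slow on-demand baseline: every segment is tested at query
    time. The paper's augmented tree answers the same query much faster.
    """
    result = []
    for seg, times in traversals.items():
        count = bisect_right(times, end) - bisect_left(times, start)
        if count >= theta:
            result.append(seg)
    return sorted(result)
```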
APA, Harvard, Vancouver, ISO, and other styles
25

Pratt, Michael P., Srinivas R. Geedipally, Bahar Dadashova, Lingtao Wu, and Mohammadali Shirazi. "Familiar versus Unfamiliar Drivers on Curves: Naturalistic Data Study". Transportation Research Record: Journal of the Transportation Research Board 2673, no. 6 (16.05.2019): 225–35. http://dx.doi.org/10.1177/0361198119846481.

Full text of the source
Abstract:
Human factors studies have shown that route familiarity affects driver behavior in various ways. Specifically, when drivers become more familiar with a roadway, they pay less attention to signs, adopt higher speeds, cut curves more noticeably, and exhibit slower reaction times to stimuli in their peripheral vision. Numerous curve speed models have been developed for purposes such as predicting driver behavior, evaluating roadway design consistency, and setting curve advisory speeds. These models are typically calibrated using field data, which gives information about driver behavior in relation to speed and sometimes lane placement, but does not provide insights into the drivers themselves. The objective of this paper is to examine the differences between the speeds of familiar and unfamiliar drivers as they traverse curves. The authors identified four two-lane rural highway sections in the State of Indiana which include multiple horizontal curves, and queried the Second Strategic Highway Research Program (SHRP2) database to obtain roadway inventory and naturalistic driving data for traversals through these curves. The authors applied a curve speed prediction model from the literature to predict the speed at the curve midpoints and compared the predicted speeds with observed speeds. The results of the analysis confirm earlier findings that familiar drivers choose higher speeds through curves. The successful use of the SHRP2 database for this analysis of route familiarity shows that the database can facilitate similar efforts for a wider range of driver behavior and human factors issues.
APA, Harvard, Vancouver, ISO, and other styles
26

Amit, Yali, and Donald Geman. "Shape Quantization and Recognition with Randomized Trees". Neural Computation 9, no. 7 (1.10.1997): 1545–88. http://dx.doi.org/10.1162/neco.1997.9.7.1545.

Full text of the source
Abstract:
We explore a new approach to shape recognition based on a virtually infinite family of binary features (queries) of the image data, designed to accommodate prior information about shape invariance and regularity. Each query corresponds to a spatial arrangement of several local topographic codes (or tags), which are in themselves too primitive and common to be informative about shape. All the discriminating power derives from relative angles and distances among the tags. The important attributes of the queries are a natural partial ordering corresponding to increasing structure and complexity; semi-invariance, meaning that most shapes of a given class will answer the same way to two queries that are successive in the ordering; and stability, since the queries are not based on distinguished points and substructures. No classifier based on the full feature set can be evaluated, and it is impossible to determine a priori which arrangements are informative. Our approach is to select informative features and build tree classifiers at the same time by inductive learning. In effect, each tree provides an approximation to the full posterior where the features chosen depend on the branch that is traversed. Due to the number and nature of the queries, standard decision tree construction based on a fixed-length feature vector is not feasible. Instead we entertain only a small random sample of queries at each node, constrain their complexity to increase with tree depth, and grow multiple trees. The terminal nodes are labeled by estimates of the corresponding posterior distribution over shape classes. An image is classified by sending it down every tree and aggregating the resulting distributions. The method is applied to classifying handwritten digits and synthetic linear and nonlinear deformations of three hundred [Formula: see text] symbols. State-of-the-art error rates are achieved on the National Institute of Standards and Technology database of digits. 
The principal goal of the experiments on [Formula: see text] symbols is to analyze invariance, generalization error, and related issues; a comparison with artificial neural network methods is presented in this context.
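The final aggregation step described above, sending an image down every tree and averaging the leaf posteriors, can be sketched as follows. The two "trees" here are stubs with hard-coded splits and posteriors; only the averaging-and-argmax logic reflects the method.

```python
# Each "tree" routes an input down branches of binary queries and returns an
# estimated posterior over shape classes at its leaf. These stubs stand in
# for trees grown by inductive learning.
def tree_a(x):
    return {"0": 0.7, "1": 0.3} if x["query1"] else {"0": 0.2, "1": 0.8}

def tree_b(x):
    return {"0": 0.6, "1": 0.4} if x["query2"] else {"0": 0.1, "1": 0.9}

def classify(x, trees):
    """Average the per-tree posteriors and return (label, distribution)."""
    agg = {}
    for t in trees:
        for cls, p in t(x).items():
            agg[cls] = agg.get(cls, 0.0) + p
    n = len(trees)
    agg = {cls: p / n for cls, p in agg.items()}
    return max(agg, key=agg.get), agg
```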
APA, Harvard, Vancouver, ISO, and other styles
27

ATALAY, F. BETUL, and DAVID M. MOUNT. "POINTERLESS IMPLEMENTATION OF HIERARCHICAL SIMPLICIAL MESHES AND EFFICIENT NEIGHBOR FINDING IN ARBITRARY DIMENSIONS". International Journal of Computational Geometry & Applications 17, no. 06 (December 2007): 595–631. http://dx.doi.org/10.1142/s0218195907002495.

Full text of the source
Abstract:
We describe a pointerless representation of hierarchical regular simplicial meshes, based on a bisection approach proposed by Maubach. We introduce a new labeling scheme, called an LPT code, which uniquely encodes the geometry of each simplex of the hierarchy, and we present rules to compute the neighbors of a given simplex efficiently through the use of these codes. In addition, we show how to traverse the associated tree and how to answer point location and interpolation queries. Our system works in arbitrary dimensions.
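The LPT codes themselves encode bisected simplices, but the underlying pointerless principle can be shown in its simplest form: nodes are identified by indices alone, and parent/child relations are computed arithmetically rather than stored as pointers. This is a generic illustration, not the paper's labeling scheme.

```python
# Pointerless binary hierarchy: no node objects, no stored links.
def children(i):
    # children of node i in an implicit binary tree
    return 2 * i + 1, 2 * i + 2

def parent(i):
    return (i - 1) // 2

def depth(i):
    """Number of subdivision steps from the root to node i."""
    d = 0
    while i > 0:
        i = parent(i)
        d += 1
    return d
```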
APA, Harvard, Vancouver, ISO, and other styles
28

Sun, Yuhan, and Mohamed Sarwat. "Riso-Tree: An Efficient and Scalable Index for Spatial Entities in Graph Database Management Systems". ACM Transactions on Spatial Algorithms and Systems 7, no. 3 (16.06.2021): 1–39. http://dx.doi.org/10.1145/3450945.

Full text of the source
Abstract:
With the ubiquity of spatial data, vertexes or edges in graphs can possess spatial location attributes side by side with other non-spatial attributes. For instance, as of June 2018, the Wikidata knowledge graph contains 48,547,142 data items (i.e., vertexes) and 13% of them have spatial location attributes. The article proposes Riso-Tree, a generic, efficient, and scalable indexing framework for spatial entities in graph database management systems. Riso-Tree enables the fast execution of graph queries that involve different types of spatial predicates (GraSp queries). The proposed framework augments the classic R-Tree structure with pre-materialized sub-graph entries. The pruning power of R-Tree is enhanced with the sub-graph information. Riso-Tree partitions the graph into sub-graphs based on their connectivity to the spatial sub-regions. The proposed index allows for the fast execution of GraSp queries by efficiently pruning the traversed vertexes/edges based upon the materialized sub-graph information. The experiments show that the proposed Riso-Tree achieves up to two orders of magnitude faster execution time than its counterparts when executing GraSp queries on real graphs (e.g., Wikidata). The strategy of limiting the size of each sub-graph entry (PNmax) is proposed to reduce the storage overhead of Riso-Tree. The strategy can save up to around 70% storage without harming the query performance, according to the experiments. Another strategy, Irrelevant Vertexes Skipping, is proposed to ensure the performance of index maintenance. The experiments show that the strategy can improve performance, especially for slow updates. This proves that Riso-Tree is useful for applications that need to support frequent updates.
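The pruning idea, spatial entries that carry pre-materialized sub-graph information, can be sketched with a flat list of leaf entries. The bounding boxes and vertex ids are invented for the example; a real Riso-Tree arranges such entries hierarchically inside an R-Tree.

```python
def bbox_intersects(a, b):
    """Axis-aligned box overlap test; boxes are (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

# Hypothetical leaf entries: (bounding box, ids of graph vertexes connected
# to that spatial region), in the spirit of pre-materialized sub-graph entries.
entries = [
    ((0, 0, 2, 2), {"v1", "v2"}),
    ((5, 5, 8, 8), {"v3"}),
]

def candidate_vertices(query_box):
    """Prune entries whose region misses the query; union the rest."""
    out = set()
    for box, verts in entries:
        if bbox_intersects(box, query_box):
            out |= verts
    return out
```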
APA, Harvard, Vancouver, ISO, and other styles
29

Hechenberger, Ryan, Peter J. Stuckey, Daniel Harabor, Pierre Le Bodic, and Muhammad Aamir Cheema. "Online Computation of Euclidean Shortest Paths in Two Dimensions". Proceedings of the International Conference on Automated Planning and Scheduling 30 (1.06.2020): 134–42. http://dx.doi.org/10.1609/icaps.v30i1.6654.

Full text of the source
Abstract:
We consider the online version of Euclidean Shortest Path (ESP): a problem that asks for distance-minimal trajectories between traversable pairs of points in the plane. The problem is challenging because each trajectory must avoid a set of non-traversable obstacles represented as polygons. To solve ESP, practitioners often preprocess the environment and construct specialised data structures, such as visibility graphs and navigation meshes. Pathfinding queries on these data structures are fast, but the preprocessed data becomes invalidated when obstacles move or change. In this work we propose RayScan, a new algorithmic approach for ESP which is entirely online. The central idea is simple: each time we expand a node we also cast a ray toward the target. If an obstacle intersects the ray, we scan its perimeter for a turning point, i.e., a vertex from which a new ray can continue unimpeded towards the target. RayScan is fast and optimal. Experiments show that it can significantly improve upon current state-of-the-art methods for ESP in cases where the set of obstacles is dynamic.
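The ray-casting step relies on a standard geometric primitive: testing whether the straight segment toward the target is blocked by any obstacle edge. The sketch below shows that primitive only (it ignores degenerate collinear touching cases and does not perform the perimeter scan itself); the polygon data is invented for the example.

```python
def ccw(a, b, c):
    """Signed area orientation test for points a, b, c."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """Proper (non-degenerate) segment intersection test."""
    d1, d2 = ccw(q1, q2, p1), ccw(q1, q2, p2)
    d3, d4 = ccw(p1, p2, q1), ccw(p1, p2, q2)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def first_blocking_obstacle(s, t, obstacles):
    """Return an obstacle polygon whose boundary blocks the ray s -> t,
    or None if the segment is unimpeded."""
    for poly in obstacles:
        for i in range(len(poly)):
            if segments_intersect(s, t, poly[i], poly[(i + 1) % len(poly)]):
                return poly
    return None
```

When a blocking polygon is found, RayScan would scan its perimeter for a turning point and recurse from there.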
APA, Harvard, Vancouver, ISO, and other styles
30

Cox, Steven, Stanley C. Ahalt, James Balhoff, Chris Bizon, Karamarie Fecho, Yaphet Kebede, Kenneth Morton, Alexander Tropsha, Patrick Wang, and Hao Xu. "Visualization Environment for Federated Knowledge Graphs: Development of an Interactive Biomedical Query Language and Web Application Interface". JMIR Medical Informatics 8, no. 11 (23.11.2020): e17964. http://dx.doi.org/10.2196/17964.

Full text of the source
Abstract:
Background Efforts are underway to semantically integrate large biomedical knowledge graphs using common upper-level ontologies to federate graph-oriented application programming interfaces (APIs) to the data. However, federation poses several challenges, including query routing to appropriate knowledge sources, generation and evaluation of answer subsets, semantic merger of those answer subsets, and visualization and exploration of results. Objective We aimed to develop an interactive environment for query, visualization, and deep exploration of federated knowledge graphs. Methods We developed a biomedical query language and web application interface, termed Translator Query Language (TranQL), to query semantically federated knowledge graphs and explore query results. TranQL uses the Biolink data model as an upper-level biomedical ontology and an API standard that has been adopted by the Biomedical Data Translator Consortium to specify a protocol for expressing a query as a graph of Biolink data elements compiled from statements in the TranQL query language. Queries are mapped to federated knowledge sources, and answers are merged into a knowledge graph, with mappings between the knowledge graph and specific elements of the query. The TranQL interactive web application includes a user interface to support user exploration of the federated knowledge graph. Results We developed 2 real-world use cases to validate TranQL and address biomedical questions of relevance to translational science. The use cases posed questions that traversed 2 federated Translator API endpoints: Integrated Clinical and Environmental Exposures Service (ICEES) and Reasoning Over Biomedical Objects linked in Knowledge Oriented Pathways (ROBOKOP). ICEES provides open access to observational clinical and environmental data, and ROBOKOP provides access to linked biomedical entities, such as “gene,” “chemical substance,” and “disease,” that are derived largely from curated public data sources. We successfully posed queries to TranQL that traversed these endpoints and retrieved answers that we visualized and evaluated. Conclusions TranQL can be used to ask questions of relevance to translational science, rapidly obtain answers that require assertions from a federation of knowledge sources, and provide valuable insights for translational research and clinical practice.
APA, Harvard, Vancouver, ISO, and other styles
31

De la Fuente López, Laura. "Traducir el Caribe, una travesía rizomática. Propuestas para una traducción feminista y descolonial a partir de "Traversée de la Mangrove", de Maryse Condé". Anales de Filología Francesa 28, no. 1 (20.10.2020): 113–34. http://dx.doi.org/10.6018/analesff.425901.

Full text of the source
Abstract:
Over the centuries, the Caribbean has witnessed the meetings and clashes of many peoples, whose seeds have borne fruit in unexpected worldviews and languages. From this space emerge resistant voices that seek to break the epistemological and literary silence that Western hegemony has sought to impose on them; this is the case of the Guadeloupean author Maryse Condé. Within this framework, this article reflects on the interaction between identity conflicts linked to ethnicity, gender, and language, all caused by the wound of colonialism, through the example of Traversée de la Mangrove. The text presents a complex counterpoint in which the oral culture of the ancestors and the written culture of the colonizers are in dialogue, which poses great translation challenges. Our aim is thus to suggest a series of perspectives and strategies for undertaking an ethical translation committed to the author's project.
APA, Harvard, Vancouver, ISO, and other styles
32

Manjula, D., and T. V. Geetha. "Semantic Search Engine". Journal of Information & Knowledge Management 03, no. 01 (March 2004): 107–17. http://dx.doi.org/10.1142/s0219649204000729.

Full text of the source
Abstract:
Currently existing search engines index documents only by words; as a result, when a query can be interpreted in different senses, irrelevant results are obtained in the midst of relevant results. A semantic search engine is proposed here which indexes documents both by words and by senses and thereby tries to avoid irrelevant results. The crawler traverses the World Wide Web, and the normalized documents are sent to the disambiguator module, which identifies the top few sense(s) of ambiguous words by employing a weighted disambiguation algorithm. The documents are then indexed by both the words and the senses. The query is disambiguated in a similar manner, and retrieval is performed by matching both the sense and the word. The performance of the semantic search engine is compared against traditional word-based indexing and also against commercial search engines like Google, Yahoo, Hotbot and Lycos. The results show an impressive precision for the semantic search engine compared to other engines, particularly for ambiguous queries.
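The dual word-and-sense indexing scheme can be sketched with two inverted indexes kept side by side. The words, senses, and document ids below are invented, and the disambiguator is assumed to have already tagged each token; only the index layout and the sense-first lookup reflect the idea.

```python
from collections import defaultdict

word_index = defaultdict(set)            # word -> doc ids
sense_index = defaultdict(set)           # (word, sense) -> doc ids

def index_doc(doc_id, tokens):
    """tokens: (word, sense) pairs produced by a hypothetical disambiguator."""
    for word, sense in tokens:
        word_index[word].add(doc_id)
        sense_index[(word, sense)].add(doc_id)

def search(word, sense=None):
    """Prefer the sense posting list; fall back to word-only matching."""
    if sense is not None and (word, sense) in sense_index:
        return sense_index[(word, sense)]
    return word_index[word]
```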
APA, Harvard, Vancouver, ISO, and other styles
33

Mackenzie, Joel, Matthias Petri, and Alistair Moffat. "Anytime Ranking on Document-Ordered Indexes". ACM Transactions on Information Systems 40, no. 1 (31.01.2022): 1–32. http://dx.doi.org/10.1145/3467890.

Full text of the source
Abstract:
Inverted indexes continue to be a mainstay of text search engines, allowing efficient querying of large document collections. While there are a number of possible organizations, document-ordered indexes are the most common, since they are amenable to various query types, support index updates, and allow for efficient dynamic pruning operations. One disadvantage with document-ordered indexes is that high-scoring documents can be distributed across the document identifier space, meaning that index traversal algorithms that terminate early might put search effectiveness at risk. The alternative is impact-ordered indexes, which primarily support top-k disjunctions but also allow for anytime query processing, where the search can be terminated at any time, with search quality improving as processing latency increases. Anytime query processing can be used to effectively reduce the high-percentile tail latency that is essential for operational scenarios in which a service level agreement (SLA) imposes response time requirements. In this work, we show how document-ordered indexes can be organized such that they can be queried in an anytime fashion, enabling strict latency control with effective early termination. Our experiments show that processing document-ordered topical segments selected by a simple score estimator outperforms existing anytime algorithms, and allows query runtimes to be accurately limited to comply with SLA requirements.
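The anytime pattern, process the most promising index segments first and stop when the latency budget runs out, can be sketched as follows. The segment tuples (estimated gain, processing cost, postings) are invented for illustration; the paper's score estimator and index layout are more involved.

```python
import heapq

def anytime_topk(segments, k, budget):
    """Process segments in decreasing estimated gain until the budget is
    spent, maintaining a top-k heap of (score, doc_id) results.

    segments: list of (estimated_gain, cost, [(doc_id, score), ...]).
    Stopping early yields a valid (if possibly degraded) answer: the
    essence of anytime query processing.
    """
    order = sorted(segments, key=lambda s: -s[0])
    heap, spent = [], 0
    for est, cost, postings in order:
        if spent + cost > budget:      # early termination
            break
        spent += cost
        for doc, score in postings:
            if len(heap) < k:
                heapq.heappush(heap, (score, doc))
            elif score > heap[0][0]:
                heapq.heapreplace(heap, (score, doc))
    return sorted(heap, reverse=True)
```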
APA, Harvard, Vancouver, ISO, and other styles
34

Portaneri, Cédric, Mael Rouxel-Labbé, Michael Hemmer, David Cohen-Steiner, and Pierre Alliez. "Alpha wrapping with an offset". ACM Transactions on Graphics 41, no. 4 (July 2022): 1–22. http://dx.doi.org/10.1145/3528223.3530152.

Full text of the source
Abstract:
Given an input 3D geometry such as a triangle soup or a point set, we address the problem of generating a watertight and orientable surface triangle mesh that strictly encloses the input. The output mesh is obtained by greedily refining and carving a 3D Delaunay triangulation on an offset surface of the input, while carving with empty balls of radius alpha. The proposed algorithm is controlled via two user-defined parameters: alpha and offset. Alpha controls the size of cavities or holes that cannot be traversed during carving, while offset controls the distance between the vertices of the output mesh and the input. Our algorithm is guaranteed to terminate and to yield a valid and strictly enclosing mesh, even for defect-laden inputs. Genericity is achieved using an abstract interface probing the input, enabling any geometry to be used, provided a few basic geometric queries can be answered. We benchmark the algorithm on large public datasets such as Thingi10k, and compare it to state-of-the-art approaches in terms of robustness, approximation, output complexity, speed, and peak memory consumption. Our implementation is available through the CGAL library.
APA, Harvard, Vancouver, ISO, and other styles
35

Craig, Cheryl J. "Opportunities and Challenges in Representing Narrative Inquiries Digitally". Teachers College Record: The Voice of Scholarship in Education 115, no. 4 (April 2013): 1–45. http://dx.doi.org/10.1177/016146811311500405.

Full text of the source
Abstract:
Background/Context Within the context of four locally funded research projects, the researcher was asked to disseminate the findings of her narrative inquiries not to the research community, as had previously been the case, but to the practice and philanthropic communities. This, in turn, created a representational crisis because practitioners and philanthropists typically do not read research reports. Purposes/Objectives/Research Question/Focus of Study In this paper, two sources previously cut off from one another, the narrative inquiry research method and the digital storytelling approach, were brought together to inform how the live research projects became represented. Setting The four research endeavors, all involving arts-based instruction and all funded by the same reform movement, were undertaken in four different school sites serving primarily underserved minority youth in the fourth largest city in the U.S. Population/Participants/Subjects The participants were mainly teachers, although some principals, students, and grandparents contributed to certain digital representations. Research assistants were also highly involved. Conclusion This meta-level "inquiry into inquiry" traversed all four narrative inquiries and the digital exemplars produced for each to show how digital narrative inquiries (narrative inquiries represented through digital story) attend to eight considerations: relationship, perspective, authorial voice, cultural/contextual considerations, relevance, negotiation, audience, and technology. While this "inquiry into inquiry" addresses definitional and other queries at the intersection where narrative inquiry and digital story meet, other questions remain to be addressed that will necessitate future research.
APA, Harvard, Vancouver, ISO, and other styles
36

Raza, Shaan M., Angela M. Donaldson, Alpesh Mehta, Apostolos J. Tsiouris, Vijay K. Anand, and Theodore H. Schwartz. "Surgical management of trigeminal schwannomas: defining the role for endoscopic endonasal approaches". Neurosurgical Focus 37, no. 4 (October 2014): E17. http://dx.doi.org/10.3171/2014.7.focus14341.

Full text of the source
Abstract:
Object Because multiple anatomical compartments are involved, the surgical management of trigeminal schwannomas requires a spectrum of cranial base approaches. The endoscopic endonasal approach to Meckel's cave provides a minimal access corridor for surgery, but few reports have assessed outcomes of the procedure or provided guidelines for case selection. Methods A prospectively acquired database of 680 endoscopic endonasal cases was queried for trigeminal schwannoma cases. Clinical charts, radiographic images, and long-term outcomes were reviewed to determine outcome and success in removing tumor from each compartment traversed by the trigeminal nerve. Results Four patients had undergone endoscopic resection of trigeminal schwannomas via the transpterygoid approach (mean follow-up 37 months). All patients had disease within Meckel's cave, and 1 patient had extension into the posterior fossa. Gross-total resection was achieved in 3 patients whose tumors were purely extracranial. One patient with combined Meckel's cave and posterior fossa tumor had complete resection of the extracranial disease and 52% resection of the posterior fossa disease. One patient with posterior fossa disease experienced a sixth cranial nerve palsy in addition to a corneal keratopathy from worsened trigeminal neuropathy. There were no CSF leaks. Over the course of the study, 1 patient with subtotal resection required subsequent stereotactic radiosurgery for disease progression within the posterior fossa. Conclusions Endoscopic endonasal approaches appear to be well suited for trigeminal schwannomas restricted to Meckel's cave and/or extracranial segments of the nerve. Lateral transcranial skull base approaches should be considered for patients with posterior fossa disease. Further multiinstitutional studies will be necessary for adequate power to help determine relative indications between endoscopic and transcranial skull base approaches.
APA, Harvard, Vancouver, ISO, and other styles
37

Duponnois, Robin, Ezékiel Baudoin, Hervé Sanguin, Jean Thioulouse, Christine Le Roux, Estelle Tournier, Antoine Galiana, Yves Prin, and Bernard Dreyfus. "L'introduction d'acacias australiens pour réhabiliter des écosystèmes dégradés est-elle dépourvue de risques environnementaux ?" BOIS & FORETS DES TROPIQUES 318, no. 318 (1.12.2013): 59. http://dx.doi.org/10.19182/bft2013.318.a20519.

Full text of the source
Abstract:
The use of exotic forest species, and in particular of fast-growing trees (acacias, pines, or eucalyptus), has frequently been recommended for the rapid rehabilitation and restoration of environments degraded by natural events or human activities. The environmental impact of introducing these sometimes invasive species is mostly assessed in terms of plant biodiversity and the physico-chemical characteristics of soils, but rarely with respect to the composition of the soil microflora. Microorganisms, and mycorrhizal fungi in particular, play a key role in the biological mechanisms governing the chemical fertility and productivity of soils, which are factors in the stability of terrestrial ecosystems. The approach adopted here was to describe the impact of introducing exotic species on the biological characteristics of soils, as well as the consequences for re-establishing plant cover composed of species native to the original environment. After recalling the importance of acacia use throughout the world, two studies carried out in Senegal and Algeria showed that two Australian acacias, Acacia holosericea and Acacia mearnsii, induce profound changes in the functional diversity of the soil microflora and in the structure of symbiotic microorganism communities (mycorrhizal fungi and rhizobia). These acacias inhibit the growth of two native forest species, Faidherbia albida and Quercus suber. The results confirm the need to understand the biological processes involved in the introduction of exotic species in order to moderate their use. This knowledge will help prevent risks and ensure the success of reforestation operations, particularly for the rehabilitation of degraded land.
APA, Harvard, Vancouver, ISO, and other styles
38

Buell, Thomas J., Ulas Yener, Tony R. Wang, Avery L. Buchholz, Chun-Po Yen, Mark E. Shaffrey, Christopher I. Shaffrey, and Justin S. Smith. "Sacral insufficiency fractures after lumbosacral arthrodesis: salvage lumbopelvic fixation and a proposed management algorithm". Journal of Neurosurgery: Spine 33, no. 2 (August 2020): 225–36. http://dx.doi.org/10.3171/2019.12.spine191148.

Full text of the source
Abstract:
OBJECTIVE Sacral insufficiency fracture after lumbosacral (LS) arthrodesis is an uncommon complication. The objective of this study was to report the authors’ operative experience managing this complication, review pertinent literature, and propose a treatment algorithm. METHODS The authors analyzed consecutive adult patients treated at their institution from 2009 to 2018. Patients who underwent surgery for sacral insufficiency fractures after posterior instrumented LS arthrodesis were included. PubMed was queried to identify relevant articles detailing management of this complication. RESULTS Nine patients with a minimum 6-month follow-up were included (mean age 73 ± 6 years, BMI 30 ± 6 kg/m2, 56% women, mean follow-up 35 months, range 8–96 months). Six patients had osteopenia/osteoporosis (mean dual energy x-ray absorptiometry hip T-score −1.6 ± 0.5) and 3 received treatment. Index LS arthrodesis was performed for spinal stenosis (n = 6), proximal junctional kyphosis (n = 2), degenerative scoliosis (n = 1), and high-grade spondylolisthesis (n = 1). Presenting symptoms of back/leg pain (n = 9) or lower extremity weakness (n = 3) most commonly occurred within 4 weeks of index LS arthrodesis, which prompted CT for fracture diagnosis at a mean of 6 weeks postoperatively. All sacral fractures were adjacent to or involved S1 screws and traversed the spinal canal (Denis zone III). H-, U-, or T-type sacral fracture morphology was identified in 7 patients. Most fractures (n = 8) were Roy-Camille type II (anterior displacement with kyphosis). All patients underwent lumbopelvic fixation via a posterior-only approach; mean operative duration and blood loss were 3.3 hours and 850 ml, respectively. Bilateral dual iliac screws were utilized in 8 patients. Back/leg pain and weakness improved postoperatively.
Mean sacral fracture anterolisthesis and kyphotic angulation improved (from 8 mm/11° to 4 mm/5°, respectively) and all fractures were healed on radiographic follow-up (mean duration 29 months, range 8–90 months). Two patients underwent revision for rod fractures at 1 and 2 years postoperatively. A literature review found 17 studies describing 87 cases; potential risk factors were osteoporosis, longer fusions, high pelvic incidence (PI), and postoperative PI-to-lumbar lordosis (LL) mismatch. CONCLUSIONS A high index of suspicion is needed to diagnose sacral insufficiency fracture after LS arthrodesis. A trial of conservative management is reasonable for select patients; potential surgical indications include refractory pain, neurological deficit, fracture nonunion with anterolisthesis or kyphotic angulation, L5–S1 pseudarthrosis, and spinopelvic malalignment. Lumbopelvic fixation with iliac screws may be effective salvage treatment to allow fracture healing and symptom improvement. High-risk patients may benefit from prophylactic lumbopelvic fixation at the time of index LS arthrodesis.
APA, Harvard, Vancouver, ISO, and other styles
39

Lee, Kuo-Kai, Wing-Kai Hon, Chung-Shou Liao, Kunihiko Sadakane, and Meng-Tsung Tsai. "Fully Dynamic No-Back-Edge-Traversal Forest via 2D-Range Queries". International Journal of Computational Geometry & Applications, 9.02.2023, 1–12. http://dx.doi.org/10.1142/s0218195922410047.

Full text of the source
Abstract:
Orthogonal range search is ubiquitous nowadays, with natural applications in databases, data mining, and text indexing. Very recently, yet another application was discovered: maintaining a DFS forest in a dynamic graph. In this paper, we extend that recent study by applying orthogonal range search to the efficient maintenance of a BFS-like forest, called a no-back-edge-traversal (NBET) forest, which refers to a spanning forest obtained from a traversal that does not create any back edge. The study of this problem is motivated by the fact that an NBET forest can be used as a strong certificate of 2-connectivity of an undirected graph, which is more general than a spanning forest obtained from a scan-first search traversal.
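For readers unfamiliar with the primitive being applied, an orthogonal range query simply reports the points inside an axis-aligned rectangle. The brute-force sketch below states the semantics; the structures used in work like this (e.g., range trees) answer the same query in polylogarithmic time.

```python
def range_search(points, x_lo, x_hi, y_lo, y_hi):
    """Report all points inside the axis-aligned query rectangle."""
    return [(x, y) for (x, y) in points
            if x_lo <= x <= x_hi and y_lo <= y <= y_hi]
```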
APA, Harvard, Vancouver, ISO and other styles
40

Tesfaye, Bezaye, Nikolaus Augsten, Mateusz Pawlik, Michael H. Böhlen and Christian S. Jensen. "Speeding Up Reachability Queries in Public Transport Networks Using Graph Partitioning". Information Systems Frontiers, 14.08.2021. http://dx.doi.org/10.1007/s10796-021-10164-2.

Full text source
Abstract:
Computing path queries such as the shortest path in public transport networks is challenging because the path costs between nodes change over time. A reachability query from a node at a given start time on such a network retrieves all points of interest (POIs) that are reachable within a given cost budget. Reachability queries are essential building blocks in many applications, for example, group recommendations, ranking spatial queries, or geomarketing. We propose an efficient solution for reachability queries in public transport networks. Currently, there are two options to solve reachability queries. (1) Execute a modified version of Dijkstra's algorithm that supports time-dependent edge traversal costs; this solution is slow since it must expand edge by edge and does not use an index. (2) Issue a separate path query for each single POI, i.e., a single reachability query requires answering many path queries. Neither of these solutions scales to large networks with many POIs. We propose a novel and lightweight reachability index. The key idea is to partition the network into cells. Then, in contrast to other approaches, we expand the network cell by cell. Empirical evaluations on synthetic and real-world networks confirm the efficiency and the effectiveness of our index-based reachability query solution.
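Option (1) above, the non-indexed baseline the paper improves on, can be sketched as a Dijkstra variant with time-dependent edge costs and a budget cutoff. The graph encoding and the `cost_fn(t)` signature (cost of leaving a node at time `t`, e.g. waiting plus riding time) are assumptions for illustration, not the paper's interface.

```python
import heapq

def reachable(graph, source, start_time, budget):
    """Time-dependent Dijkstra: return all nodes reachable from `source`
    within `budget` cost units when departing at `start_time`.

    graph[u] is a list of (v, cost_fn) pairs, where cost_fn(t) is the cost
    of traversing the edge when leaving u at time t.
    """
    best = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, u = heapq.heappop(heap)
        if cost > best.get(u, float("inf")):
            continue                                  # stale heap entry
        for v, cost_fn in graph.get(u, []):
            nc = cost + cost_fn(start_time + cost)    # time-dependent cost
            if nc <= budget and nc < best.get(v, float("inf")):
                best[v] = nc
                heapq.heappush(heap, (nc, v))
    return set(best)
```

For example, if the edge a→b costs 5 when departing before time 10 (waiting for the next connection) and b→c costs 3, then a budget of 7 reaches only {a, b}, while a budget of 8 also reaches c.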
APA, Harvard, Vancouver, ISO and other styles
41

Piskachev, Goran, Johannes Späth, Ingo Budde and Eric Bodden. "Fluently specifying taint-flow queries with fluentTQL". Empirical Software Engineering 27, no. 5 (30.05.2022). http://dx.doi.org/10.1007/s10664-022-10165-y.

Full text source
Abstract:
Previous work has shown that taint analyses are only useful if correctly customized to the context in which they are used. Existing domain-specific languages (DSLs) allow such customization through the definition of deny-listing data-flow rules that describe potentially vulnerable or malicious taint-flows. These languages, however, are designed primarily for security experts, who are expected to be knowledgeable in taint analysis. Software developers, by contrast, consider these languages complex. This paper thus presents fluentTQL, a query specification language particularly for taint-flows. fluentTQL is an internal Java DSL and uses a fluent-interface design. fluentTQL queries can express various taint-style vulnerability types, e.g. injections, cross-site scripting, or path traversal. This paper describes fluentTQL's abstract and concrete syntax and defines its runtime semantics. The semantics are independent of any underlying analysis and allow evaluation of fluentTQL queries by a variety of taint analyses. Instantiations of fluentTQL on top of two taint analysis solvers, Boomerang and FlowDroid, show and validate fluentTQL's expressiveness. Based on existing examples from the literature, we have used fluentTQL to implement queries for 11 popular security vulnerability types in Java. Using our SQL injection specification, the Boomerang-based taint analysis found all 17 known taint-flows in the OWASP WebGoat application, whereas with FlowDroid 13 taint-flows were found. Similarly, in a vulnerable version of the Java Spring PetClinic application, the Boomerang-based taint analysis found all seven expected taint-flows. In seven real-world Android apps with 25 expected malicious taint-flows, 18 taint-flows were detected. In a user study with 26 software developers, fluentTQL reached a high usability score. In comparison to CodeQL, the state-of-the-art DSL by Semmle/GitHub, participants found fluentTQL more usable and were able to specify taint analysis queries in less time with it.
APA, Harvard, Vancouver, ISO and other styles
42

Jakob, J., and M. Guthe. "Optimizing LBVH-Construction and Hierarchy-Traversal to accelerate kNN Queries on Point Clouds using the GPU". Computer Graphics Forum, 28.10.2020. http://dx.doi.org/10.1111/cgf.14177.

Full text source
APA, Harvard, Vancouver, ISO and other styles
43

BOGAERTS, BART, BAS KETSMAN, YOUNES ZEBOUDJ, HEBA AAMER, RUBEN TAELMAN and RUBEN VERBORGH. "Distributed Subweb Specifications for Traversing the Web". Theory and Practice of Logic Programming, 25.04.2023, 1–27. http://dx.doi.org/10.1017/s1471068423000054.

Full text source
Abstract:
Link traversal-based query processing (LTQP), in which a SPARQL query is evaluated over a web of documents rather than a single dataset, is often seen as a theoretically interesting yet impractical technique. However, in a time where the hypercentralization of data has increasingly come under scrutiny, a decentralized Web of Data with a simple document-based interface is appealing, as it enables data publishers to control their data and access rights. While LTQP allows evaluating complex queries over such webs, it suffers from performance issues (due to the high number of documents containing data) as well as information quality concerns (due to the many sources providing such documents). In existing LTQP approaches, the burden of finding sources to query is entirely in the hands of the data consumer. In this paper, we argue that to solve these issues, data publishers should also be able to suggest sources of interest and guide the data consumer toward relevant and trustworthy data. We introduce a theoretical framework that enables such guided link traversal and study its properties. We illustrate with a theoretical example that this can improve query results and reduce the number of network requests. We evaluate our proposal experimentally on a virtual linked web with specifications and indeed observe that not just the data quality but also the efficiency of querying improves.
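The core LTQP loop that both this paper and the Bai & Khoja work above start from can be sketched as a dereference-and-follow traversal. Here `fetch(uri)` (returning a document's triples and its outgoing links) and `match(triple)` are hypothetical stand-ins for HTTP dereferencing and triple-pattern matching; real engines interleave this with query evaluation.

```python
def link_traversal_query(seed_uris, fetch, match, max_requests=100):
    """Minimal link-traversal loop: dereference documents starting from the
    seed URIs, collect triples matching the query pattern, and follow the
    links discovered along the way, up to a request budget."""
    seen, frontier, results = set(), list(seed_uris), []
    while frontier and len(seen) < max_requests:
        uri = frontier.pop()
        if uri in seen:
            continue                          # never dereference twice
        seen.add(uri)
        triples, links = fetch(uri)           # hypothetical dereference step
        results.extend(t for t in triples if match(t))
        frontier.extend(l for l in links if l not in seen)
    return results
```

Guided link traversal, as proposed in the paper, would constrain which discovered links enter the frontier based on publisher-provided subweb specifications.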
APA, Harvard, Vancouver, ISO and other styles
44

"Personalized Web Explore with Disguised User Contour Structure". International Journal of Engineering and Advanced Technology 8, no. 6S3 (22.11.2019): 1895–98. http://dx.doi.org/10.35940/ijeat.f1364.0986s319.

Full text source
Abstract:
Website structures are altered to improve user navigation. The web personalization method reconstructs page relations with respect to the traversal path and profile of a particular user. User data are collected and analyzed to infer the intention behind an issued query. User-customizable Privacy-preserving Search (UPS) is used to generalize profiles by queries with user privacy requirements. The greedy discriminating power algorithm (GreedyDP) is used to maximize the discriminating power of the user profiles. The greedy information loss algorithm (GreedyIL) is used to minimize the information loss in user profiles; GreedyIL achieves higher efficiency than GreedyDP. The Personalized Web Search (PWS) scheme is enhanced to handle topic-relationship-based knowledge attacks, and the UPS model is enhanced to resist query-session-based attacks. Query generalization is performed with query priority values. Anonymization and topic taxonomy models are used to improve the personalization process.
APA, Harvard, Vancouver, ISO and other styles
45

Fan, Wenfei, and Chao Tian. "Incremental Graph Computations: Doable and Undoable". ACM Transactions on Database Systems, 10.03.2022. http://dx.doi.org/10.1145/3500930.

Full text source
Abstract:
The incremental problem for a class 𝒬 of graph queries aims to compute, given a query Q ∈ 𝒬, a graph G, answers Q(G) to Q in G, and updates ΔG to G as input, the changes ΔO to the output Q(G) such that Q(G ⊕ ΔG) = Q(G) ⊕ ΔO. It is called bounded if its cost can be expressed as a polynomial function in the sizes of Q, ΔG, and ΔO, which reduces the computations on a possibly big G to the small ΔG and ΔO. No matter how desirable, however, our first results are negative: for common graph queries such as traversal, connectivity, keyword search, pattern matching, and maximum cardinality matching, the incremental problems are unbounded. In light of the negative results, we propose two characterizations for the effectiveness of incremental graph computation: (a) localizable, if its cost is decided by small neighbors of nodes in ΔG instead of the entire G; and (b) bounded relative to a batch graph algorithm 𝒯, if the cost is determined by the sizes of ΔG and the changes to the affected area that is necessarily checked by any algorithm that incrementalizes 𝒯. We show that the incremental computations above are either localizable or relatively bounded, by providing corresponding incremental algorithms. That is, we can either reduce the incremental computations on big graphs to small data, or incrementalize existing batch graph algorithms by minimizing unnecessary recomputation. Using real-life and synthetic data, we experimentally verify the effectiveness of our incremental algorithms.
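Connectivity under edge insertions, one of the query classes analyzed above, gives a concrete feel for locality: a union-find sketch touches only the two affected components on each update rather than recomputing over all of G. This is a textbook illustration of the localizable idea, not one of the paper's algorithms (which also handle deletions and other query classes).

```python
class IncrementalConnectivity:
    """Union-find over a fixed node set: each edge insertion merges at most
    two component representatives, so the work is local to the update."""

    def __init__(self, nodes):
        self.parent = {v: v for v in nodes}

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def insert_edge(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru != rv:
            self.parent[ru] = rv      # merge only the two affected components

    def connected(self, u, v):
        return self.find(u) == self.find(v)
```

Each `insert_edge` costs near-constant amortized time, independent of the size of G, which is exactly the behavior a boundedness analysis tries to certify.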
APA, Harvard, Vancouver, ISO and other styles
46

Garay-Ruiz, Diego, and Carles Bo. "Chemical reaction network knowledge graphs: the OntoRXN ontology". Journal of Cheminformatics 14, no. 1 (30.05.2022). http://dx.doi.org/10.1186/s13321-022-00610-x.

Full text source
Abstract:
The organization and management of large amounts of data has become a major point in almost all areas of human knowledge. In this context, semantic approaches propose a structure for the target data, defining ontologies that state the types of entities in a certain field and how these entities are interrelated. In this work, we introduce OntoRXN, a novel ontology describing the reaction networks constructed from computational chemistry calculations. Under our paradigm, these networks are handled as undirected graphs, without assuming any traversal direction. From there, we propose a core class structure including reaction steps, network stages, chemical species, and the lower-level entities for the individual computational calculations. These individual calculations are founded on the OntoCompChem ontology and on the ioChem-BD database, where information is parsed and stored in CML format. OntoRXN is introduced through several examples in which knowledge graphs based on the ontology are generated for different chemical systems available on ioChem-BD. Finally, the resulting knowledge graphs are explored through SPARQL queries, illustrating the power of the semantic approach to standardize the analysis of intricate datasets and to simplify the development of complex workflows.
APA, Harvard, Vancouver, ISO and other styles
47

Xing, Xiaogang, Yuling Chen, Tao Li, Yang Xin and Hongwei Sun. "A blockchain index structure based on subchain query". Journal of Cloud Computing 10, no. 1 (2.10.2021). http://dx.doi.org/10.1186/s13677-021-00268-0.

Full text source
Abstract:
Blockchain technology has the characteristics of decentralization and tamper resistance, which can store data safely and reduce the cost of trust effectively. However, the existing blockchain system has weak performance in data management and only supports traversal queries with transaction hashes as keywords. The query method based on the account transaction trace chain (ATTC) improves the query efficiency of historical transactions of the account. However, the efficiency of querying accounts with longer transaction chains has not been effectively improved. Given the inefficiency and limited query methods of the ATTC index, we propose a subchain-based account transaction chain (SCATC) index structure. First, the account transaction chain is divided into subchains, and the last block of each subchain is connected by a hash pointer. The block-by-block query mode in ATTC is converted into a subchain-by-subchain query mode, which shortens the query path. Multiple transactions of the same account in the same block are merged and stored, which simplifies the construction of the index and saves storage resources. Then, construction and query algorithms are given for the SCATC index structure. Simulation analysis shows that the SCATC index structure significantly improves query efficiency.
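The subchain-by-subchain query mode can be sketched with two small classes: each subchain tail stores that subchain's merged transactions for one account plus a pointer to the previous subchain's tail, so a history query hops pointer to pointer instead of block by block. This is a hypothetical simplification of the SCATC layout (real hash pointers and on-chain storage are omitted).

```python
class SubchainNode:
    """Tail block of one subchain: holds the account's merged transactions
    for that subchain and a pointer to the previous subchain's tail."""

    def __init__(self, txs, prev=None):
        self.txs = list(txs)   # transactions of this subchain, oldest first
        self.prev = prev       # stand-in for the hash pointer between tails

def query_account_txs(tail):
    """Walk subchain by subchain from the newest tail and return the
    account's full transaction history, oldest first."""
    txs = []
    node = tail
    while node is not None:
        txs.extend(reversed(node.txs))   # newest-to-oldest while walking back
        node = node.prev
    return txs[::-1]                     # restore chronological order
```

The query path length is the number of subchains, not the number of blocks, which is the source of the speedup the abstract describes.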
APA, Harvard, Vancouver, ISO and other styles
48

Evangelou, Iordanis, Georgios Papaioannou, Konstantinos Vardis and Anastasios Gkaravelis. "A neural builder for spatial subdivision hierarchies". Visual Computer, 24.07.2023. http://dx.doi.org/10.1007/s00371-023-02975-y.

Full text source
Abstract:
Spatial data structures, such as k-d trees and bounding volume hierarchies, are extensively used in computer graphics for the acceleration of spatial queries in ray tracing, nearest neighbour searches and other tasks. Typically, the splitting strategy employed during the construction of such structures is based on the greedy evaluation of a predefined objective function, resulting in a less than optimal subdivision scheme. In this work, for the first time, we propose the use of unsupervised deep learning to infer the structure of a fixed-depth k-d tree from a constant, subsampled set of the input primitives, based on the recursive evaluation of the cost function at hand. This results in a high-quality upper spatial hierarchy, inferred in constant time and without paying the intractable price of a fully recursive tree optimisation. The resulting fixed-depth tree can then be further expanded, in parallel, into either a full k-d tree or transformed into a bounding volume hierarchy, with any known conventional tree builder. The approach is generic enough to accommodate different cost functions, such as the popular surface area and volume heuristics. We experimentally validate that the resulting hierarchies have competitive traversal performance with respect to established tree builders, while maintaining minimal overhead in construction times.
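The "predefined objective function" greedily evaluated by conventional builders is typically the surface area heuristic (SAH): the expected traversal cost of a split is the traversal constant plus each child's primitive count weighted by its surface-area ratio. A minimal sketch, with the cost constants `c_trav` and `c_isect` as illustrative defaults:

```python
def surface_area(bbox):
    """Surface area of an axis-aligned box ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    (x0, y0, z0), (x1, y1, z1) = bbox
    dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def sah_cost(parent, left, right, n_left, n_right, c_trav=1.0, c_isect=1.0):
    """Expected cost of splitting `parent` into `left`/`right` child boxes
    holding n_left/n_right primitives, under the surface area heuristic."""
    sa_p = surface_area(parent)
    return c_trav + c_isect * (
        surface_area(left) / sa_p * n_left
        + surface_area(right) / sa_p * n_right
    )
```

A greedy builder evaluates this cost for many candidate split planes and keeps the cheapest; the paper replaces that greedy recursion, for the upper levels, with a learned predictor while keeping the same cost function as the training objective.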
APA, Harvard, Vancouver, ISO and other styles
49

Broka, Filjor. "Abstraction Sampling in Graphical Models". Proceedings of the AAAI Conference on Artificial Intelligence 32, no. 1 (29.04.2018). http://dx.doi.org/10.1609/aaai.v32i1.11365.

Full text source
Abstract:
We present a new sampling scheme for approximating hard-to-compute queries over graphical models, such as computing the partition function. The scheme builds upon exact algorithms that traverse a weighted directed state-space graph representing a global function over a graphical model (e.g., a probability distribution). With the aid of an abstraction function and randomization, the state space can be compacted (trimmed) to facilitate tractable computation, yielding a Monte Carlo estimate that is unbiased. We present the general idea and analyze its properties analytically and empirically.
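The unbiasedness claim rests on the same argument as plain importance sampling of the partition function Z = Σ_x w(x): sampling states from a proposal q and averaging w(x)/q(x) has expectation exactly Z. The sketch below shows only this reweighting step; the abstraction function that compacts the state-space graph, which is the paper's actual contribution, is omitted.

```python
import random

def partition_function_estimate(weights, proposal, n_samples, rng=random):
    """Unbiased importance-sampling estimate of Z = sum_x w(x).

    `weights` maps each state to its weight w(x); `proposal` maps each state
    to its sampling probability q(x). Each draw contributes w(x)/q(x), whose
    expectation under q is exactly Z.
    """
    states = list(proposal)
    probs = [proposal[s] for s in states]
    total = 0.0
    for _ in range(n_samples):
        s = rng.choices(states, weights=probs)[0]
        total += weights[s] / proposal[s]
    return total / n_samples
```

When the proposal is exactly proportional to the weights, every sample contributes the same value Z, so the estimator has zero variance; abstraction sampling can be read as a structured way of driving the proposal toward that ideal.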
APA, Harvard, Vancouver, ISO and other styles
50

Brewster, Christopher, Nikos Kalatzis, Barry Nouwt, Han Kruiger and Jack Verhoosel. "Data sharing in agricultural supply chains: Using semantics to enable sustainable food systems". Semantic Web, 25.05.2023, 1–31. http://dx.doi.org/10.3233/sw-233287.

Full text source
Abstract:
The agrifood system faces a great many economic, social and environmental challenges. One of the biggest practical challenges has been to achieve greater data sharing throughout the agrifood system and the supply chain, both to inform other stakeholders about a product and equally to incentivise greater environmental sustainability. In this paper, a data sharing architecture is described built on three principles (a) reuse of existing semantic standards; (b) integration with legacy systems; and (c) a distributed architecture where stakeholders control access to their own data. The system has been developed based on the requirements of commercial users and is designed to allow queries across a federated network of agrifood stakeholders. The Ploutos semantic model is built on an integration of existing ontologies. The Ploutos architecture is built on a discovery directory and interoperability enablers, which use graph query patterns to traverse the network and collect the requisite data to be shared. The system is exemplified in the context of a pilot involving commercial stakeholders in the processed fruit sector. The data sharing approach is highly extensible with considerable potential for capturing sustainability related data.
APA, Harvard, Vancouver, ISO and other styles