
Journal articles on the topic 'Code property graph'


Consult the top 50 journal articles for your research on the topic 'Code property graph.'


1

Saenpholphat, Varaporn, and Ping Zhang. "Conditional resolvability in graphs: a survey." International Journal of Mathematics and Mathematical Sciences 2004, no. 38 (2004): 1997–2017. http://dx.doi.org/10.1155/s0161171204311403.

Full text
Abstract:
For an ordered set W = {w1, w2, …, wk} of vertices and a vertex v in a connected graph G, the code of v with respect to W is the k-vector cW(v) = (d(v, w1), d(v, w2), …, d(v, wk)), where d(x, y) represents the distance between the vertices x and y. The set W is a resolving set for G if distinct vertices of G have distinct codes with respect to W. The minimum cardinality of a resolving set for G is its dimension dim(G). Many resolving parameters are formed by extending resolving sets to different subjects in graph theory, such as the partition of the vertex set, decomposition and coloring in graphs, or by combining the resolving property with another graph-theoretic property such as being connected, independent, or acyclic. In this paper, we survey results and open questions on the resolving parameters defined by imposing an additional constraint on resolving sets, resolving partitions, or resolving decompositions in graphs.
APA, Harvard, Vancouver, ISO, and other styles
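The vertex codes defined in the abstract above are straightforward to compute with breadth-first search. A minimal sketch in plain Python (the path graph `P4` and all helper names are invented toy inputs for illustration, not taken from the paper):

```python
from collections import deque

def distances_from(graph, source):
    """BFS distances d(source, v) to every vertex of a connected graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def codes(graph, W):
    """Code of each vertex with respect to the ordered set W."""
    dist = {w: distances_from(graph, w) for w in W}
    return {v: tuple(dist[w][v] for w in W) for v in graph}

def is_resolving(graph, W):
    """W resolves the graph iff all vertex codes are distinct."""
    c = codes(graph, W)
    return len(set(c.values())) == len(graph)

# Path graph P4: 0-1-2-3.  A single endpoint resolves a path, so dim = 1.
P4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_resolving(P4, [0]))  # True
```

On a cycle, by contrast, a single vertex never resolves: two neighbours at equal distance share a code.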
2

Klavžar, Sandi, Uroš Milutinović, and Ciril Petr. "1-perfect codes in Sierpiński graphs." Bulletin of the Australian Mathematical Society 66, no. 3 (December 2002): 369–84. http://dx.doi.org/10.1017/s0004972700040235.

Full text
Abstract:
Sierpiński graphs S(n, κ) generalise the Tower of Hanoi graphs: the graph S(n, 3) is isomorphic to the graph Hn of the Tower of Hanoi with n disks. A 1-perfect code (or an efficient dominating set) in a graph G is a vertex subset of G with the property that the closed neighbourhoods of its elements form a partition of V(G). It is proved that the graphs S(n, κ) possess unique 1-perfect codes, thus extending a previously known result for Hn. An efficient decoding algorithm is also presented. The present approach, in particular the proposed (de)coding, is intrinsically different from the approach to Hn.
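The defining property of a 1-perfect code, that the closed neighbourhoods of its elements partition the vertex set, can be checked directly. A minimal sketch (the cycle C6 and the function name are illustrative choices, not from the paper, which works on Sierpiński graphs):

```python
def is_perfect_code(graph, code):
    """True iff closed neighbourhoods N[c] of the code vertices
    partition the vertex set: every vertex covered exactly once."""
    covered = []
    for c in code:
        covered.extend([c] + list(graph[c]))
    return len(covered) == len(graph) and len(set(covered)) == len(graph)

# Cycle C6; a 1-perfect code exists in a cycle iff its length is divisible by 3.
C6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(is_perfect_code(C6, {0, 3}))  # True
print(is_perfect_code(C6, {0, 2}))  # False
```

Here {0, 3} works because N[0] = {5, 0, 1} and N[3] = {2, 3, 4} partition the six vertices, while {0, 2} double-covers vertex 1 and misses vertex 4.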
3

Pilongo, Jupiter, Leonard Mijares Paleta, and Philip Lester P. Benjamin. "Vertex-weighted $(k_{1},k_{2})$ $E$-torsion Graph of Quasi Self-dual Codes." European Journal of Pure and Applied Mathematics 17, no. 2 (April 30, 2024): 1369–84. http://dx.doi.org/10.29020/nybg.ejpam.v17i2.4867.

Full text
Abstract:
In this paper, we introduce a graph $G_{EC}$ generated by type-$(k_{1},k_{2})$ $E$-codes, called the $(k_{1},k_{2})$ $E$-torsion graph. The binary code words of the torsion code of $C$ form the set of vertices, and the edges are defined using the construction of $E$-codes. We also characterize the graphs obtained when $k_{1}=0$ and when $k_{2}=0$, and compute the degree of every vertex and the number of edges of $G_{EC}$. Moreover, we present necessary and sufficient conditions for a vertex to be in the center of the graph, given the property of the code word corresponding to that vertex. Finally, we represent every quasi self-dual code of short length by defining the vertex-weighted $(k_{1},k_{2})$ $E$-torsion graph, where the weight of every vertex is the weight of the corresponding code word.
4

Zhao, Chunhui, Tengfei Tu, Cheng Wang, and Sujuan Qin. "VulPathsFinder: A Static Method for Finding Vulnerable Paths in PHP Applications Based on CPG." Applied Sciences 13, no. 16 (August 14, 2023): 9240. http://dx.doi.org/10.3390/app13169240.

Full text
Abstract:
Today, as PHP application technology matures, modern multi-layer web applications are becoming more complete in functionality and gradually more complex. While providing developers with rich business functions and interfaces, multi-tier web applications also hide the details of application development. This type of web application adopts a unified entry point and makes heavy use of object-oriented code, and features such as encapsulation, inheritance, and polymorphism pose challenges for vulnerability mining from the perspective of static analysis. The large amount of object-oriented code makes it impossible for simple function-name matching to build a complete call graph (CG), preventing comprehensive interprocedural analysis. At the same time, class encapsulation hides data in object attributes, so vulnerable paths cannot be obtained through general data-flow analysis. In response to these issues, we propose a vulnerability detection method that supports multi-layer web applications based on the MVC (Model-View-Controller) architecture. First, we improve the construction of the call graph and the Code Property Graph (CPG). Then, based on the enhanced Code Property Graph, we propose a technique that supports vulnerability detection for multi-layer web applications. On this basis, we implemented a prototype, VulPathsFinder, a security analysis tool extended from the PHP security analyzer Joern-PHP. We then select ten MVC-based and ten non-MVC-based applications to form a test dataset and validate the effectiveness of VulPathsFinder on it. Experimental results show that, compared with currently available tools, VulPathsFinder handles framework applications more effectively, builds a complete code property graph, and detects vulnerabilities in framework applications that existing tools cannot detect.
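The kind of query a CPG-based tool like the one described above runs can be pictured on a toy graph: nodes for statements, edges labelled by subgraph kind (AST, CFG, DFG), and a reachability search over data-flow edges from an attacker-controlled source to a sensitive sink. This is only a schematic sketch with invented node labels and helper names, not the Joern-PHP representation the paper actually uses:

```python
# Toy code property graph: node ids map to statement labels, and each
# edge carries the subgraph it belongs to ("AST", "CFG", or "DFG").
nodes = {
    1: "call:$_GET",             # attacker-controlled source
    2: "var:$name",
    3: "call:htmlspecialchars",  # sanitizer (dead end in this toy flow)
    4: "var:$id",
    5: "call:mysql_query",       # sensitive sink
}
edges = [
    (1, 2, "DFG"), (2, 4, "DFG"), (4, 5, "DFG"), (2, 3, "DFG"),
]

def dfg_reachable(src, dst):
    """Depth-first reachability restricted to data-flow edges."""
    seen, stack = set(), [src]
    while stack:
        u = stack.pop()
        if u == dst:
            return True
        if u in seen:
            continue
        seen.add(u)
        stack.extend(v for (a, v, kind) in edges if a == u and kind == "DFG")
    return False

print(dfg_reachable(1, 5))  # True: user input flows into the SQL sink
```

A real analysis additionally tracks sanitization along each path; here the point is only that "vulnerable path" means graph reachability over the data-flow subgraph.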
5

Kunz, Immanuel, Konrad Weiss, Angelika Schneider, and Christian Banse. "Privacy Property Graph: Towards Automated Privacy Threat Modeling via Static Graph-based Analysis." Proceedings on Privacy Enhancing Technologies 2023, no. 2 (April 2023): 171–87. http://dx.doi.org/10.56553/popets-2023-0046.

Full text
Abstract:
Privacy threat modeling should be done frequently throughout development and production to be able to quickly mitigate threats. Yet, it can also be a very time-consuming activity. In this paper, we use an enhanced code property graph to partly automate the privacy threat modeling process: It automatically generates a data flow diagram from source code which exhibits privacy properties of data flows, and which can be analyzed semi-automatically via queries. We provide a list of such reusable queries that can be used to detect various privacy threats. To enable this analysis, we integrate a taint-tracking mechanism into the graph using privacy-specific labels. Since no benchmark for such an approach exists, we also present a test suite for privacy threat implementations which comprises implementations for 22 privacy threats in multiple programming languages. We expect that our approach significantly reduces time consumption of threat modeling and show that it also has potential beyond the threat categories defined by LINDDUN, e.g. to detect privacy anti-patterns and verify compliance to privacy policies.
6

Paiva, José, José Leal, and Álvaro Figueira. "Comparing semantic graph representations of source code: The case of automatic feedback on programming assignments." Computer Science and Information Systems, no. 00 (2024): 4. http://dx.doi.org/10.2298/csis230615004p.

Full text
Abstract:
Static source code analysis techniques are gaining relevance in the automated assessment of programming assignments, as they can provide less rigorous evaluation and more comprehensive, formative feedback. These techniques focus on source code aspects rather than requiring actual code execution. To this end, syntactic and semantic information encoded in textual data is typically represented internally as graphs, after parsing and other preprocessing stages. Static automated assessment techniques, therefore, draw inferences from intermediate representations to determine the correctness of a solution and derive feedback. Consequently, choosing the most effective semantic graph representation of source code for the task at hand is critical, impacting the techniques' accuracy, outcome, and execution time. This paper provides a thorough comparison of the most widespread semantic graph representations for the automated assessment of programming assignments, including usage examples, facets, and costs for each representation. A benchmark has been conducted to assess their cost, using the Abstract Syntax Tree (AST) as a baseline. The results demonstrate that the Code Property Graph (CPG) is the most feature-rich representation, but also the largest and most space-consuming (about 33% more than the AST).
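The AST baseline used in the benchmark above is easy to produce for Python with the standard library; a CPG then layers control-flow and dependence edges on top of such a tree, which is where the reported roughly 33% size overhead comes from. A minimal sketch (the code snippet being parsed is an arbitrary example):

```python
import ast

src = """
def greet(name):
    if name:
        return "hello, " + name
    return "hello"
"""

# Parse to an AST and measure its size in nodes; each extra CPG layer
# would add edges (and possibly nodes) on top of this baseline.
tree = ast.parse(src)
n_nodes = sum(1 for _ in ast.walk(tree))
print(n_nodes)
```

Node counts vary slightly across Python versions, which is one reason the paper reports relative rather than absolute representation costs.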
7

Zhang, Ji, Baoming Bai, Xijin Mu, Hengzhou Xu, Zhen Liu, and Huaan Li. "Construction and Decoding of Rate-Compatible Globally Coupled LDPC Codes." Wireless Communications and Mobile Computing 2018 (2018): 1–14. http://dx.doi.org/10.1155/2018/4397671.

Full text
Abstract:
This paper presents a family of rate-compatible (RC) globally coupled low-density parity-check (GC-LDPC) codes, constructed by combining an algebraic construction method with graph extension. Specifically, the highest-rate code is constructed using the algebraic method, and the lower-rate codes are formed by successively extending the graph of the higher-rate codes. The proposed rate-compatible codes provide more flexibility in code rate and preserve the structural property of the algebraic construction. It is confirmed, by numerical simulations over the AWGN channel, that the proposed codes perform better than their counterpart GC-LDPC codes formed by the classical method and exhibit an approximately uniform gap to capacity over a wide range of rates. Furthermore, a modified two-phase local/global iterative decoding scheme for GC-LDPC codes is proposed. Numerical results show that the proposed decoding scheme can reduce the unnecessary cost of the local decoder at low and moderate SNRs, without any increase in the number of decoding iterations of the global decoder at high SNRs.
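The graph view underlying LDPC constructions, a Tanner graph whose rows are check nodes and whose columns are variable nodes, can be illustrated with a tiny parity-check matrix. This is a (7,4) Hamming-style stand-in for illustration only, not the paper's globally coupled construction:

```python
# Parity-check matrix H: each row is a check node, each column a variable
# node of the Tanner graph; a word is a codeword iff its syndrome is zero.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(word):
    """Compute H * word over GF(2)."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

codeword = [1, 0, 1, 1, 0, 1, 0]
print(syndrome(codeword))          # [0, 0, 0]: all checks satisfied

corrupted = [0, 0, 1, 1, 0, 1, 0]  # first bit flipped
print(syndrome(corrupted))         # non-zero syndrome flags the error
```

Graph extension in the RC construction amounts to adding rows (check nodes) and columns (variable nodes) to such a matrix, lowering the rate while preserving the structure of the high-rate base code.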
8

Ma, Hehuan, Yatao Bian, Yu Rong, Wenbing Huang, Tingyang Xu, Weiyang Xie, Geyan Ye, and Junzhou Huang. "Cross-dependent graph neural networks for molecular property prediction." Bioinformatics 38, no. 7 (January 30, 2022): 2003–9. http://dx.doi.org/10.1093/bioinformatics/btac039.

Full text
Abstract:
Abstract Motivation: The crux of molecular property prediction is to generate meaningful representations of the molecules. One promising route is to exploit the molecular graph structure through graph neural networks (GNNs). Both atoms and bonds significantly affect the chemical properties of a molecule, so an expressive model ought to exploit both node (atom) and edge (bond) information simultaneously. Inspired by this observation, we explore multi-view modeling with GNNs (MVGNN) to form a novel parallel framework, which considers atoms and bonds equally important when learning molecular representations. Specifically, one view is atom-central and the other is bond-central; the two views are then circulated via specifically designed components to enable more accurate predictions. To further enhance the expressive power of MVGNN, we propose a cross-dependent message-passing scheme to improve information communication between the views. The overall framework is termed CD-MVGNN. Results: We theoretically justify the expressiveness of the proposed model in terms of distinguishing non-isomorphic graphs. Extensive experiments demonstrate that CD-MVGNN achieves remarkably superior performance over state-of-the-art models on various challenging benchmarks. Meanwhile, visualization results of the node importance are consistent with prior knowledge, which confirms the interpretability of CD-MVGNN. Availability and implementation: The code and data underlying this work are available on GitHub at https://github.com/uta-smile/CD-MVGNN. Supplementary information: Supplementary data are available at Bioinformatics online.
9

Cao, Xiansheng, Junfeng Wang, Peng Wu, and Zhiyang Fang. "VulMPFF: A Vulnerability Detection Method for Fusing Code Features in Multiple Perspectives." IET Information Security 2024 (March 22, 2024): 1–15. http://dx.doi.org/10.1049/2024/4313185.

Full text
Abstract:
Source code vulnerabilities are one of the most significant threats to software security. Existing deep learning-based detection methods have proven their effectiveness. However, most of them extract code information from a single intermediate representation of code (IRC), which often fails to fully capture the varied information hidden in the code, significantly limiting their performance. To address this problem, we propose VulMPFF, a vulnerability detection method that fuses code features from multiple perspectives. It extracts IRCs from three perspectives: code sequence, lexical and syntactic relations, and graph structure, to capture the vulnerability information in the code, which effectively realizes the complementarity of multiple IRCs and improves vulnerability detection performance. Specifically, VulMPFF extracts a serialized abstract syntax tree as the IRC for the code sequence and lexical/syntactic relation perspectives, and a code property graph as the IRC for the graph structure perspective, and uses a Bi-LSTM model with an attention mechanism and a graph neural network with an attention mechanism, respectively, to learn the code features from these perspectives and fuse them to detect vulnerabilities in the code. We design a dual-attention mechanism to highlight the code information critical to vulnerability triggering and better accomplish the vulnerability detection task. We evaluate our approach on three datasets. Experiments show that VulMPFF outperforms existing state-of-the-art vulnerability detection methods (i.e., Rats, FlawFinder, VulDeePecker, SySeVR, Devign, and Reveal) in Acc and F1 score, with improvements ranging from 14.71% to 145.78% and 152.08% to 344.77%, respectively. Meanwhile, experiments on open-source projects demonstrate that VulMPFF has the potential to detect vulnerabilities in real-world environments.
10

Ferreira, Mafalda, Miguel Monteiro, Tiago Brito, Miguel E. Coimbra, Nuno Santos, Limin Jia, and José Fragoso Santos. "Efficient Static Vulnerability Analysis for JavaScript with Multiversion Dependency Graphs." Proceedings of the ACM on Programming Languages 8, PLDI (June 20, 2024): 417–41. http://dx.doi.org/10.1145/3656394.

Full text
Abstract:
While static analysis tools that rely on Code Property Graphs (CPGs) to detect security vulnerabilities have proven effective, deciding how much information to include in the graphs remains a challenge. Including less information can lead to a more scalable analysis but at the cost of reduced effectiveness in identifying vulnerability patterns, potentially resulting in classification errors. Conversely, more information in the graph allows for a more effective analysis but may affect scalability. For example, scalability issues have been recently highlighted in ODGen, the state-of-the-art CPG-based tool for detecting Node.js vulnerabilities. This paper examines a new point in the design space of CPGs for JavaScript vulnerability detection. We introduce the Multiversion Dependency Graph (MDG), a novel graph-based data structure that captures the state evolution of objects and their properties during program execution. Compared to the graphs used by ODGen, MDGs are significantly simpler without losing key information needed for vulnerability detection. We implemented Graph.js, a new MDG-based static vulnerability scanner specialized in analyzing npm packages and detecting taint-style and prototype pollution vulnerabilities. Our evaluation shows that Graph.js outperforms ODGen by significantly reducing both the false negatives and the analysis time. Additionally, we have identified 49 previously undiscovered vulnerabilities in npm packages.
11

Yang, Yixiao, Xiang Chen, and Jiaguang Sun. "Improve Language Modeling for Code Completion Through Learning General Token Repetition of Source Code with Optimized Memory." International Journal of Software Engineering and Knowledge Engineering 29, no. 11n12 (November 2019): 1801–18. http://dx.doi.org/10.1142/s0218194019400229.

Full text
Abstract:
In the last few years, applying language models to source code has been the state-of-the-art method for code completion. However, compared with natural language, code has more pronounced repetition characteristics. For example, a variable can be used many times in the code that follows; variables in source code have a high chance of being repeated. Cloned code and templates also have the property of token repetition. Capturing the token repetition of source code is therefore important. In different projects, variables and types are usually named differently, which means that a model trained on a finite data set will encounter many unseen variables and types in another data set. How to model the semantics of unseen data, and how to predict unseen data based on the patterns of token repetition, are two challenges in code completion. Hence, in this paper, token repetition is modelled as a graph, and we propose a novel REP model based on a deep graph neural network to learn the token repetition of code. The REP model identifies the edge connections of the graph to recognize token repetition. For predicting the repetition of token [Formula: see text], the information of all the previous tokens needs to be considered. We use a memory neural network (MNN) to model the semantics of each distinct token to make the framework of the REP model more targeted. The experiments indicate that the REP model performs better than the LSTM model. Compared with the Attention-Pointer network, we also discover that the attention mechanism does not work in all situations. The proposed REP model achieves similar or slightly better prediction accuracy than the Attention-Pointer network while consuming less training time. We also identify another attention mechanism that could further improve the prediction accuracy.
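The token-repetition property discussed above can be made concrete with the standard tokenizer: identifiers recur, and those recurrences are exactly the graph edges a model like REP would learn. A minimal sketch (the code snippet being tokenized is an arbitrary example):

```python
import io
import keyword
import tokenize
from collections import Counter

src = "total = 0\nfor x in xs:\n    total = total + x\n"

# Collect identifier tokens; Python's tokenizer emits keywords as NAME
# tokens too, so filter them out with keyword.iskeyword.
names = [
    tok.string
    for tok in tokenize.generate_tokens(io.StringIO(src).readline)
    if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string)
]
counts = Counter(names)
print(counts["total"])  # 3: the variable recurs three times
```

Linking each occurrence of `total` back to its earlier occurrences yields the repetition graph; predicting which previous token a new token repeats is the task the REP model addresses.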
12

Kang, Mengjia, Jose A. Alvarado-Guzman, Luke V. Rasmussen, and Justin B. Starren. "Evolution of a Graph Model for the OMOP Common Data Model." Applied Clinical Informatics 15, no. 05 (October 2024): 1056–65. https://doi.org/10.1055/s-0044-1791487.

Full text
Abstract:
Abstract Objective: Graph databases for electronic health record (EHR) data have become a useful tool for clinical research in recent years, but there is a lack of published methods for transforming relational databases to a graph database schema. We developed a graph model for the Observational Medical Outcomes Partnership (OMOP) common data model (CDM) that can be reused across research institutions. Methods: We created and evaluated four models, representing two different strategies, for converting the standardized clinical and vocabulary tables of OMOP into a property graph model within the Neo4j graph database. Taking the Successful Clinical Response in Pneumonia Therapy (SCRIPT) and Collaborative Resource for Intensive care Translational science, Informatics, Comprehensive Analytics, and Learning (CRITICAL) cohorts as test datasets of different sizes, we compared two of the resulting graph models with respect to database performance, including database building time, query complexity, and runtime for both cohorts. Results: Utilizing a graph schema optimized for storing critical information as topology rather than attributes resulted in a significant improvement in both data creation and querying. The graph database for our larger cohort, CRITICAL, can be built within 1 hour for 134,145 patients, with a total of 749,011,396 nodes and 1,703,560,910 edges. Discussion: To our knowledge, this is the first generalized solution for converting the OMOP CDM to a graph-optimized schema. Despite being developed for studies at a single institution, the modeling method can be applied to other OMOP CDM v5.x databases. Our evaluation with the SCRIPT and CRITICAL cohorts, and the comparison between the current and previous versions, shows advantages in code simplicity, database building, and query speed. Conclusion: We developed a method for converting OMOP CDM databases into graph databases. Our experiments revealed that the final model outperformed the initial relational-to-graph transformation in both code simplicity and query efficiency, particularly for complex queries.
13

Chen, Yuanyuan, and Jing Chen. "Research on a New Analysis Algorithm of Virtual Trajectory Cryptography Technology for Password Encryption and Decryption Through Base Expanding." Journal of Physics: Conference Series 2179, no. 1 (January 1, 2022): 012032. http://dx.doi.org/10.1088/1742-6596/2179/1/012032.

Full text
Abstract:
Abstract In the rapidly developing Internet era, network security and data sharing coexist. With the progress of information reform, every industry needs to ensure the security of its data. Passwords are a tool for protecting privacy, property, and important data, yet some criminals crack passwords through technical means, peeping at and stealing other people's important information and causing losses of both information and property. Information security has become a strategic issue that people must pay attention to, one related to social stability, economic development, and national security. In this paper, we study a new virtual trajectory password technology, analyze its algorithm, complete the structural design of the code disk, and solve the problem of randomly generating the code base in the code disk. The method of "base expanding" is applied to enhance the strength coefficient of the password; the random sequence generated by a chaotic encryption algorithm is used for encryption and decryption; and a random reference graph, the "code disk", is generated as an auxiliary memory aid to help people remember special lengthened passwords that cannot be remembered conventionally.
14

Laursen, Mathias Rud, Wenyuan Xu, and Anders Møller. "Reducing Static Analysis Unsoundness with Approximate Interpretation." Proceedings of the ACM on Programming Languages 8, PLDI (June 20, 2024): 1165–88. http://dx.doi.org/10.1145/3656424.

Full text
Abstract:
Static program analysis for JavaScript is more difficult than for many other programming languages. One of the main reasons is the presence of dynamic property accesses that read and write object properties via dynamically computed property names. To ensure scalability and precision, existing state-of-the-art analyses for JavaScript mostly ignore these operations although it results in missed call edges and aliasing relations. We present a novel dynamic analysis technique named approximate interpretation that is designed to efficiently and fully automatically infer likely determinate facts about dynamic property accesses, in particular those that occur in complex library API initialization code, and how to use the produced information in static analysis to recover much of the abstract information that is otherwise missed. Our implementation of the technique and experiments on 141 real-world Node.js-based JavaScript applications and libraries show that the approach leads to significant improvements in call graph construction. On average the use of approximate interpretation leads to 55.1% more call edges, 21.8% more reachable functions, 17.7% more resolved call sites, and only 1.5% fewer monomorphic call sites. For 36 JavaScript projects where dynamic call graphs are available, average analysis recall is improved from 75.9% to 88.1% with a negligible reduction in precision.
15

Duprat, François, Jean-Luc Ploix, Jean-Marie Aubry, and Théophile Gaudin. "Fast and Accurate Prediction of Refractive Index of Organic Liquids with Graph Machines." Molecules 28, no. 19 (September 26, 2023): 6805. http://dx.doi.org/10.3390/molecules28196805.

Full text
Abstract:
The refractive index (RI) of liquids is a key physical property of molecular compounds and materials. In addition to its ubiquitous role in physics, it is also exploited to impart specific optical properties (transparency, opacity, and gloss) to materials and various end-use products. Since few methods exist to accurately estimate this property, we have designed a graph machine model (GMM) capable of predicting the RI of liquid organic compounds containing up to 16 different types of atoms and effective in discriminating between stereoisomers. Using 8267 carefully checked RI values from the literature and the corresponding 2D organic structures, the GMM provides a training root mean square relative error of less than 0.5%, i.e., an RMSE of 0.004 for the estimation of the refractive index of the 8267 compounds. The GMM predictive ability is also compared to that obtained by several fragment-based approaches. Finally, a Docker-based tool is proposed to predict the RI of organic compounds solely from their SMILES code. The GMM developed is easy to apply, as shown by the video tutorials provided on YouTube.
16

Niu, Guanglin, and Bo Li. "Logic and Commonsense-Guided Temporal Knowledge Graph Completion." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4569–77. http://dx.doi.org/10.1609/aaai.v37i4.25579.

Full text
Abstract:
A temporal knowledge graph (TKG) stores events derived from time-involving data. Predicting events is extremely challenging due to their time-sensitive property. Moreover, previous TKG completion (TKGC) approaches cannot represent both the timeliness and the causality properties of events simultaneously. To address these challenges, we propose a Logic and Commonsense-Guided Embedding model (LCGE) to jointly learn the time-sensitive representation of events, involving timeliness and causality, together with the time-independent representation of events from the perspective of commonsense. Specifically, we design a temporal rule learning algorithm to construct a rule-guided predicate embedding regularization strategy for learning the causality among events. Furthermore, we can accurately evaluate the plausibility of events via auxiliary commonsense knowledge. The experimental results on the TKGC task illustrate significant performance improvements of our model compared with existing approaches. More interestingly, our model is able to provide explainability of the predicted results from the viewpoint of causal inference. The appendix, source code and datasets of this paper are available at https://github.com/ngl567/LCGE.
17

Kim, Suyeon, Dongha Lee, SeongKu Kang, Seonghyeon Lee, and Hwanjo Yu. "Learning Topology-Specific Experts for Molecular Property Prediction." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 7 (June 26, 2023): 8291–99. http://dx.doi.org/10.1609/aaai.v37i7.26000.

Full text
Abstract:
Recently, graph neural networks (GNNs) have been successfully applied to predicting molecular properties, which is one of the most classical cheminformatics tasks with various applications. Despite their effectiveness, we empirically observe that training a single GNN model for diverse molecules with distinct structural patterns limits its prediction performance. In this paper, motivated by this observation, we propose TopExpert to leverage topology-specific prediction models (referred to as experts), each of which is responsible for each molecular group sharing similar topological semantics. That is, each expert learns topology-specific discriminative features while being trained with its corresponding topological group. To tackle the key challenge of grouping molecules by their topological patterns, we introduce a clustering-based gating module that assigns an input molecule into one of the clusters and further optimizes the gating module with two different types of self-supervision: topological semantics induced by GNNs and molecular scaffolds, respectively. Extensive experiments demonstrate that TopExpert has boosted the performance for molecular property prediction and also achieved better generalization for new molecules with unseen scaffolds than baselines. The code is available at https://github.com/kimsu55/ToxExpert.
18

Ko, Jong Won. "Design of MDA Based Model Transformation Profile for Model Verification." Applied Mechanics and Materials 224 (November 2012): 69–72. http://dx.doi.org/10.4028/www.scientific.net/amm.224.69.

Full text
Abstract:
Existing approaches to validating software design models and transformed target models mainly rely on model checking, in which design models are defined over a code-based abstract syntax tree, or on models generated by applying refactoring operations to design models. The problem with these traditional methods is that they validate a UML design model by giving the model information a formal representation in the form of an abstract syntax tree; the additional, more complex definitions this requires make such approaches unsuitable for model transformation verification. In this paper, we define a graph-based model transformation in an MDA-based setting and show how to perform model transformation verification through an improved graph comparison algorithm together with model property information.
19

Cheon, Mookyung, Choongrak Kim, and Iksoo Chang. "Uncovering multiloci-ordering by algebraic property of Laplacian matrix and its Fiedler vector." Bioinformatics 32, no. 6 (November 14, 2015): 801–7. http://dx.doi.org/10.1093/bioinformatics/btv669.

Full text
Abstract:
Abstract Motivation: The loci-ordering, based on two-point recombination fractions for a pair of loci, is the most important step in constructing a reliable and fine genetic map. Results: Using concepts from complex graph theory, we propose a Laplacian ordering approach which uncovers the ordering of multiple loci simultaneously. The algebraic property of the Fiedler vector of a Laplacian matrix, constructed from the recombination fractions for 26 loci of barley chromosome IV, 846 loci of Arabidopsis thaliana and 1903 loci of Malus domestica, together with a variable threshold, uncovers their loci-orders. It offers an alternative yet robust approach for ordering multiple loci. Availability and implementation: The source code program with data set is available as supplementary data and also in the software category of the website (http://biophysics.dgist.ac.kr). Contact: crkim@pusan.ac.kr or iksoochang@dgist.ac.kr. Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
20

Zhang, Feng, Guofan Li, Cong Liu, and Qian Song. "Flowchart-Based Cross-Language Source Code Similarity Detection." Scientific Programming 2020 (December 17, 2020): 1–15. http://dx.doi.org/10.1155/2020/8835310.

Full text
Abstract:
Source code similarity detection has various applications in code plagiarism detection and software intellectual property protection. In computer programming teaching, students may convert the source code written in one programming language into another language for their code assignment submission. Existing similarity measures of source code written in the same language are not applicable for the cross-language code similarity detection because of syntactic differences among different programming languages. Meanwhile, existing cross-language source similarity detection approaches are susceptible to complex code obfuscation techniques, such as replacing equivalent control structure and adding redundant statements. To solve this problem, we propose a cross-language code similarity detection (CLCSD) approach based on code flowcharts. In general, two source code fragments written in different programming languages are transformed into standardized code flowcharts (SCFC), and their similarity is obtained by measuring their corresponding SCFC. More specifically, we first introduce the standardized code flowchart (SCFC) model to be the uniform flowcharts representation of source code written in different languages. SCFC is language-independent, and therefore, it can be used as the intermediate structure for source code similarity detection. Meanwhile, transformation techniques are given to transform source code written in a specific programming language into an SCFC. Second, we propose the SCFC-SPGK algorithm based on the shortest path graph kernel to measure the similarity between two SCFCs. Thus, the similarity between two pieces of source code in different programming languages is given by the similarity between SCFCs. Experimental results show that compared with existing approaches, CLCSD has higher accuracy in cross-language source code similarity detection. 
Furthermore, CLCSD can not only handle common source code obfuscation techniques used by students in computer programming teaching but also obtain nearly 90% accuracy in dealing with some complex obfuscation techniques.
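The shortest-path graph kernel that underlies SCFC-SPGK can be sketched as follows: compute all-pairs shortest path lengths in each graph, then count pairs of node pairs whose endpoint labels and path lengths agree. The toy graphs and labels below are illustrative stand-ins for SCFCs, not the paper's actual node types or kernel details.

```python
from collections import deque
import math

def shortest_paths(adj):
    """All-pairs shortest path lengths via BFS; adj maps node -> neighbour list."""
    dist = {}
    for src in adj:
        seen = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen[v] = seen[u] + 1
                    q.append(v)
        for dst, d in seen.items():
            if dst != src:
                dist[(src, dst)] = d
    return dist

def sp_kernel(adj1, lab1, adj2, lab2):
    """Count pairs of shortest paths with equal length and equal endpoint labels."""
    d1, d2 = shortest_paths(adj1), shortest_paths(adj2)
    k = 0
    for (a, b), la in d1.items():
        for (c, e), lb in d2.items():
            if la == lb and lab1[a] == lab2[c] and lab1[b] == lab2[e]:
                k += 1
    return k

def similarity(a1, l1, a2, l2):
    """Normalized kernel value in [0, 1]; 1.0 for identical labeled graphs."""
    k = sp_kernel(a1, l1, a2, l2)
    return k / math.sqrt(sp_kernel(a1, l1, a1, l1) * sp_kernel(a2, l2, a2, l2))

# Two toy "flowcharts" with the same shape but one differing node label.
g_adj = {"s": ["a"], "a": ["t"], "t": []}
g_lab = {"s": "start", "a": "assign", "t": "end"}
h_adj = {"s": ["a"], "a": ["t"], "t": []}
h_lab = {"s": "start", "a": "loop", "t": "end"}
```

Identical graphs score 1.0; the single changed label leaves only the start-to-end path matching, so the cross-similarity drops to 1/3.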
APA, Harvard, Vancouver, ISO, and other styles
21

Meng, Qingkun, Chao Feng, Bin Zhang, and Chaojing Tang. "Assisting in Auditing of Buffer Overflow Vulnerabilities via Machine Learning." Mathematical Problems in Engineering 2017 (2017): 1–13. http://dx.doi.org/10.1155/2017/5452396.

Full text
Abstract:
A buffer overflow vulnerability arises when programmers’ intentions are not implemented correctly in code. In this paper, a static analysis method based on machine learning is proposed to assist in auditing buffer overflow vulnerabilities. First, an extended code property graph is constructed from the source code to extract seven kinds of static attributes, which are used to describe buffer properties. After embedding these attributes into a vector space, five frequently used machine learning algorithms are employed to classify the functions into suspicious vulnerable functions and secure ones. The five classifiers reached an average recall of 83.5%, an average true negative rate of 85.9%, a best recall of 96.6%, and a best true negative rate of 91.4%. Due to the imbalance of the training samples, the average precision of the classifiers is 68.9% and the average F1 score is 75.2%. When the classifiers were applied to a new program, our method reduced the number of false positives to 1/12 of that reported by Flawfinder.
APA, Harvard, Vancouver, ISO, and other styles
22

Jin, Hongjoo, Jiwon Lee, Sumin Yang, Kijoong Kim, and Dong Hoon Lee. "A Framework to Quantify the Quality of Source Code Obfuscation." Applied Sciences 14, no. 12 (June 10, 2024): 5056. http://dx.doi.org/10.3390/app14125056.

Full text
Abstract:
Malicious reverse engineering of software has served as a valuable technique for attackers to infringe upon and steal intellectual property. Obfuscation techniques are useful tools for safeguarding software against such attackers. Applying obfuscation techniques to source code can prevent malicious attackers from reverse engineering a program. However, the ambiguity surrounding the protective efficacy of these source code obfuscation tools and techniques presents challenges for users in evaluating and comparing the varying degrees of protection provided. This paper addresses these issues and presents a methodology to quantify the effect of source code obfuscation. Our proposed method is based on three main types of data: (1) the control flow graph, (2) the program path, and (3) the performance overhead added to the process—all of which are derived from a program analysis conducted by human experts and automated tools. For the first time, we have implemented a tool that can quantitatively evaluate the quality of obfuscation techniques. Then, to validate the effectiveness of the implemented framework, we conducted experiments using four widely recognized commercial and open-source obfuscation tools. Our experimental findings, based on quantitative values related to obfuscation techniques, demonstrate that our proposed framework effectively assesses obfuscation quality.
APA, Harvard, Vancouver, ISO, and other styles
23

Tran, Van Dinh, Alessandro Sperduti, Rolf Backofen, and Fabrizio Costa. "Heterogeneous networks integration for disease–gene prioritization with node kernels." Bioinformatics 36, no. 9 (January 28, 2020): 2649–56. http://dx.doi.org/10.1093/bioinformatics/btaa008.

Full text
Abstract:
Abstract Motivation The identification of disease–gene associations is a task of fundamental importance in human health research. A typical approach consists in first encoding large gene/protein relational datasets as networks due to the natural and intuitive property of graphs for representing objects’ relationships and then utilizing graph-based techniques to prioritize genes for successive low-throughput validation assays. Since different types of interactions between genes yield distinct gene networks, there is the need to integrate different heterogeneous sources to improve the reliability of prioritization systems. Results We propose an approach based on three phases: first, we merge all sources in a single network, then we partition the integrated network according to edge density introducing a notion of edge type to distinguish the parts and finally, we employ a novel node kernel suitable for graphs with typed edges. We show how the node kernel can generate a large number of discriminative features that can be efficiently processed by linear regularized machine learning classifiers. We report state-of-the-art results on 12 disease–gene associations and on a time-stamped benchmark containing 42 newly discovered associations. Availability and implementation Source code: https://github.com/dinhinfotech/DiGI.git. Supplementary information Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
24

Xu, Chenxin, Siheng Chen, Maosen Li, and Ya Zhang. "Invariant Teacher and Equivariant Student for Unsupervised 3D Human Pose Estimation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3013–21. http://dx.doi.org/10.1609/aaai.v35i4.16409.

Full text
Abstract:
We propose a novel method based on teacher-student learning framework for 3D human pose estimation without any 3D annotation or side information. To solve this unsupervised-learning problem, the teacher network adopts pose-dictionary-based modeling for regularization to estimate a physically plausible 3D pose. To handle the decomposition ambiguity in the teacher network, we propose a cycle-consistent architecture promoting a 3D rotation-invariant property to train the teacher network. To further improve the estimation accuracy, the student network adopts a novel graph convolution network for flexibility to directly estimate the 3D coordinates. Another cycle-consistent architecture promoting 3D rotation-equivariant property is adopted to exploit geometry consistency, together with knowledge distillation from the teacher network to improve the pose estimation performance. We conduct extensive experiments on Human3.6M and MPI-INF-3DHP. Our method reduces the 3D joint prediction error by 11.4% compared to state-of-the-art unsupervised methods and also outperforms many weakly-supervised methods that use side information on Human3.6M. Code will be available at https://github.com/sjtuxcx/ITES.
APA, Harvard, Vancouver, ISO, and other styles
25

Adams, Michael D., Eric Griffis, Thomas J. Porter, Sundara Vishnu Satish, Eric Zhao, and Cyrus Omar. "Grove: A Bidirectionally Typed Collaborative Structure Editor Calculus." Proceedings of the ACM on Programming Languages 9, POPL (January 7, 2025): 2176–204. https://doi.org/10.1145/3704909.

Full text
Abstract:
Version control systems typically rely on a patch language, heuristic patch synthesis algorithms like diff, and three-way merge algorithms. Standard patch languages and merge algorithms often fail to identify conflicts correctly when there are multiple edits to one line of code or code is relocated. This paper introduces Grove, a collaborative structure editor calculus that eliminates patch synthesis and three-way merge algorithms entirely. Instead, patches are derived directly from the log of the developer’s edit actions and all edits commute, i.e. the repository state forms a commutative replicated data type (CmRDT). To handle conflicts that can arise due to code relocation, the core datatype in Grove is a labeled directed multi-graph with uniquely identified vertices and edges. All edits amount to edge insertion and deletion, with deletion being permanent. To support tree-based editing, we define a decomposition from graphs into groves, which are a set of syntax trees with conflicts—including local, relocation, and unicyclic relocation conflicts—represented explicitly using holes and references between trees. Finally, we define a type error localization system for groves that enjoys a totality property, i.e. all editor states in Grove are statically meaningful, so developers can use standard editor services while working to resolve these explicitly represented conflicts. The static semantics is defined as a bidirectional marking system in line with recent work, with gradual typing employed to handle situations where errors and conflicts prevent type determination. We then layer on a unification-based type inference system to opportunistically fill type holes and fail gracefully when no solution exists. We mechanize the metatheory of Grove using the Agda theorem prover. We implement these ideas as the Grove Workbench, which generates the necessary data structures and algorithms in OCaml given a syntax tree specification.
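The commutation property at the heart of Grove, where all edits are edge insertions or permanent deletions, can be illustrated with a minimal two-phase-set sketch. Class and method names here are ours for illustration, not Grove's actual OCaml API.

```python
from itertools import permutations

class EdgeStore:
    """Sketch of a commutative replicated edge store in the spirit of Grove:
    applying any multiset of edits in any order yields the same state."""

    def __init__(self):
        self.inserted = set()   # ids of edges ever inserted
        self.deleted = set()    # tombstones; deletion can never be undone

    def apply(self, op):
        kind, edge_id = op
        if kind == "insert":
            self.inserted.add(edge_id)
        else:                    # "delete"
            self.deleted.add(edge_id)

    def live_edges(self):
        return self.inserted - self.deleted

ops = [("insert", "e1"), ("insert", "e2"), ("delete", "e2"), ("insert", "e3")]

# Replay the same edits in every possible order: the state always converges.
states = set()
for order in permutations(ops):
    store = EdgeStore()
    for op in order:
        store.apply(op)
    states.add(frozenset(store.live_edges()))
```

Because tombstones win regardless of arrival order, a delete that arrives before its insert still takes effect, which is exactly what makes three-way merging unnecessary.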
APA, Harvard, Vancouver, ISO, and other styles
26

Buterez, David, Ioana Bica, Ifrah Tariq, Helena Andrés-Terré, and Pietro Liò. "CellVGAE: an unsupervised scRNA-seq analysis workflow with graph attention networks." Bioinformatics 38, no. 5 (December 2, 2021): 1277–86. http://dx.doi.org/10.1093/bioinformatics/btab804.

Full text
Abstract:
Abstract Motivation Single-cell RNA sequencing allows high-resolution views of individual cells for libraries of up to millions of samples, thus motivating the use of deep learning for analysis. In this study, we introduce the use of graph neural networks for the unsupervised exploration of scRNA-seq data by developing a variational graph autoencoder architecture with graph attention layers that operates directly on the connectivity between cells, focusing on dimensionality reduction and clustering. With the help of several case studies, we show that our model, named CellVGAE, can be effectively used for exploratory analysis even on challenging datasets, by extracting meaningful features from the data and providing the means to visualize and interpret different aspects of the model. Results We show that CellVGAE is more interpretable than existing scRNA-seq variational architectures by analysing the graph attention coefficients. By drawing parallels with other scRNA-seq studies on interpretability, we assess the validity of the relationships modelled by attention, and furthermore, we show that CellVGAE can intrinsically capture information such as pseudotime and NF-ĸB activation dynamics, the latter being a property that is not generally shared by existing neural alternatives. We then evaluate the dimensionality reduction and clustering performance on 9 difficult and well-annotated datasets by comparing with three leading neural and non-neural techniques, concluding that CellVGAE outperforms competing methods. Finally, we report a decrease in training times of up to × 20 on a dataset of 1.3 million cells compared to existing deep learning architectures. Availabilityand implementation The CellVGAE code is available at https://github.com/davidbuterez/CellVGAE. Supplementary information Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
27

Skavantzos, Philipp, and Sebastian Link. "Normalizing Property Graphs." Proceedings of the VLDB Endowment 16, no. 11 (July 2023): 3031–43. http://dx.doi.org/10.14778/3611479.3611506.

Full text
Abstract:
Normalization aims at minimizing sources of potential data inconsistency and costs of update maintenance incurred by data redundancy. For relational databases, different classes of dependencies cause data redundancy and have resulted in proposals such as Third, Boyce-Codd, Fourth and Fifth Normal Form. Features of more advanced data models make it challenging to extend achievements from the relational model to missing, non-atomic, or uncertain data. We initiate research on the normalization of graph data, starting with a class of functional dependencies tailored to property graphs. We show that this class captures important semantics of applications, constitutes a rich source of data redundancy, its implication problem can be decided in linear time, and facilitates the normalization of property graphs flexibly tailored to their labels and properties that are targeted by applications. We normalize property graphs into Boyce-Codd Normal Form without loss of data and dependencies whenever possible for the target labels and properties, but guarantee Third Normal Form in general. Experiments on real-world property graphs quantify and qualify various benefits of graph normalization: 1) removing redundant property values as sources of inconsistent data, 2) detecting inconsistency as violation of functional dependencies, 3) reducing update overheads by orders of magnitude, and 4) significant speed ups of aggregate queries.
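Benefit (2) above, detecting inconsistency as a violation of a functional dependency, can be sketched concretely: among nodes with a given label, equal values on the left-hand-side properties must imply equal values on the right-hand side. The node representation below (a list of label-set/property-dict pairs) is illustrative, not the paper's formal property-graph model.

```python
def fd_violations(nodes, label, lhs, rhs):
    """Find violations of the functional dependency lhs -> rhs among nodes
    carrying `label` that have all the involved properties."""
    seen = {}        # lhs value tuple -> first rhs value tuple observed
    violations = []
    for labels, props in nodes:
        if label not in labels or any(p not in props for p in lhs + rhs):
            continue
        key = tuple(props[p] for p in lhs)
        val = tuple(props[p] for p in rhs)
        if key in seen:
            if seen[key] != val:
                violations.append((key, seen[key], val))
        else:
            seen[key] = val
    return violations

# Redundantly stored city names: the third node contradicts the first two.
addresses = [
    ({"Address"}, {"zip": "8001", "city": "Zurich"}),
    ({"Address"}, {"zip": "8001", "city": "Zurich"}),
    ({"Address"}, {"zip": "8001", "city": "Zuerich"}),
]
bad = fd_violations(addresses, "Address", ["zip"], ["city"])
```

Normalization in the paper's sense removes this kind of redundancy up front, so such violations cannot arise in the first place.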
APA, Harvard, Vancouver, ISO, and other styles
28

Fan, Kaixuan, and Meng Wang. "DAG-Based Formal Modeling of Spark Applications with MSVL." Information 14, no. 12 (December 12, 2023): 658. http://dx.doi.org/10.3390/info14120658.

Full text
Abstract:
Apache Spark is a high-speed computing engine for processing massive data. With its widespread adoption, there is a growing need to analyze its correctness and temporal properties. However, there is scarce research focused on the verification of temporal properties in Spark programs. To address this gap, we employ the code-level runtime verification tool UMC4M based on the Modeling, Simulation, and Verification Language (MSVL). To this end, a Spark program S has to be translated into an MSVL program M, and the negation of the property P specified by a Propositional Projection Temporal Logic (PPTL) formula that needs to be verified is also translated to an MSVL program M1; then, a new MSVL program “M and M1” can be compiled and executed. Whether program S violates the property P is determined by the existence of an acceptable execution of “M and M1”. Thus, the key issue lies in how to formally model Spark programs using MSVL programs. We previously proposed a solution to this problem: using MSVL functions to perform Resilient Distributed Datasets (RDD) operations and converting the Spark program into an MSVL program based on its Directed Acyclic Graph (DAG). However, we had only proposed the idea. Building upon this foundation, we implement the conversion from RDD operations to MSVL functions and propose, as well as implement, the rules for translating Spark programs to MSVL programs based on DAG. We confirm the feasibility of this approach and provide a viable method for verifying the temporal properties of Spark programs. Additionally, an automatic translation tool, S2M, is developed. Finally, a case study is presented to demonstrate this conversion process.
APA, Harvard, Vancouver, ISO, and other styles
29

Pietrusewicz, Krzysztof. "Metamodelling for Design of Mechatronic and Cyber-Physical Systems." Applied Sciences 9, no. 3 (January 22, 2019): 376. http://dx.doi.org/10.3390/app9030376.

Full text
Abstract:
The paper presents the issue of metamodeling of Domain-Specific Languages (DSL) for the purpose of designing complex mechatronics systems. Usually, one of the problems during the development of such projects is the interdisciplinary character of the team involved in the endeavour. The success of a complex machine project (e.g. a Computer Numerically Controlled (CNC) machine, loading crane, or forestry crane) often depends on proper communication between team members. The domain-specific modelling languages developed using one of the two approaches discussed in the work lead to a machine design that can be carried out much more efficiently than with conventional approaches. Within the paper, the Meta-Object Facility (MOF) approach to metamodeling is presented; it is much more prevalent in modern modelling software tools than Graph-Object-Property-Relationship-Role (GOPRR). The main outcome of this work is the first presentation of the researchML modelling language, which is the result of more than twenty ambitious research and development projects. It is effectively used within new enterprises and leads to improved traceability of the project goals. It enables fully-featured automatic code generation, which is one of the main pillars of agile management within mechatronic system design projects.
APA, Harvard, Vancouver, ISO, and other styles
30

Huang, Zhijian, Ziyu Fan, Siyuan Shen, Min Wu, and Lei Deng. "MolMVC: Enhancing molecular representations for drug-related tasks through multi-view contrastive learning." Bioinformatics 40, Supplement_2 (September 1, 2024): ii190—ii197. http://dx.doi.org/10.1093/bioinformatics/btae386.

Full text
Abstract:
Abstract Motivation Effective molecular representation is critical in drug development. The complex nature of molecules demands comprehensive multi-view representations, considering 1D, 2D, and 3D aspects, to capture diverse perspectives. Obtaining representations that encompass these varied structures is crucial for a holistic understanding of molecules in drug-related contexts. Results In this study, we introduce an innovative multi-view contrastive learning framework for molecular representation, denoted as MolMVC. Initially, we use a Transformer encoder to capture 1D sequence information and a Graph Transformer to encode the intricate 2D and 3D structural details of molecules. Our approach incorporates a novel attention-guided augmentation scheme, leveraging prior knowledge to create positive samples tailored to different molecular data views. To align multi-view molecular positive samples effectively in latent space, we introduce an adaptive multi-view contrastive loss (AMCLoss). In particular, we calculate AMCLoss at various levels within the model to effectively capture the hierarchical nature of the molecular information. Finally, we pre-train the encoders via minimizing AMCLoss to obtain the molecular representation, which can be used for various downstream tasks. In our experiments, we evaluate the performance of our MolMVC on multiple tasks, including molecular property prediction (MPP), drug-target binding affinity (DTA) prediction and cancer drug response (CDR) prediction. The results demonstrate that the molecular representation learned by our MolMVC can enhance the predictive accuracy on these tasks and also reduce the computational costs. Furthermore, we showcase MolMVC’s efficacy in drug repositioning across a spectrum of drug-related applications. Availability and implementation The code and pre-trained model are publicly available at https://github.com/Hhhzj-7/MolMVC.
APA, Harvard, Vancouver, ISO, and other styles
31

Chub, Iryna, and Kateryna Demchenko. "Optimizing the productivity of solutions built with the help of ReactJS and D3 libraries." Bulletin of Kharkov National Automobile and Highway University 1, no. 104 (April 9, 2024): 15. http://dx.doi.org/10.30977/bul.2219-5548.2024.104.1.15.

Full text
Abstract:
Problem. Recently, the problem of the performance of web applications has become particularly acute. This is due to the growing complexity of software and the simultaneous deterioration of code quality. The work considers important aspects of the performance of the client part of a web application built using the ReactJS and D3 libraries. React is known to deliver a faster user interface than most competitors, thanks to its in-house, performance-oriented development approach. However, when a React application starts to scale, performance issues or lag may appear, and internal optimization methods may not be sufficient to support the growing traffic and complexity of a fast-paced, enterprise-level application. This is where the question arises: "How can performance in ReactJS be improved?". Goal. The purpose of the article is to develop a React component, optimized by a new method, in the form of a graph whose data changes in real time. Methodology. Analytical research methods and functional programming methodology were used. Results. The paper considers the problem of optimizing a component built using ReactJS and D3 whose distinctive feature is that its state changes in real time. A new code optimization method is proposed that minimizes the number of renders. The proposed solutions make it possible to use the capabilities of D3 more effectively when designing a wide range of monitoring and billing systems and automated process management systems. Originality. A method is proposed that helps improve the performance of a React application and reduce the number of re-renders. The method is developed using additional React hooks and a reference to the current graphic component from the parent. 
The method is based on a fundamental property of the D3 library, namely that it saves its own state, which makes it possible to eliminate unnecessary renders. Practical value. The proposed solutions allow using the full functionality of D3 when designing a wide range of monitoring and billing systems and automated process management systems. D3 has the flexibility to display a large layer of diverse data and allows data visualization using HTML, SVG and CSS.
APA, Harvard, Vancouver, ISO, and other styles
32

KRIEGER, WOLFGANG. "On subshift presentations." Ergodic Theory and Dynamical Systems 37, no. 4 (March 8, 2016): 1253–90. http://dx.doi.org/10.1017/etds.2015.82.

Full text
Abstract:
We consider partitioned graphs, by which we mean finite directed graphs with a partitioned edge set ${\mathcal{E}} = {\mathcal{E}}^- \cup {\mathcal{E}}^+$. Additionally given a relation ${\mathcal{R}}$ between the edges in ${\mathcal{E}}^-$ and the edges in ${\mathcal{E}}^+$, and under the appropriate assumptions on ${\mathcal{E}}^-$, ${\mathcal{E}}^+$ and ${\mathcal{R}}$, denoting the vertex set of the graph by $\mathfrak{P}$, we speak of an ${\mathcal{R}}$-graph ${\mathcal{G}}_{\mathcal{R}}(\mathfrak{P}, {\mathcal{E}}^-, {\mathcal{E}}^+)$. From ${\mathcal{R}}$-graphs ${\mathcal{G}}_{\mathcal{R}}(\mathfrak{P}, {\mathcal{E}}^-, {\mathcal{E}}^+)$ we construct semigroups (with zero) ${\mathcal{S}}_{\mathcal{R}}(\mathfrak{P}, {\mathcal{E}}^-, {\mathcal{E}}^+)$ that we call ${\mathcal{R}}$-graph semigroups. We write a list of conditions on a topologically transitive subshift with property $(A)$ that together are sufficient for the subshift to have an ${\mathcal{R}}$-graph semigroup as its associated semigroup.
Generalizing previous constructions, we describe a method of presenting subshifts by means of suitably structured finite labeled directed graphs $({\mathcal{V}}, \Sigma, \lambda)$ with vertex set ${\mathcal{V}}$, edge set $\Sigma$, and a label map that assigns to the edges in $\Sigma$ labels in an ${\mathcal{R}}$-graph semigroup ${\mathcal{S}}_{\mathcal{R}}(\mathfrak{P}, {\mathcal{E}}^-, {\mathcal{E}}^+)$. We denote the presented subshift by $X({\mathcal{V}}, \Sigma, \lambda)$ and call $X({\mathcal{V}}, \Sigma, \lambda)$ an ${\mathcal{S}}_{\mathcal{R}}(\mathfrak{P}, {\mathcal{E}}^-, {\mathcal{E}}^+)$-presentation.
We introduce a property $(B)$ of subshifts that describes a relationship between contexts of admissible words of a subshift, and we introduce a property $(c)$ of subshifts that in addition describes a relationship between the past and future contexts and the context of admissible words of a subshift. Property $(B)$ and the simultaneous occurrence of properties $(B)$ and $(c)$ are invariants of topological conjugacy.
We consider subshifts in which every admissible word has a future context that is compatible with its entire past context. Such subshifts we call right instantaneous. We introduce a property $RI$ of subshifts, and we prove that this property is a necessary and sufficient condition for a subshift to have a right instantaneous presentation. We consider also subshifts in which every admissible word has a future context that is compatible with its entire past context, and also a past context that is compatible with its entire future context. Such subshifts we call bi-instantaneous. We introduce a property $BI$ of subshifts, and we prove that this property is a necessary and sufficient condition for a subshift to have a bi-instantaneous presentation.
We define a subshift as strongly bi-instantaneous if it has for every sufficiently long admissible word $a$ an admissible word $c$, contained in both the future context of $a$ and the past context of $a$, such that the word $ca$ is a word in the future context of $a$ that is compatible with the entire past context of $a$, and the word $ac$ is a word in the past context of $a$ that is compatible with the entire future context of $a$.
We show that a topologically transitive subshift with property $(A)$, whose associated semigroup is a graph inverse semigroup ${\mathcal{S}}$, has an ${\mathcal{S}}$-presentation if and only if it has properties $(c)$ and $BI$, and a strongly bi-instantaneous presentation if and only if it has properties $(c)$ and $BI$ and all of its bi-instantaneous presentations are strongly bi-instantaneous.
We construct a class of subshifts with property $(A)$, to which certain graph inverse semigroups ${\mathcal{S}}(\mathfrak{P}, {\mathcal{E}}^-, {\mathcal{E}}^+)$ are associated, that do not have ${\mathcal{S}}(\mathfrak{P}, {\mathcal{E}}^-, {\mathcal{E}}^+)$-presentations. We associate to the labeled directed graphs $({\mathcal{V}}, \Sigma, \lambda)$ topological Markov chains and Markov codes, and we derive an expression for the zeta function of $X({\mathcal{V}}, \Sigma, \lambda)$ in terms of the zeta functions of the topological Markov shifts and the generating functions of the Markov codes.
APA, Harvard, Vancouver, ISO, and other styles
33

Freedman, Michael H., and Matthew B. Hastings. "Quantum Systems on Non-$k$-Hyperfinite Complexes: a generalization of classical statistical mechanics on expander graphs." Quantum Information and Computation 14, no. 1&2 (January 2014): 144–80. http://dx.doi.org/10.26421/qic14.1-2-9.

Full text
Abstract:
We construct families of cell complexes that generalize expander graphs. These families are called non-$k$-hyperfinite, generalizing the idea of a non-hyperfinite (NH) family of graphs. Roughly speaking, such a complex has the property that one cannot remove a small fraction of points and be left with an object that looks $(k-1)$-dimensional at large scales. We then consider certain quantum systems on these complexes. A future goal is to construct a family of Hamiltonians such that every low energy state has topological order as part of an attempt to prove the quantum PCP conjecture. This goal is approached by constructing a toric code Hamiltonian with the property that every low energy state without vertex defects has topological order, a property that would not hold for any local system in any lattice $Z^d$ or indeed on any $1$-hyperfinite complex. Further, such NH complexes find application in quantum coding theory. The hypergraph product codes of Tillich and Zémor are generalized using NH complexes.
APA, Harvard, Vancouver, ISO, and other styles
34

Ji, Xiujuan, Lei Liu, and Jingwen Zhu. "Code Clone Detection with Hierarchical Attentive Graph Embedding." International Journal of Software Engineering and Knowledge Engineering 31, no. 06 (June 2021): 837–61. http://dx.doi.org/10.1142/s021819402150025x.

Full text
Abstract:
Code clone serves as a typical programming manner that reuses existing code to solve similar programming problems, which greatly facilitates software development but also introduces recurring program bugs and maintenance costs. Recently, deep learning-based detection approaches have gradually demonstrated their effectiveness in feature representation and detection performance. Among them, deep learning approaches based on the abstract syntax tree (AST) construct models relying on the node embedding technique. In an AST, the semantics of nodes is clearly hierarchical, and the importance of nodes differs considerably in determining whether two code fragments are clones. However, first, some approaches do not fully consider the hierarchical structure information of source code. Second, some approaches ignore the different importance of nodes when generating the features of source code. Third, when the tree is very large and deep, many approaches are vulnerable to the gradient vanishing problem during training. In order to properly address these challenges, we propose a hierarchical attentive graph neural network embedding model, HAG, for code clone detection. Firstly, the attention mechanism is applied to nodes in the AST to distinguish the importance of different nodes during model training. In addition, the HAG adopts a graph convolutional network (GCN) to propagate the code message on the AST graph and then exploits a hierarchical differential pooling GCN to sufficiently capture the code semantics at different structure levels. To evaluate the effectiveness of HAG, we conducted extensive experiments on a public clone dataset and compared it with seven state-of-the-art clone detection models. The experimental results demonstrate that the HAG achieves superior detection performance compared with baseline models. Especially, in the detection of moderately Type-3 or Type-4 clones, the HAG particularly outperforms baselines, indicating the strong detection capability of HAG for semantic clones.
Apart from that, the impacts of the hierarchical pooling, attention mechanism and critical model parameters are systematically discussed.
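The idea of letting attention distinguish the importance of AST nodes can be illustrated with a generic attentive readout: score each node embedding against a learnable vector, softmax the scores, and pool the nodes by their weights. This is a sketch of attentive pooling in general, not HAG's exact layers or parameters.

```python
import numpy as np

def attentive_readout(H, w):
    """Attention-weighted graph readout.

    H: (num_nodes, dim) node embeddings; w: (dim,) learnable scoring vector.
    Returns the attention weights and the weighted sum of node embeddings,
    so important nodes contribute more to the graph-level feature.
    """
    scores = H @ w                         # one scalar score per node
    a = np.exp(scores - scores.max())      # numerically stable softmax
    a = a / a.sum()
    return a, a @ H

# Three "AST node" embeddings; the second is aligned with w, so it dominates.
H = np.array([[1.0, 0.0],
              [0.0, 5.0],
              [0.5, 0.5]])
w = np.array([0.0, 1.0])
weights, pooled = attentive_readout(H, w)
```

In a trained model, w is learned jointly with the node embeddings, so the network itself decides which node types matter for the clone decision.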
APA, Harvard, Vancouver, ISO, and other styles
35

Upadhyay, Sarvagya. "Review of Introduction To Property Testing by Oded Goldreich." ACM SIGACT News 51, no. 4 (December 14, 2020): 6–10. http://dx.doi.org/10.1145/3444815.3444818.

Full text
Abstract:
The area of property testing is concerned with designing methods to decide whether an input object possesses a certain property. The problem is usually described as a promise problem: either the input object has the property, or it is far from possessing it, where "far" is measured with respect to a specified and meaningful notion of distance. The main objective of property testing is to make this decision with a super-efficient tester. A tester that reads the entire object can easily determine whether the property holds; instead, one wants a tester that probes the input at only a few random locations. As such, randomness is a necessary ingredient for testing, and having the tester err on a few instances is a necessary price to pay for such highly efficient methods. Much of the literature on property testing has focused on two types of objects, functions and graphs, and they naturally form the major portion of the book: functions are discussed in Chapters 2 to 6 and graph properties in Chapters 8 to 10. The final three chapters cover distribution testing, probabilistically checkable proofs (PCPs) and locally testable codes, and the ramifications of property testing for related topics in computer science and statistics. A separate chapter is devoted to query lower-bound techniques.
APA, Harvard, Vancouver, ISO, and other styles
36

Murtaza, M., I. Javaid, and M. Fazil. "Covering codes of a graph associated to a finite vector space." Ukrains’kyi Matematychnyi Zhurnal 72, no. 7 (July 15, 2020): 952–59. http://dx.doi.org/10.37863/umzh.v72i7.652.

Full text
Abstract:
UDC 512.5. In this paper, we investigate the problem of covering the vertices of a graph associated to a finite vector space, as introduced by Das [Commun. Algebra, 44, 3918–3926 (2016)], such that we can uniquely identify any vertex by examining the vertices that cover it. We use locating-dominating sets and identifying codes, which are closely related concepts, for this purpose. We find the location-domination number and the identifying number of the graph and study the exchange property for locating-dominating sets and identifying codes.
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Yuanbo, Kris Satya, and Qirun Zhang. "Efficient algorithms for dynamic bidirected Dyck-reachability." Proceedings of the ACM on Programming Languages 6, POPL (January 16, 2022): 1–29. http://dx.doi.org/10.1145/3498724.

Full text
Abstract:
Dyck-reachability is a fundamental formulation for program analysis that has been widely used to capture properly-matched-parenthesis program properties such as function calls/returns and field writes/reads. Bidirected Dyck-reachability is a relaxation of Dyck-reachability on bidirected graphs, where each edge u →(i v labeled by an open parenthesis "(i" is accompanied by an inverse edge v →)i u labeled by the corresponding close parenthesis ")i", and vice versa. In practice, many client analyses such as alias analysis adopt the bidirected Dyck-reachability formulation. Bidirected Dyck-reachability admits an optimal reachability algorithm: given a graph with n nodes and m edges, it computes all-pairs reachability information in O(m) time. This paper focuses on the dynamic version of bidirected Dyck-reachability. In particular, we consider the problem of maintaining all-pairs Dyck-reachability information in bidirected graphs under a sequence of edge insertions and deletions. Dynamic bidirected Dyck-reachability can formulate many program-analysis problems in the presence of code changes. Unfortunately, solving dynamic graph-reachability problems is challenging. For example, even for maintaining transitive closure, the fastest deterministic dynamic algorithm requires O(n^2) update time to achieve O(1) query time, and all-pairs Dyck-reachability is a generalization of transitive closure. Despite extensive research on incremental computation, there has been no algorithmic development on dynamic graph algorithms for program analysis with worst-case guarantees. Our work fills the gap and proposes the first dynamic algorithm for Dyck-reachability on bidirected graphs. Our dynamic algorithms handle each graph update (i.e., edge insertion or deletion) in O(n·α(n)) time and answer any all-pairs reachability query in O(1) time, where α(n) is the inverse Ackermann function.
We have implemented and evaluated our dynamic algorithm on an alias analysis and a context-sensitive data-dependence analysis for Java. We compare our dynamic algorithms against a straightforward approach based on the O(m)-time optimal bidirected Dyck-reachability algorithm and a recent incremental Datalog solver. Experimental results show that our algorithm achieves orders-of-magnitude speedup over both approaches.
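The insertion side of such a dynamic algorithm can be sketched with a union-find structure: in a bidirected graph, if one node has two outgoing edges with the same open-parenthesis label, their two targets are mutually Dyck-reachable (the path "close then open" is balanced) and can be merged. The toy version below handles insertions only and omits the re-processing of merged adjacency lists that a full O(n·α(n))-per-update implementation performs; names and data layout are mine, not the paper's.

```python
class DSU:
    """Union-find with path halving."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

class BidirectedDyck:
    """Insertion-only sketch of bidirected Dyck-reachability: every
    edge u -(i-> v implicitly carries the inverse edge v -)i-> u, so
    Dyck-reachability collapses to an equivalence relation on nodes."""
    def __init__(self):
        self.dsu = DSU()
        self.out = {}  # (representative node, open-paren label) -> target

    def insert_edge(self, u, label, v):
        ru, rv = self.dsu.find(u), self.dsu.find(v)
        key = (ru, label)
        if key in self.out:
            # Two same-label edges out of ru: their targets are
            # Dyck-equivalent, so merge their classes.
            self.dsu.union(rv, self.dsu.find(self.out[key]))
        else:
            self.out[key] = rv

    def reachable(self, a, b):
        return self.dsu.find(a) == self.dsu.find(b)
```

Deletions are the hard part, which is precisely where the paper's contribution lies; naive union-find cannot "un-merge" classes.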
APA, Harvard, Vancouver, ISO, and other styles
38

Polak, Monika, and Vasyl Ustimenko. "Algorithms for generation of Ramanujan graphs, other Expanders and related LDPC codes." Annales Universitatis Mariae Curie-Sklodowska, sectio AI – Informatica 15, no. 2 (October 11, 2015): 14. http://dx.doi.org/10.17951/ai.2015.15.2.14-21.

Full text
Abstract:
Expander graphs are highly connected sparse finite graphs. The property of being an expander is significant in many mathematical, computational and physical contexts, and for practical applications it is very important to construct expander and Ramanujan graphs of a given regularity and order. In general, constructing the best expander graphs with a given regularity and order is no easy task. In this paper we present algorithms for the generation of Ramanujan graphs and other expanders, and we describe the properties of the obtained graphs in comparison with previously known results. We present a method for obtaining new examples of irregular LDPC codes based on the described graphs and briefly describe the properties of these codes.
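The Ramanujan condition can be checked numerically: a connected d-regular graph is Ramanujan when every non-trivial adjacency eigenvalue λ satisfies |λ| ≤ 2√(d−1). A small spectral check (ignoring the bipartite refinement of the definition, under which the eigenvalue −d is also excluded):

```python
import numpy as np

def is_ramanujan(adj):
    """Spectral test of the Ramanujan property for a d-regular graph:
    all eigenvalues except the trivial one (equal to d) must have
    absolute value at most 2*sqrt(d - 1)."""
    A = np.asarray(adj, dtype=float)
    degrees = A.sum(axis=1)
    d = degrees[0]
    if not np.allclose(degrees, d):
        raise ValueError("graph must be regular")
    # sort eigenvalues by absolute value, largest first
    eig = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
    return bool(np.all(eig[1:] <= 2.0 * np.sqrt(d - 1.0) + 1e-9))
```

For example, the complete graph K5 has spectrum {4, −1, −1, −1, −1}, comfortably inside the bound 2√3 ≈ 3.46.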
APA, Harvard, Vancouver, ISO, and other styles
39

Kim, Sein, Namkyeong Lee, Junseok Lee, Dongmin Hyun, and Chanyoung Park. "Heterogeneous Graph Learning for Multi-Modal Medical Data Analysis." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 5141–50. http://dx.doi.org/10.1609/aaai.v37i4.25643.

Full text
Abstract:
Routine clinical visits of a patient produce not only image data, but also non-image data containing clinical information regarding the patient, i.e., medical data is multi-modal in nature. Such heterogeneous modalities offer different and complementary perspectives on the same patient, resulting in more accurate clinical decisions when they are properly combined. However, despite its significance, how to effectively fuse the multi-modal medical data into a unified framework has received relatively little attention. In this paper, we propose an effective graph-based framework called HetMed (Heterogeneous Graph Learning for Multi-modal Medical Data Analysis) for fusing the multi-modal medical data. Specifically, we construct a multiplex network that incorporates multiple types of non-image features of patients to capture the complex relationship between patients in a systematic way, which leads to more accurate clinical decisions. Extensive experiments on various real-world datasets demonstrate the superiority and practicality of HetMed. The source code for HetMed is available at https://github.com/Sein-Kim/Multimodal-Medical.
APA, Harvard, Vancouver, ISO, and other styles
40

de Albuquerque, Clarice Dias, Reginaldo Palazzo Jr., and Eduardo Brandani da Silva. "Families of codes of topological quantum codes from tessellations {4i+2,2i+1}, {4i,4i}, {8i-4,4} and {12i-6,3}." Quantum Information and Computation 14, no. 15&16 (November 2014): 1424–40. http://dx.doi.org/10.26421/qic14.15-16-8.

Full text
Abstract:
In this paper we present some classes of topological quantum codes on surfaces with genus g ≥ 2, derived from hyperbolic tessellations with a specific property. We find classes of codes with distance d = 3 and encoding rates asymptotically approaching 1, 1/2 and 1/3, depending on the tessellation considered. Furthermore, these codes are associated with embeddings of complete bipartite graphs. We also analyze the parameters of these codes, mainly their distance, and exhibit a class of codes with distance 4. We also present a class of codes achieving the quantum Singleton bound, possibly the only one existing under this construction.
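The quantum Singleton bound mentioned at the end states that an [[n, k, d]] code must satisfy k ≤ n − 2(d − 1); codes meeting it with equality are called quantum MDS codes. A one-line check (the parameter triples used below are standard textbook examples, not drawn from the paper):

```python
def satisfies_quantum_singleton(n, k, d):
    """Quantum Singleton bound for an [[n, k, d]] code: k <= n - 2(d-1)."""
    return k <= n - 2 * (d - 1)

def is_quantum_mds(n, k, d):
    """Codes achieving the bound with equality are quantum MDS codes."""
    return k == n - 2 * (d - 1)
```

For instance, the five-qubit code [[5, 1, 3]] meets the bound with equality, while the Steane code [[7, 1, 3]] satisfies it strictly.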
APA, Harvard, Vancouver, ISO, and other styles
41

Osorio, Maximiliano, Carlos Buil-Aranda, Idafen Santana-Perez, and Daniel Garijo. "DockerPedia: A Knowledge Graph of Software Images and Their Metadata." International Journal of Software Engineering and Knowledge Engineering 32, no. 01 (January 2022): 71–89. http://dx.doi.org/10.1142/s0218194022500036.

Full text
Abstract:
An increasing amount of researchers use software images to capture the requirements and code dependencies needed to carry out computational experiments. Software images preserve the computational environment required to execute a scientific experiment and have become a crucial asset for reproducibility. However, software images are usually not properly documented and described, making it challenging for scientists to find, reuse and understand them. In this paper, we propose a framework for automatically describing software images in a machine-readable manner by (i) creating a vocabulary to describe software images; (ii) developing an annotation framework designed to automatically document the underlying environment of software images and (iii) creating DockerPedia, a Knowledge Graph with over 150,000 annotated software images, automatically described using our framework. We illustrate the usefulness of our approach in finding images with specific software dependencies, comparing similar software images, addressing versioning problems when running computational experiments; and flagging problems with vulnerable software dependencies.
APA, Harvard, Vancouver, ISO, and other styles
42

Dr. Karansinh Rathod. "Perceiving Genre with Special Reference to the Academic Writing." International Peer Reviewed E Journal of English Language & Literature Studies - ISSN: 2583-5963 2, no. 2 (December 10, 2020): 210–41. http://dx.doi.org/10.58213/ell.v2i2.29.

Full text
Abstract:
This study uses innovative computational rhetorical-analysis tools to investigate the use of citations in a corpus of academic articles. Drawing on genre theory, our study uses graph-theoretic diagrams to extract and amplify expected patterns of repeated moves that are linked with stable academic-writing genres. There is evidence to suggest that our computational strategy is as good as qualitative researchers who code by hand, such as Karatsolis and colleagues (this issue), at properly detecting and classifying citation moves. Pairwise comparisons of advisor and advisee texts reveal further applications of automated computational analysis as formative feedback in a mentoring scenario.
APA, Harvard, Vancouver, ISO, and other styles
43

Qiang, Weizhong, Shizhen Wang, Hai Jin, and Jiangying Zhong. "Fine-Grained Control-Flow Integrity Based on Points-to Analysis for CPS." Security and Communication Networks 2018 (October 17, 2018): 1–11. http://dx.doi.org/10.1155/2018/3130652.

Full text
Abstract:
A cyber-physical system (CPS) is a mixed system composed of computational and physical capabilities. The fast development of CPS brings new security and privacy requirements. Code reuse attacks, which affect the correct behavior of software by exploiting memory-corruption vulnerabilities and reusing existing code, are also a threat to CPS. Various defense techniques have been proposed in recent years as countermeasures to emerging code reuse attacks. However, they may fail to fulfill the security requirement well because they cannot properly protect indirect function calls against dynamic code reuse attacks aimed at the forward edges of the control-flow graph (CFG). In this paper, we propose P-CFI, a fine-grained control-flow integrity (CFI) method, to protect CPS against memory-related attacks. We use points-to analysis to construct the legitimate target set for every indirect call site and check at runtime whether the target of the indirect call is in that set. We implement a prototype of P-CFI on LLVM and evaluate both its functionality and performance. Security analysis proves that P-CFI can mitigate dynamic code reuse attacks based on forward edges of the CFG. Performance evaluation shows that P-CFI can protect CPS from dynamic code reuse attacks with trivial time overhead between 0.1% and 3.5% (Copyright © 2018 John Wiley & Sons, Ltd.).
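The runtime check described, that the target of each indirect call must lie in a statically computed legitimate target set, can be sketched as follows. The site identifiers and handler names are hypothetical, and a real P-CFI-style defense instruments machine code via LLVM rather than working at this level; the sketch only shows the policy being enforced.

```python
# Hypothetical per-call-site legitimate target sets, as a points-to
# analysis would precompute them before the program runs.
LEGIT_TARGETS = {
    "site_42": {"handle_read", "handle_write"},
}

class CfiViolation(RuntimeError):
    """Raised instead of transferring control to an illegal target."""

def checked_indirect_call(site_id, func, *args):
    """Instrumented indirect call: allow only statically approved targets."""
    allowed = LEGIT_TARGETS.get(site_id, set())
    if func.__name__ not in allowed:
        raise CfiViolation(f"illegal target {func.__name__!r} at {site_id}")
    return func(*args)

def handle_read(buf):       # a legitimate target of site_42
    return "read:" + buf

def attacker_gadget(buf):   # never appears in the points-to set
    return "pwned"
```

The precision of the points-to analysis directly determines how small, and therefore how hard to abuse, each target set is.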
APA, Harvard, Vancouver, ISO, and other styles
44

Hammer, Barbara, Alessio Micheli, and Alessandro Sperduti. "Universal Approximation Capability of Cascade Correlation for Structures." Neural Computation 17, no. 5 (May 1, 2005): 1109–59. http://dx.doi.org/10.1162/0899766053491878.

Full text
Abstract:
Cascade correlation (CC) constitutes a training method for neural networks that determines the weights as well as the neural architecture during training. Various extensions of CC to structured data have been proposed: recurrent cascade correlation (RCC) for sequences, recursive cascade correlation (RecCC) for tree structures with limited fan-out, and contextual recursive cascade correlation (CRecCC) for rooted directed positional acyclic graphs (DPAGs) with limited fan-in and fan-out. We show that these models possess the universal approximation property in the following sense: given a probability measure P on the input set, every measurable function from sequences into a real vector space can be approximated by a sigmoidal RCC up to any desired degree of accuracy, up to inputs of arbitrarily small probability. Every measurable function from tree structures with limited fan-out into a real vector space can be approximated by a sigmoidal RecCC with multiplicative neurons up to any desired degree of accuracy, up to inputs of arbitrarily small probability. For sigmoidal CRecCC networks with multiplicative neurons, we show the universal approximation capability for functions on an important subset of all DPAGs with limited fan-in and fan-out for which a specific linear representation yields unique codes. We give one sufficient structural condition for the latter property, which can easily be tested: the enumeration of ingoing and outgoing edges should be compatible. This property can be fulfilled for every DPAG with fan-in and fan-out two via reenumeration of children and parents, and for larger fan-in and fan-out via an expansion of the fan-in and fan-out and reenumeration of children and parents. In addition, the result can be generalized to the case of input-output isomorphic transductions of structures.
Thus, CRecCC networks constitute the first neural models for which the universal approximation capability of functions involving fairly general acyclic graph structures is proved.
APA, Harvard, Vancouver, ISO, and other styles
45

Kurtukova, Anna, and Alexander Romanov. "Identification Author of Source Code by Machine Learning Methods." SPIIRAS Proceedings 18, no. 3 (June 4, 2019): 742–66. http://dx.doi.org/10.15622/sp.2019.18.3.741-765.

Full text
Abstract:
The paper is devoted to the problem of determining the author of source code, which is of interest to researchers in information security, computer forensics, assessment of the quality of the educational process, and protection of intellectual property. The paper presents a detailed analysis of modern solutions to the problem. The authors propose two new identification techniques based on machine learning algorithms: one based on a support vector machine, a fast correlation filter and informative features, and one based on a hybrid convolutional recurrent neural network. The experimental database includes samples of source code written in Java, C++, Python, PHP, JavaScript, C, C# and Ruby. The data were obtained using Github, a web service for hosting IT projects. The total number of source-code samples exceeds 150 thousand, the average length of each is 850 characters, and the corpus covers 542 authors. The experiments were conducted with source code written in the most popular programming languages. The accuracy of the developed techniques for different numbers of authors was assessed using 10-fold cross-validation. An additional series of experiments was conducted with between 2 and 50 authors for Java, the most popular programming language. Graphs of the relationship between identification accuracy and corpus size are plotted. The analysis of the results showed that the method based on the hybrid neural network achieves 97% accuracy, at present the best-known result, while the technique based on the support vector machine achieves 96% accuracy. The difference between the results of the hybrid neural network and the support vector machine was approximately 5%.
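The feature side of such pipelines is easy to illustrate: character n-grams are a common stylometric representation for source code. The sketch below substitutes a nearest-centroid classifier for the paper's SVM and hybrid network, purely to keep the example self-contained; all class and variable names are mine.

```python
import numpy as np
from collections import Counter

def char_ngrams(code, n=3):
    """Character n-gram counts, a simple stylometric feature for code."""
    return Counter(code[i:i + n] for i in range(len(code) - n + 1))

class CentroidAuthorshipModel:
    """Toy stand-in for an SVM authorship pipeline: each author is the
    mean of their samples' normalized n-gram vectors; new code is
    assigned to the nearest centroid by dot-product similarity."""

    def fit(self, samples, labels):
        vocab = sorted({g for s in samples for g in char_ngrams(s)})
        self.index = {g: i for i, g in enumerate(vocab)}
        X = np.array([self._vectorize(s) for s in samples])
        self.authors = sorted(set(labels))
        mask = np.array(labels)
        self.centroids = np.array(
            [X[mask == a].mean(axis=0) for a in self.authors])
        return self

    def _vectorize(self, s):
        v = np.zeros(len(self.index))
        for g, c in char_ngrams(s).items():
            if g in self.index:
                v[self.index[g]] = c
        norm = np.linalg.norm(v)
        return v / norm if norm else v

    def predict(self, s):
        sims = self.centroids @ self._vectorize(s)
        return self.authors[int(np.argmax(sims))]
```

Even this crude model separates authors whose surface styles differ (naming conventions, bracket placement); the paper's contribution is making attribution work at 542-author scale.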
APA, Harvard, Vancouver, ISO, and other styles
46

Yan, Zichao, William L. Hamilton, and Mathieu Blanchette. "Graph neural representational learning of RNA secondary structures for predicting RNA-protein interactions." Bioinformatics 36, Supplement_1 (July 1, 2020): i276—i284. http://dx.doi.org/10.1093/bioinformatics/btaa456.

Full text
Abstract:
Abstract Motivation RNA-protein interactions are key effectors of post-transcriptional regulation. Significant experimental and bioinformatics efforts have been expended on characterizing protein binding mechanisms on the molecular level, and on highlighting the sequence and structural traits of RNA that impact the binding specificity for different proteins. Yet our ability to predict these interactions in silico remains relatively poor. Results In this study, we introduce RPI-Net, a graph neural network approach for RNA-protein interaction prediction. RPI-Net learns and exploits a graph representation of RNA molecules, yielding significant performance gains over existing state-of-the-art approaches. We also introduce an approach to rectify an important type of sequence bias caused by the RNase T1 enzyme used in many CLIP-Seq experiments, and we show that correcting this bias is essential in order to learn meaningful predictors and properly evaluate their accuracy. Finally, we provide new approaches to interpret the trained models and extract simple, biologically interpretable representations of the learned sequence and structural motifs. Availability and implementation Source code can be accessed at https://www.github.com/HarveyYan/RNAonGraph. Supplementary information Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
47

Duprat, François, Jean-Luc Ploix, and Gérard Dreyfus. "Can Graph Machines Accurately Estimate 13C NMR Chemical Shifts of Benzenic Compounds?" Molecules 29, no. 13 (July 1, 2024): 3137. http://dx.doi.org/10.3390/molecules29133137.

Full text
Abstract:
In the organic laboratory, the 13C nuclear magnetic resonance (NMR) spectrum of a newly synthesized compound remains an essential step in elucidating its structure. For the chemist, the interpretation of such a spectrum, which is a set of chemical-shift values, is made easier if he/she has a tool capable of predicting with sufficient accuracy the carbon-shift values from the structure he/she intends to prepare. As there are few open-source methods for accurately estimating this property, we applied our graph-machine approach to build models capable of predicting the chemical shifts of carbons. For this study, we focused on benzene compounds, building an optimized model derived from training a database of 10,577 chemical shifts originating from 2026 structures that contain up to ten types of non-carbon atoms, namely H, O, N, S, P, Si, and halogens. It provides a training root-mean-squared relative error (RMSRE) of 0.5%, i.e., a root-mean-squared error (RMSE) of 0.6 ppm, and a mean absolute error (MAE) of 0.4 ppm for estimating the chemical shifts of the 10k carbons. The predictive capability of the graph-machine model is also compared with that of three commercial packages on a dataset of 171 original benzenic structures (1012 chemical shifts). The graph-machine model proves to be very efficient in predicting chemical shifts, with an RMSE of 0.9 ppm, and compares favorably with the RMSEs of 3.4, 1.8, and 1.9 ppm computed with the ChemDraw v. 23.1.1.3, ACD v. 11.01, and MestReNova v. 15.0.1-35756 packages respectively. Finally, a Docker-based tool is proposed to predict the carbon chemical shifts of benzenic compounds solely from their SMILES codes.
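The RMSE and MAE figures quoted (in ppm) are the standard regression metrics for chemical-shift prediction; for reference, they are computed as:

```python
import numpy as np

def shift_errors(predicted, measured):
    """RMSE and MAE (in ppm) between predicted and measured 13C
    chemical shifts, the two accuracy metrics quoted in the abstract."""
    p = np.asarray(predicted, dtype=float)
    m = np.asarray(measured, dtype=float)
    err = p - m
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    return rmse, mae
```

RMSE weights large outliers more heavily than MAE, which is why the abstract reports both (0.6 ppm and 0.4 ppm on the training set).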
APA, Harvard, Vancouver, ISO, and other styles
48

Josipović, Lana, Shabnam Sheikhha, Andrea Guerrieri, Paolo Ienne, and Jordi Cortadella. "Buffer Placement and Sizing for High-Performance Dataflow Circuits." ACM Transactions on Reconfigurable Technology and Systems 15, no. 1 (March 31, 2022): 1–32. http://dx.doi.org/10.1145/3477053.

Full text
Abstract:
Commercial high-level synthesis tools typically produce statically scheduled circuits. Yet, effective C-to-circuit conversion of arbitrary software applications calls for dataflow circuits, as they can handle efficiently variable latencies (e.g., caches), unpredictable memory dependencies, and irregular control flow. Dataflow circuits exhibit an unconventional property: registers (usually referred to as “buffers”) can be placed anywhere in the circuit without changing its semantics, in strong contrast to what happens in traditional datapaths. Yet, although functionally irrelevant, this placement has a significant impact on the circuit’s timing and throughput. In this work, we show how to strategically place buffers into a dataflow circuit to optimize its performance. Our approach extracts a set of choice-free critical loops from arbitrary dataflow circuits and relies on the theory of marked graphs to optimize the buffer placement and sizing. Our performance optimization model supports important high-level synthesis features such as pipelined computational units, units with variable latency and throughput, and if-conversion. We demonstrate the performance benefits of our approach on a set of dataflow circuits obtained from imperative code.
APA, Harvard, Vancouver, ISO, and other styles
49

Yu, Le, Zihang Liu, Tongyu Zhu, Leilei Sun, Bowen Du, and Weifeng Lv. "Predicting Temporal Sets with Simplified Fully Connected Networks." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4835–44. http://dx.doi.org/10.1609/aaai.v37i4.25609.

Full text
Abstract:
Given a sequence of sets, where each set contains an arbitrary number of elements, temporal sets prediction aims to predict which elements will appear in the subsequent set. Existing methods for temporal sets prediction are developed on sophisticated components (e.g., recurrent neural networks, attention or gating mechanisms, and graph neural networks), which inevitably increase the model complexity due to more trainable parameters and higher computational costs. Moreover, the involved nonlinear activation may contribute little or even degrade the performance. In this paper, we present a succinct architecture that is solely built on the Simplified Fully Connected Networks (SFCNs) for temporal sets prediction to bring both effectiveness and efficiency together. In particular, given a user's sequence of sets, we employ SFCNs to derive representations of the user by learning inter-set temporal dependencies, intra-set element relationships, and intra-embedding channel correlations. Two families of general functions are introduced to preserve the permutation-invariant property of each set and the permutation-equivariant property of elements in each set. Moreover, we design a user representations adaptive fusing module to aggregate user representations according to each element for improving the prediction performance. Experiments on four benchmarks show the superiority of our approach over the state-of-the-art under both transductive and inductive settings. We also theoretically and empirically demonstrate that our model has lower space and time complexity than baselines. Codes and datasets are available at https://github.com/yule-BUAA/SFCNTSP.
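The permutation-invariance property required of set functions has a standard sum-decomposition form, rho(sum_i phi(x_i)), familiar from DeepSets; the "two families of general functions" in the abstract generalize this idea. A minimal numeric sketch, where phi and rho are hypothetical maps of my choosing:

```python
import numpy as np

def set_embedding(X, phi, rho):
    """Permutation-invariant set function: summing the elementwise
    features phi(x) makes the result independent of element order,
    while applying phi elementwise is permutation-equivariant."""
    return rho(np.sum([phi(x) for x in X], axis=0))

phi = lambda x: np.tanh(np.array([x, x ** 2]))  # hypothetical per-element map
rho = lambda h: h / (1.0 + np.linalg.norm(h))   # hypothetical readout
```

Because the sum is commutative, reordering the input set cannot change the embedding, which is exactly the property the SFCN architecture must preserve for each set in the sequence.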
APA, Harvard, Vancouver, ISO, and other styles
50

Conti, Roberto, Pierluca D’Adamio, Emanuele Galardi, Enrico Meli, Daniele Nocciolini, Luca Pugi, Andrea Rindi, Giulio Lo Presti, and Stefano Rossin. "Control design, simulation and validation of a turbo-machinery auxiliary plant." Proceedings of the Institution of Mechanical Engineers, Part E: Journal of Process Mechanical Engineering 231, no. 4 (April 15, 2016): 849–63. http://dx.doi.org/10.1177/0954408916644003.

Full text
Abstract:
In the oil and gas industry, the testing of auxiliary lubrication plants represents an important preliminary activity before the whole turbo machinery train (including the auxiliary lubrication plant) can be put in operation. Therefore, the employment of both efficient and accurate plant models becomes mandatory to synthesize satisfactory control strategies both for testing and normal operation purposes. For this reason, this paper focuses on the development of innovative real-time models and control architectures to describe and regulate auxiliary lubrication plants. In particular, according to the Bond-Graph modelling strategy, a novel lumped parameter model of the lube oil unit has been developed to properly optimize the behaviour of this unit if it is controlled. The code has been compiled and uploaded on a commercial real-time platform, employed to control the pressure control valve of the physical plant, for which a new controller has been developed. The comparison between the data obtained from the simulated system and acquired from the physical plant shows good agreement and the good performance and reliability of the proposed model and control strategy. The modelling approach and the control strategy have been developed in collaboration with GE Nuovo Pignone S.p.a. while the experimental data were acquired in a plant located in Ptuj (Slovenia).
APA, Harvard, Vancouver, ISO, and other styles
