To view the other types of publications on this topic, follow this link: Low-code programming.

Journal articles on the topic "Low-code programming"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for your research on the topic "Low-code programming".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and a bibliographic reference to the chosen source will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

Luong, Tran Thanh, and Le My Canh. "JAVASCRIPT ASYNCHRONOUS PROGRAMMING". Hue University Journal of Science: Techniques and Technology 128, no. 2B (16.07.2019): 5–16. http://dx.doi.org/10.26459/hueuni-jtt.v128i2b.5104.

Abstract:
JavaScript has become more and more popular in recent years because of its rich features: it is dynamic, interpreted, and object-oriented with first-class functions. Furthermore, JavaScript is designed with an event-driven, non-blocking I/O model that boosts the performance of the overall application, especially in the case of Node.js. To take advantage of these characteristics, many design patterns that implement asynchronous programming for JavaScript have been proposed. However, choosing the right pattern and implementing good asynchronous source code is a challenge, and can easily lead to a less robust application and low-quality source code. Extending our previous work on exception handling code smells in JavaScript and in JavaScript asynchronous programming with promises, this research studies the impact of three JavaScript asynchronous programming patterns on the quality of source code and applications.
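The contrast between the pattern families this abstract compares (explicit callbacks versus promise/async-await style) can be sketched with Python's asyncio standing in for the JavaScript event loop; the helper names below are illustrative, not from the paper:

```python
import asyncio

# Callback style: the continuation is passed in explicitly, so sequencing
# and error handling end up spread across handler functions.
def double_cb(loop, value, on_done):
    loop.call_soon(on_done, value * 2)

# async/await style: the same step reads sequentially, and exceptions
# propagate with ordinary try/except instead of error callbacks.
async def double(value):
    await asyncio.sleep(0)  # stand-in for a non-blocking I/O wait
    return value * 2

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    double_cb(loop, 21, fut.set_result)  # callback result adapted to a future
    from_callback = await fut
    from_coroutine = await double(21)
    return [from_callback, from_coroutine]

print(asyncio.run(main()))  # prints [42, 42]
```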
2

Vazquez-Vilar, Marta, Diego Orzaez, and Nicola Patron. "DNA assembly standards: Setting the low-level programming code for plant biotechnology". Plant Science 273 (August 2018): 33–41. http://dx.doi.org/10.1016/j.plantsci.2018.02.024.

3

Rabbani, Faqih Salban, and Oscar Karnalim. "Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment". Journal of Information and Organizational Sciences 41, no. 1 (16.06.2017): 105–23. http://dx.doi.org/10.31341/jios.41.1.7.

Abstract:
Although there are various source code plagiarism detection approaches, only a few works focus on low-level representation for deducing similarity; most consider only the lexical token sequence extracted from source code. In our view, low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself: it considers only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach that relies on low-level representation. As a case study, we focus on the .NET programming languages, with the Common Intermediate Language as their low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient than the standard lexical-token approach.
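A minimal sketch of local alignment over low-level instruction sequences, assuming a plain fixed-weight Smith-Waterman scoring rule rather than the adaptive scheme of Lim et al.; the CIL opcode streams here are made up for illustration:

```python
def local_alignment(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman-style local alignment score over token sequences."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(0, diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
            best = max(best, score[i][j])
    return best

# Hypothetical CIL-like opcode streams from two submissions and an unrelated one.
original = ["ldarg.0", "ldarg.1", "add", "stloc.0", "ldloc.0", "ret"]
plagiarised = ["nop", "ldarg.0", "ldarg.1", "add", "stloc.0", "ret"]
unrelated = ["ldstr", "call", "pop", "ret"]

print(local_alignment(original, plagiarised))  # high score: long shared region
print(local_alignment(original, unrelated))    # low score
```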
4

Wang, Zheng Dong, Kai He, Hai Tao Fang, and Ru Xu Du. "Design of Embedded Controller with Flexible Programming for Industrial Robot". Applied Mechanics and Materials 457-458 (October 2013): 1390–95. http://dx.doi.org/10.4028/www.scientific.net/amm.457-458.1390.

Abstract:
This paper presents an embedded controller of low cost and high performance for industrial robots. An ARM microprocessor is chosen as the main controller. Based on a 7-segment cubic spline interpolation algorithm, real-time control of the robot is implemented; the proposed trajectory method generates an S-curve speed profile for the robot's start-stop process. A G-code programming language for the robot's motion control is developed, together with a specially designed editing interface on the PC platform. Covering 2-4 degree-of-freedom robot applications, the editing interface supports editing and compiling G-code, which can be downloaded into the microprocessor over a custom communication protocol. The G-code programming method is easy for non-professional users to learn and use. The paper describes the design and implementation of the controller in detail; it was validated on our SCARA robot and worked reliably.
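To make the G-code idea concrete, a toy parser for the linear-move subset might look as follows (Python used for illustration; the paper's actual G-code dialect and controller protocol are not specified here):

```python
def parse_gcode(lines):
    """Turn 'G0'/'G1' move lines into (command, {axis: value}) tuples."""
    moves = []
    for line in lines:
        line = line.split(";")[0].strip()  # strip comments and whitespace
        if not line:
            continue
        words = line.split()
        cmd, args = words[0].upper(), {}
        for word in words[1:]:
            args[word[0].upper()] = float(word[1:])  # e.g. "X10.5" -> {"X": 10.5}
        moves.append((cmd, args))
    return moves

program = [
    "G0 X0 Y0      ; rapid move to origin",
    "G1 X10.5 Y4.2 ; linear move at feed rate",
]
print(parse_gcode(program))
```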
5

De Pra, Yuri, and Federico Fontana. "Programming Real-Time Sound in Python". Applied Sciences 10, no. 12 (19.06.2020): 4214. http://dx.doi.org/10.3390/app10124214.

Abstract:
For its versatility, Python has become one of the most popular programming languages. In spite of the possibility to straightforwardly link native code with powerful libraries for scientific computing, the use of Python for real-time sound application development is often neglected in favor of alternative programming languages tailored to the digital music domain. This article introduces Python as a real-time software programming tool to interested readers, including Python developers who are new to real time and, conversely, sound programmers who have not yet taken this language into consideration. Cython and Numba are proposed as libraries supporting agile development of efficient software running at machine level. Moreover, it is shown that refactoring a few critical parts of a program with these libraries can dramatically improve the performance of a sound algorithm. Such improvements can be benchmarked directly within Python, thanks to the existence of appropriate code parsing resources. After introducing a simple sound processing example, two algorithms known from the literature are coded to show how Python can be effectively employed to program sound software. Finally, efficiency is discussed mainly in terms of the latency of the resulting applications. Overall, such issues suggest that the use of real-time Python should be limited to the prototyping phase, where the benefits of language flexibility prevail over low-latency requirements, for instance during computer music live performances.
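The benchmark-within-Python workflow the article describes can be sketched with the standard timeit module on a one-pole low-pass filter; the Cython/Numba variants are omitted here, but they would be timed the same way for comparison:

```python
import timeit

def lowpass(signal, alpha=0.1):
    """One-pole low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    y, out = 0.0, []
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

signal = [1.0] * 10_000  # unit step input

# timeit gives the in-Python benchmark the article relies on; a Cython or
# Numba version of lowpass() would be timed identically for comparison.
elapsed = timeit.timeit(lambda: lowpass(signal), number=10)
print(f"pure-Python filter: {elapsed:.4f} s for 10 runs")
print(lowpass([1.0, 1.0]))  # step response: first sample equals alpha
```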
6

Sandewall, Eric. "Knowledge-based systems, Lisp, and very high level implementation languages". Knowledge Engineering Review 7, no. 2 (June 1992): 147–55. http://dx.doi.org/10.1017/s0269888900006263.

Abstract:
It is usually agreed that programming languages for implementing (other) programming languages, or ‘implementation languages’, should be simple low-level languages which are close to the machine code and to the operating system. In this paper it is argued that a very high level implementation language is a good idea, of particular importance for knowledge-based systems, and that Lisp (as a language and as a system) is very well suited to be a very high level implementation language. The significance of special-purpose programming languages is also discussed, and the requirements that they have for a very high level implementation language are considered.
7

Bhatt, Sandeep, Marina Chen, James Cowie, Cheng-Yee Lin, and Pangfeng Liu. "Object-Oriented Support for Adaptive Methods on Parallel Machines". Scientific Programming 2, no. 4 (1993): 179–92. http://dx.doi.org/10.1155/1993/474972.

Abstract:
This article reports on experiments from our ongoing project whose goal is to develop a C++ library which supports adaptive and irregular data structures on distributed memory supercomputers. We demonstrate the use of our abstractions in implementing "tree codes" for large-scale N-body simulations. These algorithms require dynamically evolving treelike data structures, as well as load-balancing, both of which are widely believed to make the application difficult and cumbersome to program for distributed-memory machines. The ease of writing the application code on top of our C++ library abstractions (which themselves are application independent), and the low overhead of the resulting C++ code (over hand-crafted C code) supports our belief that object-oriented approaches are eminently suited to programming distributed-memory machines in a manner that (to the applications programmer) is architecture-independent. Our contribution in parallel programming methodology is to identify and encapsulate general classes of communication and load-balancing strategies useful across applications and MIMD architectures. This article reports experimental results from simulations of half a million particles using multiple methods.
8

Yang, Chang, Hao Li, Peng Gao, and Rong Chun Zhang. "Research on the 3-D Scene Quick Build Method of Virtual City and Implementation Method of Basic GIS Function". Applied Mechanics and Materials 580-583 (July 2014): 2760–64. http://dx.doi.org/10.4028/www.scientific.net/amm.580-583.2760.

Abstract:
Aiming at the common problems of virtual city modelling, such as low efficiency and large amounts of data, this paper proposes a quick method to build a virtual city. The method reads data automatically through AutoCAD VBA programming, generates the VRML code automatically, builds the three-dimensional scene by integrating the VRML code, and realises Web publishing of the virtual city.
9

Patnaik, Archana, and Neelamdhab Padhy. "A Hybrid Approach to Identify Code Smell Using Machine Learning Algorithms". International Journal of Open Source Software and Processes 12, no. 2 (April 2021): 21–35. http://dx.doi.org/10.4018/ijossp.2021040102.

Abstract:
Code smell detection aims to identify bugs that occur during software development; it is the task of identifying design problems. The significant causes of code smells are complexity in code, violation of programming rules, poor modelling, and lack of unit-level testing by the developer. Different open source systems such as JEdit, Eclipse, and ArgoUML are evaluated in this work. After collecting the data, the best features are selected using recursive feature elimination (RFE). In this paper, the authors use different anomaly detection algorithms for efficient recognition of dirty code. The average accuracy values of k-means, GMM, autoencoder, PCA, and Bayesian networks are 98%, 94%, 96%, 89%, and 93%, respectively; k-means clustering is the most suitable algorithm for code smell detection. Experimentally, the authors show that the ArgoUML project performs better than the Eclipse and JEdit projects.
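A minimal sketch of the clustering step, assuming hypothetical per-class metric vectors and a plain hand-rolled k-means rather than the authors' full RFE pipeline:

```python
def kmeans(points, k, iters=10):
    """Plain k-means over metric vectors (a stand-in for the paper's pipeline).

    Deterministic initialisation (first k points) keeps the sketch reproducible.
    """
    centers = list(points[:k])
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [tuple(sum(d) / len(cl) for d in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Hypothetical (complexity, churn) vectors per class: a tight "clean" group
# plus two outliers standing in for smelly code.
metrics = [(2, 1), (3, 1), (2, 2), (3, 2), (20, 15), (22, 14)]
centers, clusters = kmeans(metrics, k=2)
print(sorted(len(c) for c in clusters))  # [2, 4]: smelly outliers vs clean group
```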
10

Alshaye, Ibrahim Abdullah, Nurul Farhana Jumaat, and Zaidatun Tasir. "Programming Skills and the Relation in Fostering Students’ Higher Order Thinking". Asian Social Science 14, no. 11 (22.10.2018): 76. http://dx.doi.org/10.5539/ass.v14n11p76.

Abstract:
Programming skills (PS) refer to the coding and debugging required of those who write a program in any programming language. Coding can be described as the implementation aspect of programming, whereas debugging can broadly be defined as fixing any incorrect code found after running the program's tests. Higher-order thinking skills (HOTs) refer to the top three levels of Bloom's taxonomy: Analysis, Synthesis, and Evaluation. This study aims to determine the relationship between PS and HOTs among secondary students. Many studies indicate that students who attend programming courses for the first time have low levels of performance in PS. Coding and debugging skills reflect higher-order thinking levels; therefore, an objective of this study was to investigate the effect of coding and debugging skills on students' HOTs. The benefit of having PS is that learners are able to achieve HOTs. These relationships may be explained by the fact that programmers need to apply all of these HOTs throughout the three phases of the programming process. Students with low levels of PS are able to achieve the analysis level, students with moderate levels of PS the synthesis level, and students with high levels of PS the evaluation level.
11

Varghese, Anish, Bob Edwards, Gaurav Mitra, and Alistair P. Rendell. "Programming the Adapteva Epiphany 64-core network-on-chip coprocessor". International Journal of High Performance Computing Applications 31, no. 4 (27.08.2015): 285–302. http://dx.doi.org/10.1177/1094342015599238.

Abstract:
Energy efficiency is the primary impediment in the path to exascale computing. Consequently, the high-performance computing community is increasingly interested in low-power high-performance embedded systems as building blocks for large-scale high-performance systems. The Adapteva Epiphany architecture integrates low-power RISC cores on a 2D mesh network and promises up to 70 GFLOPS/Watt of theoretical performance. However, with just 32 KB of memory per eCore for storing both data and code, programming the Epiphany system presents significant challenges. In this paper we evaluate the performance of a 64-core Epiphany system with a variety of basic compute and communication micro-benchmarks. Further, we implemented two well known application kernels, 5-point star-shaped heat stencil with a peak performance of 65.2 GFLOPS and matrix multiplication with 65.3 GFLOPS in single precision across 64 Epiphany cores. We discuss strategies for implementing high-performance computing application kernels on such memory constrained low-power devices and compare the Epiphany with competing low-power systems. With future Epiphany revisions expected to house thousands of cores on a single chip, understanding the merits of such an architecture is of prime importance to the exascale initiative.
12

Couder-Castañeda, C., H. Barrios-Piña, I. Gitler, and M. Arroyo. "Performance of a Code Migration for the Simulation of Supersonic Ejector Flow to SMP, MIC, and GPU Using OpenMP, OpenMP+LEO, and OpenACC Directives". Scientific Programming 2015 (2015): 1–20. http://dx.doi.org/10.1155/2015/739107.

Abstract:
A serial source code for simulating a supersonic ejector flow is accelerated using parallelization based on OpenMP and OpenACC directives. The purpose is to reduce development costs and to simplify maintenance of the application, given the complexity of the FORTRAN source code. This research follows well-proven strategies in order to obtain the best performance in both OpenMP and OpenACC. OpenMP has become the programming standard for scientific multicore software, and OpenACC is a true alternative for graphics accelerators without the need to program low-level kernels. The OpenMP strategies are oriented towards reducing the creation of parallel regions, task creation to handle boundary conditions, and nested control of the time loop for programming in offload mode, specifically for the Xeon Phi. In OpenACC, the strategy focuses on maintaining the data regions among executions of the kernels. Experiments for performance and validation are conducted on a 12-core Xeon CPU, a Xeon Phi 5110p, and a Tesla C2070, with the best performance obtained from the latter. The Tesla C2070 presented acceleration factors of 9.86X, 1.6X, and 4.5X compared against the serial version on CPU, the 12-core Xeon CPU, and the Xeon Phi, respectively.
13

Viklund, Lars, and Peter Fritzson. "ObjectMath – An Object-Oriented Language and Environment for Symbolic and Numerical Processing in Scientific Computing". Scientific Programming 4, no. 4 (1995): 229–50. http://dx.doi.org/10.1155/1995/829697.

Abstract:
ObjectMath is a language for scientific computing that integrates object-oriented constructs with features for symbolic and numerical computation. Using ObjectMath, complex mathematical models may be implemented in a natural way. The ObjectMath programming environment provides tools for generating efficient numerical code from such models. Symbolic computation is used to rewrite and simplify equations before code is generated. One novelty of the ObjectMath approach is that it provides a common language and an integrated environment for this kind of mixed symbolic/numerical computation. The motivation for this work is the current low-level state of the art in programming for scientific computing. Much numerical software is still being developed the traditional way in Fortran. This is especially true in application areas such as machine elements analysis, where complex nonlinear problems are the norm. We believe that tools like ObjectMath can increase productivity and quality, thus enabling users to solve problems that are too complex to handle with traditional tools.
14

Al-Batah, Mohammad Subhi, Nouh Alhindawi, Rami Malkawi, and Ahmad Al Zuraiqi. "Hybrid Technique for Complexity Analysis for Java Code". International Journal of Software Innovation 7, no. 3 (July 2019): 118–33. http://dx.doi.org/10.4018/ijsi.2019070107.

Abstract:
Software complexity can be defined as the degree of difficulty in the analysis, testing, design, and implementation of software. Typically, reducing model complexity has a significant impact on maintenance activities. Many metrics have been used to measure the complexity of source code, such as Halstead, McCabe Cyclomatic, Lines of Code, and the Maintainability Index. This article proposes a hybrid module that combines two theories, Halstead and McCabe, to analyze code written in Java. The module provides a mechanism to better evaluate the proficiency level of programmers, and also provides a tool that enables managers to evaluate programming levels and their improvement over time, by revealing differences between levels of complexity in the code. If the program's complexity level is low, the programmer's professionalism level is high; conversely, if the program's complexity level is high, the programmer's professionalism level is likely low. The results of the conducted experiments show that the proposed approach gives a very accurate evaluation of the systems under study.
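The paper targets Java, but the McCabe half of such a hybrid is easy to illustrate with Python's own ast module; counting decision points is an approximation, and the node list below is a simplification:

```python
import ast

# Node types counted as decision points for a McCabe-style approximation;
# Halstead operators and operands could be tallied from the same AST walk.
DECISIONS = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source):
    """Approximate McCabe complexity: 1 + number of branch points."""
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(ast.parse(source)))

simple = "def f(x):\n    return x + 1\n"
branchy = (
    "def g(x):\n"
    "    if x > 0:\n"
    "        for i in range(x):\n"
    "            if i % 2 == 0 and i > 2:\n"
    "                x += i\n"
    "    return x\n"
)
print(cyclomatic_complexity(simple), cyclomatic_complexity(branchy))  # 1 5
```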
15

Zhereb, K. A. "Improving performance of Python code using rewriting rules technique". Problems in Programming, no. 2-3 (September 2020): 115–25. http://dx.doi.org/10.15407/pp2020.02-03.115.

Abstract:
Python is a popular programming language used in many areas, but its performance is significantly lower than many compiled languages. We propose an approach to increasing performance of Python code by transforming fragments of code to more efficient languages such as Cython and C++. We use high-level algebraic models and the rewriting rules technique for semi-automated code transformation. Performance-critical fragments of code are transformed into a low-level syntax model using a Python parser. Then this low-level model is further transformed into a high-level algebraic model that is language-independent and easier to work with. The transformation is automated using rewriting rules implemented in the Termware system. We also improve the constructed high-level model by deducing additional information such as data types and constraints. From this enhanced high-level model of code we generate equivalent fragments of code using code generators for the Cython and C++ languages. Cython code is seamlessly integrated with Python code, and for C++ code we generate a small utility file in Cython that also integrates this code with Python. This way, the bulk of program code can stay in Python and benefit from its facilities, but performance-critical fragments of code are transformed into more efficient equivalents, improving the performance of the resulting program. Comparison of execution times between the initial version of Python code, different versions of transformed code, and automatic tools such as the Cython compiler and PyPy demonstrates the benefits of our approach: we have achieved performance gains of over 50x compared to the initial version written in Python, and over 2x compared to the best automatic tool we have tested.
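A toy version of the parse-transform-generate pipeline, assuming a trivial direct translation instead of Termware's algebraic models and rewriting rules:

```python
import ast

def to_cython(source, arg_types):
    """Toy parse-and-generate step: emit a Cython-style 'cpdef' signature with
    declared C types for a single Python function.  The return type is fixed
    to double for this sketch; real type inference is part of the pipeline."""
    func = ast.parse(source).body[0]
    assert isinstance(func, ast.FunctionDef)
    params = ", ".join(f"{arg_types[a.arg]} {a.arg}" for a in func.args.args)
    body = ast.unparse(ast.Module(body=func.body, type_ignores=[]))
    indented = "\n".join("    " + line for line in body.splitlines())
    return f"cpdef double {func.name}({params}):\n{indented}"

python_src = "def scale(x, n):\n    return x * n\n"
print(to_cython(python_src, {"x": "double", "n": "int"}))
```

The generated string would then be compiled by Cython and imported back into the Python program, which is the integration step the abstract describes.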
16

Chorley, Martin J., David W. Walker, and Martyn F. Guest. "Hybrid Message-Passing and Shared-Memory Programming in a Molecular Dynamics Application On Multicore Clusters". International Journal of High Performance Computing Applications 23, no. 3 (02.06.2009): 196–211. http://dx.doi.org/10.1177/1094342009106188.

Abstract:
Hybrid programming, whereby shared-memory and message-passing programming techniques are combined within a single parallel application, has often been discussed as a method for increasing code performance on clusters of symmetric multiprocessors (SMPs). This paper examines whether the hybrid model brings any performance benefits for clusters based on multicore processors. A molecular dynamics application has been parallelized using both MPI and hybrid MPI/OpenMP programming models. The performance of this application has been examined on two high-end multicore clusters using both Infiniband and Gigabit Ethernet interconnects. The hybrid model has been found to perform well on the higher-latency Gigabit Ethernet connection, but offers no performance benefit on low-latency Infiniband interconnects. The changes in performance are attributed to the differing communication profiles of the hybrid and MPI codes.
17

Haveraaen, Magne. "Machine and Collection Abstractions for User-Implemented Data-Parallel Programming". Scientific Programming 8, no. 4 (2000): 231–46. http://dx.doi.org/10.1155/2000/485607.

Abstract:
Data parallelism has appeared as a fruitful approach to the parallelisation of compute-intensive programs. Data parallelism has the advantage of mimicking the sequential (and deterministic) structure of programs as opposed to task parallelism, where the explicit interaction of processes has to be programmed. In data parallelism data structures, typically collection classes in the form of large arrays, are distributed on the processors of the target parallel machine. Trying to extract distribution aspects from conventional code often runs into problems with a lack of uniformity in the use of the data structures and in the expression of data dependency patterns within the code. Here we propose a framework with two conceptual classes, Machine and Collection. The Machine class abstracts hardware communication and distribution properties. This gives a programmer high-level access to the important parts of the low-level architecture. The Machine class may readily be used in the implementation of a Collection class, giving the programmer full control of the parallel distribution of data, as well as allowing normal sequential implementation of this class. Any program using such a collection class will be parallelisable, without requiring any modification, by choosing between sequential and parallel versions at link time. Experiments with a commercial application, built using the Sophus library which uses this approach to parallelisation, show good parallel speed-ups, without any adaptation of the application program being needed.
18

Steuwer, Michel, Michael Haidl, Stefan Breuer, and Sergei Gorlatch. "High-Level Programming of Stencil Computations on Multi-GPU Systems Using the SkelCL Library". Parallel Processing Letters 24, no. 03 (September 2014): 1441005. http://dx.doi.org/10.1142/s0129626414410059.

Abstract:
The implementation of stencil computations on modern, massively parallel systems with GPUs and other accelerators currently relies on manually-tuned coding using low-level approaches like OpenCL and CUDA. This makes development of stencil applications a complex, time-consuming, and error-prone task. We describe how stencil computations can be programmed in our SkelCL approach that combines high-level programming abstractions with competitive performance on multi-GPU systems. SkelCL extends the OpenCL standard by three high-level features: 1) pre-implemented parallel patterns (a.k.a. skeletons); 2) container data types for vectors and matrices; 3) automatic data (re)distribution mechanism. We introduce two new SkelCL skeletons which specifically target stencil computations – MapOverlap and Stencil – and we describe their use for particular application examples, discuss their efficient parallel implementation, and report experimental results on systems with multiple GPUs. Our evaluation of three real-world applications shows that stencil code written with SkelCL is considerably shorter and offers competitive performance to hand-tuned OpenCL code.
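The MapOverlap skeleton can be sketched sequentially in Python; this ignores SkelCL's OpenCL code generation and multi-GPU data distribution and only shows the programming model:

```python
def map_overlap(f, data, overlap, pad):
    """Sequential sketch of a MapOverlap-style skeleton: each output element
    is computed from a neighbourhood of the input, with out-of-range
    accesses replaced by a padding value."""
    padded = [pad] * overlap + list(data) + [pad] * overlap
    return [f(padded[i:i + 2 * overlap + 1]) for i in range(len(data))]

# 3-point averaging stencil (overlap of one element on each side).
smooth = map_overlap(lambda w: sum(w) / len(w), [0.0, 3.0, 6.0, 3.0, 0.0],
                     overlap=1, pad=0.0)
print(smooth)  # [1.0, 3.0, 4.0, 3.0, 1.0]
```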
19

Cheng, Xiao, Haoyu Wang, Jiayi Hua, Guoai Xu, and Yulei Sui. "DeepWukong". ACM Transactions on Software Engineering and Methodology 30, no. 3 (May 2021): 1–33. http://dx.doi.org/10.1145/3436877.

Abstract:
Static bug detection has shown its effectiveness in detecting well-defined memory errors, e.g., memory leaks, buffer overflows, and null dereference. However, modern software systems have a wide variety of vulnerabilities. These vulnerabilities are extremely complicated with sophisticated programming logic, and these bugs are often caused by different bad programming practices, challenging existing bug detection solutions. It is hard and labor-intensive to develop precise and efficient static analysis solutions for different types of vulnerabilities, particularly for those that may not have a clear specification as the traditional well-defined vulnerabilities. This article presents DeepWukong, a new deep-learning-based embedding approach to static detection of software vulnerabilities for C/C++ programs. Our approach makes a new attempt by leveraging advanced recent graph neural networks to embed code fragments in a compact and low-dimensional representation, producing a new code representation that preserves high-level programming logic (in the form of control- and data-flows) together with the natural language information of a program. Our evaluation studies the top 10 most common C/C++ vulnerabilities during the past 3 years. We have conducted our experiments using 105,428 real-world programs by comparing our approach with four well-known traditional static vulnerability detectors and three state-of-the-art deep-learning-based approaches. The experimental results demonstrate the effectiveness of our research and have shed light on the promising direction of combining program analysis with deep learning techniques to address the general static code analysis challenges.
20

Papadimitriou, Stergios, and Lefteris Moussiades. "The design of JVM and native libraries in ScalaLab for efficient scientific computation". International Journal of Modeling, Simulation, and Scientific Computing 09, no. 05 (October 2018): 1850037. http://dx.doi.org/10.1142/s179396231850037x.

Abstract:
ScalaLab is a MATLAB-like environment for the Java Virtual Machine (JVM). ScalaLab is based on the Scala programming language. It utilizes an extensive set of Java and Scala scientific libraries and also has access to many native C/C++ scientific libraries by using mainly the Java Native Interface (JNI). The performance of the JVM platform is continuously improved at a fast pace. Today JVM can effectively support demanding high-performance computing and scales well on multicore platforms. However, sometimes optimized native C/C++ code can yield even better performance, by exploiting low-level programming issues, such as optimization of caches and architecture-dependent instruction sets. The present work reports some of the experiences that we gained with experiments with both Just in Time (JIT) JVM code and native code. We compare some aspects of Scala and C++ that concern the requirements of scientific computing and highlight some strong features of the Scala language that facilitate the implementation of scientific scripting. This paper describes how ScalaLab tries to combine the best features of the JVM with those of the C/C++ technology, in order to implement an effective scientific computing environment.
21

Kempka, Thomas. "Verification of a Python-based TRANsport Simulation Environment for density-driven fluid flow and coupled transport of heat and chemical species". Advances in Geosciences 54 (14.10.2020): 67–77. http://dx.doi.org/10.5194/adgeo-54-67-2020.

Abstract:
Numerical simulation has become an indispensable tool for improving the understanding of coupled processes in the geological subsurface and its utilisation. However, most of the available open-source and commercial modelling codes do not come with flexible chemical modules, or simply do not offer a straightforward way to couple third-party chemical libraries. For that reason, the simple and efficient TRANsport Simulation Environment (TRANSE) has been developed, based on the Finite Difference Method, to solve the density-driven formulation of the Darcy flow equation coupled with the equations for transport of heat and chemical species. Simple explicit, weighted semi-implicit, and fully-implicit numerical schemes are available for the solution of the system of partial differential equations, whereby the entire numerical code comprises fewer than 1000 lines of Python. A diffusive flux-corrected advection scheme can be employed in addition to pure upwinding to minimise numerical diffusion in advection-dominated transport problems. The objective of the present study is to verify the numerical code implementation by means of benchmarks for density-driven fluid flow and advection-dominated transport. In summary, TRANSE exhibits very good agreement with established numerical simulation codes for the benchmarks investigated here; consequently, its applicability to density-driven flow and transport problems is proven. The main advantage of the presented code is that complex problem-specific couplings between flow, transport, and chemical reactions become feasible without substantial investment in code development in a low-level programming language, relying instead on the easy-to-read and easy-to-learn Python programming language.
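The simplest member of the scheme family the abstract mentions, an explicit finite-difference step for 1-D diffusion, can be sketched in a few lines of Python (a generic textbook scheme, not TRANSE's actual code):

```python
def diffuse(u, d, dt, dx, steps):
    """Explicit finite-difference scheme for 1-D diffusion u_t = d * u_xx
    with fixed (Dirichlet) boundary values."""
    r = d * dt / dx ** 2
    assert r <= 0.5, "explicit scheme is unstable for r > 1/2"
    for _ in range(steps):
        u = ([u[0]]
             + [u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
                for i in range(1, len(u) - 1)]
             + [u[-1]])
    return u

# A unit spike between cold boundaries spreads and decays over time.
profile = diffuse([0.0, 0.0, 1.0, 0.0, 0.0], d=1.0, dt=0.1, dx=1.0, steps=50)
print([round(v, 3) for v in profile])
```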
APA, Harvard, Vancouver, ISO, and other citation styles
22

Zhang, Lei, Da Zhao Zhu and Zhi Yong An. "Testing System for Low Power Laser Beam Divergence Angle of Visible - Near Infrared Optoelectronic Sights Based on Virtual Instrument". Key Engineering Materials 552 (May 2013): 384–88. http://dx.doi.org/10.4028/www.scientific.net/kem.552.384.

Full text of the source
Annotation:
Considering the current state of testing the beam divergence angle of low-power laser indicators, this paper develops a testing system for the beam divergence angle of low-power laser indicators based on virtual instruments. The system uses a data acquisition card for displacement data acquisition and motion control, and a video card for image acquisition. The host computer program, including the man-machine interface and the functional code, is developed in the LabVIEW graphical programming language; it realizes data acquisition, processing, display, storage, and motion control, and keeps the system accurate and efficient.
APA, Harvard, Vancouver, ISO, and other citation styles
23

Li, Xusheng, Zhisheng Hu, Haizhou Wang, Yiwei Fu, Ping Chen, Minghui Zhu and Peng Liu. "DeepReturn: A deep neural network can learn how to detect previously-unseen ROP payloads without using any heuristics". Journal of Computer Security 28, No. 5 (28.09.2020): 499–523. http://dx.doi.org/10.3233/jcs-191368.

Full text of the source
Annotation:
Return-oriented programming (ROP) is a code reuse attack that chains short snippets of existing code to perform arbitrary operations on target machines. Existing detection methods against ROP exhibit unsatisfactory detection accuracy and/or high runtime overhead. In this paper, we present DeepReturn, which innovatively combines address-space-layout-guided disassembly and deep neural networks to detect ROP payloads. The disassembler treats application input data as code pointers and aims to find any potential gadget chains, which are then classified by a deep neural network as benign or malicious. Our experiments show that DeepReturn has a high detection rate (99.3%) and a very low false positive rate (0.01%). DeepReturn successfully detects all of the 100 real-world ROP exploits that were collected in the wild, created manually, or created by ROP exploit generation tools. DeepReturn is non-intrusive and does not incur any runtime overhead to the protected program.
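The disassembly stage described above — treating attacker-controlled input as candidate code pointers — can be caricatured in a few lines. The sketch below is a deliberately tiny, fabricated illustration (toy "code" bytes, a toy gadget notion of "short snippet reaching a `ret`"), not DeepReturn's actual disassembler.

```python
# Toy sketch of the gadget-scanning idea: treat 4-byte words in the input as
# candidate pointers into a code region, and keep those that land on a short
# byte snippet reaching 0xC3 (x86 `ret`). CODE bytes are fabricated.
import struct

CODE_BASE = 0x400000
CODE = bytes([0x58, 0xC3,          # pop rax; ret  -> gadget at offset 0
              0x90, 0x90, 0x90,    # nops
              0x5F, 0xC3])         # pop rdi; ret  -> gadget at offset 5

def gadget_candidates(payload, max_len=8):
    """Return offsets in CODE that payload words point at and that reach a ret."""
    hits = []
    for i in range(0, len(payload) - 3, 4):
        addr = struct.unpack_from("<I", payload, i)[0]
        off = addr - CODE_BASE
        if 0 <= off < len(CODE) and 0xC3 in CODE[off:off + max_len]:
            hits.append(off)
    return hits

payload = struct.pack("<III", CODE_BASE + 0, 0xDEADBEEF, CODE_BASE + 5)
```

In DeepReturn, the instruction sequences recovered this way are fed to a neural classifier rather than matched by hand-written heuristics like this one.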
APA, Harvard, Vancouver, ISO, and other citation styles
24

Jain, Manish and Dinesh Gopalani. "Aspect-Oriented Approach for Testing Software Applications and Automatic Aspect Creation". International Journal of Software Engineering and Knowledge Engineering 29, No. 10 (October 2019): 1379–402. http://dx.doi.org/10.1142/s0218194019500438.

Full text of the source
Annotation:
Existing software testing techniques can each be used to perform only a particular type of testing, and proficiency is moreover required to write automation test scripts with them. This paper proposes a novel software testing approach using Aspect-Oriented Programming (AOP) that alone suffices for carrying out most types of software testing and thus obviates the need for distinct tools for different types of testing. Nevertheless, AOP is a new programming paradigm, and not all testers are proficient in it. Hence, a domain-specific language named Testing Aspect Generator Language (TAGL) was developed, which has a very low learning curve. Using TAGL, testers can write testing code in the form of natural-language-like statements. A lexical analyzer and parser, written using lex and yacc, then convert the TAGL statements into actual testing code in the form of AOP. The proposed approach was applied to the testing of widely used open-source projects, and notable bugs were detected in them. A detailed comparison of how our approach is more effective than conventional testing techniques is provided.
APA, Harvard, Vancouver, ISO, and other citation styles
25

Boutnaru, Shlomi and Arnon Hershkovitz. "Software Quality and Security in Teachers' and Students' Codes When Learning a New Programming Language". Interdisciplinary Journal of e-Skills and Lifelong Learning 11 (2015): 123–47. http://dx.doi.org/10.28945/2292.

Full text of the source
Annotation:
In recent years, schools (as well as universities) have added cyber security to their computer science curricula. This topic is still new for most current teachers, who would normally have a standard computer science background; the teachers are therefore trained and then teach their students what they have just learned. In order to explore differences in the two populations' learning, we compared measures of software quality and security between high-school teachers and students. We collected 109 source files, written in Python by 18 teachers and 31 students, and engineered 32 features based on common standards for software quality (PEP 8) and security (derived from the CERT Secure Coding Standards). We used a multi-view, data-driven approach, (a) using hierarchical clustering to partition the population bottom-up into groups based on their code-related features and (b) building a decision tree model that predicts whether a student or a teacher wrote a given code (resulting in a LOOCV kappa of 0.751). Overall, our findings suggest that the teachers' codes have better quality than the students' – with a sub-group of the teachers, mostly males, demonstrating better coding than their peers and the students – and that the students' codes are slightly more secure than the teachers' codes (although both populations show very low security levels). The findings imply that teachers might benefit from their prior knowledge and experience, but also emphasize the lack of continuous involvement of some of the teachers with code-writing. The findings therefore shed light on computer science teachers as lifelong learners, and highlight the difference between quality and security in today's programming paradigms. Implications of these findings are discussed.
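The feature-engineering step described above — measuring PEP 8-style quality properties of source files — can be sketched in miniature. The features below are illustrative stand-ins, not the authors' actual 32 features.

```python
# Sketch of PEP 8-style quality features extracted from a source file,
# loosely in the spirit of the paper's feature engineering (the concrete
# features here are illustrative, not the authors' 32).
def quality_features(source: str) -> dict:
    lines = source.splitlines()
    return {
        "n_lines": len(lines),
        "long_lines": sum(1 for l in lines if len(l) > 79),        # PEP 8 E501
        "trailing_ws": sum(1 for l in lines if l != l.rstrip()),   # PEP 8 W291
        "tab_indent": sum(1 for l in lines if l.startswith("\t")), # PEP 8 W191
    }

sample = "def f(x):\n\treturn x \n" + "y = " + "1 + " * 30 + "1\n"
```

Feature vectors like this, computed per file, are what clustering and decision-tree models can then be trained on.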
APA, Harvard, Vancouver, ISO, and other citation styles
26

Mehr, S. Hessam M., Matthew Craven, Artem I. Leonov, Graham Keenan and Leroy Cronin. "A universal system for digitization and automatic execution of the chemical synthesis literature". Science 370, No. 6512 (01.10.2020): 101–8. http://dx.doi.org/10.1126/science.abc2986.

Full text of the source
Annotation:
Robotic systems for chemical synthesis are growing in popularity but can be difficult to run and maintain because of the lack of a standard operating system or of the capacity for direct access to the literature through natural language processing. Here we show an extendable chemical execution architecture that can be populated by automatically reading the literature, leading to a universal autonomous workflow. The robotic synthesis code can be corrected in natural language without any programming knowledge and, because of the standard, is hardware independent. This chemical code can then be combined with a graph describing the hardware modules and compiled into platform-specific, low-level robotic instructions for execution. We showcase automated syntheses of 12 compounds from the literature, including the analgesic lidocaine, the Dess-Martin periodinane oxidation reagent, and the fluorinating agent AlkylFluor.
APA, Harvard, Vancouver, ISO, and other citation styles
27

Mhaidat, Khaldoon M., Mohammad I. Alali and Inad A. Aljarrah. "Efficient Low-Power Compact Hardware Units for Real-Time Image Processing". International Journal of Information Technology and Web Engineering 9, No. 4 (October 2014): 24–39. http://dx.doi.org/10.4018/ijitwe.2014100103.

Full text of the source
Annotation:
This paper presents efficient, low-power, compact hardware designs for common image processing functions, including the median filter, smoothing filter, motion blurring, emboss filter, sharpening, Sobel, Roberts, and Canny edge detection. The designs were described in Verilog HDL, and the Xilinx ISE design suite was used for code simulation, synthesis, implementation, and chip programming. The designs were all evaluated in terms of speed, area (number of LUTs and registers), and power consumption. Post-placement-and-routing (post-PAR) results show that they need very little area and consume very little power while achieving good frames-per-second rates even for high-resolution HDTV frames. This makes them suitable for real-time applications with stringent area and power budgets.
APA, Harvard, Vancouver, ISO, and other citation styles
28

Malviya, Vikas K. and Prashant K. Jain. "Computations for Simplification of Canned Cycles in CNC Programming for Turning Operations". Advanced Materials Research 383-390 (November 2011): 965–71. http://dx.doi.org/10.4028/www.scientific.net/amr.383-390.965.

Full text of the source
Annotation:
Canned cycles are predefined machine instructions stored permanently in the machine controller. They are used when writing a part program to perform machining operations of a repetitive nature: whenever the G and M codes for these canned cycles appear in a part program, the machine controller calls the stored instructions and executes them. However, not all CNC machines can run part programs with canned cycles, owing to a lack of computational facilities. The utility developed in this work allows part programmers to write part programs using canned cycles and later convert them into simplified programs that run on low-cost CNC machines. The present work deals with the computations required to convert a canned cycle into linear interpolation based on the machining parameters assigned in that canned cycle. The developed code reads the CNC part program, extracts the desired points from the canned cycles present, performs the required geometric computations to convert them into a simple part program with the canned cycles eliminated, and writes the result to another text file in the form of a part program. The developed system is user friendly and is available online at http://virtualcnc.iiitdmj.ac.in. It employs a web-based client-server architecture that lets users send their own CNC part programs with canned cycles to the server via the internet and receive the simplified part program, containing only linear interpolation, at their end. The developed utility can therefore also serve as an interface/exchange program for part programs between CNC machines of different makes.
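The expansion described above can be illustrated with a single canned cycle. The sketch below expands a hypothetical G81-style drilling cycle into plain rapid/linear moves; the parameters and G-code dialect are simplified for illustration and are not the paper's actual conversion rules.

```python
# Illustrative expansion of a G81-style drilling canned cycle into plain
# rapid (G00) and linear (G01) moves, in the spirit of the paper's utility.
# Cycle parameters and G-code dialect are simplified for the sketch.
def expand_g81(x, y, r_plane, z_depth, feed):
    """Replace one G81 drill cycle with equivalent G00/G01 blocks."""
    return [
        f"G00 X{x} Y{y}",           # rapid to the hole position
        f"G00 Z{r_plane}",          # rapid down to the retract plane
        f"G01 Z{z_depth} F{feed}",  # feed to drilling depth
        f"G00 Z{r_plane}",          # rapid retract
    ]

blocks = expand_g81(x=10.0, y=5.0, r_plane=2.0, z_depth=-8.0, feed=100)
```

A converter of this kind walks the part program, replaces each canned-cycle block with its expansion, and leaves all other blocks untouched.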
APA, Harvard, Vancouver, ISO, and other citation styles
29

Chen, Xiao Dong, Hong Xi Wang and Qi Cheng Lao. "Development of Calculating Grinding Wheel-Profile Software Applied to the Sharping Grinder Machine". Advanced Materials Research 411 (November 2011): 135–39. http://dx.doi.org/10.4028/www.scientific.net/amr.411.135.

Full text of the source
Annotation:
The grinding wheel profile is usually formed by adjusting the angle of a cylindrical profile model when regrinding a gear hob with a large helix flute. This method requires repeated trial cutting, which results in low efficiency. To improve processing efficiency, a mathematical model of the grinding wheel profile was established according to the forming principle of the rake face of the gear hob, allowing the wheel profile to be calculated accurately. An automatic programming system that calculates the grinding wheel profile and automatically generates NC code was developed with the VC++ development tool.
APA, Harvard, Vancouver, ISO, and other citation styles
30

Liu, Xiao Fei, Shu Mei Cui, Wei Feng Gao, Shu Mei Cui and Shi Ming Xu. "Software Development of On-Board Power Electronics Equipment Using Model-Based Design Methodology". Applied Mechanics and Materials 494-495 (February 2014): 1524–28. http://dx.doi.org/10.4028/www.scientific.net/amm.494-495.1524.

Full text of the source
Annotation:
The Model-Based Design (MBD) method is applied to the software development of power electronics devices to overcome the low efficiency of manual programming. The concept of Model-Based Design and several common development platforms are introduced. Based on tools in Simulink, on-board charger control software is developed; the hardware platform, model building, model validation, and automatic code generation are also described. Experiments carried out on the hardware platform verify the correctness and feasibility of the generated code. These results are helpful for the software development of power electronics equipment.
APA, Harvard, Vancouver, ISO, and other citation styles
31

Fourtounis, Georgios, Nikolaos Papaspyrou and Panagiotis Theofilopoulos. "Modular polymorphic defunctionalization". Computer Science and Information Systems 11, No. 4 (2014): 1417–34. http://dx.doi.org/10.2298/csis130923030f.

Full text of the source
Annotation:
Defunctionalization is generally considered a whole-program transformation and thus incompatible with separate compilation. In this paper, we formalize a modular variant of defunctionalization which can support separate compilation for a functional programming language with parametric polymorphism. Our technique allows modules in a Haskell-like language to be separately defunctionalized and compiled, then linked together to generate an executable program. We provide a prototype implementation of our modular defunctionalization technique and discuss our experience of applying it to compile a large subset of Haskell to low-level C code, based on the intensional transformation.
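The core idea of defunctionalization — replacing first-class functions with first-order data plus a dispatching `apply` — can be shown in miniature. The rendering below uses Python for illustration; the paper performs the transformation on a typed Haskell-like language.

```python
# Minimal defunctionalization sketch: the anonymous functions a higher-order
# program passes around become tagged data constructors, and a single
# `apply` function dispatches on the tag.
#
# Original higher-order program (for comparison):
#   compose = lambda f, g: lambda x: f(g(x))
#   inc, dbl = lambda x: x + 1, lambda x: 2 * x

INC, DBL, COMPOSE = "Inc", "Dbl", "Compose"

def apply(closure, x):
    tag = closure[0]
    if tag == INC:
        return x + 1
    if tag == DBL:
        return 2 * x
    if tag == COMPOSE:                 # ("Compose", f, g) captures f and g
        _, f, g = closure
        return apply(f, apply(g, x))
    raise ValueError(tag)

h = (COMPOSE, (INC,), (DBL,))          # inc . dbl, represented as plain data
```

Because closures are now ordinary data, they can cross module boundaries; the modular variant in the paper is essentially about splitting the `apply` dispatcher across separately compiled modules.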
APA, Harvard, Vancouver, ISO, and other citation styles
32

Zou, Jiu Peng, Yu Qiang Dai, Xue Wu Liu, Li Ming Zhang and Feng Xia Liu. "The Programming Algorithm Based on Embedded System for the Output Conversion of the Humidity & Temperature Sensor SHTxx". Advanced Engineering Forum 6-7 (September 2012): 294–98. http://dx.doi.org/10.4028/www.scientific.net/aef.6-7.294.

Full text of the source
Annotation:
To prevent the conversion of the digital humidity and temperature sensor SHTxx's output values from consuming a large amount of storage and operation time, a new group of conversion polynomials and a corresponding programming algorithm were derived and tested. The polynomials are exactly equivalent to the conversion formulas provided by the manufacturer, but contain only binary fixed-point integers, fractional parts, and powers of two. Using fixed-point calculations and shift operations instead of floating-point calculations reduces the program code by 60 percent and makes computation nearly 4 times faster than the original algorithm. Furthermore, a fast, unified CRC algorithm for the sensor's read-out data is proposed. The novel programming algorithm simplifies the output conversion and thus paves the way for low-end embedded applications of the SHTxx.
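The fixed-point idea can be sketched on a linear conversion polynomial. The coefficients below are illustrative placeholders, not the SHTxx datasheet values, and the sketch is written in Python for readability even though the target of such an algorithm is an integer-only microcontroller.

```python
# Sketch of the fixed-point idea: evaluate a linear sensor-conversion
# polynomial y = a + b*raw using integers scaled by 2^16 and shifts,
# instead of floating point. Coefficients are illustrative, not the
# SHTxx datasheet values.
SHIFT = 16
A_FIX = int(-4.0 * (1 << SHIFT))        # a = -4.0 in Q16 fixed point
B_FIX = int(0.035 * (1 << SHIFT))       # b = 0.035 in Q16 fixed point

def convert_fixed(raw: int) -> float:
    """Integer-only evaluation; one final shift recovers the scaled result."""
    acc = A_FIX + B_FIX * raw            # Q16 accumulator, integer ops only
    return acc / (1 << SHIFT)            # scale back (stays fixed-point on an MCU)

def convert_float(raw: int) -> float:
    return -4.0 + 0.035 * raw
```

On an MCU, the division by `1 << SHIFT` would itself be a right shift, so the whole conversion needs no floating-point unit; the Q16 quantization of the coefficients introduces only a small, bounded error.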
APA, Harvard, Vancouver, ISO, and other citation styles
33

Hemdani, Chabane, Rachida Aoudjit, Mustapha Lalam and Khaled Slimani. "Software and Hardware for managing Scratch Pad Memory". International Journal of Reconfigurable and Embedded Systems (IJRES) 6, No. 2 (28.05.2018): 69. http://dx.doi.org/10.11591/ijres.v6.i2.pp69-81.

Full text of the source
Annotation:
<p>This paper proposes a low-cost architecture to improve the management of SPM (Scratch Pad Memory) in dynamic and multitasking modes. In this context, our SPM management strategy, based on a Programmable Automaton implemented in a Xilinx Virtex-5 FPGA, is entirely different from prior research works. SPM is generally managed by software (by strong programming logic or by compilation), but our Programmable Automaton facilitates access to the SPM in order to move code or data and to free space in the SPM. After this step, software takes over content management of the SPM (which parts of code or data should be placed in the SPM, and where the heap and stack are located). The performance of programs is thus actually improved thanks to the minimized access latency of the DRAM (Dynamic Random Access Memory, or main memory).</p>
APA, Harvard, Vancouver, ISO, and other citation styles
34

Wu, Ming Ming, Jian Ming Zhan and Jian Bo Zhang. "Study on Compliant Control by Cutter Radius Compensating for Rotating Curved Surfaces Polishing on NC Lathes". Key Engineering Materials 416 (September 2009): 289–94. http://dx.doi.org/10.4028/www.scientific.net/kem.416.289.

Full text of the source
Annotation:
Many new automatic polishing methods and pieces of equipment have been developed to address the low efficiency and quality of traditional handwork polishing of curved surfaces. However, these methods and equipment are complex and difficult to apply in industrial production. This paper proposes a new technology for compliant polishing of rotated surfaces based on NC lathe cutter radius compensation. An efficient and compact compliant polishing system for rotated surfaces is developed, consisting of an NC lathe, a flexible polishing tool, the workpiece, a fixture, and the automatic programming software MASTERCAM. The polishing tool path and NC code can be created in MASTERCAM, and the cutter radius compensation value can also be set in the NC code, so that rotated surfaces can be polished on a conventional NC lathe. A polishing experiment on an aluminum sphere workpiece was conducted, and satisfying surface quality was achieved.
APA, Harvard, Vancouver, ISO, and other citation styles
35

Pančík, Juraj and Pavel Maxera. "Control of Hydraulic Pulse System Based on the PLC and State Machine Programming". Designs 2, No. 4 (20.11.2018): 48. http://dx.doi.org/10.3390/designs2040048.

Full text of the source
Annotation:
In this paper, we deal with a simple embedded electronic system for an industrial pneumatic-hydraulic system, based on a low-cost programmable logic controller (PLC) and industrial electronic parts with 24 V logic. The developed system is a hydraulic pulse system that generates a series of high-pressure hydraulic pulses with up to a maximum 200 bar output pressure level and up to a maximum 2 Hz output pulse frequency. We describe the requirements, the concept of the embedded control system in a diagram, the security features, and its industrial network connectivity (CAN bus, MODBUS). In the description of the software solution, we describe the implementation of the program-threads approach in this low-cost PLC. PLC programming with threads generates two layers of services, a physical layer and an application layer, and the threads create the main control state machine. In conclusion, we describe the calibration method of the system and the calibration curves. For further study, we offer readers the full programming code, written in sequential function charts as the PLC language. The cost of the described industrial networked control system, with industry-standard optoelectronically insulated interfaces and a certified industrial safety relay, does not exceed €1000.
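A control state machine of the kind described above can be sketched as a transition table. The states and events below are hypothetical illustrations of one pulse cycle, not the authors' sequential function charts.

```python
# Minimal sketch of a pulse-generator control state machine of the kind the
# paper builds in the PLC. States and transitions here are hypothetical.
IDLE, PRESSURIZE, HOLD, RELEASE = "IDLE", "PRESSURIZE", "HOLD", "RELEASE"

TRANSITIONS = {
    (IDLE, "start"): PRESSURIZE,
    (PRESSURIZE, "at_pressure"): HOLD,     # target pressure reached
    (HOLD, "timer_done"): RELEASE,         # hold time elapsed
    (RELEASE, "vented"): IDLE,             # one pulse completed
    (PRESSURIZE, "estop"): IDLE,           # safety: emergency stop
    (HOLD, "estop"): IDLE,
}

def step(state, event):
    """Advance the machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = IDLE
trace = []
for ev in ["start", "at_pressure", "timer_done", "vented"]:
    state = step(state, ev)
    trace.append(state)
```

The table form maps naturally onto sequential function charts: each state is a step and each event a transition condition, with the emergency-stop edges giving every active step a safe exit.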
APA, Harvard, Vancouver, ISO, and other citation styles
36

Weissman, Jon B., Andrew S. Grimshaw and R. D. Ferraro. "Parallel Object-Oriented Computation Applied to a Finite Element Problem". Scientific Programming 2, No. 4 (1993): 133–44. http://dx.doi.org/10.1155/1993/859092.

Full text of the source
Annotation:
The conventional wisdom in the scientific computing community is that the best way to solve large-scale, numerically intensive scientific problems on today's parallel MIMD computers is to use Fortran or C programmed in a data-parallel style using low-level message-passing primitives. This approach inevitably leads to nonportable codes and extensive development time, and restricts parallel programming to the domain of the expert programmer. We believe that these problems are not inherent to parallel computing but are the result of the programming tools used. We show that comparable performance can be achieved with little effort if better tools that present higher-level abstractions are used. The vehicle for our demonstration is a 2D electromagnetic finite element scattering code we have implemented in Mentat, an object-oriented parallel processing system. We briefly describe the application, Mentat, and the implementation, and present performance results for both the Mentat and a hand-coded parallel Fortran version.
APA, Harvard, Vancouver, ISO, and other citation styles
37

Jacobs, C. T. and M. D. Piggott. "Firedrake-Fluids v0.1: numerical modelling of shallow water flows using an automated solution framework". Geoscientific Model Development 8, No. 3 (09.03.2015): 533–47. http://dx.doi.org/10.5194/gmd-8-533-2015.

Full text of the source
Annotation:
Abstract. This model description paper introduces a new finite element model for the simulation of non-linear shallow water flows, called Firedrake-Fluids. Unlike traditional models that are written by hand in static, low-level programming languages such as Fortran or C, Firedrake-Fluids uses the Firedrake framework to automatically generate the model's code from a high-level abstract language called Unified Form Language (UFL). By coupling to the PyOP2 parallel unstructured mesh framework, Firedrake can then target the code towards a desired hardware architecture to enable the efficient parallel execution of the model over an arbitrary computational mesh. The description of the model includes the governing equations, the methods employed to discretise and solve the governing equations, and an outline of the automated solution process. The verification and validation of the model, performed using a set of well-defined test cases, is also presented along with a road map for future developments and the solution of more complex fluid dynamical systems.
APA, Harvard, Vancouver, ISO, and other citation styles
38

Jacobs, C. T. and M. D. Piggott. "Firedrake-Fluids v0.1: numerical modelling of shallow water flows using a performance-portable automated solution framework". Geoscientific Model Development Discussions 7, No. 4 (27.08.2014): 5699–738. http://dx.doi.org/10.5194/gmdd-7-5699-2014.

Full text of the source
Annotation:
Abstract. This model description paper introduces a new finite element model for the simulation of non-linear shallow water flows, called Firedrake-Fluids. Unlike traditional models that are written by hand in static, low-level programming languages such as Fortran or C, Firedrake-Fluids uses the Firedrake framework to automatically generate the model's code from a high-level abstract language called UFL. By coupling to the PyOP2 parallel unstructured mesh framework, Firedrake can then target the code in a performance-portable manner towards a desired hardware architecture to enable the efficient parallel execution of the model over an arbitrary computational mesh. The description of the model includes the governing equations, the methods employed to discretise and solve the governing equations, and an outline of the automated solution process. The verification and validation of the model, performed using a set of well-defined test cases, is also presented along with a roadmap for future developments and the solution of more complex fluid dynamical systems.
APA, Harvard, Vancouver, ISO, and other citation styles
39

Kamar, Sara, Abdelmoniem Fouda, Abdelhalim Zekry and Abdelmoniem Elmahdy. "FPGA implementation of RS codec with interleaver in DVB-T using VHDL". International Journal of Engineering & Technology 6, No. 4 (28.11.2017): 171. http://dx.doi.org/10.14419/ijet.v6i4.8205.

Full text of the source
Annotation:
Digital television (DTV) provides a huge amount of information to many users at low cost, and it can now be packaged and fully integrated into completely digital transmission networks. The Reed-Solomon (RS) code is one type of error-correcting code that can be used to enhance the performance of DTV, and interleaving/deinterleaving improves resilience to channel errors by spreading out random errors. The very high-speed hardware description language (VHDL), used in electronic design automation, can also serve as a general-purpose parallel programming language. This paper presents VHDL programs for the Reed-Solomon codec (204, 188) and the convolutional interleaver/deinterleaver used in the Digital Video Broadcasting-Terrestrial (DVB-T) system, according to the ETSI EN 300 744 V1.5.1 standard. The VHDL programs are implemented in Xilinx 12.3 ISE, simulated and tested via the ISE simulator, and then synthesized for an FPGA device; the results are compared with the IP core for Xilinx 12.3 ISE and found to match.
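The behaviour of a convolutional interleaver/deinterleaver pair can be sketched in software. DVB-T's outer interleaver uses B = 12 branches with delay unit M = 17; the sketch below uses tiny parameters (B = 3, M = 1) so the round trip is easy to follow, and is a behavioural model, not the paper's VHDL.

```python
# Behavioural sketch of a convolutional interleaver/deinterleaver pair.
from collections import deque

def conv_stage(seq, delays, fill=0):
    """Pass seq through cyclically selected FIFO branches of given lengths."""
    regs = [deque([fill] * d) for d in delays]
    out = []
    for i, sym in enumerate(seq):
        r = regs[i % len(delays)]
        if not r:                      # zero-length branch: straight through
            out.append(sym)
        else:
            r.append(sym)
            out.append(r.popleft())
    return out

def interleave(seq, B=3, M=1):
    return conv_stage(seq, [i * M for i in range(B)])

def deinterleave(seq, B=3, M=1):
    return conv_stage(seq, [(B - 1 - i) * M for i in range(B)])

B, M = 3, 1
delay = B * (B - 1) * M                # end-to-end delay of the pair
data = list(range(12))
restored = deinterleave(interleave(data + [0] * delay))
```

After the fixed end-to-end delay of B·(B−1)·M symbols, the deinterleaver output reproduces the input exactly; a burst of channel errors between the two stages would emerge spread across B-spaced positions, which is what lets the RS(204, 188) decoder correct it.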
APA, Harvard, Vancouver, ISO, and other citation styles
40

BANSAL, ARVIND K. "AN ASSOCIATIVE DATA PARALLEL COMPILATION MODEL FOR TIGHT INTEGRATION OF HIGH PERFORMANCE KNOWLEDGE RETRIEVAL AND COMPUTING". International Journal on Artificial Intelligence Tools 03, No. 01 (March 1994): 97–125. http://dx.doi.org/10.1142/s0218213094000078.

Full text of the source
Annotation:
Associative computation is characterized by the intertwining of search by content and data-parallel computation. An algebra for associative computation is described. A compilation-based model and a novel abstract machine for associative logic programming are presented. The model uses loose coupling of the left-hand side of the program, treated as data, and the right-hand side of the program, treated as low-level code. This representation achieves efficiency through associative computation and data alignment during goal reduction and during execution of low-level abstract instructions; data alignment reduces the overhead of data movement. Novel schemes are described for the associative manipulation of aliased uninstantiated variables and for data-parallel goal reduction in the presence of multiple occurrences of the same variables in a goal. The architecture, behavior, and performance evaluation of the model are presented.
APA, Harvard, Vancouver, ISO, and other citation styles
41

Nakao, Masahiro, Tetsuya Odajima, Hitoshi Murai, Akihiro Tabuchi, Norihisa Fujita, Toshihiro Hanawa, Taisuke Boku and Mitsuhisa Sato. "Evaluation of XcalableACC with tightly coupled accelerators/InfiniBand hybrid communication on accelerated cluster". International Journal of High Performance Computing Applications 33, No. 5 (03.01.2019): 869–84. http://dx.doi.org/10.1177/1094342018821163.

Full text of the source
Annotation:
Accelerated clusters, which are cluster systems equipped with accelerators, are among the most common systems in parallel computing. In order to exploit the performance of such systems, it is important to reduce communication latency between accelerator memories. In addition, there is also a need for a programming language that facilitates the maintenance of high performance on such systems. The goal of the present article is to evaluate XcalableACC (XACC), a parallel programming language, with tightly coupled accelerators/InfiniBand (TCA/IB) hybrid communication on an accelerated cluster. TCA/IB hybrid communication combines the low latency of TCA with the high bandwidth of IB. The XACC language, a directive-based language for accelerated clusters, enables programmers to use TCA/IB hybrid communication with ease. In order to evaluate the performance of XACC with TCA/IB hybrid communication, we implemented the lattice quantum chromodynamics (LQCD) mini-application and evaluated it on our accelerated cluster using up to 64 compute nodes. We also implemented the LQCD mini-application using a combination of CUDA and MPI (CUDA + MPI) and of OpenACC and MPI (OpenACC + MPI) for comparison with XACC. Performance evaluation revealed that the performance of XACC with TCA/IB hybrid communication is 9% better than that of CUDA + MPI and 18% better than that of OpenACC + MPI, and that the performance of XACC increases by a further 7% through a new extension to XACC. Productivity evaluation revealed that XACC requires much less change from the serial LQCD code to implement the parallel LQCD code than CUDA + MPI and OpenACC + MPI. Moreover, since XACC can perform parallelization while maintaining the sequential code image, XACC is highly readable and shows excellent portability due to its directive-based approach.
APA, Harvard, Vancouver, ISO, and other citation styles
42

Houshmand, Farzin, Mohsen Lesani and Keval Vora. "Grafs: declarative graph analytics". Proceedings of the ACM on Programming Languages 5, ICFP (22.08.2021): 1–32. http://dx.doi.org/10.1145/3473588.

Full text of the source
Annotation:
Graph analytics elicits insights from large graphs to inform critical decisions for business, safety and security. Several large-scale graph processing frameworks feature efficient runtime systems; however, they often provide programming models that are low-level and subtly different from each other. Therefore, end users can find implementation and especially optimization of graph analytics error-prone and time-consuming. This paper regards the abstract interface of the graph processing frameworks as the instruction set for graph analytics, and presents Grafs, a high-level declarative specification language for graph analytics and a synthesizer that automatically generates efficient code for five high-performance graph processing frameworks. It features novel semantics-preserving fusion transformations that optimize the specifications and reduce them to three primitives: reduction over paths, mapping over vertices and reduction over vertices. Reductions over paths are commonly calculated based on push or pull models that iteratively apply kernel functions at the vertices. This paper presents conditions, parametric in terms of the kernel functions, for the correctness and termination of the iterative models, and uses these conditions as specifications to automatically synthesize the kernel functions. Experimental results show that the generated code matches or outperforms handwritten code, and that fusion accelerates execution.
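The push-model iteration that path reductions compile down to can be illustrated with single-source shortest paths. The graph and kernel below are a hypothetical example written in plain Python, not Grafs syntax.

```python
# Illustration of the push-model iteration that path reductions reduce to:
# each active vertex applies a kernel (here min-plus relaxation for
# single-source shortest paths) and pushes updates along its out-edges.
import math

def sssp_push(edges, n, source):
    dist = [math.inf] * n
    dist[source] = 0.0
    active = {source}
    while active:                              # iterate to a fixed point
        frontier, active = active, set()
        for u in frontier:
            for v, w in edges.get(u, []):
                if dist[u] + w < dist[v]:      # kernel: min-plus relaxation
                    dist[v] = dist[u] + w
                    active.add(v)              # push: reactivate the target
    return dist

edges = {0: [(1, 4.0), (2, 1.0)], 2: [(1, 2.0), (3, 5.0)], 1: [(3, 1.0)]}
dist = sssp_push(edges, n=4, source=0)
```

The min-plus kernel is monotone and idempotent, which is the flavour of condition the paper formalizes to guarantee that this iteration is correct and terminates.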
APA, Harvard, Vancouver, ISO, and other citation styles
43

Schmitt, Christian, Moritz Schmid, Sebastian Kuckuk, Harald Köstler, Jürgen Teich and Frank Hannig. "Reconfigurable Hardware Generation of Multigrid Solvers with Conjugate Gradient Coarse-Grid Solution". Parallel Processing Letters 28, No. 04 (December 2018): 1850016. http://dx.doi.org/10.1142/s0129626418500160.

Full text of the source
Annotation:
Field-programmable gate arrays (FPGAs) are a soaringly popular accelerator technology, and not only in the field of high-performance computing (HPC). However, they use a completely different programming paradigm and tool set compared to central processing units (CPUs) or even graphics processing units (GPUs), adding extra development steps and requiring special knowledge, which hinders widespread use in scientific computing. To bridge this programmability gap, domain-specific languages (DSLs) are a popular choice for generating low-level implementations from an abstract algorithm description. In this work, we demonstrate our approach for the generation of numerical solver implementations based on the multigrid method for FPGAs from the same code base that is also used to generate code for CPUs using a hybrid parallelization of MPI and OpenMP. Our approach yields a hardware design that can compute up to 11 V-cycles per second with an input grid size of 4096 × 4096 and solution on the coarsest grid using the conjugate gradient (CG) method on a mid-range FPGA, beating vectorized, multi-threaded execution on an Intel Xeon processor.
APA, Harvard, Vancouver, ISO, and other citation styles
44

Rahate, Shubham R. „Design of Optimized Trading Strategies with Web Assembly“. International Journal for Research in Applied Science and Engineering Technology 9, Nr. VI (30.06.2021): 4997–5001. http://dx.doi.org/10.22214/ijraset.2021.36049.

Der volle Inhalt der Quelle
Annotation:
Web Assembly is growing and also the most widely studied area which interests many developers when it comes to performance and speed to make web development fast as ever. When it comes to speed and performance algorithms can perform faster computations. Algorithmic trading executes trade at a faster speed. It can buy and sell stocks within a fraction of milliseconds. However, selecting the right tools and technologies is extremely important in algorithmic trading. There are trading strategies which we can use to optimize our trade and increase the return gained on buying and selling stocks. But, choosing an efficient programming language is substantially important. A programming language with a low latency can leverage the trade. Most commonly used languages for algorithmic trading are C/C++, Java, C#, Python. Speed and performance are an essential factor in algorithmic trading. The main purpose of introducing web Assembly in trading as discussed above is speed and performance. Web Assembly is a low-level binary instruction which can execute any program on the web and it can deliver native like performance on the internet. Using Web Assembly, we can compile any code written in languages like C/C++, C#, Java, and python to wasm (Web Assembly executable file) and run on the browser. Web Assembly was developed by W3C, Mozilla Corporation, and Google.
APA, Harvard, Vancouver, ISO and other citation styles
45

Waysi, Diyar W. N., and Adnan M. A. Brifcani. "Enhanced Image Coding Scheme Based on Modified Embedded Zerotree Wavelet Transform (DMEZW)". Science Journal of University of Zakho 5, No. 4 (30 December 2017): 324. http://dx.doi.org/10.25271/2017.5.4.414.

Full text of the source
Annotation:
In this paper, a scheme is proposed that applies the Integer Lifting Wavelet Transform (ILWT) to a grayscale image, generating four subbands. The low-frequency subband is compressed losslessly by the Developed Modified Embedded Zerotree Wavelet Transform (DMEZW) directly. The high- and middle-frequency subbands are compressed lossily by first applying single-stage Vector Quantization (VQ) and then DMEZW, finally generating two vectors ready for entropy coding, which is performed by Arithmetic Coding (AC) to produce a bit stream to be stored or transmitted. The main improvements of DMEZW are achieved by modifying the scanning strategy of the wavelet coefficients and the quantization threshold. The high- and low-frequency subbands are manipulated separately. The experimental results show that the developed method improves the quality of the recovered image and the encoding efficiency. The proposed scheme achieves a high Compression Ratio (CR) and a remarkable Peak Signal-to-Noise Ratio (PSNR).
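The lossless building block of such a pipeline, an integer lifting wavelet step, can be sketched briefly. The LeGall 5/3 integer lifting scheme below is a standard reversible transform and merely stands in for the paper's lifting filters (which the abstract does not specify); boundary handling here is periodic via `np.roll`.

```python
import numpy as np

def lift_53_forward(x):
    # one level of the reversible integer LeGall 5/3 lifting transform:
    # split into even (s) and odd (d) samples, then predict and update
    s, d = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d -= (s + np.roll(s, -1)) // 2          # predict: detail coefficients
    s += (d + np.roll(d, 1) + 2) // 4       # update: approximation band
    return s, d

def lift_53_inverse(s, d):
    # mirror the lifting steps in reverse order; integer ops make this exact
    s = s - (d + np.roll(d, 1) + 2) // 4    # undo update
    d = d + (s + np.roll(s, -1)) // 2       # undo predict
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x

signal = np.array([5, 7, 3, 4, 10, 12, 0, 1], dtype=np.int64)
s, d = lift_53_forward(signal)
restored = lift_53_inverse(s, d)
```

Because the predict and update steps are inverted exactly in integer arithmetic, the transform is perfectly reversible, which is what makes the lossless branch of the compression scheme possible.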
APA, Harvard, Vancouver, ISO and other citation styles
46

Sharma, Pratiksha, and Er Arshpreet Kaur. "Design of testing framework for code smell detection (OOPS) using BFO algorithm". International Journal of Engineering & Technology 7, No. 2.27 (6 August 2018): 161. http://dx.doi.org/10.14419/ijet.v7i2.27.14635.

Full text of the source
Annotation:
Bad smells are indications in program code that may point to a deeper problem and that affect software maintenance and evolution. Code smell detection is a major challenge for software developers, and the informal classification of smells has led to the design of various detection methods and tools. This work appraises four code smell detection tools: inFusion, JDeodorant, PMD and JSpIRIT. In this research, a method is proposed for detecting bad code smells in software. Object-oriented software metrics (OOSMs) are used to identify the source code in which a bad smell appears, and plug-ins were implemented that report the position of the smell in the program's source code so that refactoring can then take place. Code smells are classified into types such as long method, PIH, LPL, LC, SS and GOD class. Detecting code smells and applying the correct refactoring steps when required is essential to enhance the quality of the code. Various tools have been proposed for code smell detection, each characterized by particular properties; the main objective of this work is to evaluate the proposed method against these tools, identify the major differences between them, and compare the results obtained. The main drawback of current work in this area is its focus on a single programming language, which restricts each tool to one kind of program; such tools fail to detect smelly code if any change in environment is encountered. The base paper compares the most popular code smell detection tools on the basis of factors such as accuracy and false positive rate, which gives a clear picture of the functionality these tools possess. In this paper, a technique is designed to identify code smells using various object-oriented programming (OOP) metrics together with their maintainability index. Further, code refactoring and optimization techniques are applied to reach a low maintainability index, and the proposed scheme is evaluated with satisfactory results. The results of the BFOA test showed that the lazy class caused framework defects in DLS, DR, and SE, whereas the LPL caused no framework defects whatsoever. The association rules test found that the Lazy Class Code Smell (LCCS) caused structural defects in DE and DLS, which corresponded to the results of the BFOA test.
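The abstract does not give its maintainability index formula; a common three-factor variant (due to Oman and Hagemeister) combines Halstead volume, cyclomatic complexity and lines of code. Under this formulation a higher index means more maintainable code, so refactoring a long method should raise it. The metric values below are hypothetical.

```python
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    # classic three-factor maintainability index (one of several variants
    # in the literature); higher values indicate more maintainable code
    return (171.0
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(loc))

# hypothetical metrics for one long method before and after extract-method refactoring
mi_before = maintainability_index(halstead_volume=5000, cyclomatic_complexity=25, loc=300)
mi_after = maintainability_index(halstead_volume=800, cyclomatic_complexity=6, loc=60)
```

On these example inputs the refactored version scores markedly higher, which is the signal such a detection-and-refactoring pipeline would track.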
APA, Harvard, Vancouver, ISO and other citation styles
47

Lapillonne, Xavier, and Oliver Fuhrer. "Using Compiler Directives to Port Large Scientific Applications to GPUs: An Example from Atmospheric Science". Parallel Processing Letters 24, No. 01 (March 2014): 1450003. http://dx.doi.org/10.1142/s0129626414500030.

Full text of the source
Annotation:
For many scientific applications, Graphics Processing Units (GPUs) can be an interesting alternative to conventional CPUs as they can deliver higher memory bandwidth and computing power. While it is conceivable to re-write the most execution-time-intensive parts using a low-level API for accelerator programming, it may not be feasible to do so for the entire application. But having only selected parts of the application running on the GPU requires repeatedly transferring data between the GPU and the host CPU, which may lead to a serious performance penalty. In this paper we assess the potential of compiler directives, based on the OpenACC standard, for porting large parts of code and thus achieving a full GPU implementation. As an illustrative and relevant example, we consider the climate and numerical weather prediction code COSMO (Consortium for Small Scale Modeling) and focus on the physical parametrizations, a part of the code which describes all physical processes not accounted for by the fundamental equations of atmospheric motion. We show, by porting three of the dominant parametrization schemes (the radiation, microphysics and turbulence parametrizations), that compiler directives are an efficient tool both in terms of final execution time and implementation effort. Compiler directives make it possible to port large sections of the existing code with minor modifications while still allowing further optimization of the most performance-critical parts. With the example of the radiation parametrization, which contains the solution of a block tri-diagonal linear system, the required code modifications and key optimizations are discussed in detail. Performance tests for the three physical parametrizations show a speedup in execution time of between 3× and 7× on a GPU compared with a multi-core CPU of an equivalent generation.
APA, Harvard, Vancouver, ISO and other citation styles
48

Xing, Cui Fang, Feng Qin Wang and Yang Sun. "A New Web Application Framework Based on SSH". Applied Mechanics and Materials 713-715 (January 2015): 2203–7. http://dx.doi.org/10.4028/www.scientific.net/amm.713-715.2203.

Full text of the source
Annotation:
Existing web frameworks focus on the encapsulation of low-level technology, cannot adequately support large granularity, and lack corresponding mechanisms for reusability and maneuverability. In the current study, a new web application framework, NSSH, was developed based on the SSH frameworks. The new framework, characterized by large granularity and a high level of reusability, separates and encapsulates the presentation layer, business layer and data persistence layer, simplifying the development of web applications and improving their usability, reliability and extensibility. Based on the design of NSSH, an application development platform is put forward that frees web application developers from heavy low-level coding and lets them concentrate on the description of business logic and the definition of the user interface. Practice has proved that NSSH is applicable to developing large-scale web applications and enhances the efficiency of building such complex systems.
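The layer separation that NSSH encapsulates can be sketched in a framework-agnostic way: each layer depends only on the one beneath it, so business logic stays free of storage and UI details. All class and method names below are invented for illustration; this is not NSSH's actual API.

```python
class PersistenceLayer:
    """Data-access layer: the only place that touches storage."""
    def __init__(self):
        self._rows = {}                 # in-memory stand-in for a database
    def save(self, key, value):
        self._rows[key] = value
    def load(self, key):
        return self._rows.get(key)

class BusinessLayer:
    """Business rules, expressed without any storage or UI details."""
    def __init__(self, store):
        self.store = store
    def register_user(self, name):
        if self.store.load(name) is not None:
            raise ValueError("user already exists")
        self.store.save(name, {"name": name})
        return name

class PresentationLayer:
    """Renders results for the user; knows nothing about storage."""
    def __init__(self, service):
        self.service = service
    def handle_register(self, name):
        try:
            return f"registered {self.service.register_user(name)}"
        except ValueError as exc:
            return f"error: {exc}"

# wire the three layers together, outermost first
app = PresentationLayer(BusinessLayer(PersistenceLayer()))
```

Because each layer is injected into the one above it, any layer can be replaced (for example, swapping the in-memory store for a real database) without touching the others, which is the reusability property the abstract describes.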
APA, Harvard, Vancouver, ISO and other citation styles
49

Blasi, Danilo, Luca Mainetti, Luigi Patrono and Maria Laura Stefanizzi. "Implementation and Validation of a New Protocol Stack Architecture for Embedded Systems". Journal of Communications Software and Systems 9, No. 3 (23 September 2013): 157. http://dx.doi.org/10.24138/jcomss.v9i3.145.

Full text of the source
Annotation:
The worldwide spread of the Internet, in combination with the development of new low-power and low-cost embedded devices, has enabled the so-called Internet of Things vision. Wireless Sensor Networks represent an invaluable resource for realizing this scenario, within which new and innovative applications can be developed. However, the low availability of resources and the reduced processing capacity of the target embedded platforms make the development of next-generation applications very challenging. This paper proposes an innovative system architecture, called STarch, able to simplify the development of new applications and protocols for resource-constrained objects. It is designed to follow software engineering principles and to support a wide range of applications, making programming easier and the code portable across multiple hardware platforms. STarch simplifies the network configuration process through an automatic mechanism based on the XML language, and it runs properly on different operating systems, including FreeRTOS and Contiki. The feasibility of the proposed architecture has been proved using a test-bed approach, while an extensive performance analysis has been carried out to demonstrate its effectiveness in terms of memory requirements and processing delays.
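Deriving a node configuration automatically from an XML description, as STarch does, can be sketched with the standard library. The element and attribute names below are invented for illustration; the paper's actual schema is not reproduced here.

```python
import xml.etree.ElementTree as ET

# hypothetical network description; STarch's real schema differs
CONFIG = """
<network>
  <node id="1" role="sink"   channel="26"/>
  <node id="2" role="sensor" channel="26" period_ms="500"/>
</network>
"""

def parse_network(xml_text):
    """Turn the XML description into per-node configuration dicts."""
    nodes = {}
    for el in ET.fromstring(xml_text).findall("node"):
        cfg = dict(el.attrib)           # copy attributes into a plain dict
        cfg["channel"] = int(cfg["channel"])
        nodes[cfg.pop("id")] = cfg
    return nodes

nodes = parse_network(CONFIG)
```

On a constrained device the same idea is usually applied at build time, generating static configuration tables from the XML so that no parser has to run on the node itself.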
APA, Harvard, Vancouver, ISO and other citation styles
50

Dahlem, Marc, Anoop Bhagyanath and Klaus Schneider. "Optimal Scheduling for Exposed Datapath Architectures with Buffered Processing Units by ASP". Theory and Practice of Logic Programming 18, No. 3-4 (July 2018): 438–51. http://dx.doi.org/10.1017/s1471068418000170.

Full text of the source
Annotation:
Conventional processor architectures are restricted in exploiting instruction-level parallelism (ILP) due to the relatively low number of programmer-visible registers. Therefore, more recent processor architectures expose their datapaths so that the compiler (1) can schedule parallel instructions to different processing units and (2) can make effective use of the local storage of the processing units. Among these architectures, the Synchronous Control Asynchronous Dataflow (SCAD) architecture is a new exposed-datapath architecture whose processing units are equipped with first-in first-out (FIFO) buffers at their input and output ports. In contrast to register-based machines, optimal code generation for SCAD is still a matter of research. In particular, SAT and SMT solvers were used to generate optimal resource-constrained and optimal time-constrained schedules for SCAD, respectively. As Answer Set Programming (ASP) offers better flexibility in handling such scheduling problems, we focus in this paper on using an answer set solver for both resource- and time-constrained optimal SCAD code generation. As a major benefit of using ASP, we are able to generate all optimal schedules for a given program, which allows one to study their properties. Furthermore, the experimental results of this paper demonstrate that the answer set solver can compete with SAT solvers and outperforms SMT solvers. This paper is under consideration for acceptance in TPLP.
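The "all optimal schedules" idea can be sketched by brute force: enumerate every minimum-length schedule of unit-time instructions on a fixed number of processing units, subject to dependency constraints. This sketch only illustrates the search space an ASP solver explores declaratively; it does not model SCAD's FIFO buffers or the paper's actual ASP encoding.

```python
from itertools import combinations

def all_optimal_schedules(deps, n_tasks, n_units):
    """Enumerate all minimum-length schedules of unit-time tasks on
    n_units processing units, honouring dependency pairs (a before b)."""
    best, results = None, []

    def extend(done, schedule):
        nonlocal best, results
        if len(done) == n_tasks:
            if best is None or len(schedule) < best:
                best, results = len(schedule), [list(schedule)]
            elif len(schedule) == best:
                results.append(list(schedule))
            return
        if best is not None and len(schedule) >= best:
            return                      # prune: cannot beat the best so far
        ready = [t for t in range(n_tasks)
                 if t not in done and all(a in done for a, b in deps if b == t)]
        for k in range(1, min(n_units, len(ready)) + 1):
            for step in combinations(ready, k):
                extend(done | set(step), schedule + [step])

    extend(set(), [])
    return best, results

# diamond dependency graph: 0 -> {1, 2} -> 3, on two processing units
best, scheds = all_optimal_schedules([(0, 1), (0, 2), (1, 3), (2, 3)], 4, 2)
```

For the diamond above, the only minimum-makespan schedule runs task 0 alone, then tasks 1 and 2 in parallel, then task 3. An ASP encoding states the same constraints as rules and lets the solver enumerate the optimal answer sets instead of hand-coding the search.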
APA, Harvard, Vancouver, ISO and other citation styles