Journal articles on the topic 'Multi-Version Execution'

To see the other types of publications on this topic, follow the link: Multi-Version Execution.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Multi-Version Execution.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Qiang, Weizhong, Feng Chen, Laurence T. Yang, and Hai Jin. "MUC: Updating cloud applications dynamically via multi-version execution." Future Generation Computer Systems 74 (September 2017): 254–64. http://dx.doi.org/10.1016/j.future.2015.12.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jaglan, Vivek, Swati, and Shalini Bhaskar Bajaj. "A Novel Multi-Granularity Locking Scheme Based on Concurrent Multi-Version Hierarchical Structure." Information Technology in Industry 9, no. 1 (March 15, 2021): 932–47. http://dx.doi.org/10.17762/itii.v9i1.221.

Full text
Abstract:
We present an efficient locking scheme for hierarchical data structures. The existing multi-granularity locking mechanism works at two extremes: fine-grained locking, which maximizes concurrency, and coarse-grained locking, which minimizes locking cost. Between the two extremes lie several Pareto-optimal options that trade off attainable concurrency against locking cost. In this work, we present a locking technique, Collaborative Granular Version Locking (CGVL), which selects an optimal locking combination to serve locking requests in a hierarchical structure. In CGVL, a series of versions is maintained at each granular level, which allows the simultaneous execution of read and write operations on a data item. Our study reveals that, to achieve optimal performance, the lock manager explores various locking options by converting certain non-supporting locking modes into supporting ones, thereby improving the existing compatibility matrix of the multiple-granularity locking protocol. Our claim is quantitatively validated in a Sun Java JDK environment, which shows that CGVL performs better than existing state-of-the-art MGL methods. In particular, CGVL attains a 20% reduction in execution time for locking operations, evaluated with respect to the following parameters: (i) the number of threads, (ii) the number of locked objects, and (iii) the duration of the critical section (CPU cycles), which significantly supports enhanced concurrency in terms of the number of concurrent read accesses.
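The compatibility matrix mentioned in this abstract is the classical multiple-granularity locking (MGL) matrix. The Python sketch below shows how such a matrix is typically consulted before granting a lock; it illustrates plain MGL, not the CGVL versioning scheme itself, and all names are illustrative.

```python
# Minimal sketch (not the authors' code): the classical multiple-granularity
# locking (MGL) compatibility matrix that CGVL is said to extend.
COMPATIBLE = {
    ("IS", "IS"): True,  ("IS", "IX"): True,  ("IS", "S"): True,  ("IS", "SIX"): True,  ("IS", "X"): False,
    ("IX", "IS"): True,  ("IX", "IX"): True,  ("IX", "S"): False, ("IX", "SIX"): False, ("IX", "X"): False,
    ("S",  "IS"): True,  ("S",  "IX"): False, ("S",  "S"): True,  ("S",  "SIX"): False, ("S",  "X"): False,
    ("SIX","IS"): True,  ("SIX","IX"): False, ("SIX","S"): False, ("SIX","SIX"): False, ("SIX","X"): False,
    ("X",  "IS"): False, ("X",  "IX"): False, ("X",  "S"): False, ("X",  "SIX"): False, ("X",  "X"): False,
}

def can_grant(requested: str, held_modes: list[str]) -> bool:
    """A lock request is granted only if it is compatible with every held lock."""
    return all(COMPATIBLE[(held, requested)] for held in held_modes)

# Example: a shared read (S) on a node already locked with IX must wait under
# plain MGL; this is the kind of conflict a versioned scheme tries to relax.
print(can_grant("S", ["IX"]))   # False
print(can_grant("IS", ["IX"]))  # True
```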
APA, Harvard, Vancouver, ISO, and other styles
3

Saramud, Mikhail V., Igor V. Kovalev, Vasiliy V. Losev, Mariam O. Petrosyan, and Dmitriy I. Kovalev. "Multi-version approach to improve the reliability of processing data of the earth remote sensing in the real-time." E3S Web of Conferences 75 (2019): 01005. http://dx.doi.org/10.1051/e3sconf/20197501005.

Full text
Abstract:
The article describes the use of a multi-version approach to improve the accuracy of image classification when solving the problem of image analysis for Earth remote sensing. The implementation of this approach makes it possible to reduce the classification error and, consequently, to increase the reliability of processing remote sensing data. A practical study was carried out in a multi-version real-time execution environment, which makes it possible to organize image processing on board an unmanned vehicle. The results confirm the effectiveness of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
4

Ramos, Alfredo S., Pablo A. Miranda-Gonzalez, Samuel Nucamendi-Guillén, and Elias Olivares-Benitez. "A Formulation for the Stochastic Multi-Mode Resource-Constrained Project Scheduling Problem Solved with a Multi-Start Iterated Local Search Metaheuristic." Mathematics 11, no. 2 (January 9, 2023): 337. http://dx.doi.org/10.3390/math11020337.

Full text
Abstract:
This research introduces a stochastic version of the multi-mode resource-constrained project scheduling problem (MRCPSP) and its mathematical model. In addition, an efficient multi-start iterated local search (MS-ILS) algorithm, capable of solving the deterministic MRCPSP, is adapted to deal with the proposed stochastic version of the problem. For its deterministic version, the MRCPSP is an NP-hard optimization problem that has been widely studied. The problem deals with a trade-off between the amount of resources that each project activity requires and its duration. In the case of the proposed stochastic formulation, the execution times of the activities are uncertain. Benchmark instances of projects with 10, 20, 30, and 50 activities from well-known public libraries were adapted to create test instances. The adapted algorithm proved to be capable and efficient for solving the proposed stochastic problem.
APA, Harvard, Vancouver, ISO, and other styles
5

Švancara, Jiří, Marek Vlk, Roni Stern, Dor Atzmon, and Roman Barták. "Online Multi-Agent Pathfinding." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7732–39. http://dx.doi.org/10.1609/aaai.v33i01.33017732.

Full text
Abstract:
Multi-agent pathfinding (MAPF) is the problem of moving a group of agents to a set of target destinations while avoiding collisions. In this work, we study the online version of MAPF where new agents appear over time. Several variants of online MAPF are defined and analyzed theoretically, showing that it is not possible to create an optimal online MAPF solver. Nevertheless, we propose effective online MAPF algorithms that balance solution quality, runtime, and the number of plan changes an agent makes during execution.
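As a rough illustration of the online setting described above, the following Python sketch (not the paper's algorithms) plans each newly arriving agent with a space-time breadth-first search that respects the reservations of agents planned earlier; the map, arrivals, and conflict handling are deliberately simplified assumptions.

```python
# Toy sketch of online MAPF: agents appear over time and are planned one by
# one with a space-time BFS that avoids cells reserved by earlier agents.
from collections import deque

GRID = ["....", ".##.", "...."]          # illustrative 4x3 map, '#' = wall
H, W = len(GRID), len(GRID[0])

def neighbors(cell):
    r, c = cell
    for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):  # wait or move
        nr, nc = r + dr, c + dc
        if 0 <= nr < H and 0 <= nc < W and GRID[nr][nc] == ".":
            yield nr, nc

def plan(start, goal, t0, reserved, horizon=30):
    """Space-time BFS from (start, t0) to goal, avoiding reserved (cell, time)."""
    queue, seen = deque([(start, t0, [start])]), {(start, t0)}
    while queue:
        cell, t, path = queue.popleft()
        if cell == goal:
            return path
        if t - t0 >= horizon:
            continue
        for nxt in neighbors(cell):
            if (nxt, t + 1) in reserved or (nxt, t + 1) in seen:
                continue
            seen.add((nxt, t + 1))
            queue.append((nxt, t + 1, path + [nxt]))
    return None

# Online loop: new (start, goal) requests may appear at each timestep.
arrivals = {0: [((0, 0), (2, 3))], 2: [((2, 0), (0, 3))]}
reserved, plans = set(), []
for t in range(6):
    for start, goal in arrivals.get(t, []):
        path = plan(start, goal, t, reserved)
        if path:
            plans.append(path)
            reserved.update((cell, t + i) for i, cell in enumerate(path))
print(plans)
```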
APA, Harvard, Vancouver, ISO, and other styles
6

Jena, Swagat Kumar, Satyabrata Das, and Satya Prakash Sahoo. "Design and Development of a Parallel Lexical Analyzer for C Language." International Journal of Knowledge-Based Organizations 8, no. 1 (January 2018): 68–82. http://dx.doi.org/10.4018/ijkbo.2018010105.

Full text
Abstract:
The future of computing is rapidly moving towards massively multi-core architectures because of their power and cost advantages. Multi-core processors are now used almost everywhere, and the number of cores per chip keeps increasing. To exploit the full potential offered by multi-core architectures, system software such as compilers should be designed for parallel execution. In the past, various significant efforts have been made to change the design of traditional compilers to take advantage of multi-core platforms. This paper focuses on introducing parallelism into the lexical analysis phase of the compilation process. The main objective of our proposal is to perform lexical analysis, i.e., finding the tokens in an input stream, in parallel. We use the parallel constructs available in OpenMP to achieve parallelism in the lexical analysis process for multi-core machines. The experimental results of our proposal show a significant performance improvement of the parallel lexical analysis phase over the sequential version in terms of execution time.
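The abstract describes chunk-wise parallel tokenization with OpenMP in C. As a conceptual illustration only, the Python sketch below applies the same idea: split the input on line boundaries, tokenize chunks concurrently, and concatenate the token lists in order; the token grammar is a made-up simplification.

```python
# Conceptual sketch only: the paper parallelizes a C lexical analyzer with
# OpenMP; here the same idea is shown with Python's thread pool.
import re
from concurrent.futures import ThreadPoolExecutor

TOKEN_RE = re.compile(r"\d+|[A-Za-z_]\w*|[{}();=+*/-]")

def tokenize(chunk: str):
    return TOKEN_RE.findall(chunk)

def parallel_tokenize(source: str, workers: int = 4):
    # Split on line boundaries so no token straddles two chunks.
    lines = source.splitlines()
    step = max(1, len(lines) // workers)
    chunks = ["\n".join(lines[i:i + step]) for i in range(0, len(lines), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(tokenize, chunks)   # map preserves chunk order
    return [tok for chunk_tokens in results for tok in chunk_tokens]

print(parallel_tokenize("int x = 42;\nint y = x + 1;"))
```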
APA, Harvard, Vancouver, ISO, and other styles
7

Serttaş, Sevil, and Veysel Harun Şahin. "PBench: A Parallel, Real-Time Benchmark Suite." Academic Perspective Procedia 1, no. 1 (November 9, 2018): 178–86. http://dx.doi.org/10.33793/acperpro.01.01.37.

Full text
Abstract:
Real-time systems are widely used, from the automotive industry to the aerospace industry. The scientists, researchers, and engineers who develop real-time platforms, worst-case execution time analysis methods and tools need to compare their solutions to alternatives. For this purpose, they use benchmark applications. Today many of our computing systems are multicore and/or multiprocessor systems. Therefore, to be able to compare the effectiveness of real-time platforms, worst-case execution time analysis methods and tools, the research community needs multi-threaded benchmark applications which scale on multicore and/or multiprocessor systems. In this paper, we present the first version of PBench, a parallel, real-time benchmark suite. PBench includes different types of multi-threaded applications which implement various algorithms, from searching to sorting and from matrix multiplication to probability distribution calculation. In addition, PBench provides single-threaded versions of all programs to allow side-by-side comparisons.
APA, Harvard, Vancouver, ISO, and other styles
8

Okumura, Keisuke, and Xavier Défago. "Solving Simultaneous Target Assignment and Path Planning Efficiently with Time-Independent Execution." Proceedings of the International Conference on Automated Planning and Scheduling 32 (June 13, 2022): 270–78. http://dx.doi.org/10.1609/icaps.v32i1.19810.

Full text
Abstract:
Real-time planning for a combined problem of target assignment and path planning for multiple agents, also known as the unlabeled version of Multi-Agent Path Finding (MAPF), is crucial for high-level coordination in multi-agent systems, e.g., pattern formation by robot swarms. This paper studies two aspects of unlabeled-MAPF: (1) offline scenario: solving large instances by centralized approaches with small computation time, and (2) online scenario: executing unlabeled-MAPF despite timing uncertainties of real robots. For this purpose, we propose TSWAP, a novel sub-optimal complete algorithm, which takes an arbitrary initial target assignment and then repeats one-timestep path planning with target swapping. TSWAP can adapt to both offline and online scenarios. We empirically demonstrate that Offline TSWAP is highly scalable, providing near-optimal solutions while reducing runtime by orders of magnitude compared to existing approaches. In addition, we present the benefits of Online TSWAP, such as delay tolerance, through real-robot demos.
APA, Harvard, Vancouver, ISO, and other styles
9

Vianna, Dalessandro Soares, José Elias Claudio Arroyo, Pedro Sampaio Vieira, and Thiago Ribeiro de Azeredo. "Parallel strategies for a multi-criteria GRASP algorithm." Production 17, no. 1 (April 2007): 84–93. http://dx.doi.org/10.1590/s0103-65132007000100006.

Full text
Abstract:
This paper proposes different strategies for parallelizing a multi-criteria GRASP (Greedy Randomized Adaptive Search Procedure) algorithm. The parallel GRASP algorithm is applied to the multi-criteria minimum spanning tree problem, which is NP-hard. In this problem, a vector of costs is defined for each edge of the graph and the goal is to find all the efficient or Pareto-optimal spanning trees (Pareto-optimal solutions). Each process finds a subset of efficient solutions. These subsets are joined using different strategies to obtain the final set of efficient solutions. The multi-criteria GRASP algorithm with the different parallel strategies is tested on complete graphs with n = 20, 30 and 50 nodes and r = 2 and 3 criteria. The computational results show that the proposed parallel algorithms reduce the execution time and improve upon the results obtained by the sequential version.
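The merge step described above, where per-process subsets of efficient solutions are joined into a final set, can be illustrated with a short sketch. The following Python code (not the authors' implementation) keeps only the non-dominated solutions of the union of two fronts; the cost vectors are invented for the example.

```python
# Minimal sketch of joining per-process Pareto subsets: the final front is
# the non-dominated subset of the union of the partial fronts.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_filter(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

def merge_fronts(*fronts):
    merged = [s for front in fronts for s in front]
    return pareto_filter(merged)

front_p0 = [(10, 7), (12, 5)]            # found by process 0 (cost vectors)
front_p1 = [(9, 9), (11, 6), (12, 8)]    # found by process 1; (12, 8) is dominated
print(merge_fronts(front_p0, front_p1))  # (12, 8) is filtered out
```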
APA, Harvard, Vancouver, ISO, and other styles
10

Cavus, Mustafa, Mohammed Shatnawi, Resit Sendag, and Augustus K. Uht. "Fast Key-Value Lookups with Node Tracker." ACM Transactions on Architecture and Code Optimization 18, no. 3 (June 2021): 1–26. http://dx.doi.org/10.1145/3452099.

Full text
Abstract:
Lookup operations for in-memory databases are heavily memory bound, because they often rely on pointer-chasing linked data structure traversals. They also have many branches that are hard to predict due to random key lookups. In this study, we show that although cache misses are the primary bottleneck for these applications, without a method for eliminating the branch mispredictions only a small fraction of the performance benefit is achieved through prefetching alone. We propose the Node Tracker (NT), a novel programmable prefetcher/pre-execution unit that is highly effective in exploiting inter key-lookup parallelism to improve single-thread performance. We extend NT with branch outcome streaming (BOS) to reduce branch mispredictions and show that this achieves an extra 3× speedup. We then evaluate the NT as a pre-execution unit and demonstrate that we can further improve the performance in both single- and multi-threaded execution modes. Our results show that, on average, NT improves single-thread performance by 4.1× when used as a prefetcher; 11.9× as a prefetcher with BOS; 14.9× as a pre-execution unit and 18.8× as a pre-execution unit with BOS. Finally, with 24 cores of the latter version, we achieve a speedup of 203× and 11× over the single-core and 24-core baselines, respectively.
APA, Harvard, Vancouver, ISO, and other styles
11

Middelburg, Cornelis. "Program Algebra for Random Access Machine Programs." Scientific Annals of Computer Science XXXII, no. 2 (December 7, 2022): 285–319. http://dx.doi.org/10.7561/sacs.2022.2.285.

Full text
Abstract:
This paper presents an algebraic theory of instruction sequences with instructions for a random access machine (RAM) as basic instructions, the behaviours produced by the instruction sequences concerned under execution, and the interaction between such behaviours and RAM memories. This theory provides a setting for the development of theory in areas such as computational complexity and analysis of algorithms that distinguishes itself by offering the possibility of equational reasoning to establish whether an instruction sequence computes a given function and being more general than the setting provided by any known version of the RAM model of computation. In this setting, a semi-realistic version of the RAM model of computation and a bit-oriented time complexity measure for this version are introduced. Under the time measure concerned, semi-realistic RAMs can be simulated by multi-tape Turing machines with quadratic time overhead.
APA, Harvard, Vancouver, ISO, and other styles
12

Mahammed, Nadir, and Sidi Mohamed Benslimane. "An Evolutionary Algorithm Based Approach for Business Process Multi-Criteria Optimization." International Journal of Organizational and Collective Intelligence 7, no. 2 (April 2017): 34–53. http://dx.doi.org/10.4018/ijoci.2017040103.

Full text
Abstract:
Business process design optimization is known as the problem of creating feasible business processes while optimizing criteria such as resource cost and execution time. In this paper, the authors propose an evolutionary multi-criteria approach based on a modified evolutionary algorithm for generating optimized business processes. The main contribution of this work is a framework capable of (i) generating business processes using an enhanced version of the evolutionary algorithm NSGA-II, (ii) verifying the feasibility of each business process created using an effective algorithm, and (iii) selecting Pareto-optimal solutions in a multi-criteria optimization environment with up to three criteria, using an effective fitness function. The experimental results showed that the authors' proposal generates efficient business processes of high quality in terms of qualitative parameters compared with existing solutions.
APA, Harvard, Vancouver, ISO, and other styles
13

Kannimuthu, Marimuthu, Benny Raphael, Palaneeswaran Ekambaram, and Ananthanarayanan Kuppuswamy. "Comparing optimization modeling approaches for the multi-mode resource-constrained multi-project scheduling problem." Engineering, Construction and Architectural Management 27, no. 4 (November 11, 2019): 893–916. http://dx.doi.org/10.1108/ecam-03-2019-0156.

Full text
Abstract:
Purpose: Construction firms keep minimal resources to maintain productive working capital. Hence, resources are constrained and have to be shared among multiple projects in an organization. Optimal allocation of resources is a key challenge in such situations. Several approaches and heuristics have been proposed for this task. The purpose of this paper is to compare two approaches for multi-mode resource-constrained project scheduling in a multi-project environment. These are the single-project approach (portfolio optimization) and the multi-project approach (each project is optimized individually, and then heuristic rules are used to satisfy the portfolio constraint).
Design/methodology/approach: A direct search algorithm called Probabilistic Global Search Lausanne is used for schedule optimization. Multiple solutions are generated that achieve different trade-offs among the three criteria, namely, time, cost and quality. Good compromise solutions among these are identified using a multi-criteria decision making method, Relaxed Restricted Pareto Version 4. The solutions obtained using the single-project and multi-project approaches are compared in order to evaluate their advantages and disadvantages. Data from two sources are used for the evaluation: modified multi-mode resource-constrained project scheduling problem data sets from the project scheduling problem library (PSPLIB) and three real case study projects in India.
Findings: Computational results prove the superiority of the single-project approach over heuristic priority rules (multi-project approach). The single-project approach identifies better solutions compared to the multi-project approach. However, the multi-project approach involves fewer optimization variables and is faster in execution.
Research limitations/implications: It is feasible to adopt the single-project approach in practice; realistic resource constraints can be incorporated in a multi-objective optimization formulation; and good compromise solutions that achieve acceptable trade-offs among the conflicting objectives can be identified.
Originality/value: An integer programming model was developed in this research to optimize the multiple objectives in a multi-project environment considering explicit resource constraints and maximum daily costs constraints. This model was used to compare the performance of the two multi-project environment approaches. Unlike existing work in this area, the model used to predict the quality of activity execution modes is based on data collected from real construction projects.
APA, Harvard, Vancouver, ISO, and other styles
14

Molina-Pérez, Daniel, Edgar Alfredo Portilla-Flores, Eduardo Vega-Alvarado, Maria Bárbara Calva-Yañez, and Gabriel Sepúlveda-Cervantes. "A Novel Multi-Objective Harmony Search Algorithm with Pitch Adjustment by Genotype." Applied Sciences 11, no. 19 (September 25, 2021): 8931. http://dx.doi.org/10.3390/app11198931.

Full text
Abstract:
In this work, a new version of the Harmony Search algorithm for solving multi-objective optimization problems is proposed, MOHSg, with pitch adjustment using genotype. The main contribution consists of adjusting the pitch using the crowding distance by genotype; that is, the distancing in the search space. This adjustment automatically regulates the exploration–exploitation balance of the algorithm, based on the distribution of the harmonies in the search space during the formation of Pareto fronts. Therefore, MOHSg only requires the presetting of the harmony memory accepting rate and pitch adjustment rate for its operation, avoiding the use of a static bandwidth or dynamic parameters. MOHSg was tested through the execution of diverse test functions, and it was able to produce results similar to or better than those generated by Harmony Search variants representative of the state of the art in multi-objective optimization.
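A sketch of the genotype-based crowding distance that MOHSg is said to use for pitch adjustment is shown below. It is the standard crowding-distance computation applied to decision vectors; how the distance is then mapped to the pitch adjustment is not reproduced here, and the harmony values are illustrative.

```python
# Standard crowding distance, applied to genotypes (decision vectors) rather
# than objective values, as the abstract describes.
def crowding_distance(population):
    """population: list of decision vectors (genotypes) of equal length."""
    n, dims = len(population), len(population[0])
    distance = [0.0] * n
    for d in range(dims):
        order = sorted(range(n), key=lambda i: population[i][d])
        lo, hi = population[order[0]][d], population[order[-1]][d]
        distance[order[0]] = distance[order[-1]] = float("inf")  # boundary points
        if hi == lo:
            continue
        for rank in range(1, n - 1):
            prev_v = population[order[rank - 1]][d]
            next_v = population[order[rank + 1]][d]
            distance[order[rank]] += (next_v - prev_v) / (hi - lo)
    return distance

harmonies = [[0.1, 0.9], [0.2, 0.7], [0.8, 0.3], [0.85, 0.25]]
print(crowding_distance(harmonies))  # sparser harmonies get larger values
```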
APA, Harvard, Vancouver, ISO, and other styles
15

HAMANO, MASAHIRO, and PHILIP SCOTT. "On geometry of interaction for polarized linear logic." Mathematical Structures in Computer Science 28, no. 10 (September 28, 2017): 1639–94. http://dx.doi.org/10.1017/s0960129517000196.

Full text
Abstract:
We present Geometry of Interaction (GoI) models for Multiplicative Polarized Linear Logic, MLLP, which is the multiplicative fragment of Olivier Laurent's Polarized Linear Logic. This is done by uniformly adding multi-points to various categorical models of GoI. Multi-points are shown to play an essential role in semantically characterizing the dynamics of proof networks in polarized proof theory. For example, they permit us to characterize the key feature of polarization, focusing, as well as being fundamental to our construction of concrete polarized GoI models. Our approach to polarized GoI involves the following two independent studies, based on different categorical perspectives of GoI: (i) Inspired by the work of Abramsky, Haghverdi and Scott, a polarized GoI situation is defined in which multi-points are added to a traced monoidal category equipped with a reflexive object U. Using this framework, categorical versions of Girard's execution formula are defined, as well as the GoI interpretation of MLLP proofs. Running the execution formula is shown to characterize the focusing property (and thus polarities) as well as the dynamics of cut elimination. (ii) The Int construction of Joyal–Street–Verity is another fundamental categorical structure for modelling GoI. Here, we investigate it in a multi-pointed setting. Our presentation yields a compact version of Hamano–Scott's polarized categories, and thus denotational models of MLLP. These arise from a contravariant duality between monoidal categories of positive and negative objects, along with an appropriate bimodule structure (representing ‘non-focused proofs’) between them. Finally, as a special case of (ii) above, a compact model of MLLP is also presented based on Rel (the category of sets and relations) equipped with multi-points.
APA, Harvard, Vancouver, ISO, and other styles
16

Fuangkhon, Piyabute. "Parallel Multi-Class Contour Preserving Classification." Journal of Intelligent Systems 26, no. 1 (January 1, 2017): 109–21. http://dx.doi.org/10.1515/jisys-2015-0038.

Full text
Abstract:
Serial multi-class contour preserving classification can improve the representation of the contour of the data to improve the levels of classification accuracy for a feed-forward neural network (FFNN). The algorithm synthesizes fundamental multi-class outpost vectors (FMCOVs) and additional multi-class outpost vectors (AMCOVs) at the decision boundary between consecutive classes of data to narrow the space of data. Both FMCOVs and AMCOVs assist the FFNN in placing the hyper-planes in such a way that the data can be classified more accurately. However, the technique was designed to utilize only one processor. As a result, the execution time of the algorithm is significantly long. This article presents an improved version of serial multi-class contour preserving classification that overcomes its time deficiency by utilizing thread-level parallelism to support parallel computing on multi-processor or multi-core systems. The parallel algorithm distributes the data set and the processing of the FMCOV and AMCOV generators over the available threads to increase CPU utilization and the speedup factors of the FMCOV and AMCOV generators. The technique has been carefully designed to avoid data dependency issues. The experiments were conducted on both synthetic and real-world data sets. The experimental results confirm that the parallel multi-class contour preserving classification clearly outperforms the serial version in terms of CPU utilization and speedup factor.
APA, Harvard, Vancouver, ISO, and other styles
17

Weeks, Nathan T., Glenn R. Luecke, Brandon M. Groth, Marina Kraeva, Li Ma, Luke M. Kramer, James E. Koltes, and James M. Reecy. "High-performance epistasis detection in quantitative trait GWAS." International Journal of High Performance Computing Applications 32, no. 3 (July 12, 2016): 321–36. http://dx.doi.org/10.1177/1094342016658110.

Full text
Abstract:
epiSNP is a program for identifying pairwise single nucleotide polymorphism (SNP) interactions (epistasis) in quantitative-trait genome-wide association studies (GWAS). A parallel MPI version (EPISNPmpi) was created in 2008 to address this computationally expensive analysis on large data sets with many quantitative traits and SNP markers. However, the falling cost of genotyping has led to an explosion of large-scale GWAS data sets that challenge EPISNPmpi’s ability to compute results in a reasonable amount of time. Therefore, we optimized epiSNP for modern multi-core and highly parallel many-core processors to efficiently handle these large data sets. This paper describes the serial optimizations, dynamic load balancing using MPI-3 RMA operations, and shared-memory parallelization with OpenMP to further enhance load balancing and allow execution on the Intel Xeon Phi coprocessor (MIC). For a large GWAS data set, our optimizations provided a 38.43× speedup over EPISNPmpi on 126 nodes using 2 MICs on TACC’s Stampede Supercomputer. We also describe a Coarray Fortran (CAF) version that demonstrates the suitability of PGAS languages for problems with this computational pattern. We show that the Coarray version performs competitively with the MPI version on the NERSC Edison Cray XC30 supercomputer. Finally, the performance benefits of hyper-threading for this application on Edison (average 1.35× speedup) are demonstrated.
APA, Harvard, Vancouver, ISO, and other styles
18

Shang, Yizi, Guiming Lu, Ling Shang, and Guangqian Wang. "Parallel processing on block-based Gauss-Jordan algorithm for desktop grid." Computer Science and Information Systems 8, no. 3 (2011): 739–59. http://dx.doi.org/10.2298/csis100907026s.

Full text
Abstract:
Two kinds of parallelism exist in the block-based Gauss-Jordan (BbGJ) algorithm: intra-step and inter-step parallelism. However, the existing parallel paradigm for the BbGJ algorithm, which targets only intra-step parallelism, cannot meet the requirement of dispatching as many tasks as possible simultaneously to the computing nodes of a desktop grid platform that exploits thousands of volunteer computing resources. To overcome this problem, this paper presents a hybrid parallel paradigm for desktop grid platforms that exploits all the parallelizable parts of the BbGJ algorithm. As is well known, volatility is the key issue of desktop grid platforms, and faults are unavoidable during program execution. The version of the BbGJ algorithm adapted for desktop grid platforms should therefore take volatility into consideration. To address this, the paper adopts a multi-copy distribution strategy and a multi-queue-based task preemption method to ensure that key tasks can be executed on time and, thus, that the whole set of tasks can be finished in a shorter period of time.
APA, Harvard, Vancouver, ISO, and other styles
19

Abdesselam, A. "A Multi-Resolution Texture Image Retrieval Using Fast Fourier Transform." Journal of Engineering Research [TJER] 7, no. 2 (December 1, 2010): 48. http://dx.doi.org/10.24200/tjer.vol7iss2pp48-58.

Full text
Abstract:
Texture is an important visual property that characterizes a wide range of natural and artificial images, which makes it a useful feature for retrieving images. Several approaches have been proposed to describe the texture content of an image. In early research works, such as edge-histogram-based techniques and co-occurrence-based approaches, texture descriptors were mainly extracted from the spatial domain. Later on, dual spaces (transforms of the spatial domain) such as frequency space or spaces resulting from Gabor or wavelet transforms were explored for texture characterization. Recent physiological studies showed that the human visual system can be modeled as a set of independent channels of various orientations and scales; this finding motivated the proliferation of multi-resolution methods for describing texture images. Most of these methods are either wavelet-based or Gabor-based. This paper summarizes our recent study of the use of Fourier-based techniques for characterizing image textures. At first, a single-resolution Fourier-based technique is proposed and its performance is compared against that of some classical Fourier-based methods. The proposed technique is then extended into a multi-resolution version. The performance of the modified technique is compared against those of the single-resolution approach and some other multi-resolution approaches recently described in the literature. Two performance indicators were used in this comparison: retrieval accuracy and execution time.
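As a hedged illustration of frequency-domain texture description, the Python sketch below builds a simple signature from the energy of the FFT magnitude in concentric frequency rings; it follows the general idea of Fourier-based texture characterization rather than the paper's exact descriptor.

```python
# Illustrative Fourier texture signature: energy of the shifted FFT power
# spectrum accumulated over concentric frequency rings.
import numpy as np

def ring_energy_signature(image: np.ndarray, n_rings: int = 8) -> np.ndarray:
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices(spectrum.shape)
    radius = np.hypot(y - cy, x - cx)
    r_max = radius.max()
    signature = np.empty(n_rings)
    for k in range(n_rings):
        mask = (radius >= k * r_max / n_rings) & (radius < (k + 1) * r_max / n_rings)
        signature[k] = spectrum[mask].sum()
    return signature / signature.sum()   # normalize the energy distribution

texture = np.random.default_rng(0).random((64, 64))  # stand-in texture patch
print(ring_energy_signature(texture))
```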
APA, Harvard, Vancouver, ISO, and other styles
20

Berkman, Sophie, Giuseppe Cerati, Kyle Knoepfel, Marc Mengel, Allison Reinsvold Hall, Michael Wang, Brian Gravelle, and Boyana Norris. "Optimizing the hit finding algorithm for liquid argon TPC neutrino detectors using parallel architectures." Journal of Instrumentation 17, no. 01 (January 1, 2022): P01026. http://dx.doi.org/10.1088/1748-0221/17/01/p01026.

Full text
Abstract:
Neutrinos are particles that interact rarely, so identifying them requires large detectors which produce lots of data. Processing this data with the computing power available is becoming even more difficult as the detectors increase in size to reach their physics goals. Liquid argon time projection chamber (LArTPC) neutrino experiments are expected to grow in the next decade to have 100 times more wires than in currently operating experiments, and modernization of LArTPC reconstruction code, including parallelization both at data- and instruction-level, will help to mitigate this challenge. The LArTPC hit finding algorithm is used across multiple experiments through a common software framework. In this paper we discuss a parallel implementation of this algorithm. Using a standalone setup we find speedup factors of two times from vectorization and 30–100 times from multi-threading on Intel architectures. The new version has been incorporated back into the framework so that it can be used by experiments. On a serial execution, the integrated version is about 10 times faster than the previous one and, once parallelization is enabled, further speedups comparable to the standalone program are achieved.
APA, Harvard, Vancouver, ISO, and other styles
21

Barták, Roman, Jiří Švancara, and Ivan Krasičenko. "MAPF Scenario: Software for Evaluating MAPF Plans on Real Robots." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 09 (April 3, 2020): 13602–3. http://dx.doi.org/10.1609/aaai.v34i09.7088.

Full text
Abstract:
Multi-Agent Path Finding (MAPF) deals with finding collision free paths for a set of agents (robots) moving on a graph. The interest in MAPF in the research community started to increase recently partly due to practical applications in areas such as warehousing and computer games. However, the academic community focuses mostly on solving the abstract version of the problem (moving of agents on the graph) with only a few results on real robots. The presented software MAPF Scenario provides a tool for specifying MAPF problems on grid maps, solving the problems using various abstractions (for example, assuming rotation actions or not), simulating execution of plans, and translating the abstract plans to control programs for small robots Ozobots. The tool is intended as a research platform for evaluating abstract MAPF plans on real robots and as an educational and demonstration tool bridging the areas of artificial intelligence and robotics.
APA, Harvard, Vancouver, ISO, and other styles
22

Aragon-Angel, Angela, Adria Rovira-Garcia, Enrique Arcediano-Garrido, and Deimos Ibáñez-Segura. "Galileo Ionospheric Correction Algorithm Integration into the Open-Source GNSS Laboratory Tool Suite (gLAB)." Remote Sensing 13, no. 2 (January 7, 2021): 191. http://dx.doi.org/10.3390/rs13020191.

Full text
Abstract:
Users of the global navigation satellite system (GNSS) operating with a single-frequency receiver must use an ionospheric correction algorithm (ICA) to account for the delay introduced on radio waves by the upper atmosphere. Galileo, the European GNSS, uses an ICA named NeQuick-G. In an effort to foster the adoption of NeQuick-G by final users, two implementations in C language have been recently made available to the public by the European Space Agency (ESA) and the Joint Research Centre (JRC) of the European Commission (EC), respectively. The aim of the present contribution is to compare the slant total electron content (STEC) predictions of the two aforementioned implementations of NeQuick-G. For this purpose, we have used actual multi-constellation and multi-frequency data for several hundreds of stations distributed worldwide belonging to the Multi GNSS Experiment (MGEX) network of the International GNSS Service (IGS). For each first day of the month during year 2019, the STECs of the two NeQuick-G versions were compared in terms of accuracy, consistency, availability, and execution time. Our study concludes that both implementations of NeQuick-G perform equivalently. Indeed, in over 99.998% of the 2125 million STECs computed, the output is exactly coincident. In contrast, 0.002% of the whole set of STECs for those rays are tangent to the Earth, the behavior of both implementations differs. We confirmed the discrepancy by processing radio-occultation actual measurements from a COSMIC-2 low Earth orbit satellite. We selected the JRC version of the Galileo ICA to be integrated into the GNSS LABoratory (gLAB) tool suite, because its open license and its processing speed (it is 13.88% faster than the ESA version). NeQuick-G outperforms the GPS ICA in STEC residuals up to 12.15 TECUs (percentile 96.23th) and in the 3D position errors, up to 5.76 m (percentile 99.18th) for code-pseudorange positioning.
APA, Harvard, Vancouver, ISO, and other styles
24

Filho, Edson Ramiro Lucas, Lambros Odysseos, Yang Lun, Fu Kebo, and Herodotos Herodotou. "DITIS: A Distributed Tiered Storage Simulator." Infocommunications journal 14, no. 4 (2022): 18–25. http://dx.doi.org/10.36244/icj.2022.4.3.

Full text
Abstract:
This paper presents DITIS, a simulator for distributed and tiered file-based storage systems. In particular, DITIS can model a distributed storage system with up to three levels of storage tiers and up to three additional levels of caches. Each tier and cache can be configured with a different number and type of storage media devices (e.g., HDD, SSD, NVRAM, DRAM), each with its own performance characteristics. The simulator utilizes the provided characteristics in fine-grained performance cost models (which are distinct for each device type) in order to compute the duration of each I/O request processed on each tier. At the same time, DITIS simulates the overall flow of requests through the different layers and storage nodes of the system using numerous pluggable policies that control every aspect of execution, ranging from request routing and data redundancy to cache and tiering strategies. For performing the simulation, DITIS adopts an extended version of the Actor Model, in which key components of the system exchange asynchronous messages with each other, much like a real distributed multi-threaded system. The ability to simulate the execution of a workload in such an accurate and realistic way brings multiple benefits for its users, since DITIS can be used to better understand the behavior of the underlying file system as well as to evaluate different storage setups and policies.
APA, Harvard, Vancouver, ISO, and other styles
25

Blanco-Cuaresma, Sergi, and Emeline Bolmont. "What can the programming language Rust do for astrophysics?" Proceedings of the International Astronomical Union 12, S325 (October 2016): 341–44. http://dx.doi.org/10.1017/s1743921316013168.

Full text
Abstract:
The astrophysics community uses different tools for computational tasks such as complex systems simulations, radiative transfer calculations or big data. Programming languages like Fortran, C or C++ are commonly present in these tools and, generally, the language choice was made based on the need for performance. However, this comes at a cost: safety. For instance, a common source of error is the access to invalid memory regions, which produces random execution behaviors and affects the scientific interpretation of the results. In 2015, Mozilla Research released the first stable version of a new programming language named Rust. Many features make this new language attractive for the scientific community: it is open source and it guarantees memory safety while offering zero-cost abstractions. We explore the advantages and drawbacks of Rust for astrophysics by re-implementing the fundamental parts of Mercury-T, a Fortran code that simulates the dynamical and tidal evolution of multi-planet systems.
APA, Harvard, Vancouver, ISO, and other styles
26

León-Paredes, Gabriel A., Liliana I. Barbosa-Santillán, and Juan J. Sánchez-Escobar. "A Heterogeneous System Based on Latent Semantic Analysis Using GPU and Multi-CPU." Scientific Programming 2017 (2017): 1–19. http://dx.doi.org/10.1155/2017/8131390.

Full text
Abstract:
Latent Semantic Analysis (LSA) is a method that allows us to automatically index and retrieve information from a set of objects by reducing the term-by-document matrix using the Singular Value Decomposition (SVD) technique. However, LSA has a high computational cost for analyzing large amounts of information. The goals of this work are (i) to improve the execution time of semantic space construction, dimensionality reduction, and information retrieval stages of LSA based on heterogeneous systems and (ii) to evaluate the accuracy and recall of the information retrieval stage. We present a heterogeneous Latent Semantic Analysis (hLSA) system, which has been developed using General-Purpose computing on Graphics Processing Units (GPGPUs) architecture, which can solve large numeric problems faster through the thousands of concurrent threads on multiple CUDA cores of GPUs and multi-CPU architecture, which can solve large text problems faster through a multiprocessing environment. We execute the hLSA system with documents from the PubMed Central (PMC) database. The results of the experiments show that the acceleration reached by the hLSA system for large matrices with one hundred and fifty thousand million values is around eight times faster than the standard LSA version with an accuracy of 88% and a recall of 100%.
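A minimal CPU-only sketch of the LSA pipeline that hLSA accelerates is given below: build a term-by-document matrix, reduce it with a truncated SVD, and rank documents by cosine similarity in the reduced space. The corpus, dimensionality, and weighting are illustrative assumptions, not the paper's setup.

```python
# Small LSA sketch: term-by-document matrix -> truncated SVD -> retrieval by
# cosine similarity in the latent space.
import numpy as np

docs = ["gpu threads accelerate svd",
        "latent semantic analysis of documents",
        "svd reduces the term document matrix"]
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                        # number of latent dimensions
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T       # documents in the latent space

def query(text: str):
    q = np.array([text.split().count(w) for w in vocab], dtype=float)
    q_vec = q @ U[:, :k]                     # project query into the same space
    norms = np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-12
    return doc_vecs @ q_vec / norms          # cosine similarity per document

print(query("svd matrix"))
```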
APA, Harvard, Vancouver, ISO, and other styles
27

Jebamalar, A., and Anbuselvi Anbuselvi. "Survey on Nearest Keyword Set Search in Multi-dimensional Datasets." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 8 (August 30, 2017): 64. http://dx.doi.org/10.23956/ijarcsse.v7i8.23.

Full text
Abstract:
Keyword querying in multi-dimensional datasets is a noteworthy application in data mining. It is common that the objects in a spatial database (e.g., restaurants/hotels) are associated with keyword(s) to indicate their businesses/services/features. An interesting problem known as Closest Keywords search is to query objects, called a keyword cover, which together cover a set of query keywords and have the minimum inter-object distance. Recently, we observe the increasing availability and importance of keyword ratings in object evaluation for better decision making. This motivates us to study a generic version of Closest Keywords search, called Best Keyword Cover, which considers inter-object distance as well as the keyword ratings of objects. The baseline algorithm is inspired by the methods for Closest Keywords search, which are based on exhaustively combining objects from different query keywords to generate candidate keyword covers. When the number of query keywords increases, the performance of the baseline algorithm drops considerably as a consequence of the massive number of candidate keyword covers generated. To attack this drawback, this work proposes a much more scalable algorithm called keyword nearest neighbor expansion (keyword-NNE). Compared to the baseline algorithm, the keyword-NNE algorithm significantly reduces the number of candidate keyword covers generated. In-depth analysis and extensive experiments on real datasets have justified the superiority of our keyword-NNE algorithm.
APA, Harvard, Vancouver, ISO, and other styles
28

Martínez, Héctor, Sergio Barrachina, Maribel Castillo, Joaquín Tárraga, Ignacio Medina, Joaquín Dopazo, and Enrique S. Quintana-Ortí. "A framework for genomic sequencing on clusters of multicore and manycore processors." International Journal of High Performance Computing Applications 32, no. 3 (June 22, 2016): 393–406. http://dx.doi.org/10.1177/1094342016653243.

Full text
Abstract:
The advances in genomic sequencing during the past few years have motivated the development of fast and reliable software for DNA/RNA sequencing on current high performance architectures. Most of these efforts target multicore processors, only a few can also exploit graphics processing units, and a much smaller set will run in clusters equipped with any of these multi-threaded architecture technologies. Furthermore, the examples that can be used on clusters today are all strongly coupled with a particular aligner. In this paper we introduce an alignment framework that can be leveraged to coordinately run any “single-node” aligner, taking advantage of the resources of a cluster without having to modify any portion of the original software. The key to our transparent migration lies in hiding the complexity associated with the multi-node execution (such as coordinating the processes running in the cluster nodes) inside the generic-aligner framework. Moreover, following the design and operation in our Message Passing Interface (MPI) version of HPG Aligner RNA BWT, we organize the framework into two stages in order to be able to execute different aligners in each one of them. With this configuration, for example, the first stage can ideally apply a fast aligner to accelerate the process, while the second one can be tuned to act as a refinement stage that further improves the global alignment process with little cost.
APA, Harvard, Vancouver, ISO, and other styles
29

Fedulov, Alexander S., Yaroslav A. Fedulov, and Anastasiya S. Fedulova. "Joint use of OpenMP and MPI technologies on the nodes of the computational cluster." Journal Of Applied Informatics 16, no. 92 (April 30, 2021): 60–71. http://dx.doi.org/10.37791/2687-0649-2021-16-2-60-71.

Full text
Abstract:
This work is devoted to the problem of implementing an efficient parallel program that solves the assigned task using the maximum available amount of computing cluster resources in order to obtain the corresponding gain in performance with respect to the sequential version of the algorithm. The main objective of the work was to study the possibilities of joint use of the parallelization technologies OpenMP and MPI, considering the characteristics and features of the problems being solved, to increase the performance of executing parallel algorithms and programs on a computing cluster. This article provides a brief overview of approaches to calculating the complexity functions of sequential programs. To determine the complexity of parallel programs, an approach based on operational analysis was used. The features of the sequential-program parallelization technologies OpenMP and MPI are described. The main software and hardware factors affecting the execution speed of parallel programs on the nodes of a computing cluster are presented. The main attention in this paper is paid to studying how the ratio between computational and exchange operations in programs affects performance. To implement the research, parallel OpenMP and MPI testing programs were developed, in which the total number of operations and the ratio between computational and exchange operations are set. A computing cluster consisting of several nodes was used as the hardware and software platform. Experimental studies have confirmed the effectiveness of the hybrid model of a parallel program in multi-node systems with heterogeneous memory, using OpenMP in shared-memory subsystems and MPI in distributed-memory subsystems.
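The hybrid pattern studied in this work, MPI across nodes and OpenMP within a node, can be sketched in Python using mpi4py for the distributed level and a thread pool standing in for OpenMP. This is a conceptual analogue only; it assumes mpi4py and an MPI runtime are installed, and the workload is a toy sum.

```python
# Hedged illustration of the hybrid MPI + shared-memory pattern.
# Run with e.g. `mpiexec -n 4 python hybrid.py`.
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N, THREADS = 1_000_000, 4
chunk = range(rank * N // size, (rank + 1) * N // size)   # distributed-memory split

def partial_sum(sub):
    return sum(i * i for i in sub)                        # per-thread work item

step = max(1, len(chunk) // THREADS)
pieces = [chunk[i:i + step] for i in range(0, len(chunk), step)]
with ThreadPoolExecutor(max_workers=THREADS) as pool:
    local = sum(pool.map(partial_sum, pieces))            # "OpenMP" level (threads)

total = comm.allreduce(local, op=MPI.SUM)                 # "MPI" level (between ranks)
if rank == 0:
    print(total)
```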
APA, Harvard, Vancouver, ISO, and other styles
30

García, José, Paola Moraga, Broderick Crawford, Ricardo Soto, and Hernan Pinto. "Binarization Technique Comparisons of Swarm Intelligence Algorithm: An Application to the Multi-Demand Multidimensional Knapsack Problem." Mathematics 10, no. 17 (September 3, 2022): 3183. http://dx.doi.org/10.3390/math10173183.

Full text
Abstract:
In order to minimize execution times, improve the quality of solutions, and address more extensive target situations, optimization techniques, particularly metaheuristics, are continually improved. Hybridizing procedures are one of these noteworthy strategies due to their wide range of applications. This article describes a hybrid algorithm that combines the k-means method to produce a binary version of the cuckoo search and sine cosine algorithms. The binary algorithms are applied on the NP-hard multi-demand multidimensional knapsack problem. This problem is of particular interest because it has two types of constraints. The first group of constraints is related to the capacity of the knapsacks, and a second type is associated with the demand that must be met. Experiments were undertaken to acquire insight into the contribution of the k-means technique and the local search operator to the final results. Additionally, a comparison is made with two other types of binarization, the first based on a random method and the second based on the percentile concept. The results reveal that the k-means hybrid algorithm consistently provides superior results in most cases studied. In particular, incorporating the local search operator improved the results by an average of 0.23%. On the other hand, when comparing the results with 100 items and 30-30 restrictions, k-means was 1.06% better on average than the random operator.
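The k-means binarization idea described above can be sketched as follows: the continuous values proposed by the metaheuristic are clustered, clusters are ranked by centroid magnitude, and each rank is mapped to a bit-flip probability. The probabilities and data below are illustrative assumptions, not the paper's calibrated settings.

```python
# Simplified sketch of k-means transfer binarization for a continuous
# metaheuristic step (e.g., cuckoo search or sine cosine).
import numpy as np
from sklearn.cluster import KMeans

def binarize(continuous_solution, current_bits, k=3, probs=(0.1, 0.5, 0.9), seed=0):
    """Map a continuous metaheuristic step onto a 0/1 solution."""
    values = np.abs(np.asarray(continuous_solution, dtype=float)).reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(values)
    # Rank clusters by centroid magnitude: larger movement -> higher flip probability.
    rank_of_cluster = np.argsort(np.argsort(km.cluster_centers_.ravel()))
    flip_prob = np.array(probs)[rank_of_cluster[km.labels_]]
    rng = np.random.default_rng(seed)
    flips = rng.random(len(flip_prob)) < flip_prob
    return np.where(flips, 1 - np.asarray(current_bits), current_bits)

x_continuous = [0.05, -1.8, 0.4, 2.6, -0.2, 1.1]   # illustrative continuous step
x_binary     = [0, 1, 0, 1, 1, 0]                  # current binary solution
print(binarize(x_continuous, x_binary))
```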
APA, Harvard, Vancouver, ISO, and other styles
31

Primavera, Leonardo, and Emilia Florio. "A Hybrid MPI-OpenMP Parallel Algorithm for the Assessment of the Multifractal Spectrum of River Networks." Water 13, no. 21 (November 5, 2021): 3122. http://dx.doi.org/10.3390/w13213122.

Full text
Abstract:
The possibility to create a flood wave in a river network depends on the geometric properties of the river basin. Among the models that try to forecast the Instantaneous Unit Hydrograph (IUH) of rainfall precipitation, the so-called Multifractal Instantaneous Unit Hydrograph (MIUH) rather successfully connects the multifractal properties of the river basin to the observed IUH. Such properties can be assessed through different types of analysis (fixed-size algorithm, correlation integral, fixed-mass algorithm, sandbox algorithm, and so on). The fixed-mass algorithm is the one that produces the most precise estimate of the properties of the multifractal spectrum that are relevant for the MIUH model. However, a disadvantage of this method is that it requires very long computational times to produce the best possible results. In a previous work, we proposed a parallel version of the fixed-mass algorithm, which drastically reduced the computational times almost proportionally to the number of Central Processing Unit (CPU) cores available on the computational machine by using the Message Passing Interface (MPI), which is a standard for distributed memory clusters. In the present work, we further improved the code in order to include the use of the Open Multi-Processing (OpenMP) paradigm to facilitate the execution and improve the computational speed-up on single processor, multi-core workstations, which are much more common than multi-node clusters. Moreover, the assessment of the multifractal spectrum has also been improved through a direct computation method. Currently, to the best of our knowledge, this code represents the state-of-the-art for a fast evaluation of the multifractal properties of a river basin, and it opens up a new scenario for an effective flood forecast in reasonable computational times.
APA, Harvard, Vancouver, ISO, and other styles
32

Акимова, Е. Н., В. Е. Мисилов, А. Ф. Скурыдина, and А. И. Третьяков. "Gradient methods for solving inverse gravimetry and magnetometry problems on the Uran supercomputer." Numerical Methods and Programming (Vychislitel'nye Metody i Programmirovanie), no. 1 (April 2, 2015): 155–64. http://dx.doi.org/10.26089/nummet.v16r116.

Full text
Abstract:
A modified linearized steepest descent method with variable weight factors is proposed to solve three-dimensional structural inverse gravimetry and magnetometry problems of finding the interfaces between constant density or magnetization layers in a multilayer medium. A linearized conjugate gradient method and its modified version with weight factors for solving the gravimetry and magnetometry problems in a multilayer medium are constructed. On the basis of the modified gradient-type methods, a number of efficient parallel algorithms are numerically implemented on an Intel multi-core processor and NVIDIA GPUs. The developed parallel iterative algorithms are compared for a model problem in terms of the relative error, the number of iterations, and the execution time.
APA, Harvard, Vancouver, ISO, and other styles
33

Ghannoum, Salim, Waldir Leoncio Netto, Damiano Fantini, Benjamin Ragan-Kelley, Amirabbas Parizadeh, Emma Jonasson, Anders Ståhlberg, Hesso Farhan, and Alvaro Köhn-Luque. "DIscBIO: A User-Friendly Pipeline for Biomarker Discovery in Single-Cell Transcriptomics." International Journal of Molecular Sciences 22, no. 3 (January 30, 2021): 1399. http://dx.doi.org/10.3390/ijms22031399.

Full text
Abstract:
The growing attention toward the benefits of single-cell RNA sequencing (scRNA-seq) is leading to a myriad of computational packages for the analysis of different aspects of scRNA-seq data. For researchers without advanced programming skills, it is very challenging to combine several packages in order to perform the desired analysis in a simple and reproducible way. Here we present DIscBIO, an open-source, multi-algorithmic pipeline for easy, efficient and reproducible analysis of cellular sub-populations at the transcriptomic level. The pipeline integrates multiple scRNA-seq packages and allows biomarker discovery with decision trees and gene enrichment analysis in a network context using single-cell sequencing read counts through clustering and differential analysis. DIscBIO is freely available as an R package. It can be run either in command-line mode or through a user-friendly computational pipeline using Jupyter notebooks. We showcase all pipeline features using two scRNA-seq datasets. The first dataset consists of circulating tumor cells from patients with breast cancer. The second one is a cell cycle regulation dataset in myxoid liposarcoma. All analyses are available as notebooks that integrate, in a sequential narrative, R code with explanatory text, output data and images. R users can use the notebooks to understand the different steps of the pipeline, which will guide them in exploring their scRNA-seq data. We also provide a cloud version using Binder that allows the execution of the pipeline without the need to download R, Jupyter or any of the packages used by the pipeline. The cloud version can serve as a tutorial for training purposes, especially for those who are not R users or have limited programming skills. However, in order to do meaningful scRNA-seq analyses, all users will need to understand the implemented methods and their possible options and limitations.
APA, Harvard, Vancouver, ISO, and other styles
34

Kumar, Rajeev, Laxman Singh, and Rajdev Tiwari. "Path planning for the autonomous robots using modified grey wolf optimization approach." Journal of Intelligent & Fuzzy Systems 40, no. 5 (April 22, 2021): 9453–70. http://dx.doi.org/10.3233/jifs-201926.

Full text
Abstract:
Path planning for robots plays a vital role in finding the most feasible path, given power requirements, environmental factors, and other limitations. Path planning for autonomous robots is a tedious task, as the robot needs to locate a suitable path to move between source and destination points in a complex environment. In this paper, we introduce a new technique named modified grey wolf optimization (MGWO) to solve the path planning problem for multiple robots. MGWO is a modified version of the conventional grey wolf optimization (GWO) algorithm, which belongs to the category of metaheuristic algorithms. GWO has gained wide popularity for optimizing different parameters in a discrete search space to solve various problems. The prime goal of the proposed methodology is to determine the optimal path while maintaining a sufficient distance from other objects and moving robots. In the MGWO method, omega wolves are treated the same as delta wolves in the exploration process, which helps increase the convergence speed and minimize the execution time. The simulation results show that MGWO gives better performance than other state-of-the-art methods for path planning of multiple mobile robots. The performance of the proposed method is compared with standard evolutionary algorithms, viz., Particle Swarm Optimization (PSO), the Intelligent BAT Algorithm (IBA), Grey Wolf Optimization (GWO), and Variable Weight Grey Wolf Optimization (VW-GWO), and it yielded better results than all of these.
APA, Harvard, Vancouver, ISO, and other styles
35

Alhakami, Wajdi, Abdullah Baz, Hosam Alhakami, Abhishek Pandey, and Raees Khan. "Symmetrical Model of Smart Healthcare Data Management: A Cybernetics Perspective." Symmetry 12, no. 12 (December 16, 2020): 2089. http://dx.doi.org/10.3390/sym12122089.

Full text
Abstract:
Issues such as maintaining the security and integrity of data in digital healthcare are growing day by day in terms of size and cost. The healthcare industry needs to work on effective mechanisms to manage these concerns and prevent any debilitating crisis that might affect patients as well as overall health management. To tackle such critical issues in a simple, feasible, and symmetrical manner, the authors considered the ideology of cybernetics. Working towards this intent, this paper proposes a symmetrical model that illustrates a compact version of the adopted ideology as a pathway for future researchers. Furthermore, the proposed ideology of cybernetics specifically focuses on how to plan the entire design concept more effectively. It is important for the designer to prepare for the future and manage the design structure from a product perspective. Therefore, the proposed ideology provides a symmetric mechanism that includes a variety of estimation and evaluation techniques as well as their management. The proposed model generates a symmetric infrastructure with reduced variety issues that can produce highly effective results owing to efficient usability, operability, and symmetric operation execution, which are the benefits of the proposed model. Furthermore, the study also performed a performance simulation assessment by adopting a multi-criteria decision-making approach that helped the authors compare the various existing and proposed models based on their levels of effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
36

Di Modica, Giuseppe, Luca Evangelisti, Luca Foschini, Assimo Maris, and Sonia Melandri. "Testing the Scalability of the HS-AUTOFIT Tool in a High-Performance Computing Environment." Electronics 10, no. 18 (September 13, 2021): 2251. http://dx.doi.org/10.3390/electronics10182251.

Full text
Abstract:
In recent years, the development of broadband chirped-pulse Fourier transform microwave spectrometers has revolutionized the field of rotational spectroscopy. Currently, it is possible to experimentally obtain a large quantity of spectra that would be difficult to analyze manually, for two main reasons. First, recent instruments allow obtaining a considerable amount of data in very short times, and second, it is possible to analyze complex mixtures of molecules that all contribute to the density of the spectra. AUTOFIT is a spectral assignment software application that was developed in 2013 to support and facilitate the analysis. Notwithstanding the benefits AUTOFIT brings in terms of automating the analysis of the accumulated data, it still does not guarantee good performance in terms of execution time, because it leverages the computing power of a single machine. To address this limitation, we developed a parallel version of AUTOFIT, called HS-AUTOFIT, capable of running on high-performance computing (HPC) clusters to shorten the time needed to explore and analyze spectral big data. In this paper, we report some tests conducted on a real HPC cluster aimed at providing a quantitative assessment of HS-AUTOFIT's scaling capabilities in a multi-node computing context. The collected results demonstrate the benefits of the proposed approach in terms of a significant reduction in computing time.
APA, Harvard, Vancouver, ISO, and other styles
37

Zelenchuk, Nikita, Ekaterina Pristavka, and Aleksandr Maliavko. "Compiler from El programing language: improvement and development." Digital Technology Security, no. 2 (June 25, 2021): 9–19. http://dx.doi.org/10.17212/2782-2230-2021-2-9-19.

Full text
Abstract:
The implementation of the new multi-paradigm (functional-imperative) programming language El, developed at the Department of Computer Science of the Novosibirsk State Technical University, in the form of a compiler requires solving a number of complex problems. The current version of the compiler implements only part of the language's functionality and generates far from optimal executable code. In this paper, we consider the problem of efficiently compiling an El program, taking into account the need to implement the language's new high-level data structures (two-sided lists, vectors with special forms of access, and a number of others) and its control structures, which make it possible to define cyclic and branching computational processes uniformly, as well as the language's mechanism for explicitly controlling the mutability of variables. The tasks of improving and developing a compiler organized according to the classical multi-platform scheme are briefly considered: the front-end (lexical, syntactic, and semantic analyzers) converts the program being translated into pseudocode of a single format, and the efficient LLVM compiler-building infrastructure is used as a back-end that turns the pseudocode into executable code for different platforms. Execution of all possible operations on elements of high-level data structures (lists, tuples, vectors), as well as on arbitrary-precision numbers, has been moved to the runtime support library and, accordingly, can be deeply optimized. For this structure, ways of developing and improving the compiler by deeply reforming and optimizing the chain of transformations implemented by the front-end are formulated. At the initial stage, it is planned to implement the new compiler for two platforms: Linux and Windows.
APA, Harvard, Vancouver, ISO, and other styles
38

Schrag, Deborah, Martin Weiser, Leonard Saltz, Harvey Mamon, Marc Gollub, Ethan Basch, Alan Venook, and Qian Shi. "Challenges and solutions in the design and execution of the PROSPECT Phase II/III neoadjuvant rectal cancer trial (NCCTG N1048/Alliance)." Clinical Trials 16, no. 2 (January 28, 2019): 165–75. http://dx.doi.org/10.1177/1740774518824539.

Full text
Abstract:
Background More than half of the 40,000 incident rectal cancer patients in the United States each year are diagnosed at clinical stage II and III (locally advanced stage). For this group, high rates of cure can be achieved with the combination of pelvic radiation and sensitizing 5-fluorouracil (chemoradiation), surgery and chemotherapy, but treatment is long, arduous and toxicities are substantial. The PROSPECT trial (N1048, NCT01515787) was designed to determine whether neoadjuvant chemotherapy with 5-fluorouracil and oxaliplatin (FOLFOX) could be used as an alternative to neoadjuvant chemoradiation without compromising treatment outcomes and to spare these patients excess toxicity. The statistical design balanced the twin co-primary goals of achieving low local and distant recurrence rates. Study design features contended with the need for stringent safeguards given limited phase II data, the need for straightforward criteria to facilitate both accrual and protocol fidelity and the importance of patients’ perspectives on symptom burden and treatment toxicity. Methods PROSPECT is an ongoing multi-site two-group seamless phase II/III randomized trial comparing standard neoadjuvant chemoradiation versus neoadjuvant chemotherapy with selective use of chemoradiation for patients with locally advanced rectal cancer. Challenges addressed in the design and conduct of PROSPECT have included the following: (1) setting safety thresholds given limited single-center phase II data, (2) establishing workable eligibility criteria, (3) balancing competing time to local and distant recurrence as co-primary endpoints and (4) obtaining reliable and complete data for patients’ symptom burden. The design and implementation challenges, choices, modifications and their implications for the design of future national cooperative group clinical trials are presented. Results PROSPECT incorporated stringent thresholds for both complete surgical resection (R0) and the time to local recurrence as early stopping rules. When predetermined stopping criteria were not met after evaluation of the first 366 participants in the randomized phase II, the study transitioned seamlessly to phase III with cumulative accrual of over 1000 participants. Eligibility criteria stipulating rectal tumor location based on distance from the anal verge were unworkable, and the protocol was amended to a more pragmatic approach that assigned surgeons with primary responsibility for determining eligibility. Central radiology review was feasible and in some cases prompted discontinuation of protocol treatment. Participation in toxicity reporting using the National Cancer Institute’s Patient-Reported Outcomes Version of the Common Terminology Criteria for Adverse Events was uniformly high and was well accepted by participants from over 200 sites in the United States, Canada and Switzerland. Conclusion The strategies used to overcome these obstacles may inform the design of other studies that involve multi-modality treatment interventions, particularly trials where implementation of consistent criteria for eligibility and outcomes across hundreds of practice settings is necessary.
APA, Harvard, Vancouver, ISO, and other styles
39

Barrios, Yubal, Alfonso Rodríguez, Antonio Sánchez, Arturo Pérez, Sebastián López, Andrés Otero, Eduardo de la Torre, and Roberto Sarmiento. "Lossy Hyperspectral Image Compression on a Reconfigurable and Fault-Tolerant FPGA-Based Adaptive Computing Platform." Electronics 9, no. 10 (September 26, 2020): 1576. http://dx.doi.org/10.3390/electronics9101576.

Full text
Abstract:
This paper describes a novel hardware implementation of a lossy multispectral and hyperspectral image compressor for on-board operation in space missions. The compression algorithm is a lossy extension of the Consultative Committee for Space Data Systems (CCSDS) 123.0-B-1 lossless standard that includes a bit-rate control stage, which in turn manages the losses the compressor may introduce to achieve higher compression ratios without compromising the recovered image quality. The algorithm has been implemented using High-Level Synthesis (HLS) techniques to increase design productivity by raising the abstraction level. The proposed lossy compression solution is deployed onto ARTICo3, a dynamically reconfigurable multi-accelerator architecture, obtaining a run-time adaptive solution that enables user-selectable performance (i.e., load more hardware accelerators to transparently increase throughput), power consumption, and fault tolerance (i.e., group hardware accelerators to transparently enable hardware redundancy). The whole compression solution is tested on a Xilinx Zynq UltraScale+ Field-Programmable Gate Array (FPGA)-based MPSoC using different input images, from multispectral to ultraspectral. For images acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), the proposed implementation renders an execution time of approximately 36 s when 8 accelerators are compressing concurrently at 100 MHz, which in turn uses around 20% of the LUTs and 17% of the dedicated memory blocks available in the target device. In this scenario, a speedup of 15.6× is obtained in comparison with a pure software version of the algorithm running in an ARM Cortex-A53 processor.
APA, Harvard, Vancouver, ISO, and other styles
40

Bagui, Sikha, and Evorell Fridge. "A Comparison of Fair Sharing Algorithms for Regulating Search as a Service API." Transactions on Networks and Communications 8, no. 6 (February 11, 2021): 16–34. http://dx.doi.org/10.14738/tnc.86.9633.

Full text
Abstract:
Providers of a Search as a Service (SaaS) environment must ensure that their users will not monopolize the service or use more than their fair share of resources. Fair sharing algorithms have long been used in computer networking to balance access to a router or switch, and some of these algorithms have also been applied to the control of queries submitted to search engine APIs. If a search query's execution cost can be reliably estimated, fair sharing algorithms can be applied to the input of a SaaS API to ensure everyone has equitable access to the search engine. The novelty of this paper lies in presenting a Single-Server Max-Min Fair Deficit Round Robin algorithm, a modified version of the Multi-Server Max-Min Fair Deficit Round Robin algorithm. The Single-Server Max-Min Fair Deficit Round Robin algorithm is compared with three other fair sharing algorithms, namely token bucket, Deficit Round Robin (DRR), and Peng and Plale's [1] Modified Deficit Round Robin (MDRR), across three different usage scenarios (balanced usage, unbalanced usage, and idle-client usage), to determine which is the most suitable fair sharing algorithm for regulating traffic to a SaaS API. This research demonstrates that the Single-Server Max-Min Fair DRR algorithm provides the highest throughput of traffic to the search engine while also maintaining a fair balance of resources among clients by re-allocating unused throughput to clients with saturated queues, so that a max-min allocation is achieved.
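As a rough illustration of the deficit round robin family compared above, the following Python sketch shows classic DRR over per-client query queues with estimated execution costs; the Single-Server Max-Min Fair DRR re-allocation of unused throughput is not reproduced, and all names and numbers are illustrative.

```python
# Minimal sketch of classic Deficit Round Robin (DRR) over per-client queues
# of search requests with estimated costs. The Max-Min Fair variants discussed
# above add re-allocation of unused throughput, which is omitted here.
from collections import deque

def drr_schedule(queues, quantum, rounds):
    """queues: dict client -> deque of estimated query costs; returns service order."""
    deficit = {c: 0 for c in queues}
    served = []
    for _ in range(rounds):
        for client, q in queues.items():
            if not q:
                deficit[client] = 0          # classic DRR: an empty queue keeps no credit
                continue
            deficit[client] += quantum
            while q and q[0] <= deficit[client]:
                cost = q.popleft()
                deficit[client] -= cost
                served.append((client, cost))
    return served

# Toy usage: client A issues cheap queries, client B issues expensive ones.
queues = {"A": deque([1, 1, 1, 1]), "B": deque([5, 5])}
print(drr_schedule(queues, quantum=3, rounds=3))
```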
APA, Harvard, Vancouver, ISO, and other styles
41

Bebié, Pascal, Robert Becker, Volker Commichau, Jan Debus, Günther Dissertori, Lubomir Djambazov, Afroditi Eleftheriou, et al. "SAFIR-I: Design and Performance of a High-Rate Preclinical PET Insert for MRI." Sensors 21, no. 21 (October 23, 2021): 7037. http://dx.doi.org/10.3390/s21217037.

Full text
Abstract:
(1) Background: Small Animal Fast Insert for MRI detector I (SAFIR-I) is a preclinical Positron Emission Tomography (PET) insert for the Bruker BioSpec 70/30 Ultra Shield Refrigerated (USR) preclinical 7T Magnetic Resonance Imaging (MRI) system. It is designed explicitly for high-rate kinetic studies in mice and rats with injected activities reaching 500 MBq, enabling truly simultaneous quantitative PET and Magnetic Resonance (MR) imaging with time frames of a few seconds in length. (2) Methods: SAFIR-I has an axial field of view of 54.2 mm and an inner diameter of 114 mm. It employs Lutetium Yttrium OxyorthoSilicate (LYSO) crystals and Multi Pixel Photon Counter (MPPC) arrays. The Position-Energy-Timing Application Specific Integrated Circuit, version 6, Single Ended (PETA6SE) digitizes the MPPC signals and provides time stamps and energy information. (3) Results: SAFIR-I is MR-compatible. The system's Coincidence Resolving Time (CRT) and energy resolution are between 209.0(3) ps and 12.41(02) Full Width at Half Maximum (FWHM) at low activity and 326.89(12) ps and 20.630(011) FWHM at 550 MBq, respectively. The peak sensitivity is ∼1.6. The excellent performance facilitated the successful execution of first in vivo rat studies beyond 300 MBq. Based on features visible in the acquired images, we estimate the spatial resolution to be ∼2 mm in the center of the Field Of View (FOV). (4) Conclusion: The SAFIR-I PET insert provides excellent performance, permitting simultaneous in vivo small animal PET/MR image acquisitions with time frames of a few seconds in length at activities of up to 500 MBq.
APA, Harvard, Vancouver, ISO, and other styles
42

CURIEN, PIERRE-LOUIS. "Abstract Böhm trees." Mathematical Structures in Computer Science 8, no. 6 (December 1998): 559–91. http://dx.doi.org/10.1017/s0960129598002631.

Full text
Abstract:
We present a formalism of trees with pointers, called abstract Böhm trees, that provide a suitable abstract framework in which various cut-free proofs or normal terms of several λ-calculus based languages (including PCF and Parigot's λμ-calculus) can be faithfully encoded. A simple abstract machine called the View Abstract Machine (VAM) allows us to compute over abstract Böhm trees. The VAM is closely related to Coquand's interaction sequences and debates. The VAM execution over finite abstract Böhm trees always terminates. We next introduce an abstract notion of type that fits the purpose of guaranteeing that the VAM cannot go into deadlock, i.e., that it always reaches a satisfactory final state. Typed abstract Böhm trees can be turned into a category – more naturally a ‘multi-category’ where the domains of arrows are sets of named objects or records. We then go from the abstract to the concrete by giving examples. Our sets of abstract (typed) Böhm trees are relative to an alphabet and a set of types. By instantiating these two parameter sets appropriately, we recover, successively: (η-long) typed Böhm trees; PCF trees as considered in the game models of Hyland–Ong or of Abramsky–Jagadeesan–Malacaria; a notion of classical Böhm tree due to Herbelin that provides a classical version of PCF trees in the style of λμ-calculus; and, finally, cut-free proofs in Novikov's infinitary propositional logic as investigated by Coquand. In a companion paper, we investigate the operational aspects of (untyped) Böhm trees in more depth.
APA, Harvard, Vancouver, ISO, and other styles
43

Jones, Laurel, Michael Tynes, and Paul Smith. "Prediction of models for ordered solvent in macromolecular structures by a classifier based upon resolution-independent projections of local feature data." Acta Crystallographica Section D Structural Biology 75, no. 8 (July 30, 2019): 696–717. http://dx.doi.org/10.1107/s2059798319008933.

Full text
Abstract:
Current software tools for the automated building of models for macromolecular X-ray crystal structures are capable of assembling high-quality models for ordered macromolecule and small-molecule scattering components with minimal or no user supervision. Many of these tools also incorporate robust functionality for modelling the ordered water molecules that are found in nearly all macromolecular crystal structures. However, no current tools focus on differentiating these ubiquitous water molecules from other frequently occurring multi-atom solvent species, such as sulfate, or the automated building of models for such species. PeakProbe has been developed specifically to address the need for such a tool. PeakProbe predicts likely solvent models for a given point (termed a 'peak') in a structure based on analysis ('probing') of its local electron density and chemical environment. PeakProbe maps a total of 19 resolution-dependent features associated with electron density and two associated with the local chemical environment to a two-dimensional score space that is independent of resolution. Peaks are classified based on the relative frequencies with which four different classes of solvent (including water) are observed within a given region of this score space as determined by large-scale sampling of solvent models in the Protein Data Bank. Designed to classify peaks generated from difference density maxima, PeakProbe also incorporates functionality for identifying peaks associated with model errors or clusters of peaks likely to correspond to multi-atom solvent, and for the validation of existing solvent models using solvent-omit electron-density maps. When tasked with classifying peaks into one of four distinct solvent classes, PeakProbe achieves greater than 99% accuracy for both peaks derived directly from the atomic coordinates of existing solvent models and those based on difference density maxima. While the program is still under development, a fully functional version is publicly available. PeakProbe makes extensive use of cctbx libraries, and requires a PHENIX licence and an up-to-date phenix.python environment for execution.
APA, Harvard, Vancouver, ISO, and other styles
44

Nikolić, Dragan D. "Parallelisation of equation-based simulation programs on heterogeneous computing systems." PeerJ Computer Science 4 (August 13, 2018): e160. http://dx.doi.org/10.7717/peerj-cs.160.

Full text
Abstract:
Numerical solutions of equation-based simulations require computationally intensive tasks such as evaluation of model equations, linear algebra operations and solution of systems of linear equations. The focus in this work is on parallel evaluation of model equations on shared memory systems such as general purpose processors (multi-core CPUs and manycore devices), streaming processors (Graphics Processing Units and Field Programmable Gate Arrays) and heterogeneous systems. The current approaches for evaluation of model equations are reviewed and their capabilities and shortcomings analysed. Since stream computing differs from traditional computing in that the system processes a sequential stream of elements, equations must be transformed into a data structure suitable for both types. The postfix notation expression stacks are recognised as a platform and programming language independent method to describe, store in computer memory and evaluate general systems of differential and algebraic equations of any size. Each mathematical operation and its operands are described by a specially designed data structure, and every equation is transformed into an array of these structures (a Compute Stack). Compute Stacks are evaluated by a stack machine using a Last In First Out queue. The stack machine is implemented in the DAE Tools modelling software in the C99 language using two Application Programming Interfaces (APIs)/frameworks for parallelism. The Open Multi-Processing (OpenMP) API is used for parallelisation on general purpose processors, and the Open Computing Language (OpenCL) framework is used for parallelisation on streaming processors and heterogeneous systems. The performance of the sequential Compute Stack approach is compared to the direct C++ implementation and to the previous approach that uses evaluation trees. The new approach is 45% slower than the C++ implementation and more than five times faster than the previous one. The OpenMP and OpenCL implementations are tested on three medium-scale models using a multi-core CPU, a discrete GPU, an integrated GPU and heterogeneous computing setups. Execution times are compared and analysed and the advantages of the OpenCL implementation running on a discrete GPU and heterogeneous systems are discussed. It is found that the evaluation of model equations using the parallel OpenCL implementation running on a discrete GPU is up to twelve times faster than the sequential version while the overall simulation speed-up gained is more than three times.
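The core Compute Stack idea, evaluating an equation stored in postfix notation with a Last In First Out stack, can be illustrated with a minimal Python sketch; the actual DAE Tools implementation is in C99 with richer node types, so the node encoding below is only an illustrative assumption.

```python
# Minimal sketch of evaluating an equation stored as a postfix (reverse Polish)
# array of nodes with a LIFO stack, the general idea behind a Compute Stack.
# The node encoding ('const' / 'var' / 'op') is illustrative, not DAE Tools' own.
def eval_postfix(stack_items, values):
    """stack_items: list of ('const', x) | ('var', name) | ('op', symbol) nodes."""
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    lifo = []
    for kind, payload in stack_items:
        if kind == 'const':
            lifo.append(payload)
        elif kind == 'var':
            lifo.append(values[payload])
        elif kind == 'op':
            b, a = lifo.pop(), lifo.pop()    # operands come off in Last-In-First-Out order
            lifo.append(ops[payload](a, b))
    return lifo.pop()

# Toy usage: residual of the expression x*y + 2 - z, i.e. postfix "x y * 2 + z -".
expr = [('var', 'x'), ('var', 'y'), ('op', '*'),
        ('const', 2.0), ('op', '+'), ('var', 'z'), ('op', '-')]
print(eval_postfix(expr, {'x': 3.0, 'y': 4.0, 'z': 10.0}))   # -> 4.0
```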
APA, Harvard, Vancouver, ISO, and other styles
45

Homocianu, Daniel, and Dinu Airinei. "PCDM and PCDM4MP: New Pairwise Correlation-Based Data Mining Tools for Parallel Processing of Large Tabular Datasets." Mathematics 10, no. 15 (July 29, 2022): 2671. http://dx.doi.org/10.3390/math10152671.

Full text
Abstract:
The paper describes PCDM and PCDM4MP as new tools and commands capable of exploring large datasets. They select variables based on the absolute values of Pearson's pairwise correlation coefficients between a chosen response variable and any other variable existing in the dataset. In addition, for each pair, they also report the corresponding significance and the number of non-null intersecting observations, and all this reporting is performed in a record-oriented manner (both source and output). Optionally, using threshold values for these three as parameters of PCDM, any user can select the most correlated variables based on high magnitude, significance, and support criteria. The syntax is simple, and the tools show the exploration progress in real time. In addition, PCDM4MP can trigger different instances of Stata, each using a distinct class of variables belonging to the same dataset, obtained by simple name filtering (first letter). Moreover, this multi-processing (MP) version overcomes the parallelization limitations of the existing parallel module. This is accomplished by using vertical instead of horizontal partitions of large flat datasets, dynamic generation of the task pattern, tasks, and logs, all within a single execution of this second command, and the existing qsub module to automatically and continuously allocate the tasks to logical processors, thereby emulating a cluster environment with fewer resources. In addition, any user can perform further selections based on the results printed in the console. The paper contains examples of using these tools for large datasets such as the one belonging to the World Values Survey and based on a simple variable naming practice. This article includes many recorded simulations and presents performance results. They depend on the different resources and hardware configurations used, including cloud vs. on-premises, large vs. small amounts of RAM and processing cores, and in-memory vs. traditional storage.
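PCDM and PCDM4MP are Stata commands, but the selection logic described above (thresholds on correlation magnitude, significance, and non-null support against a chosen response) can be sketched in Python for illustration; the threshold values and the DataFrame in the usage comment are assumptions, not the tools' actual interface.

```python
# Minimal sketch of the selection idea: for each candidate variable, compute
# Pearson's r against a chosen response, its p-value, and the count of non-null
# intersecting observations, then keep variables passing all three thresholds.
# This mirrors the described logic only; it is not the PCDM Stata command.
from scipy.stats import pearsonr

def select_correlated(df, response, min_abs_r=0.3, max_p=0.05, min_n=30):
    """df: pandas DataFrame; returns (variable, r, p, n) tuples sorted by |r|."""
    kept = []
    for col in df.columns:
        if col == response:
            continue
        pair = df[[response, col]].dropna()      # non-null intersecting observations
        n = len(pair)
        if n < min_n:
            continue
        r, p = pearsonr(pair[response], pair[col])
        if abs(r) >= min_abs_r and p <= max_p:
            kept.append((col, r, p, n))
    return sorted(kept, key=lambda t: -abs(t[1]))

# Toy usage with a hypothetical DataFrame `survey` and response 'wellbeing':
# print(select_correlated(survey, 'wellbeing'))
```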
APA, Harvard, Vancouver, ISO, and other styles
46

Michibata, Takuro, Kentaroh Suzuki, Tomoo Ogura, and Xianwen Jing. "Incorporation of inline warm rain diagnostics into the COSP2 satellite simulator for process-oriented model evaluation." Geoscientific Model Development 12, no. 10 (October 10, 2019): 4297–307. http://dx.doi.org/10.5194/gmd-12-4297-2019.

Full text
Abstract:
Abstract. The Cloud Feedback Model Intercomparison Project Observational Simulator Package (COSP) is used to diagnose model performance and physical processes via an apple-to-apple comparison to satellite measurements. Although the COSP provides useful information about clouds and their climatic impact, outputs that have a subcolumn dimension require large amounts of data. This can cause a bottleneck when conducting sets of sensitivity experiments or multiple model intercomparisons. Here, we incorporate two diagnostics for warm rain microphysical processes into the latest version of the simulator (COSP2). The first one is the occurrence frequency of warm rain regimes (i.e., non-precipitating, drizzling, and precipitating) classified according to CloudSat radar reflectivity, putting the warm rain process diagnostics into the context of the geographical distributions of precipitation. The second diagnostic is the probability density function of radar reflectivity profiles normalized by the in-cloud optical depth, the so-called contoured frequency by optical depth diagram (CFODD), which illustrates how the warm rain processes occur in the vertical dimension using statistics constructed from CloudSat and MODIS simulators. The new diagnostics are designed to produce statistics online along with subcolumn information during the COSP execution, eliminating the need to output subcolumn variables. Users can also readily conduct regional analysis tailored to their particular research interest (e.g., land–ocean differences) using an auxiliary post-process package after the COSP calculation. The inline diagnostics are applied to the MIROC6 general circulation model (GCM) to demonstrate how known biases common among multiple GCMs relative to satellite observations are revealed. The inline multi-sensor diagnostics are intended to serve as a tool that facilitates process-oriented model evaluations in a manner that reduces the burden on modelers for their diagnostics effort.
APA, Harvard, Vancouver, ISO, and other styles
47

Prades, Javier, Baldomero Imbernón, Carlos Reaño, Jorge Peña-García, Jose Pedro Cerón-Carrasco, Federico Silla, and Horacio Pérez-Sánchez. "Maximizing resource usage in multifold molecular dynamics with rCUDA." International Journal of High Performance Computing Applications 34, no. 1 (June 20, 2019): 5–19. http://dx.doi.org/10.1177/1094342019857131.

Full text
Abstract:
The full understanding of the dynamics of molecular systems at the atomic scale is of great relevance in the fields of chemistry, physics, materials science, and drug discovery, just to name a few. Molecular dynamics (MD) is a widely used computer tool for simulating the dynamical behavior of molecules. However, the computational horsepower required by MD simulations is too high to obtain conclusive results in real-world scenarios. This is mainly motivated by two factors: (1) the long execution time required by each MD simulation (usually in the nanoseconds and microseconds scale, and beyond) and (2) the large number of simulations required in drug discovery to study the interactions between a large library of compounds and a given protein target. To deal with the former, graphics processing units (GPUs) have come onto the scene. The latter has been traditionally approached by launching large amounts of simulations in computing clusters that may contain several GPUs on each node. However, GPUs are targeted as a single node that only runs one MD instance at a time, which translates into low GPU occupancy ratios and therefore low throughput. In this work, we propose a strategy to increase the overall throughput of MD simulations by increasing the GPU occupancy through virtualized GPUs. We use the remote CUDA (rCUDA) middleware as a tool to decouple GPUs from CPUs, thus enabling multi-tenancy of the virtual GPUs. As a working test in the drug discovery field, we studied the binding process of a novel flavonol to DNA with the GROningen MAchine for Chemical Simulations (GROMACS) MD package. Our results show that the use of rCUDA provides a 1.21× speed-up factor compared to the CUDA counterpart version while requiring a similar power budget.
APA, Harvard, Vancouver, ISO, and other styles
48

Boiko, L., and I. Liashenko. "Comparison of the ant colony optimization algorithm and its two modifications." System technologies 2, no. 139 (March 30, 2022): 40–52. http://dx.doi.org/10.34185/1562-9945-2-139-2022-05.

Full text
Abstract:
The ant optimization algorithm is one of the effective modern algorithms for finding approximate solutions of the travelling salesman problem and similar problems of finding routes on graphs. The first version of this metaheuristic optimization algorithm was proposed by Marco Dorigo in 1992 [1]. After some time, several modifications of this algorithm were proposed in the literature. The aim of the study is to conduct a comparative analysis of the ant colony optimization algorithm (Ant Colony Optimization, ACO) [1] and its most successful modifications: Ant Colony System (ACS) [2] and Max-Min Ant System (MMAS) [3]. To do this, the systemic features of information exchange in the ant colony during the search for food are analyzed. A step-by-step algorithm that simulates the natural behavior of forager ants in finding the shortest path to deliver food to the anthill is presented. A software implementation of the three listed ant algorithms for solving the travelling salesman problem has been developed. Through the interface window, you can enter the number of cities, the number of ants, and the maximum number of iterations, fix the settings of the algorithm, and select any of the three algorithms. The program randomly locates cities and selects the starting city for each ant. The software product is multi-threaded, i.e. during the calculations the interface is not blocked, which allows you to control the process of program execution, namely: start, pause, stop, and resume work. The results of the program are: visualization of the shortest route found, the length of this route, and the smallest iteration number at which the shortest route is achieved. Comparative analysis of the results allowed us to draw the following conclusions: 1) With well-chosen algorithm settings, iterative methods usually give a result close to optimal; however, the number of iterations required for this may differ significantly. 2) The study of the travelling salesman problem by ant algorithms is experimental rather than theoretical. The result very much depends on the parameters of the algorithm settings, but the theoretical study of these dependencies remains relevant and unresolved.
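As a minimal illustration of the basic Ant System rules underlying the three compared algorithms, the Python sketch below builds tours using pheromone- and visibility-weighted transitions and applies evaporation and deposit; the ACS and MMAS refinements (local pheromone updates, pheromone bounds) are not reproduced, and all parameter values are illustrative.

```python
# Minimal sketch of the basic Ant System transition and pheromone-update rules
# for the travelling salesman problem. ACS/MMAS modifications are omitted.
import numpy as np

def ant_system_tsp(dist, n_ants=10, n_iter=100, alpha=1.0, beta=2.0, rho=0.5, q=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = len(dist)
    eta = 1.0 / (dist + np.eye(n))                # heuristic visibility (avoid /0 on diagonal)
    tau = np.ones((n, n))                         # pheromone trails
    best_tour, best_len = None, np.inf
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [rng.integers(n)]              # random starting city
            while len(tour) < n:
                i = tour[-1]
                mask = np.ones(n, bool)
                mask[tour] = False                # exclude already-visited cities
                w = (tau[i] ** alpha) * (eta[i] ** beta) * mask
                tour.append(rng.choice(n, p=w / w.sum()))
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= (1.0 - rho)                        # pheromone evaporation
        for tour, length in tours:                # deposit proportional to 1/length
            for k in range(n):
                tau[tour[k], tour[(k + 1) % n]] += q / length
    return best_tour, best_len

# Toy usage on a random symmetric distance matrix for 6 cities.
pts = np.random.default_rng(1).random((6, 2))
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(ant_system_tsp(d))
```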
APA, Harvard, Vancouver, ISO, and other styles
49

Jansari, Ashok S., Alex Devlin, Rob Agnew, Katarina Akesson, Lesley Murphy, and Tony Leadbetter. "Ecological Assessment of Executive Functions: A New Virtual Reality Paradigm." Brain Impairment 15, no. 2 (September 2014): 71–87. http://dx.doi.org/10.1017/brimp.2014.14.

Full text
Abstract:
Acquired brain injury (ABI) can lead to a constellation of higher-order executive problems, which can impact significantly on everyday behaviour. While some neuropsychological assessments are able to measure these impairments objectively, increasingly, clinicians are finding that a subset of their patients passes these tests while still exhibiting difficulties in day-to-day living. Calls have therefore been made to develop assessments that are more sensitive and that are more ecologically valid. In this study, in Experiment 1, a multiple errands task (MET) based around a business office was created to assess concurrently nine aspects of executive functioning (planning, prioritisation, selective-thinking, creative-thinking, adaptive-thinking, multi-tasking, action-based prospective memory (PM), event-based PM and time-based PM). This new paradigm, the Jansari assessment of Executive Functions (JEF©), showed a significant difference between six individuals with ABI and matched healthy controls; further, it showed that across the nine constructs there was a range of performance. In Experiment 2, JEF© was recreated in a virtual environment resembling a computer game, and it was found that this version significantly differentiated between 17 individuals with ABI and 30 healthy controls. These results suggest that the virtual version of JEF© could be used as a new assessment of executive function. The profile of performance across the nine constructs for each individual provides a wealth of objective information that could potentially inform targeted rehabilitation.
APA, Harvard, Vancouver, ISO, and other styles
50

Urgese, Gianvito, Francesco Barchi, Emanuele Parisi, Evelina Forno, Andrea Acquaviva, and Enrico Macii. "Benchmarking a Many-Core Neuromorphic Platform With an MPI-Based DNA Sequence Matching Algorithm." Electronics 8, no. 11 (November 14, 2019): 1342. http://dx.doi.org/10.3390/electronics8111342.

Full text
Abstract:
SpiNNaker is a neuromorphic globally asynchronous locally synchronous (GALS) multi-core architecture designed for simulating a spiking neural network (SNN) in real-time. Several studies have shown that neuromorphic platforms allow flexible and efficient simulations of SNN by exploiting the efficient communication infrastructure optimised for transmitting small packets across the many cores of the platform. However, the effectiveness of neuromorphic platforms in executing massively parallel general-purpose algorithms, while promising, is still to be explored. In this paper, we present a parallel DNA sequence matching algorithm implemented using the MPI programming paradigm and ported to the SpiNNaker platform. In our implementation, all cores available on the board are configured to execute in parallel an optimised version of the Boyer-Moore (BM) algorithm. Exploiting this application, we benchmarked the SpiNNaker platform in terms of scalability and synchronisation latency. Experimental results indicate that the SpiNNaker parallel architecture allows a linear performance increase with the number of used cores and shows better scalability compared to a general-purpose multi-core computing platform.
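For reference, the sketch below shows Boyer-Moore-Horspool matching, a simplified member of the Boyer-Moore family named above; the paper's MPI-parallel, SpiNNaker-optimised version partitions the text across cores, which is not reproduced here, and the example sequence is illustrative.

```python
# Minimal sketch of Boyer-Moore-Horspool string matching over a DNA sequence.
# The MPI/SpiNNaker parallel decomposition used in the paper is not shown.
def bmh_search(text, pattern):
    """Return start indices of all occurrences of pattern in text."""
    m, n = len(pattern), len(text)
    if m == 0 or n < m:
        return []
    # Bad-character shift table: distance from a symbol's last occurrence to the pattern end.
    shift = {c: m for c in set(text)}
    for k in range(m - 1):
        shift[pattern[k]] = m - 1 - k
    hits, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            hits.append(i)
        i += shift.get(text[i + m - 1], m)       # skip ahead based on the last aligned symbol
    return hits

# Toy usage on a short DNA fragment (illustrative data).
print(bmh_search("ACGTACGTGACGTT", "ACGT"))      # -> [0, 4, 9]
```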
APA, Harvard, Vancouver, ISO, and other styles