Journal articles on the topic 'Concurrent/parallel systems and technologies'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Concurrent/parallel systems and technologies.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Shiau, Liejune. "Exploring Quasi-Concurrency in Introductory Computer Science." Journal of Educational Computing Research 15, no. 1 (July 1996): 53–66. http://dx.doi.org/10.2190/7ldf-va2r-vk66-qq8d.

Abstract:
Most programming courses taught today focus on batch-oriented problems, primarily because parallel computers are not commonly available and problems of a concurrent nature therefore cannot be explored. As a consequence, students are left underprepared for the challenges of modern multi-process computation technologies. This article demonstrates an easy way to implement concurrent programming projects in computer labs that requires neither special hardware support nor special programming languages. The goal is to introduce the concept and usefulness of multi-process software systems early in the computer science curriculum. We also include detailed descriptions of a few creative and interesting concurrent examples to illustrate this idea.
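A lab exercise in the spirit of the article can indeed be written with nothing beyond a standard thread library. The sketch below is my own illustration (not code from the article) of a minimal producer-consumer project in Python.

```python
# Illustrative producer-consumer exercise using only the standard library.
# Not code from the article; it only shows the kind of multi-thread
# project the authors advocate for introductory courses.
import queue
import threading

def producer(q, items):
    for item in items:
        q.put(item)          # hand work to the consumer
    q.put(None)              # sentinel: no more work

def consumer(q, results):
    while True:
        item = q.get()
        if item is None:     # sentinel received, stop
            break
        results.append(item * item)

if __name__ == "__main__":
    q = queue.Queue()
    results = []
    t1 = threading.Thread(target=producer, args=(q, range(10)))
    t2 = threading.Thread(target=consumer, args=(q, results))
    t1.start(); t2.start()
    t1.join(); t2.join()
    print(results)           # squares of 0..9, computed concurrently
```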
2

Chaudhary, Renu, and Gagangeet Singh. "A NOVEL TECHNIQUE IN NoSQL DATA EXTRACTION." International Journal of Research -GRANTHAALAYAH 1, no. 1 (August 31, 2014): 51–58. http://dx.doi.org/10.29121/granthaalayah.v1.i1.2014.3086.

Abstract:
NoSQL databases (commonly interpreted by developers as 'not only SQL databases' rather than 'no SQL') are an emerging alternative to the most widely used relational databases. As the name suggests, NoSQL does not completely replace SQL but complements it in such a way that the two can co-exist. In this paper we discuss the NoSQL data model, the types of NoSQL data stores, the characteristics and features of each data store, the query languages used in NoSQL, the advantages and disadvantages of NoSQL over RDBMS, and the future prospects of NoSQL. Motivation/Background: NoSQL systems can store and index arbitrarily big data sets while serving a large number of concurrent user requests. Method: Many people think NoSQL is a derogatory term created to poke at SQL; in reality, the term means Not Only SQL, and the idea is that both technologies can coexist and each has its place. Results: Large-scale data processing (parallel processing over distributed systems); embedded IR (basic machine-to-machine information look-up and retrieval); exploratory analytics on semi-structured data (expert level); large-volume data storage (unstructured, semi-structured, small-packet structured). Conclusions: This study aims to provide an independent understanding of the strengths and weaknesses of various NoSQL database approaches to supporting applications that process huge volumes of data, as well as a global overview of these non-relational NoSQL databases.
3

Kaur Dhaliwal, Japman, Mohd Naseem, Aadil Ahamad Lawaye, and Ehtesham Husain Abbasi. "Fibonacci Series based Virtual Machine Selection for Load Balancing in Cloud Computing." International Journal of Engineering & Technology 7, no. 3.12 (July 20, 2018): 1071. http://dx.doi.org/10.14419/ijet.v7i3.12.17634.

Abstract:
The rapid advancement of the internet has given birth to many technologies. Cloud computing is one of the most rapidly emerging of these, aiming to process large-scale data by using the computational capabilities of shared resources, and it supports distributed parallel processing. With cloud computing, users pay only for what they use, which eliminates the need for individual users to own the hardware. As cloud computing grows, more users are attracted to it. However, providing efficient execution times and load distribution is a major challenge in distributed systems. In our approach, a weighted round-robin algorithm is combined with the benefits of the Fibonacci sequence, which results in better execution times than static round robin. Relevant virtual machines are chosen and jobs are assigned to them. In addition, the number of resources utilized concurrently is reduced, which saves resources and thereby reduces cost. There is no need to deploy new resources, since resources such as virtual machines are already available.
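As a rough illustration (the full algorithm is defined in the paper), a Fibonacci-weighted round-robin selector might look like the sketch below; the VM names and the way weights are assigned to machines are my own assumptions for illustration.

```python
# Hypothetical sketch of Fibonacci-weighted round-robin VM selection.
# The actual selection and weighting scheme are defined in the paper;
# this only illustrates the general idea.
from itertools import cycle

def fibonacci(n):
    """Return the first n Fibonacci numbers starting from 1, 1."""
    seq, a, b = [], 1, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

def weighted_round_robin(vms):
    """Yield VM names, each repeated according to its Fibonacci weight."""
    weights = fibonacci(len(vms))      # assumed capacities: 1, 1, 2, 3, ...
    schedule = [vm for vm, w in zip(vms, weights) for _ in range(w)]
    return cycle(schedule)

if __name__ == "__main__":
    picker = weighted_round_robin(["vm-1", "vm-2", "vm-3", "vm-4"])
    for i in range(10):
        print(f"job-{i}", "->", next(picker))
```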
4

Thompson, P. "Concurrent Interconnect for Parallel Systems." Computer Journal 36, no. 8 (August 1, 1993): 778–84. http://dx.doi.org/10.1093/comjnl/36.8.778.

5

Massei, Giovanna, and Dave Cowan. "Fertility control to mitigate human–wildlife conflicts: a review." Wildlife Research 41, no. 1 (2014): 1. http://dx.doi.org/10.1071/wr13141.

Abstract:
As human populations grow, conflicts with wildlife increase. Concurrently, concerns about the welfare, safety and environmental impacts of conventional lethal methods of wildlife management restrict the options available for conflict mitigation. In parallel, there is increasing interest in using fertility control to manage wildlife. The present review aimed at analysing trends in research on fertility control for wildlife, illustrating developments in fertility-control technologies and delivery methods of fertility-control agents, summarising the conclusions of empirical and theoretical studies of fertility control applied at the population level and offering criteria to guide decisions regarding the suitability of fertility control to mitigate human–wildlife conflicts. The review highlighted a growing interest in fertility control for wildlife, underpinned by increasing numbers of scientific studies. Most current practical applications of fertility control for wild mammals use injectable single-dose immunocontraceptive vaccines mainly aimed at sterilising females, although many of these vaccines are not yet commercially available. One oral avian contraceptive, nicarbazin, is commercially available in some countries. Potential new methods of remote contraceptive delivery include bacterial ghosts, virus-like particles and genetically modified transmissible and non-transmissible organisms, although none of these have yet progressed to field testing. In parallel, new species-specific delivery systems have been developed. The results of population-level studies of fertility control indicated that this approach may increase survival and affect social and spatial behaviour of treated animals, although the effects are species- and context-specific. The present studies suggested that a substantial initial effort is generally required to reduce population growth if fertility control is the sole wildlife management method. However, several empirical and field studies have demonstrated that fertility control, particularly of isolated populations, can be successfully used to limit population growth and reduce human–wildlife conflicts. In parallel, there is growing recognition of the possible synergy between fertility control and disease vaccination to optimise the maintenance of herd immunity in the management of wildlife diseases. The review provides a decision tree that can be used to determine whether fertility control should be employed to resolve specific human–wildlife conflicts. These criteria encompass public consultation, considerations about animal welfare and feasibility, evaluation of population responses, costs and sustainability.
6

Stotts, P. David, and William Pugh. "Parallel finite automata for modeling concurrent software systems." Journal of Systems and Software 27, no. 1 (October 1994): 27–43. http://dx.doi.org/10.1016/0164-1212(94)90112-0.

7

Taylor, Stephen, Shmuel Safra, and Ehud Shapiro. "A parallel implementation of Flat Concurrent Prolog." International Journal of Parallel Programming 15, no. 3 (June 1986): 245–75. http://dx.doi.org/10.1007/bf01414556.

8

Malyshkin, Victor. "Parallel computing technologies 2020." Journal of Supercomputing 78, no. 4 (October 4, 2021): 6056–59. http://dx.doi.org/10.1007/s11227-021-04014-w.

9

Malyshkin, Victor. "Parallel computing technologies 2020." Journal of Supercomputing 78, no. 4 (October 4, 2021): 6056–59. http://dx.doi.org/10.1007/s11227-021-04014-w.

10

Malyshkin, Victor E. "Parallel computing technologies 2018." Journal of Supercomputing 75, no. 12 (November 20, 2019): 7747–49. http://dx.doi.org/10.1007/s11227-019-03014-1.

11

Chang, Yen-Jung, and Vijay K. Garg. "A parallel algorithm for global states enumeration in concurrent systems." ACM SIGPLAN Notices 50, no. 8 (December 18, 2015): 140–49. http://dx.doi.org/10.1145/2858788.2688520.

12

Pryadko, S. A., A. Yu Troshin, V. D. Kozlov, and A. E. Ivanov. "Parallel programming technologies on computer complexes." Radio industry (Russia) 30, no. 3 (September 8, 2020): 28–33. http://dx.doi.org/10.21778/2413-9599-2020-30-3-28-33.

Abstract:
The article describes various options for speeding up calculations on computer systems; these options are closely tied to the architecture of the complexes in question. The objective of the paper is to provide the information needed to choose a suitable way of accelerating the solution of a computational problem. The main features of the following models are described: programming for systems with shared memory, programming for systems with distributed memory, and programming on graphics accelerators (video cards). The basic concept, principles, advantages, and disadvantages of each of the considered programming models are described. All the programming standards described in the article can be used on both Linux and Windows operating systems, and the required libraries are available and compatible with the C/C++ programming language. The article concludes with recommendations on the use of a particular technology, depending on the type of task to be solved.
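The contrast between the first two models can be sketched even in a high-level language. The paper targets C/C++ with technologies such as OpenMP and MPI, so the following Python snippet is only an analogy: threads share one address space, while worker processes hold private copies of the data and exchange results explicitly.

```python
# Analogy only: the paper discusses shared-memory, distributed-memory and
# GPU models for C/C++. Here, threads update one shared variable under a
# lock (shared memory), while processes each sum a private chunk and
# return it, so the partial results act like messages (distributed memory).
import multiprocessing as mp
import threading

def shared_memory_sum(data, n_threads=4):
    total = [0]
    lock = threading.Lock()
    def worker(chunk):
        s = sum(chunk)
        with lock:                       # threads share 'total'
            total[0] += s
    step = len(data) // n_threads
    threads = [threading.Thread(target=worker, args=(data[i*step:(i+1)*step],))
               for i in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()
    return total[0]

def distributed_sum(data, n_procs=4):
    step = len(data) // n_procs
    chunks = [data[i*step:(i+1)*step] for i in range(n_procs)]
    with mp.Pool(n_procs) as pool:       # each process sums its own copy
        partials = pool.map(sum, chunks)
    return sum(partials)                 # merge the "messages"

if __name__ == "__main__":
    data = list(range(1_000_000))
    print(shared_memory_sum(data), distributed_sum(data))
```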
13

Albers, Susanne, and Torben Hagerup. "Improved Parallel Integer Sorting without Concurrent Writing." Information and Computation 136, no. 1 (July 1997): 25–51. http://dx.doi.org/10.1006/inco.1997.2632.

14

Chong, Ka Wong, Yijie Han, and Tak Wah Lam. "Concurrent threads and optimal parallel minimum spanning trees algorithm." Journal of the ACM 48, no. 2 (March 2001): 297–323. http://dx.doi.org/10.1145/375827.375847.

15

Grando, María Adela. "Owicki-Gries Theory: A Possible Way of Relating Grammar Systems to Concurrent Programs." Triangle, no. 8 (June 29, 2018): 19. http://dx.doi.org/10.17345/triangle8.19-41.

Abstract:
The aim of this paper is to show how grammar systems and concurrent programs might be viewed as related models for distributed and cooperating computation. We argue that it is possible to translate a grammar system into a concurrent program, where the Owicki-Gries theory and other tools available in the programming framework can be used. The converse translation is also possible, and this turns out to be useful when we are looking for a grammar system that can generate a given language. To show this, we use tools from concurrent programming theory to prove that L_cd = {a^n b^m c^n d^m | n, m ≥ 1} can be generated by a non-returning Parallel Communicating grammar system with three regular components. We show that this strategy can be helpful in the construction of grammar systems that generate strings in less time and more efficiently. We also discuss the absence of strategies in concurrent programming theory for proving that L_cd can be generated by any Parallel Communicating grammar system with two regular components.
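For readers who want to experiment with the language in question, a small membership test (my own helper, not part of the paper) is straightforward:

```python
# Membership check for L_cd = { a^n b^m c^n d^m : n, m >= 1 }.
import re

def in_Lcd(word):
    match = re.fullmatch(r"(a+)(b+)(c+)(d+)", word)
    if not match:
        return False
    a, b, c, d = (len(g) for g in match.groups())
    return a == c and b == d    # cross-serial dependencies a/c and b/d

if __name__ == "__main__":
    print(in_Lcd("aabccd"))     # True:  a^2 b^1 c^2 d^1
    print(in_Lcd("aabcd"))      # False: unequal numbers of a's and c's
```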
16

Kostenetskii, P. S., A. V. Lepikhov, and L. V. Sokolinskii. "Technologies of parallel database systems for hierarchical multiprocessor environments." Automation and Remote Control 68, no. 5 (May 2007): 847–59. http://dx.doi.org/10.1134/s0005117907050116.

17

Ritter, M. B., Y. Vlasov, J. A. Kash, and A. Benner. "Optical technologies for data communication in large parallel systems." Journal of Instrumentation 6, no. 01 (January 5, 2011): C01012. http://dx.doi.org/10.1088/1748-0221/6/01/c01012.

18

Pol, Urmila. "Design and Development of Apriori Algorithm for Sequential to concurrent mining using MPI." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 10, no. 7 (October 23, 2013): 1785–90. http://dx.doi.org/10.24297/ijct.v10i7.7026.

Abstract:
With the advent of big data and massive data processing, there are growing concerns about the temporal aspects of data processing. To address these issues, rapid progress is being made in data collection, storage technologies, and the design and implementation of large-scale parallel algorithms for data mining. In this regard, the Apriori algorithm has had a great impact on finding frequent itemsets using candidate generation. This paper presents a parallel algorithm for mining association rules that uses MPI for message passing in a master-slave structural model.
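The core of such a scheme, counting candidate itemsets over partitions of the transactions and merging the partial counts at a master, can be sketched as follows. The paper uses MPI in a master-slave arrangement; this process-pool version is only an analogy, and the transaction data are made up.

```python
# Analogy to the master-slave MPI scheme described in the paper:
# each worker counts candidate 2-itemsets in its own partition of the
# transactions; the "master" merges the partial counts.
from collections import Counter
from itertools import combinations
from multiprocessing import Pool

def count_pairs(transactions):
    """Count all 2-itemset candidates in one partition (worker task)."""
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] += 1
    return counts

def parallel_frequent_pairs(transactions, min_support, n_workers=2):
    step = (len(transactions) + n_workers - 1) // n_workers
    parts = [transactions[i:i + step] for i in range(0, len(transactions), step)]
    with Pool(n_workers) as pool:
        partials = pool.map(count_pairs, parts)     # "slaves"
    total = Counter()
    for c in partials:                              # "master" merge
        total.update(c)
    return {p: n for p, n in total.items() if n >= min_support}

if __name__ == "__main__":
    data = [["milk", "bread"], ["milk", "bread", "eggs"],
            ["bread", "eggs"], ["milk", "eggs"]]
    print(parallel_frequent_pairs(data, min_support=2))
```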
19

Hagerup, Torben. "Parallel Preprocessing for Path Queries without Concurrent Reading." Information and Computation 158, no. 1 (April 2000): 18–28. http://dx.doi.org/10.1006/inco.1999.2814.

20

Kafura, Dennis, Greg Lavender, and Doug Schmidt. "Workshop on design patterns for concurrent, parallel, and distributed object-oriented systems." ACM SIGPLAN OOPS Messenger 6, no. 4 (October 1995): 128–31. http://dx.doi.org/10.1145/260111.260266.

21

Lohstroh, Marten, Christian Menard, Soroush Bateni, and Edward A. Lee. "Toward a Lingua Franca for Deterministic Concurrent Systems." ACM Transactions on Embedded Computing Systems 20, no. 4 (June 2021): 1–27. http://dx.doi.org/10.1145/3448128.

Abstract:
Many programming languages and programming frameworks focus on parallel and distributed computing. Several frameworks are based on actors, which provide a more disciplined model for concurrency than threads. The interactions between actors, however, if not constrained, admit nondeterminism. As a consequence, actor programs may exhibit unintended behaviors and are less amenable to rigorous testing. We show that nondeterminism can be handled in a number of ways, surveying dataflow dialects, process networks, synchronous-reactive models, and discrete-event models. These existing approaches, however, tend to require centralized control, pose challenges to modular system design, or introduce a single point of failure. We describe “reactors,” a new coordination model that combines ideas from several of these approaches to enable determinism while preserving much of the style of actors. Reactors promote modularity and allow for distributed execution. By using a logical model of time that can be associated with physical time, reactors also provide control over timing. Reactors also expose parallelism that can be exploited on multicore machines and in distributed configurations without compromising determinacy.
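The central idea, ordering reactions by logical timestamps so that concurrent components compose deterministically, can be hinted at with a toy scheduler. This is my own illustration, not Lingua Franca code, and it omits almost all of the model.

```python
# Toy illustration of the reactor idea: events carry logical timestamps
# and are processed in timestamp order, so the outcome does not depend
# on the physical order in which components produced them.
import heapq

class LogicalScheduler:
    def __init__(self):
        self._queue = []
        self._counter = 0          # tie-breaker for equal timestamps

    def schedule(self, timestamp, reaction, payload):
        heapq.heappush(self._queue, (timestamp, self._counter, reaction, payload))
        self._counter += 1

    def run(self):
        while self._queue:
            timestamp, _, reaction, payload = heapq.heappop(self._queue)
            reaction(timestamp, payload)

if __name__ == "__main__":
    sched = LogicalScheduler()
    log = []
    # Two hypothetical "reactors" schedule events out of order; execution
    # is still deterministic because it follows logical time.
    sched.schedule(3, lambda t, p: log.append((t, p)), "from reactor B")
    sched.schedule(1, lambda t, p: log.append((t, p)), "from reactor A")
    sched.schedule(2, lambda t, p: log.append((t, p)), "from reactor A")
    sched.run()
    print(log)   # [(1, 'from reactor A'), (2, 'from reactor A'), (3, 'from reactor B')]
```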
22

Kyparisis, George J., and Christos Koulamas. "Assembly-Line Scheduling with Concurrent Operations and Parallel Machines." INFORMS Journal on Computing 14, no. 1 (February 2002): 68–80. http://dx.doi.org/10.1287/ijoc.14.1.68.7708.

23

Гурьева, Я. Л., and В. П. Ильин. "On acceleration technologies of parallel decomposition methods." Numerical Methods and Programming (Vychislitel'nye Metody i Programmirovanie), no. 1 (April 2, 2015): 146–54. http://dx.doi.org/10.26089/nummet.v16r115.

Abstract:
One of the main obstacles to the scalable parallelization of algebraic decomposition methods for solving very large sparse systems of linear algebraic equations (SLAEs) is the slowdown of the convergence rate of the additive iterative Schwarz algorithm in Krylov subspaces as the number of subdomains increases. The aim of this paper is a comparative experimental analysis of various ways to accelerate the iterations: a parametrized intersection of subdomains, the use of special interface conditions at the boundaries of adjacent subdomains, and the application of a coarse-grid correction (aggregation, or reduction) of the original linear system to build an additional preconditioner. The parallelization of the algorithms is carried out on two levels using programming tools for distributed and shared memory. The benchmark linear systems are obtained from finite difference approximations of the Dirichlet problem for the diffusion-convection equation with various values of the convection coefficients on a sequence of refined grids.
24

Cronje, G. A., and W. H. Steeb. "Genetic Algorithms in a Distributed Computing Environment Using PVM." International Journal of Modern Physics C 08, no. 02 (April 1997): 327–44. http://dx.doi.org/10.1142/s012918319700028x.

Abstract:
The Parallel Virtual Machine (PVM) is a software system that enables a collection of heterogeneous computer systems to be used as a coherent and flexible concurrent computation resource. We show that genetic algorithms can be implemented using a Parallel Virtual Machine and C++. Problems with constraints are also discussed.
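The typical structure of such an implementation, farming the embarrassingly parallel fitness evaluation out to worker tasks, can be sketched as follows. The paper uses PVM and C++; this Python process-pool version with a toy fitness function is only an analogy.

```python
# Minimal analogue of a distributed GA step: fitness evaluation is the
# embarrassingly parallel part and is farmed out to worker processes.
# The paper uses PVM and C++; this is only an illustrative sketch.
import random
from multiprocessing import Pool

def fitness(individual):
    """Toy fitness: maximise the number of 1-bits."""
    return sum(individual)

def next_generation(population, pool):
    scores = pool.map(fitness, population)          # parallel evaluation
    ranked = [ind for _, ind in sorted(zip(scores, population), reverse=True)]
    parents = ranked[: len(ranked) // 2]            # simple truncation selection
    children = []
    while len(children) < len(population):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))
        children.append(a[:cut] + b[cut:])          # one-point crossover
    return children

if __name__ == "__main__":
    random.seed(0)
    population = [[random.randint(0, 1) for _ in range(20)] for _ in range(16)]
    with Pool(4) as pool:
        for _ in range(10):
            population = next_generation(population, pool)
        print(max(pool.map(fitness, population)))   # best fitness found
```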
25

Shum, Chong, Wing-Hong Lau, Tian Mao, Henry Shu-Hung Chung, Norman Chung-Fai Tse, Kim-Fung Tsang, and Loi Lei Lai. "DecompositionJ: Parallel and Deterministic Simulation of Concurrent Java Executions in Cyber-Physical Systems." IEEE Access 6 (2018): 21991–2010. http://dx.doi.org/10.1109/access.2018.2825254.

26

Kuokka, Daniel R., and Larry T. Harada. "Communication infrastructure for concurrent engineering." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 9, no. 4 (September 1995): 283–97. http://dx.doi.org/10.1017/s0890060400002833.

Abstract:
Integrating multiple engineering perspectives is critical to designing ever more complex products, but this introduces great potential for miscommunication leading to design conflicts. The SHADE (SHAred Dependency Engineering) project is defining agent infrastructure technology that supports dynamic, knowledge-based communication among heterogeneous engineering tools, collaboration systems, and conflict management systems. Building on technologies for defining a shared formal vocabulary and protocols for exchanging information, SHADE is developing facilitators that assist in locating and disseminating information. The result is a flexible infrastructure that helps existing engineering tools work together more effectively, and that supports a variety of new conflict management approaches. This article outlines the facilitation and application agents created by SHADE, and provides an in-depth example of their application to an engineering task.
27

SCHNEIDEWIND, NORMAN. "APPROACH FOR COMPUTER SYSTEMS DESIGN." International Journal of Reliability, Quality and Safety Engineering 18, no. 02 (April 2011): 179–208. http://dx.doi.org/10.1142/s0218539311004068.

Abstract:
We develop a computer design process that uses an architectural approach involving the definition and analysis of sequences of functions. This approach allows us to configure a system at a high level and provides a mechanism for predicting performance, reliability, availability, and security. A key feature of the design approach is sequence and system complexity, which we found to be a good predictor of system properties such as reliability. Both non-concurrent and concurrent processing methods, using multiple parallel processors, are evaluated with respect to performance, reliability, availability, security, and cost. We evaluated the tradeoffs between performance and cost and found that a two-processor system provides a good balance between the two. Our major contribution to the field of computer design is our innovative approach of using sequences as the basis of design and applying complexity metrics as a predictor of system attributes, an approach we did not find in an extensive review of the literature.
28

DATTOLO, ANTONINA, and VINCENZO LOIA. "DISTRIBUTED INFORMATION AND CONTROL IN A CONCURRENT HYPERMEDIA-ORIENTED ARCHITECTURE." International Journal of Software Engineering and Knowledge Engineering 10, no. 03 (June 2000): 345–69. http://dx.doi.org/10.1142/s0218194000000158.

Abstract:
The market for parallel and distributed computing systems keeps growing. Technological advances in processor power, networking, telecommunication and multimedia are stimulating the development of applications requiring parallel and distributed computing. An important research problem in this area is the need to find a robust bridge between the decentralisation of knowledge sources in information-based systems and the distribution of computational power. Consequently, the attention of the research community has been directed towards high-level, concurrent, distributed programming. This work proposes a new hypermedia framework based on the metaphor of the actor model. The storage and run-time layers are represented entirely as communities of independent actors that cooperate in order to accomplish common goals, such as version management or user adaptivity. These goals involve fundamental and complex hypermedia issues, which, thanks to the distribution of tasks, are treated in an efficient and simple way.
29

Mirenkov, N. N. "Conference Report. Parallel computing technologies (PaCT-91)." Computing & Control Engineering Journal 3, no. 1 (1992): 3. http://dx.doi.org/10.1049/cce:19920002.

30

Guo, Mingqiang, Liang Wu, and Zhong Xie. "An Efficient Parallel Map Visualization Framework for Large Vector Data." GEOMATICA 69, no. 1 (March 2015): 113–17. http://dx.doi.org/10.5623/cig2015-108.

Abstract:
With the tremendous development of surveying and mapping technologies, the volume of vector data is becoming larger. For mapping workers and other GIS scientists, map visualization is one of the most common functions of GIS software. But it is also a time-consuming process when processing massive amounts of vector data. Especially in an Internet map service environment, large numbers of concurrent users can cause major processing delays. In order to address this issue, this paper develops an efficient parallel visualization framework for large vector data sets by leveraging the advantages and characteristics of graphics cards, focusing on storage strategy and transfer strategy. The test results demonstrate that this new approach can reduce the computing times for visualizing large vector maps.
31

Galizia, A., D. D'Agostino, and A. Clematis. "A Grid framework to enable parallel and concurrent TMA image analyses." International Journal of Grid and Utility Computing 1, no. 3 (2009): 261. http://dx.doi.org/10.1504/ijguc.2009.027653.

32

Yalaoui, Alice, Farah Belmecheri, Eric Châtelet, and Farouk Yalaoui. "Reliability Allocation Problem in Series-Parallel Systems." International Journal of Applied Evolutionary Computation 2, no. 1 (January 2011): 1–17. http://dx.doi.org/10.4018/jaec.2011010101.

Abstract:
Reliability optimization is an important step in industrial systems design. In order to develop a reliable system, designers may introduce different redundant technologies with the same functionality in parallel. In this paper, each technology is assumed to be composed of series components, so the resulting configuration belongs to the class of series-parallel systems. The presented tool supports the design or improvement of such systems with the aim of minimizing system cost under a reliability constraint: the goal is to find the reliability to allocate to each component so that total cost is minimized while the global system reliability satisfies a minimal-level constraint. This problem is known to be NP-hard. In this paper, a metaheuristic approach based on Ant Colony Optimization (ACO) techniques is used to improve an existing approach. The experimental results, based on randomly generated instances, outperform those of the previous method dedicated to this problem.
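The underlying reliability model is simple to state: a series branch works only if all of its components work, and the parallel system fails only if every branch fails. The helper below (my own, with hypothetical component reliabilities) evaluates exactly that.

```python
# Series-parallel reliability: each branch (redundant technology) is a
# series of components; the system fails only if every branch fails.
# Component reliabilities here are made up, purely for illustration.
from math import prod

def branch_reliability(component_reliabilities):
    """A series branch works only if all its components work."""
    return prod(component_reliabilities)

def system_reliability(branches):
    """Parallel branches: the system fails only if every branch fails."""
    return 1.0 - prod(1.0 - branch_reliability(b) for b in branches)

if __name__ == "__main__":
    branches = [
        [0.95, 0.97],        # technology 1: two components in series
        [0.90, 0.99, 0.98],  # technology 2: three components in series
    ]
    print(round(system_reliability(branches), 4))
```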
33

Huang, Shuguang, and Joseph M. Schimmels. "Minimal realizations of spatial stiffnesses with parallel or serial mechanisms having concurrent axes." Journal of Robotic Systems 18, no. 3 (2001): 135–46. http://dx.doi.org/10.1002/rob.1011.

34

Ding, Wei, Tao Xu, Min Song, and Wei Gu. "Parallel Channel Equalizer for Mobile OFDM Baseband Receivers." Applied Mechanics and Materials 333-335 (July 2013): 640–45. http://dx.doi.org/10.4028/www.scientific.net/amm.333-335.640.

Abstract:
Mobile OFDM refers to OFDM systems with fast-moving transceivers, in contrast to traditional OFDM systems whose transceivers are stationary or move at low velocity. An efficient implementation of channel equalization for mobile OFDM is presented in this paper. Based on the particular OFDM subcarrier allocations, the channel equalizer is split into separate sub-equalizers, enabling a concurrent implementation. This parallel equalizer is implemented on an FPGA platform. The experimental results show that, without an efficient design, mobile OFDM leads to an unacceptable hardware cost. The proposed parallel equalizer for mobile OFDM can compensate for time-varying channels in which a traditional OFDM receiver fails to operate, although the price paid is a reasonable increase in hardware resources.
35

Bagherzadeh, Javad, Aporva Amarnath, Jielun Tan, Subhankar Pal, and Ronald G. Dreslinski. "A Holistic Solution for Reliability of 3D Parallel Systems." ACM Journal on Emerging Technologies in Computing Systems 18, no. 1 (January 31, 2022): 1–27. http://dx.doi.org/10.1145/3488900.

Abstract:
Monolithic 3D (M3D) technology is emerging as a promising solution that can bring massive opportunities, but the gains can be hindered by reliability issues exacerbated by high temperature. Conventional reliability solutions focus on one specific feature and assume that the other required features will be provided by different solutions. This assumption has resulted in solutions that are proposed in isolation of each other and fail to consider overall compatibility and the implied overheads of multiple isolated solutions in one system. This article proposes a holistic reliability management engine, R2D3, for post-Moore M3D parallel systems that have low yield and high failure rates. The proposed engine, comprising a controller, reconfigurable crossbars, and detection circuitry, provides concurrent single-replay detection and diagnosis, fault-mitigating repair, and aging-aware lifetime management at runtime. This holistic view enables a solution that is highly effective while incurring low overhead. Our solution achieves 96% defect coverage; reduces V_th degradation by 53%, leading to a 78% performance improvement on average over 8 years for an eight-core system; and ultimately yields a 2.16× longer mean time to failure (MTTF) while incurring an overhead of 7.4% in area, 6.5% in power, and an 8.2% decrease in frequency.
36

Müller, Dominik. "Adopting new technologies in the LHCb Gauss simulation framework." EPJ Web of Conferences 214 (2019): 02004. http://dx.doi.org/10.1051/epjconf/201921402004.

Abstract:
The increase in luminosity foreseen in the coming years of operation of the Large Hadron Collider (LHC) creates new computing-efficiency challenges for all participating experiments. For Run 3 of the LHC, the LHCb collaboration needs to simulate about two orders of magnitude more Monte Carlo events to exploit the increased luminosity and trigger rate. Therefore, the LHCb simulation framework (Gauss) will undergo a significant renovation, mostly driven by the upgraded core software framework (Gaudi) and the availability of a multithreaded version of Geant4. The upgraded Gaudi framework replaces single-threaded processing with a multithreaded approach, allowing concurrent execution of tasks within a single event as well as multiple events in parallel. A major task of the required overhaul of Gauss is the implementation of a new interface to the multithreaded version of Geant4.
37

Yanakova, E. S., G. T. Macharadze, L.G. Gagarina, and A. A. Shvachko. "Parallel-Pipelined Video Processing in Multicore Heterogeneous Systems on Chip." Proceedings of Universities. Electronics 26, no. 2 (April 2021): 172–83. http://dx.doi.org/10.24151/1561-5405-2021-26-2-172-183.

Abstract:
A shift from homogeneous to heterogeneous architectures brings advantages in efficiency, size, weight, and power consumption, which is especially important for embedded solutions. However, developing parallel software for heterogeneous computer systems is a rather complex task due to the requirements of high efficiency, ease of programming, and scalability. In this paper, the efficiency of parallel-pipelined processing of video information in multiprocessor heterogeneous systems on chip (SoC) containing DSP, GPU, ISP, VDP, VPU, and other cores has been investigated. A typical scheme of parallel-pipelined processing of video data using various accelerators is presented, and a scheme for parallel-pipelined processing of video data on the heterogeneous SoC 1892VM248 has been developed. Methods for efficient parallel-pipelined processing of video data in heterogeneous SoCs, comprising the operating-system level, the programming-technologies level, and the application level, are proposed. A comparative analysis of the most common programming technologies, such as OpenCL, OpenMP, MPI, and OpenAMP, has been performed. The analysis shows that, depending on the final purpose of the device, two programming paradigms should be applied: one based on OpenCL technology (for embedded systems) and one based on MPI technology (for inter-cell and inter-processor interaction). The results obtained for parallel-pipelined processing in a face-recognition application confirm the effectiveness of the chosen solutions.
38

Chiang, Chia-Chu, and Roger Lee. "Coordination Languages and Models for Open Distributed Systems." International Journal of Software Innovation 1, no. 1 (January 2013): 1–13. http://dx.doi.org/10.4018/ijsi.2013010101.

Abstract:
Programming open distributed systems will be of rapidly growing importance in the coming decades to the scientists and engineers who will be using these techniques to solve society's most pressing problems. Even today, the authors see a growing number of critical applications, such as MRI spin relaxometry, gene sequence analysis, climate modeling, and molecular modeling of potential bioactive compounds, that require massive amounts of computation. The demand for intensive computational power will only grow in the future as society tackles more complex problems. Existing concurrent programming languages are not well suited to the development of open distributed systems. Middleware technologies provide support for the development of open distributed systems; however, they suffer from the same problem as existing concurrent programming approaches: the evolution of the software is not well supported, and the resulting systems are difficult to maintain as changes accumulate. This has led to the design and implementation of a variety of coordination models and languages for open distributed systems. Their main purpose is to separate the concerns, including communication, coordination, computation, and heterogeneity, in the development of open distributed systems; the models manage these concerns to improve the maintainability of the systems.
39

Shukur, Hanan, Subhi R. M. Zeebaree, Abdulraheem Jamil Ahmed, Rizgar R. Zebari, Omar Ahmed, Bareen Shams Aldeen Tahir, and Mohammed A. M. Sadeeq. "A State of Art Survey for Concurrent Computation and Clustering of Parallel Computing for Distributed Systems." Journal of Applied Science and Technology Trends 1, no. 4 (December 31, 2020): 148–54. http://dx.doi.org/10.38094/jastt1466.

Abstract:
In this paper, several works related to clustering and parallel computing for distributed systems are presented. The paper focuses on the strong points of previous works in this field with a view to enhancing the performance of distributed systems. This is done by presenting several techniques, each with its own strengths and weaknesses. The most challenging issues across all techniques range from increasing system performance to improving response time and overcoming the overhead of running the system. To address concurrent computation and the classification of parallel computing for distributed systems more specifically, the paper relies on a comprehensive study of features and a comparison between SYNC and ASYNC modes.
40

Berthomé, Pascal, and Afonso Ferreira. "Communication Issues in Parallel Systems with Optical Interconnections." International Journal of Foundations of Computer Science 08, no. 02 (June 1997): 143–62. http://dx.doi.org/10.1142/s0129054197000124.

Abstract:
In classical massively parallel computers, the complexity of the interconnection networks is much higher than the complexity of the processing elements themselves. However, emerging optical technologies may provide a way to reconsider very large parallel architectures where processors would communicate by optical means. In this paper, we compare some optically interconnected parallel multicomputer models with regard to their communication capabilities. We first establish a distinction of such systems, based on the independence of the communication elements embedded in the processors (transmitters and receivers). Then, motivated by the fact that in multicomputers some communication operations have to be very efficiently performed, we study communication problems, namely, broadcast and multi-broadcast, under the hypothesis of bounded fanout. Our results take also into account a bounded number of available wavelengths.
41

LUGIEZ, DENIS. "FORWARD ANALYSIS OF DYNAMIC NETWORK OF PUSHDOWN SYSTEMS IS EASIER WITHOUT ORDER." International Journal of Foundations of Computer Science 22, no. 04 (June 2011): 843–62. http://dx.doi.org/10.1142/s0129054111008453.

Abstract:
Dynamic networks of Pushdown Systems (DNPS in short) have been introduced to perform static analysis of concurrent programs that may spawn threads dynamically. In this model the set of successors of a regular set of configurations can be non-regular, making forward analysis of these models difficult. We refine the model by adding the associative-commutative properties of parallel composition, and we define Presburger weighted tree automata, an extension of weighted automata and tree automata, that accept the set of successors of a regular set of configurations. This yields decidability of the forward analysis of DNPS. Finally, we extend this result to the model where configurations are sets of threads running in parallel.
42

Sorokin, Aleksei, Sergey Malkovsky, Georgiy Tsoy, Alexander Zatsarinnyy, and Konstantin Volovich. "Comparative Performance Evaluation of Modern Heterogeneous High-Performance Computing Systems CPUs." Electronics 9, no. 6 (June 23, 2020): 1035. http://dx.doi.org/10.3390/electronics9061035.

Abstract:
The study presents a comparison of computing systems based on IBM POWER8, IBM POWER9, and Intel Xeon Platinum 8160 processors running parallel applications. Memory subsystem bandwidth was studied, parallel programming technologies were compared, and the operating modes and capabilities of simultaneous multithreading technology were analyzed. Performance analysis for the studied computing systems running parallel applications based on the OpenMP and MPI technologies was carried out by using the NAS Parallel Benchmarks. An assessment of the results obtained during experimental calculations led to the conclusion that IBM POWER8 and Intel Xeon Platinum 8160 systems have almost the same maximum memory bandwidth, but require a different number of threads for efficient utilization. The IBM POWER9 system has the highest maximum bandwidth, which can be attributed to the large number of memory channels per socket. Based on the results of numerical experiments, recommendations are given on how the hardware of a similar grade can be utilized to solve various scientific problems, including recommendations on optimal processor architecture choice for leveraging the operation of high-performance hybrid computing platforms.
43

Ritter, Jacob, Federico Ghirimoldi, Laura Manuel, Eric Moffett, Paula Shireman, and Bradley Brimhall. "Choosing Wisely: Persistent Amylase Concurrent With Lipase Testing at Multiple Academic Health Systems." American Journal of Clinical Pathology 152, Supplement_1 (September 11, 2019): S125. http://dx.doi.org/10.1093/ajcp/aqz124.004.

Abstract:
Abstract Objectives Choosing Wisely is a multidisciplinary effort to reduce unnecessary tests and procedures. Evidence-based guidelines advocate using serum lipase to diagnose acute pancreatitis; concurrent amylase and lipase tests provide minimal benefit compared to either alone. Serial measurements after the first elevated test are ineffective for tracking disease course. Our study determined the number of concurrent amylase/lipase tests and unnecessary serial tests to examine adherence to Choosing Wisely recommendations at four academic health systems. We also identified provider-ordering patterns and quantified the variable and total costs of unnecessary tests. Methods We analyzed deidentified laboratory data from four academic health systems in the Greater Plains Collaborative for all serum amylase and lipase tests from 2017, including results, timing, and patient-encounter location. We defined concurrent tests occurring within a 24-hour period and unnecessary serial inpatient measurements occurring after the first elevated result. Conclusion While the majority of providers adhered to Choosing Wisely recommendations obtaining 58,693 lipase-only tests, 85.8% of amylase tests were obtained in parallel with lipase (20,771 concurrent tests; amylase only, 3,447; total amylase tests, 24,218). Encounter location revealed concurrent rates of 43%, 32%, and 5% for ambulatory, inpatient, and emergency department settings, respectively. Ambulatory clinics from multiple services obtained concurrent tests, with Family Medicine obtaining 48%. Services with order sets containing both amylase and lipase were associated with higher rates of concurrent testing. Inpatient unnecessary serial testing resulted in 413 amylase and 1,266 lipase tests occurring in 33% and 31% of inpatient encounters for amylase and lipase, respectively. Unnecessary amylase and lipase tests resulted in $31,195 variable costs and in $86,297 total costs. Targeted education to clinicians/services ordering unnecessary amylase/lipase tests and revising order sets could decrease costs and improve quality of care by decreasing the volume and frequency of blood draws. Funded by UL1TR002645 and the Greater Plains Collaborative.
44

Rajalakshmi, N. R., Ankur Dumka, Manoj Kumar, Rajesh Singh, Anita Gehlot, Shaik Vaseem Akram, Divya Anand, Dalia H. Elkamchouchi, and Irene Delgado Noya. "A Cost-Optimized Data Parallel Task Scheduling with Deadline Constraints in Cloud." Electronics 11, no. 13 (June 28, 2022): 2022. http://dx.doi.org/10.3390/electronics11132022.

Abstract:
Large-scale distributed systems have the advantages of high processing speeds and large communication bandwidths over the network. Processing huge volumes of real-world data within a time constraint becomes tricky, due to the complexity of data parallel task scheduling in a time-constrained environment. This paper proposes data parallel task scheduling in the cloud to address cost minimization under deadline constraints. By running concurrent executions of tasks on multi-core cloud resources, the number of parallel executions can be increased correspondingly, making it possible to finish the task within the deadline. A mathematical model is developed here to minimize the operational cost of data parallel tasks by feasibly assigning a load to each virtual machine in the cloud data center. This work experiments with a machine learning model that is replicated on multi-core heterogeneous cloud resources to execute different input data concurrently and so accomplish distributive learning. The outcome of concurrent execution of data-intensive tasks on different parts of the input dataset gives better solutions in terms of processing the task by the deadline at optimized cost.
45

Ehrig, Hartmut, Annegret Habel, and Barry K. Rosen. "Concurrent Transformations of Relational Structures." Fundamenta Informaticae 9, no. 1 (January 1, 1986): 13–49. http://dx.doi.org/10.3233/fi-1986-9103.

Abstract:
This paper provides a common framework to study transformations of structures ranging from all kinds of graphs to relational data structures. Transformations of structures can be used as derivations of graphs in the sense of graph grammars, as updates of relations in the sense of relational databases, or even as operations on data structures in the sense of abstract data types. The main aim of the paper is to construct parallel and concurrent transformations from given sequential ones and to study sequentializability properties of complex transformations. The main results are three fundamental theorems concerning parallelism, concurrency, and decomposition of transformations of structures. On the one hand, these results can be considered a contribution to the study of concurrency in graph grammars, and on the other hand, a formal framework for consistent concurrent updates of relational structures.
46

Yonezawa, Akinori. "Parallel Processing Description in the Concurrent Object-Oriented Language ABCL/1." Systems and Computers in Japan 21, no. 4 (1990): 36–44. http://dx.doi.org/10.1002/scj.4690210404.

47

Loogen, Rita, and Ursula Goltz. "Modelling Nondeterministic Concurrent Processes with Event Structures." Fundamenta Informaticae 14, no. 1 (January 1, 1991): 39–73. http://dx.doi.org/10.3233/fi-1991-14103.

Abstract:
We present a non-interleaving model for nondeterministic concurrent processes that is based on labelled event structures. We define operators on labelled event structures such as parallel composition, nondeterministic combination, choice, prefixing, and hiding. These operators correspond to the operations of the “Theory of Communicating Sequential Processes” (TCSP). Infinite processes are defined using the metric approach. The dynamic behaviour of event structures is defined by a transition relation which describes the execution of partially ordered sets of actions, abstracting from internal events.
48

Lewiński, Andrzej. "Security of Railway Control Systems and new information technologies." Transportation Overview - Przeglad Komunikacyjny 2018, no. 6 (June 1, 2018): 54–68. http://dx.doi.org/10.35117/a_eng_18_06_06.

Abstract:
New information technologies, such as computer techniques, wireless (open) transmission standards, and satellite positioning systems, have an important influence on the approach to safety criteria for railway control systems. The "fail-safe" rule assumed for relay control systems is based on the high reliability of the applied relays (a guaranteed number of switching operations) and a rigorous maintenance (homologation) procedure. The implementation of redundant, parallel computer systems has shifted the concept of safety towards a Tolerable Hazard Rate, where the safety of computer systems is defined as the intensity of critical (dangerous, catastrophic) failures, including self-testing. Wireless technologies must account for threats and their influence on the functionality, availability, and reliability of railway control systems (defined as probabilities).
49

Abramkina, D. "JUSTIFICATION OF HYBRID VENTILATION SYSTEMS OPERATING BOUNDARIES." Bulletin of Belgorod State Technological University named after. V. G. Shukhov 7, no. 2 (February 14, 2022): 38–46. http://dx.doi.org/10.34031/2071-7318-2021-7-2-38-46.

Abstract:
The paper presents the results of a theoretical study of existing terms of hybrid ventilation. A classification of hybrid ventilation strategies has been drawn up: concurrent and changeover operations. Concurrent operation includes the sharing of mechanical and natural ventilation systems, for example, in the case of natural inlet and the removal of contaminated air from the room by axial roof fans; mechanical systems, equipped with low-pressure fans, used in conjunction with technologies aimed at increasing natural pressure (heat and wind inducement). Changeover operation includes seasonal work, night cooling and local alternating work. The analysis of climatic characteristics based on data from meteorological station 27612 (Moscow, VDNH) shows that the average temperature of outdoor air exceeds the requirement temperature for natural ventilation calculations for most of the year. Annual average air exchange factors for the period 2016-2020 are less than 50 %, which proves the need for a seasonal hybrid ventilation system. Based on the calculation of average monthly air exchange factors, the mechanical inducement is recommended from March to November.
50

Sun, Fuyu, Hua Wang, and Jianping Zhou. "Research and development techniques for early-warning satellite systems using concurrent engineering." Concurrent Engineering 26, no. 3 (April 23, 2018): 215–30. http://dx.doi.org/10.1177/1063293x18768668.

Abstract:
An early-warning satellite system is a complex project that requires the participation of many aerospace academies and scientific institutions. In terms of software programming, this study proposes a new simulation integrated management platform for the analysis of parallel and distributed systems. The platform facilitates the design and testing of both applications and architectures. To improve the efficiency of project development, new early-warning satellite systems are designed based on the simulation integrated management platform. In terms of project management, this study applies concurrent engineering theory to aerospace engineering and presents a method of collaborative project management. Finally, through a series of experiments, this study validates the simulation integrated management platform, models, and project management method. Furthermore, the causes of deviation and prevention methods are explained in detail. The proposed simulation platform, models, and project management method provide a foundation for further validations of autonomous technology in space attack–defense architecture research.