
Journal articles on the topic "Modern multi-core systems"


Consult the 50 best scholarly journal articles on the topic "Modern multi-core systems".


1

Bucaioni, Alessio, Saad Mubeen, Federico Ciccozzi, Antonio Cicchetti, and Mikael Sjödin. "Modelling multi-criticality vehicular software systems: evolution of an industrial component model". Software and Systems Modeling 19, no. 5 (April 30, 2020): 1283–302. http://dx.doi.org/10.1007/s10270-020-00795-5.

Abstract:
Software in modern vehicles consists of multi-criticality functions, where a function can be safety-critical with stringent real-time requirements, less critical from the vehicle operation perspective but still with real-time requirements, or not critical at all. Next-generation autonomous vehicles will require higher computational power to run multi-criticality functions, and such power can only be provided by parallel computing platforms such as multi-core architectures. However, current model-based software development solutions and related modelling languages have not been designed to deal effectively with challenges specific to multi-core, such as core interdependency and controlled allocation of software to hardware. In this paper, we report on the evolution of the Rubus Component Model for the modelling, analysis, and development of vehicular software systems with multi-criticality for deployment on multi-core platforms. Our goal is to provide a lightweight and technology-preserving transition from model-based software development for single-core to multi-core. This is achieved by evolving the Rubus Component Model to capture explicit concepts for multi-core and parallel hardware and for expressing variable criticality of software functions. The paper illustrates these contributions through an industrial application in the vehicular domain.
2

Chen, Kuo Yi, Fuh Gwo Chen, and Jr Shian Chen. "A Cost-Effective Hardware Approach for Measuring Power Consumption of Modern Multi-Core Processors". Applied Mechanics and Materials 110-116 (October 2011): 4569–73. http://dx.doi.org/10.4028/www.scientific.net/amm.110-116.4569.

Abstract:
Multiple processor cores are built within a chip by advanced VLSI technology. With decreasing prices, multi-core processors are widely deployed in both server and desktop systems. The workload of multi-threaded applications can be distributed across different cores as multiple threads, so that application threads run concurrently to maximize the overall execution speed of the application. Moreover, in line with the current trend toward green computing, most modern multi-core processors provide dynamic frequency tuning, with power-level tuning techniques based on Dynamic Voltage and Frequency Scaling (DVFS). In order to evaluate the performance of various power-saving approaches, an appropriate technique for measuring the power consumption of multi-core processors is important. However, most approaches estimate CPU power consumption only from CMOS power consumption data and CPU frequency. Such approaches estimate only the dynamic power consumption of multi-core processors; the static power consumption is not included. In this study, a hardware approach for measuring the power consumption of multi-core processors is proposed. With it, the power consumption of a CPU can be measured precisely, and the performance of CPU power-saving approaches can be evaluated properly.
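As context for the measurement gap described above, the frequency-based estimators being criticized capture only the dynamic (switching) term of the textbook CMOS power model; the static leakage term is invisible to them. A minimal sketch of that model, with all constants invented for illustration:

```python
# Textbook CMOS power model: P_total = P_dynamic + P_static.
# Frequency-only estimators capture just the first term, which is why
# the paper argues for measuring total power in hardware instead.

def dynamic_power(alpha, c_eff, v_dd, freq_hz):
    """Switching power: activity factor x effective capacitance x V^2 x f."""
    return alpha * c_eff * v_dd ** 2 * freq_hz

def static_power(i_leak, v_dd):
    """Leakage power, independent of clock frequency."""
    return i_leak * v_dd

# Hypothetical per-core numbers for one DVFS operating point.
p_dyn = dynamic_power(alpha=0.2, c_eff=1e-8, v_dd=1.1, freq_hz=2.4e9)
p_stat = static_power(i_leak=0.8, v_dd=1.1)
print(f"dynamic ~{p_dyn:.1f} W, static ~{p_stat:.1f} W")
```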
3

Pryadko, S. A., A. S. Krutogolova, A. S. Uglyanitsa, and A. E. Ivanov. "Multi-core processors use for numerical problems solutions". Radio industry (Russia) 30, no. 4 (December 23, 2020): 98–105. http://dx.doi.org/10.21778/2413-9599-2020-30-4-98-105.

Abstract:
Problem statement. The use of programming technologies on modern multicore systems is an integral part of any enterprise whose activities involve multitasking or a large number of calculations within a limited time. The article discusses the development of such technologies aimed at increasing the speed of solving various problems, for example, numerical modeling. Objective. To search for alternative ways of increasing calculation speed by increasing the number of processors. As an example of how calculation speed increases with the number of processors, the well-known heat-transfer equation is taken, and classical numerical schemes for its solution are given. The use of explicit and implicit schemes is compared, including with respect to the possibility of parallelizing the calculations. Results. The article describes systems with shared and distributed memory, describes their possible use for solving various problems, and provides recommendations for their use. Practical implications. Parallel computing helps to solve many problems in various fields, as it reduces the time required to solve partial differential equations.
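To see why the explicit scheme parallelizes so readily, note that each new grid value depends only on the previous time level, so the update has no intra-step dependencies; an implicit scheme, by contrast, couples all unknowns through a linear solve. A minimal 1D sketch with invented grid parameters (not the article's test problem):

```python
import numpy as np

# Explicit finite-difference scheme for the 1D heat equation u_t = a*u_xx.
# The whole update reads only the old time level, so it can be split
# across cores without data races.

nx, nt = 100, 500
a, dx, dt = 1.0, 0.01, 4e-5          # chosen so r = a*dt/dx^2 = 0.4 <= 0.5 (stability)
r = a * dt / dx ** 2

u = np.zeros(nx)
u[nx // 2] = 1.0                      # initial heat spike in the middle

for _ in range(nt):
    u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])   # boundaries held at zero

print(u.max())
```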
4

Burns, Ethan, Seth Lemons, Wheeler Ruml, and Rong Zhou. "Suboptimal and Anytime Heuristic Search on Multi-Core Machines". Proceedings of the International Conference on Automated Planning and Scheduling 19 (October 16, 2009): 42–49. http://dx.doi.org/10.1609/icaps.v19i1.13375.

Abstract:
In order to scale with modern processors, planning algorithms must become multi-threaded. In this paper, we present parallel shared-memory algorithms for two problems that underlie many planning systems: suboptimal and anytime heuristic search. We extend a recently-proposed approach for parallel optimal search to the suboptimal case, providing two new pruning rules for bounded suboptimal search. We also show how this new approach can be used for parallel anytime search. Using temporal logic, we prove the correctness of our framework, and in an empirical comparison on STRIPS planning, grid pathfinding, and sliding tile puzzle problems using an 8-core machine, we show that it yields faster search performance than previous proposals.
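The paper's specific pruning rules are not reproduced here, but the generic test behind bounded-suboptimal pruning is compact: with an admissible heuristic, f(n) = g(n) + h(n) lower-bounds every solution through n, so a node may be discarded once the incumbent solution is already within the suboptimality bound w of that value. A sketch under those assumptions:

```python
def can_prune(g, h, incumbent, w):
    """Generic bounded-suboptimal pruning test (not the paper's rules).

    With admissible h, any solution through this node costs at least
    f = g + h.  If incumbent <= w * f already holds, discarding the node
    cannot violate the w-suboptimality guarantee on the final answer.
    """
    return incumbent <= w * (g + h)

# A node with f = 10 is prunable once a solution of cost 15 is known
# and the bound is w = 1.5, because 15 <= 1.5 * 10.
assert can_prune(g=6, h=4, incumbent=15.0, w=1.5)
assert not can_prune(g=6, h=4, incumbent=16.0, w=1.5)
```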
5

Zhao, Huatao, Xiao Luo, Chen Zhu, Takahiro Watanabe, and Tianbo Zhu. "Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems". Modern Physics Letters B 31, no. 19-21 (July 27, 2017): 1740067. http://dx.doi.org/10.1142/s021798491740067x.

Abstract:
In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such cache hierarchies are constrained by their bloated size and by interfering accesses, which lead to both performance degradation and wasted energy. In this paper, we first propose a behavior-aware cache hierarchy (BACH) which optimally allocates multi-level cache resources to many cores and greatly improves the efficiency of the cache hierarchy, resulting in low energy consumption. BACH takes full advantage of observed application behaviors and runtime cache resource demands as the basis for cache allocation, so that the cache hierarchy can be optimally configured to meet the runtime demand. BACH was implemented on the GEM5 simulator. The experimental results show that the energy consumption of a three-level cache hierarchy can be reduced by 5.29% up to 27.94% compared with other key approaches, while the performance of the multi-core system even improves slightly once hardware overhead is accounted for.
6

Hanafi Por, Porya Soltani, Abbas Ramazani, and Mojtaba Hosseini Toodeshki. "Temperature and performance evaluation of multiprocessors chips by optimal control method". Bulletin of Electrical Engineering and Informatics 12, no. 2 (April 1, 2023): 749–59. http://dx.doi.org/10.11591/eei.v12i2.4291.

Abstract:
Multi-core processors support all modern electronic devices nowadays. However, temperature and performance management is among the most critical issues in the design of today’s microprocessors. In this paper, we propose a framework using an optimal control method based on fan speed and frequency control of the multi-core processor. The goal is to optimize performance while avoiding violation of an expected temperature. Our proposed method uses a high-precision thermal and power model for multi-core processors. The method is validated on the asymmetric ODROID-XU4 multi-core processor. The experimental results show the ability of the proposed method to achieve an adequate trade-off between performance and temperature control.
7

Chen, Yong Heng, Wan Li Zuo, and Feng Lin He. "Optimization Strategy of Bidirectional Join Enumeration in Multi-Core CPUS". Applied Mechanics and Materials 44-47 (December 2010): 383–87. http://dx.doi.org/10.4028/www.scientific.net/amm.44-47.383.

Abstract:
The query optimizers of most contemporary database systems exploit System-R's bottom-up dynamic programming (DP) method to find the optimal query execution plan (QEP) without evaluating redundant sub-plans. As modern microprocessors employ multiple cores to accelerate computations, parallel algorithms have been proposed to parallelize the bottom-up DP query optimization process. However, the top-down DP method can derive upper bounds for the costs of the plans it generates, which is not available to the typical bottom-up DP method, since that method generates and costs all sub-plans before considering larger containing plans. This paper combines the enhancements of the two approaches and proposes a comprehensive and practical graph-traversal-driven algorithm, referred to here as DPbid, for parallelizing query optimization on multi-core processor architectures. We have implemented this search strategy, and experimental results show that it can effectively improve optimization time compared to known existing algorithms.
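For context on the baseline being parallelized, the sketch below is a compact bitmask form of System-R-style bottom-up DP over join orders with a made-up cost function; every subset is costed before any superset, which is precisely the property the top-down variant relaxes via upper bounds. DPbid itself is not reproduced:

```python
from itertools import combinations

def best_plan(n_rels, join_cost):
    """System-R-style bottom-up DP over relation subsets (bitmasks).

    join_cost(left, right) is a stand-in for a real cost model; single
    relations are taken as free scans.
    """
    best = {1 << i: 0.0 for i in range(n_rels)}
    for size in range(2, n_rels + 1):
        for rels in combinations(range(n_rels), size):
            s = sum(1 << r for r in rels)
            sub = (s - 1) & s                     # enumerate proper submasks
            while sub:
                rest = s ^ sub
                cost = best[sub] + best[rest] + join_cost(sub, rest)
                if s not in best or cost < best[s]:
                    best[s] = cost
                sub = (sub - 1) & s               # each split is seen twice; harmless here
    return best[(1 << n_rels) - 1]

# Toy cost model: product of the two sides' relation counts.
print(best_plan(4, join_cost=lambda a, b: bin(a).count("1") * bin(b).count("1")))
```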
8

Sibai, Fadi N., and Ali El-Moursy. "Performance evaluation and comparison of parallel conjugate gradient on modern multi-core accelerator and massively parallel systems". International Journal of Parallel, Emergent and Distributed Systems 29, no. 1 (February 6, 2013): 38–67. http://dx.doi.org/10.1080/17445760.2012.762774.
9

Filman, Robert E., and Paul H. Morris. "Compiling Knowledge-Based Systems to Ada: The PrkAda Core". International Journal on Artificial Intelligence Tools 06, no. 03 (September 1997): 341–64. http://dx.doi.org/10.1142/s0218213097000190.

Abstract:
This paper describes the implementation of PrkAda, a system for delivering, in Ada, Artificial Intelligence and object-oriented applications developed using the ProKappa system. (ProKappa is a modern, multi-paradigm knowledge-based-system development tool. It includes facilities for dynamic object management, rule-based processing, daemons, and graphical developer and end-user interfaces. ProKappa is a successor system to KEE.) Creating PrkAda required creating a run-time, Ada-language, object-system "core," and developing a compiler to Ada from ProTalk (ProKappa's high-level, backtracking-based language). We describe the PrkAda ProTalk compiler in a companion paper [5]. This paper concentrates on the issues involved in implementing an AI application delivery core, particularly with respect to Ada, including:
• Automatic storage management (garbage collection) without either the cooperation of the compiler or access to the run-time stack;
• Dynamic (weak) typing in a strongly-typed language;
• Dynamic objects (objects that can change their slots and parentage as the program is executing);
• Dynamic function binding in a language designed to preclude "self-modifying programs"; and
• Implementation trade-offs in object-oriented knowledge-based-systems development environments.
10

Rudenko, O., I. Domanov, and V. Kravchenko. "TECHNICAL APPROACH IN EVALUATING OF THE CHARACTERISTICS OF MULTI-CORE BALANCED NONQUADDED CABLES FOR DIGITAL COMMUNICATION SYSTEMS". Наукові праці Державного науково-дослідного інституту випробувань і сертифікації озброєння та військової техніки, no. 4 (August 19, 2020): 107–17. http://dx.doi.org/10.37701/dndivsovt.4.2020.12.

Abstract:
The article proposes a technical approach to evaluating the characteristics of multi-core balanced nonquadded cables for digital communication systems. It presents a list of modern standards of the International Electrotechnical Commission, as well as the National standards of Ukraine and the international standards created on their basis. The proposed standards define the general requirements for communication cables and offer methods for verifying the design, structural dimensions, materials of elements, marking and packaging. In addition, the provisions of these standards define methods for testing the electrical characteristics, survivability and resistance of the cable to climatic and mechanical factors. The proposed list of cable characteristics, i.e. electrical characteristics, survivability and resistance to environmental influences, is sufficient to assess the cable for compliance with the stated requirements. The list of considered characteristics is not exhaustive and can be supplemented or changed depending on the requirements of the customer. Using the Swedish-made AESA 9500 automatic measuring system for electrical measurements as an example, the article considers the possibilities of optimizing the cable evaluation process through modern, automatically controlled measuring systems, which significantly reduce the time and cost of evaluation. The approach presented in the article can be used during the testing of cables planned for delivery to the Armed Forces of Ukraine.
11

Jamadagni, Nagendra Kumar, Aniruddh M, Dr Govinda Raju M, and Dr Usha Rani K. R. "Comparative Study of Heterogeneous Multicore Scheduling Algorithms on Media Codecs". Journal of University of Shanghai for Science and Technology 23, no. 06 (June 18, 2021): 840–49. http://dx.doi.org/10.51201/jusst/21/05355.

Abstract:
All modern-day computers and smartphones come with multi-core CPUs, and the multicore architecture is generally heterogeneous in nature to maximize computational throughput. These multicore systems exploit thread-level parallelism to deliver higher performance, but they depend on good scheduling algorithms that maximize CPU utilization and minimize wasted and idle cycles. With the rise of streaming services and the multimedia capabilities of smartphones, efficient heterogeneous cores capable of fast multimedia processing are necessary, and they must employ efficient scheduling algorithms to achieve this. This paper compares several available heterogeneous multi-core scheduling algorithms and determines the optimal scheduling algorithm for various codecs.
12

Zhang, Lei, Ren Ping Dong, Chang Zhang, and Ya Ping Yu. "The Design and Implementation of a Heterogeneous Multi-Core Security Chip Architecture Based on Shared Memory System". Applied Mechanics and Materials 668-669 (October 2014): 1314–18. http://dx.doi.org/10.4028/www.scientific.net/amm.668-669.1314.

Abstract:
Since the encryption/decryption speed and power consumption of traditional SoC chips cannot meet today's diverse computing needs, we present a heterogeneous multi-core system designed around shared memory on the Xilinx Virtex-5 platform. The paper investigates in depth the heterogeneous multi-core cryptographic architecture, static task partitioning, the scheduling strategy and the inter-core communication mechanism. A three-core system based on shared memory is designed and built on the Virtex-5 platform to realize the ZUC algorithm, which generates a stream cipher. The three MicroBlaze cores are responsible for inter-core communication, the implementation of the ZUC algorithm, and interfacing with an IC card to read keys. The three-core design exploits hardware, software and architectural parallelism at all levels to improve the performance of the algorithm and achieve high-performance green computing.
13

Alhomssi, Adnan, and Viktor Leis. "Scalable and Robust Snapshot Isolation for High-Performance Storage Engines". Proceedings of the VLDB Endowment 16, no. 6 (February 2023): 1426–38. http://dx.doi.org/10.14778/3583140.3583157.

Abstract:
MVCC-based snapshot isolation promises that read queries can proceed without interfering with concurrent writes. However, as we show experimentally, in existing implementations a single long-running query can easily cause transactional throughput to collapse. Moreover, existing out-of-memory commit protocols fail to meet the scalability needs of modern multi-core systems. In this paper, we present three complementary techniques for robust and scalable snapshot isolation in out-of-memory systems. First, we propose a commit protocol that minimizes cross-thread communication for better scalability, avoids touching the write set on commit, and enables efficient fine-granular garbage collection. Second, we introduce the Graveyard Index, an auxiliary data structure that moves logically-deleted tuples out of the way of operational transactions. Third, we present an adaptive version storage scheme that enables fast garbage collection and improves scan performance of frequently-modified tuples. All techniques are engineered to scale well on multi-core processors, and together enable robust performance for complex hybrid workloads.
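The snapshot-isolation core that such engines build on fits in a few lines: a reader walks a tuple's newest-first version chain and takes the first version committed at or before its snapshot timestamp. The sketch below illustrates only that generic visibility rule, not the paper's commit protocol or Graveyard Index:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Version:
    value: object
    commit_ts: int                   # commit timestamp of this version
    prev: Optional["Version"]        # next-older version in the chain

def read(chain: Optional[Version], snapshot_ts: int):
    """Return the value visible to a snapshot taken at snapshot_ts."""
    v = chain
    while v is not None:
        if v.commit_ts <= snapshot_ts:
            return v.value           # newest version the snapshot may see
        v = v.prev                   # long chains are what GC must trim
    return None                      # tuple did not exist at snapshot time

old = Version("a", commit_ts=5, prev=None)
new = Version("b", commit_ts=9, prev=old)
assert read(new, snapshot_ts=7) == "a"   # a reader at ts 7 ignores the newer write
```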
14

Kumar, B., and O. Dikshit. "PARALLEL IMPLEMENTATION OF MORPHOLOGICAL PROFILE BASED SPECTRAL-SPATIAL CLASSIFICATION SCHEME FOR HYPERSPECTRAL IMAGERY". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 21, 2016): 263–67. http://dx.doi.org/10.5194/isprs-archives-xli-b7-263-2016.

Abstract:
Extended morphological profiles (EMPs) are a good technique for extracting spectral-spatial information from images, but the large size of hyperspectral images is an important concern when creating EMPs. However, with the availability of modern multi-core processors and commodity parallel processing systems such as graphics processing units (GPUs) at the desktop level, parallel computing provides a viable option to significantly accelerate such computations. In this paper, a parallel implementation of an EMP-based spectral-spatial classification method for hyperspectral imagery is presented. The parallel implementation is done both on a multi-core CPU and on a GPU, and the impact of parallelization on speedup and classification accuracy is analyzed. For the GPU, the implementation is done in compute unified device architecture (CUDA) C. The experiments are carried out on two well-known hyperspectral images. The experimental results show that the GPU implementation provides a speedup of about 7 times, while the parallel implementation on the multi-core CPU results in a speedup of about 3 times. It is also observed that parallelization has no adverse impact on classification accuracy.
15

Kumar, B., and O. Dikshit. "PARALLEL IMPLEMENTATION OF MORPHOLOGICAL PROFILE BASED SPECTRAL-SPATIAL CLASSIFICATION SCHEME FOR HYPERSPECTRAL IMAGERY". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7 (June 21, 2016): 263–67. http://dx.doi.org/10.5194/isprsarchives-xli-b7-263-2016.

Abstract:
Extended morphological profiles (EMPs) are a good technique for extracting spectral-spatial information from images, but the large size of hyperspectral images is an important concern when creating EMPs. However, with the availability of modern multi-core processors and commodity parallel processing systems such as graphics processing units (GPUs) at the desktop level, parallel computing provides a viable option to significantly accelerate such computations. In this paper, a parallel implementation of an EMP-based spectral-spatial classification method for hyperspectral imagery is presented. The parallel implementation is done both on a multi-core CPU and on a GPU, and the impact of parallelization on speedup and classification accuracy is analyzed. For the GPU, the implementation is done in compute unified device architecture (CUDA) C. The experiments are carried out on two well-known hyperspectral images. The experimental results show that the GPU implementation provides a speedup of about 7 times, while the parallel implementation on the multi-core CPU results in a speedup of about 3 times. It is also observed that parallelization has no adverse impact on classification accuracy.
16

Nikolic, Goran, Bojan Dimitrijevic, Tatjana Nikolic, and Mile Stojcev. "Fifty years of microprocessor evolution: from single CPU to multicore and manycore systems". Facta universitatis - series: Electronics and Energetics 35, no. 2 (2022): 155–86. http://dx.doi.org/10.2298/fuee2202155n.

Abstract:
Nowadays, microprocessors are among the most complex electronic systems ever designed. One small silicon chip can contain a complete processor, large memory and the logic needed to connect it to input-output devices. The performance of today's processors implemented on a single chip surpasses the performance of a room-sized supercomputer from just 50 years ago, which cost over $10 million [1]. Even the embedded processors found in everyday devices such as mobile phones are far more powerful than computer developers once imagined. The main components of a modern microprocessor are a number of general-purpose cores, a graphics processing unit, a shared cache, memory and input-output interfaces, and a network-on-chip to interconnect all these components [2]. The speed of a microprocessor is determined by its clock frequency, which cannot exceed a certain limit: as the frequency increases, the power dissipation increases too, and consequently the amount of heating becomes critical. Silicon manufacturers therefore decided to design a new processor architecture, called the multicore processor [3]. To increase performance and efficiency, these multiple cores execute multiple instructions simultaneously, increasing the amount of parallel computing, or parallelism [4]. In spite of the mentioned advantages, numerous challenges must be addressed carefully when more cores and more parallelism are used. This paper presents a review of microprocessor microarchitectures, discussing their generations over the past 50 years. It then describes the currently used implementations of the microarchitecture of modern microprocessors, pointing out the specifics of parallel computing in heterogeneous microprocessor systems. To use the possibilities of multi-core technology efficiently, software applications must be multithreaded, and program execution must be distributed among the cores so they can operate simultaneously. To use multi-threading, it is imperative for the programmer to understand the basic principles of parallel computing and parallel hardware. Finally, the paper provides details on how to implement hardware parallelism in multicore systems.
17

Dümmler, Jörg, Thomas Rauber, and Gudula Rünger. "Combined Scheduling and Mapping for Scalable Computing with Parallel Tasks". Scientific Programming 20, no. 1 (2012): 45–67. http://dx.doi.org/10.1155/2012/514940.

Abstract:
Recent and future parallel clusters and supercomputers use symmetric multiprocessors (SMPs) and multi-core processors as basic nodes, providing a huge amount of parallel resources. These systems often have hierarchically structured interconnection networks combining computing resources at different levels, starting with the interconnect within multi-core processors up to the interconnection network combining nodes of the cluster or supercomputer. The challenge for the programmer is that these computing resources should be utilized efficiently by exploiting the available degree of parallelism of the application program and by structuring the application in a way which is sensitive to the heterogeneous interconnect. In this article, we pursue a parallel programming method using parallel tasks to structure parallel implementations. A parallel task can be executed by multiple processors or cores and, for each activation of a parallel task, the actual number of executing cores can be adapted to the specific execution situation. In particular, we propose a new combined scheduling and mapping technique for parallel tasks with dependencies that takes the hierarchical structure of modern multi-core clusters into account. An experimental evaluation shows that the presented programming approach can lead to a significantly higher performance compared to standard data parallel implementations.
18

SOKURENKO, Vyacheslav, Ivan KHUTOROVYI, and Oleg SOKURENKO. "DEVELOPMENT OF OPTICAL SYSTEMS OF 16-LENS ORTHOSCOPIC TELEPHOTO LENSES". Herald of Khmelnytskyi National University. Technical sciences 315, no. 6(1) (December 29, 2022): 219–23. http://dx.doi.org/10.31891/2307-5732-2022-315-6-219-223.

Abstract:
In this work, the possibilities of creating multi-lens optical systems of telephoto lenses with a fixed focal length and high image quality with the help of a computer have been experimentally investigated. The effectiveness of the automated procedure for creating new optical systems of orthoscopic telephoto lenses was examined repeatedly. The design procedure utilizes a modified version of a modern stochastic global optimization algorithm. In particular, we carried out the aberration synthesis of several optical systems of 16-lens orthoscopic telephoto lenses having an effective focal length of 300 mm and an F-number of 2.8, designed to work in the visible spectral range with 35-mm full-frame image matrix detectors. Multiple computer simulations of the design process have shown that the global optimization procedure, with a total number of variables of around 90, requires significant computing power. Specifically, when running on the Intel Core i9-9900K processor in multi-threaded mode and optimizing the values of the modulation transfer functions, the design procedure can take over 50 hours. The maximum relative distortion of the synthesized telephoto lens variants does not exceed 0.017%. In all considered cases, the values of the polychromatic modulation transfer functions at a spatial frequency of 30 lines/mm are not smaller than 0.5 over the entire image field. The design results obtained for the specified telephoto lenses confirm the effectiveness of the proposed approach and the practical possibility of achieving high-quality correction of aberrations in the above-mentioned multi-lens telephoto systems even without the use of fluorite (calcium fluoride).
19

Lehmann, Götz. "Efficient Semi-Lagrangian Vlasov-Maxwell Simulations of High Order Harmonic Generation from Relativistic Laser-Plasma Interactions". Communications in Computational Physics 20, no. 3 (August 31, 2016): 583–602. http://dx.doi.org/10.4208/cicp.oa-2015-0019.

Abstract:
We describe a relativistic semi-Lagrangian scheme for the numerical solution of the relativistic Vlasov-Maxwell system. The implementation strategy on a modern non-uniform memory access (NUMA) architecture using the OpenMP framework is discussed. We demonstrate that close-to-perfect scaling can be obtained on modern many-core, multi-socket systems. Application of this code to the problem of relativistic generation of high-harmonic laser radiation is demonstrated. The results are compared to particle-in-cell (PIC) simulations, indicating in particular that for warm plasma the Vlasov simulation is superior. We discuss the impact of plasma temperature on the radiation spectrum and show that the efficiency of harmonic generation depends on the plasma temperature.
20

Attila, Cséki, and Olivera Latković. "Infinity – A New Program for Modeling Binary Systems". Proceedings of the International Astronomical Union 7, S282 (July 2011): 305–6. http://dx.doi.org/10.1017/s1743921311027633.

Abstract:
Infinity is a new program for modeling binary systems. The model is based on Roche geometry with asynchronous rotation, including an assortment of effects such as gravity and limb darkening, mutual irradiation, bright and dark spots, and so on. However, Infinity brings innovations in the modeling of accretion disks, and introduces the modeling of radial and non-radial oscillations on one or both components of the system. At this stage of development, Infinity can produce light curves, spectra and radial velocity curves; solving the inverse problem is still a work in progress. In terms of programming, Infinity is being developed in the object-oriented language C#, and great care is taken to produce readable, easily extensible and verifiable code. Infinity is fully optimized to take advantage of modern multi-core CPUs, and the code is thoroughly covered with unit tests. We expect to make a public release during 2012.
21

Qiu, Weiming, Yonghao Chen, Dihu Chen, Tao Su, and Simei Yang. "Run-Time Hierarchical Management of Mapping, Per-Cluster DVFS and Per-Core DPM for Energy Optimization". Electronics 11, no. 7 (March 30, 2022): 1094. http://dx.doi.org/10.3390/electronics11071094.

Abstract:
Heterogeneous cluster-based multi/many-core systems (e.g., ARM big.LITTLE, supporting dynamic voltage and frequency scaling (DVFS) at the cluster level and dynamic power management (DPM) at the core level) have attracted much attention for optimizing energy on modern embedded systems. For applications executing concurrently on such a platform, this paper studies how to appropriately apply the three system configurations (mapping, DVFS and DPM) to reduce both dynamic and static energy. To this end, the paper first formulates the dependence of the three system configurations on heterogeneous cluster-based systems as a 0–1 integer linear programming (ILP) model, taking into account run-time configuration overheads (e.g., the costs of DPM mode switching and task migration). Then, with the 0–1 ILP model, different run-time strategies (e.g., considering the three configurations in fully separate, partially separate and holistic manners) are compared based on a hierarchical management structure and design-time prepared data. Experimental case studies offer insights into the effectiveness of different management strategies on different platform sizes (e.g., #cluster × #core: 2 × 4, 2 × 8, 4 × 4, 4 × 8), in terms of application migration, energy efficiency, resource efficiency and complexity.
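To make the 0–1 ILP formulation concrete, the toy model below maps tasks to cores so as to minimize energy, using the PuLP modelling library; the energy numbers and the capacity limit are invented, and the paper's DVFS/DPM coupling and overhead terms are omitted:

```python
import pulp

tasks, cores = range(3), range(2)
energy = {(t, c): (t + 1) * (c + 2) for t in tasks for c in cores}   # invented costs

prob = pulp.LpProblem("mapping", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (tasks, cores), cat="Binary")         # x[t][c] = 1: t runs on c

prob += pulp.lpSum(energy[t, c] * x[t][c] for t in tasks for c in cores)
for t in tasks:                                    # every task mapped exactly once
    prob += pulp.lpSum(x[t][c] for c in cores) == 1
for c in cores:                                    # crude per-core capacity stand-in
    prob += pulp.lpSum(x[t][c] for t in tasks) <= 2

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(t, c) for t in tasks for c in cores if x[t][c].value() == 1])
```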
22

Saykin, A. M., and S. E. Buznikov. "Principles of building competitive motion control systems for highly automated vehicles". Journal of Physics: Conference Series 2061, no. 1 (October 1, 2021): 012133. http://dx.doi.org/10.1088/1742-6596/2061/1/012133.

Abstract:
The relevance of developing efficient motion control systems for highly automated vehicles, capable of competing successfully with foreign systems of similar purpose, follows from the importance of creating competitive high-tech products in a modern market economy. The research objective was to scientifically justify principles for building motion control systems for highly automated vehicles that provide a directed search for solutions to the multi-criterion optimization problem in the combined software and hardware space. The research involved methods of system analysis and modern control theory. The result is a set of principles for building motion control systems for highly automated vehicles that minimize the hardware environment while keeping observable all vehicle state coordinates significant for safe control, together with their dynamic boundaries, and that ensure controllability via the channels of traction, braking and direction. The conceptual core of such integrated intelligent control systems is the mathematical and programming support (software) for indirect measurement of motion parameters and for control of traction, brakes and direction, adapting to changes in the vehicle state and environment.
23

Karavay, M. F., and A. M. Mikhailov. "Design of local heterogeneous system control networks of a new generation with the preservation of the optimality of the main topological functionals of the network". Journal of Physics: Conference Series 2091, no. 1 (November 1, 2021): 012038. http://dx.doi.org/10.1088/1742-6596/2091/1/012038.

Abstract:
The paper discusses On-Board Computing Control Systems (OBCS) in astronautics, avionics, autonomous mobile devices, robotics, weapons control and multi-core microprocessors. Such a system is a sort of “backbone” that unites many sensors, calculators, control and executive devices. The architecture of these networks was developed some 30–40 years ago; at that time, these systems met the technical requirements in terms of dynamics and reliability. Nowadays, these systems must perform their functions for 10 to 15 years without maintenance, and the performance of system networks must be high enough to solve tasks such as monitoring “swarms” comprising hundreds of objects or working as “garbage collectors” in space orbits. Nevertheless, modern system networks continue to be based on bus or multi-bus architectures. Since these systems are serial for active nodes, a multi-bus solution is the main way to increase network performance, relying on very high frequencies of 2–4 GHz. This is an extensive path of development, which is problematic. More acceptable would be an intensive path of development, which, in electronics and computer engineering, is associated with parallel task execution. Operating frequencies then need not be ultra-high, not exceeding the 10–600 MHz of modern devices; however, such devices should work in parallel. The paper proposes a new approach to the design of heterogeneous parallel control system networks, the solution of parallel tasks, and conflict-free management of “passive” nodes. To the best of our knowledge, such control system networks are not yet available.
24

Rodriguez-Martinez, Eduardo, Cesar Benavides-Alvarez, Carlos Aviles-Cruz, Fidel Lopez-Saca, and Andres Ferreyra-Ramirez. "Improved Parallel Implementation of 1D Discrete Wavelet Transform Using CPU-GPU". Electronics 12, no. 16 (August 10, 2023): 3400. http://dx.doi.org/10.3390/electronics12163400.

Abstract:
This work describes a data-level parallelization strategy to accelerate the discrete wavelet transform (DWT), which was implemented and compared on two multi-threaded architectures, both with shared memory. The first architecture was a multi-core server and the second was a graphics processing unit (GPU). The main goal of the research is to improve the computation times of popular DWT algorithms on representative modern GPU architectures. Comparisons were based on performance metrics (i.e., execution time, speedup, efficiency, and cost) for five decomposition levels of the Daubechies db6 DWT over random arrays of lengths 10³, 10⁴, 10⁵, 10⁶, 10⁷, 10⁸, and 10⁹. The execution times of our proposed GPU strategy were around 1.2×10⁻⁵ s, compared to 3501×10⁻⁵ s for the sequential implementation. On the other hand, the maximum achievable speedup and efficiency were reached by our proposed multi-core strategy for a number of assigned threads equal to 32.
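The four reported metrics are the standard ones for parallel performance and are easy to restate; the sketch below evaluates them on invented timings rather than the measured values above:

```python
# Standard parallel-performance metrics: speedup, efficiency, cost.
def metrics(t_seq, t_par, p):
    speedup = t_seq / t_par        # how many times faster than sequential
    efficiency = speedup / p       # fraction of ideal linear speedup
    cost = p * t_par               # total processor-time consumed
    return speedup, efficiency, cost

# Invented example: a 10 s sequential run cut to 0.4 s on 32 threads.
s, e, c = metrics(t_seq=10.0, t_par=0.4, p=32)
print(f"speedup {s:.1f}x, efficiency {e:.2f}, cost {c:.1f} s")
```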
25

Day, Khaled, and Mohammad H. Al-Towaiq. "An Efficient Parallel Gauss-Seidel Algorithm on a 3D Torus Network-on-Chip". Sultan Qaboos University Journal for Science [SQUJS] 20, no. 1 (June 1, 2015): 29. http://dx.doi.org/10.24200/squjs.vol20iss1pp29-38.

Abstract:
Network-on-chip (NoC) multi-core architectures with a large number of processing elements are becoming a reality with recent developments in technology. In these modern systems the processing elements are interconnected with regular NoC topologies such as meshes and tori. In this paper we propose a parallel Gauss-Seidel (GS) iterative algorithm for solving large systems of linear equations on a 3-dimensional torus NoC architecture. The proposed parallel algorithm has O(Nn²/k³) time complexity for solving a system with a matrix of order n on a k×k×k 3D torus NoC architecture with N iterations, assuming n and N are large compared to k. We show that under these conditions the proposed parallel GS algorithm achieves near-optimal speedup.
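For reference, the kernel being parallelized: Gauss-Seidel updates each unknown in place from the newest available values, which creates the sweep-order dependencies that a parallel torus mapping must respect (typically by partitioning unknowns and exchanging boundary values each sweep). A minimal sequential sketch with an invented, diagonally dominant test system:

```python
import numpy as np

def gauss_seidel(A, b, iters=50):
    """Sequential Gauss-Seidel sweeps for A x = b; x[i] is refreshed
    in place from the most recent neighbouring values."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b))        # converges for diagonally dominant A
```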
26

Niu, Shen, Yue Song, Ligong Zhang, Yongyi Chen, Lei Liang, Ye Wang, Li Qin, et al. "Research Progress of Monolithic Integrated DFB Laser Arrays for Optical Communication". Crystals 12, no. 7 (July 21, 2022): 1006. http://dx.doi.org/10.3390/cryst12071006.

Abstract:
Photonic integrated circuits (PICs) play a leading role in modern information and communications technology, and among the core devices in PICs is the distributed feedback (DFB) multi-wavelength semiconductor laser array. Multi-wavelength semiconductor laser arrays can be integrated on a single chip and have the advantages of high stability, good single-mode performance, and narrow linewidth. The wavelength tuning range has been expanded through the design of the DFB laser array, making it an ideal light source for wavelength-division multiplexing systems. The preparation of DFB laser arrays with a large number of channels, ease of mass production, and accurate emission wavelengths has become an important field of research. This paper systematically introduces the connection methods of lasers in DFB laser arrays and summarizes the current methods of manufacturing multi-wavelength DFB laser arrays, covering technical principles, technical advantages and disadvantages, main research progress, and research status.
27

Fevgas, Athanasios, Konstantis Daloukas, Panagiota Tsompanopoulou, and Panayiotis Bozanis. "A Study of Sparse Matrix Methods on New Hardware". International Journal of Monitoring and Surveillance Technologies Research 3, no. 3 (July 2015): 1–19. http://dx.doi.org/10.4018/ijmstr.2015070101.

Abstract:
Modeling of numerous scientific and engineering problems, such as multi-physics problems and the analysis of electrical power systems, amounts to the solution of large-scale linear systems. The main characteristics of such systems are the large sparsity ratio and the large number of unknowns, which can reach thousands or even millions of equations. As a result, efficient solution of sparse large-scale linear systems is of great importance for the analysis of such problems. Direct and iterative algorithms are the prevalent methods for solving linear systems. Advances in computer hardware provide new challenges and capabilities for sparse solvers. The authors present a comprehensive evaluation of some state-of-the-art sparse methods (direct and iterative) on modern computing platforms, aiming to determine the performance boundaries of each solver on different hardware infrastructures. By identifying the potential performance bottlenecks of out-of-core direct methods, the authors present a series of optimizations that increase their efficiency on flash-based systems.
28

Holzer, K., and W. Gall. "Utilizing IHE-based Electronic Health Record Systems for Secondary Use". Methods of Information in Medicine 50, no. 04 (2011): 319–25. http://dx.doi.org/10.3414/me10-01-0060.

Abstract:
Objectives: Due to the increasing adoption of Electronic Health Records (EHRs) for primary use, the number of electronic documents stored in such systems will soar in the near future. In order to benefit from this development in secondary fields such as medical research, it is important to define requirements for the secondary use of EHR data. Furthermore, analyses of the extent to which an IHE (Integrating the Healthcare Enterprise)-based architecture would fulfill these requirements could provide further information on upcoming obstacles for the secondary use of EHRs. Methods: A catalog of eight core requirements for secondary use of EHR data was deduced from the published literature, the risk analysis of the IHE profile MPQ (Multi-Patient Queries) and the analysis of relevant questions. The IHE-based architecture for cross-domain, patient-centered document sharing was extended to a cross-patient architecture. Results: We propose an IHE-based architecture for cross-patient and cross-domain secondary use of EHR data. Evaluation of this architecture against the eight core requirements revealed full fulfillment of six requirements and partial fulfillment of two. Conclusions: Although not regarded as a primary goal in modern electronic healthcare, the re-use of existing electronic medical documents in EHRs for research and other fields of secondary application holds enormous potential for the future. Further research in this respect is necessary.
29

Konstantopoulos, George C., Antonio T. Alexandridis, and Panos C. Papageorgiou. "Towards the Integration of Modern Power Systems into a Cyber–Physical Framework". Energies 13, no. 9 (May 1, 2020): 2169. http://dx.doi.org/10.3390/en13092169.

Abstract:
The cyber–physical system (CPS) architecture provides a novel framework for analyzing and expanding research and innovation results that are essential in managing, controlling and operating complex, large scale, industrial systems under a holistic insight. Power systems constitute such characteristically large industrial structures. The main challenge in deploying a power system as a CPS lies on how to combine and incorporate multi-disciplinary, core, and advanced technologies into the specific for this case, social, environmental, economic and engineering aspects. In order to substantially contribute towards this target, in this paper, a specific CPS scheme that clearly describes how a dedicated cyber layer is deployed to manage and interact with comprehensive multiple physical layers, like those found in a large-scale modern power system architecture, is proposed. In particular, the measurement, communication, computation, control mechanisms, and tools installed at different hierarchical frames that are required to consider and modulate the social/environmental necessities, as well as the electricity market management, the regulation of the electric grid, and the power injection/absorption of the controlled main devices and distributed energy resources, are all incorporated in a common CPS framework. Furthermore, a methodology for investigating and analyzing the dynamics of different levels of the CPS architecture (including physical devices, electricity and communication networks to market, and environmental and social mechanisms) is provided together with the necessary modelling tools and assumptions made in order to close the loop between the physical and the cyber layers. An example of a real-world industrial micro-grid that describes the main aspects of the proposed CPS-based design for modern electricity grids is also presented at the end of the paper to further explain and visualize the proposed framework.
30

Sepúlveda, Johanna, Felix Wilgerodt, and Michael Pehl. "Towards memory integrity and authenticity of multi-processors system-on-chip using physical unclonable functions". it - Information Technology 61, no. 1 (February 25, 2019): 29–43. http://dx.doi.org/10.1515/itit-2018-0030.

Abstract:
A persistent problem for modern Multi-Processor Systems-on-Chip (MPSoCs) is their vulnerability to code injection attacks. By tampering with the memory content, attackers are able to extract secrets from the MPSoC and to modify or deny the MPSoC’s operation. This work proposes SEPUFSoC (Secure PUF-based SoC), a novel flexible, secure, and fast architecture able to be integrated into any MPSoC. SEPUFSoC prevents execution of unauthorized code as well as data manipulation by ensuring memory integrity and authentication. SEPUFSoC achieves: i) efficiency, through the integration of a fast and lightweight hash function for Message Authentication Code (MAC) generation and integrity verification of the memory lines at runtime; and ii) lightweight security, through the use of a Physical Unclonable Function (PUF) to securely generate and store the cryptographic keys that are used for application authentication. We discuss the security and performance of SEPUFSoC for single-core and multi-core systems. Results show that SEPUFSoC is a secure, fast, and low-overhead solution for MPSoCs. We discuss the SEPUFSoC security and cost, which strongly depend on the PUF and hash selection. In the future, new technologies may allow the exploration of different PUFs.
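The per-line check at the heart of such schemes can be illustrated generically: each memory line is stored with a MAC keyed by a device-unique secret (in SEPUFSoC, derived from a PUF response), and the MAC is recomputed on every fetch. The sketch below substitutes HMAC-SHA-256 for the paper's lightweight hash and a random key for the PUF:

```python
import hashlib, hmac, os

KEY = os.urandom(32)   # stand-in for a key regenerated from a PUF response

def protect(addr: int, line: bytes) -> bytes:
    """MAC over address and contents; binding the address blocks relocation."""
    return hmac.new(KEY, addr.to_bytes(8, "little") + line, hashlib.sha256).digest()

def verify(addr: int, line: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(protect(addr, line), tag)

line = b"\x90" * 64                                 # one 64-byte memory line
tag = protect(0x1000, line)
assert verify(0x1000, line, tag)                    # untampered line passes
assert not verify(0x1000, b"\x91" + line[1:], tag)  # tampering is detected
```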
31

Lenfers, Ulfia Annette, Nima Ahmady-Moghaddam, Daniel Glake, Florian Ocker, Jonathan Ströbele, and Thomas Clemen. "Incorporating Multi-Modal Travel Planning into an Agent-Based Model: A Case Study at the Train Station Kellinghusenstraße in Hamburg". Land 10, no. 11 (November 3, 2021): 1179. http://dx.doi.org/10.3390/land10111179.

Abstract:
Models can provide valuable decision support in the ongoing effort to create a sustainable and effective modality mix in urban settings. Modern transportation infrastructures must meaningfully combine public transport with other mobility initiatives such as shared and on-demand systems. The increase of options and possibilities in multi-modal travel implies an increase in complexity when planning and implementing such an infrastructure. Multi-agent systems are well-suited for addressing questions that require an understanding of movement patterns and decision processes at the individual level. Such models should feature intelligent software agents with flexible internal logic and accurately represent the core functionalities of new modalities. We present a model in which agents can choose between owned modalities, station-based bike sharing modalities, and free-floating car sharing modalities as they exit the public transportation system and seek to finish their personal multi-modal trip. Agents move on a multi-modal road network where dynamic constraints in route planning are evaluated based on an agent’s query. Modality switch points (MSPs) along the route indicate the locations at which an agent can switch from one modality to the next (e.g., a bike rental station to return a used rental bike and continue on foot). The technical implementation of MSPs within the road network was a central focus in this work. To test their efficacy in a controlled experimental setting, agents optimized only the travel time of their multi-modal routes. However, the functionalities of the model enable the implementation of different optimization criteria (e.g., financial considerations or climate neutrality) and unique agent preferences as well. Our findings show that the implemented MSPs enable agents to switch between modalities at any time, allowing for the kind of versatile, individual, and spontaneous travel that is common in modern multi-modal settings.
32

Szogradi, Marton. "ANALYSIS OF THE OECD/NEA SFR BENCHMARK WITH ANTS REDUCED-ORDER NODAL DIFFUSION SOLVER AND THE SERPENT MONTE CARLO CODE". EPJ Web of Conferences 247 (2021): 04021. http://dx.doi.org/10.1051/epjconf/202124704021.

Abstract:
In order to meet modern industrial and scientific demands the Kraken multi-physics platform’s development was recently launched at VTT Technical Research Centre of Finland. The neutronic solver of the framework consists of two calculation chains, providing full core solutions by the Serpent high fidelity code (1) and the AFEN/FENM-based reduced-order diffusion solver called Ants (2) capable of handling square and hexagonal geometries in steady-state. Present work introduces the simulation of a large 3600 MWth Sodium-cooled Fast Reactor (SFR) described within the activities of the Working Party on Scientific Issues of Reactor Systems (WPRS) of OECD. Full-core 3D results were obtained by Serpent for carbide- and oxide-fuel cores, moreover group constants were generated for Ants utilizing 2D super-cell and single assembly infinite lattice models of Serpent. The continuous-energy Monte Carlo method provided the reference results for the verification of the reduced-order method. Implementing the spatially homogenized properties, 3D solutions were obtained by Ants as well for both core configurations. Comparison was made between the various core designs and codes based on reactivity feedbacks (Doppler constant, sodium voiding, control rod worth) considering power distributions. Regarding reactivity sensitivity on geometry, axial fuel- and radial core expansion coefficients were evaluated as well.
Style APA, Harvard, Vancouver, ISO itp.
33

Kim, Minseong, Seon Wook Kim, and Youngsun Han. "EPSim-C: A Parallel Epoch-Based Cycle-Accurate Microarchitecture Simulator Using Cloud Computing". Electronics 8, no. 6 (June 24, 2019): 716. http://dx.doi.org/10.3390/electronics8060716.

Abstract:
Recently, computing platforms have been configured on a large scale to satisfy the diverse requirements of emerging applications such as big data and graph processing, neural networks, and speech recognition. In these computing platforms, each computing node consists of a multicore, an accelerator, and a complex memory hierarchy, connected to other nodes using a variety of high-performance networks. Up to now, researchers have been using cycle-accurate simulators to evaluate the performance of computer systems in detail. However, the execution of simulators that model modern computing architectures (multi-core, multi-node, datacenter, memory hierarchies, new memories and new interconnects) is too slow to be feasible; as architectures become more complex, the complexity of the simulators is rapidly increasing as well. Therefore, it is seriously challenging to employ them in the research and development of next-generation computer systems. To address this problem, we previously presented EPSim (Epoch-based Simulator), which divides a simulation run into several sections, called epochs, that can be run independently, and executes them in parallel on a multicore platform, resulting in only limited simulation speedup. In this paper, to overcome the computing resource limitations of multi-core platforms, we propose the novel EPSim-C (EPSim on Cloud) simulator, which extends EPSim and achieves higher performance using a cloud computing platform. EPSim-C performs the epoch-based executions in a massively parallel fashion by using MapReduce on Hadoop-based systems. According to our experiments, we have achieved a maximum speedup of 87.0× and an average speedup of 46.1× using 256 cores. As far as we know, EPSim-C is the only existing way to accelerate a cycle-accurate simulator on cloud platforms; thus, our significant performance enhancement allows researchers to model and study current and future cutting-edge computing platforms using real workloads.
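The epoch idea itself is easy to sketch: once a run is cut into independently executable sections, they become embarrassingly parallel map tasks whose statistics a reduce step merges. The illustration below uses a local process pool in place of Hadoop/MapReduce, with a one-line fake "simulation" standing in for the cycle-accurate model:

```python
from multiprocessing import Pool

def simulate_epoch(epoch):
    """Stand-in for cycle-accurate simulation of one instruction epoch;
    returns per-epoch statistics for a later reduce step."""
    start, end = epoch
    return {"insts": end - start, "cycles": (end - start) * 2}   # fake CPI of 2

def reduce_stats(parts):
    return {k: sum(p[k] for p in parts) for k in parts[0]}

if __name__ == "__main__":
    n_insts, epoch_len = 1_000_000, 100_000
    epochs = [(i, min(i + epoch_len, n_insts)) for i in range(0, n_insts, epoch_len)]
    with Pool() as pool:                             # MapReduce/Hadoop in the real system
        parts = pool.map(simulate_epoch, epochs)     # the "map" phase
    print(reduce_stats(parts))                       # the "reduce" phase
```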
34

Kurzak, Jakub, and Jack Dongarra. "QR Factorization for the Cell Broadband Engine". Scientific Programming 17, no. 1-2 (2009): 31–42. http://dx.doi.org/10.1155/2009/239720.

Abstract:
The QR factorization is one of the most important operations in dense linear algebra, offering a numerically stable method for solving linear systems of equations, including overdetermined and underdetermined systems. Modern implementations of the QR factorization, such as the one in the LAPACK library, suffer from performance limitations due to the use of matrix–vector type operations in the panel factorization phase. These limitations can be remedied by using the idea of updating the QR factorization, rendering an algorithm which is much more scalable and much more suitable for implementation on a multi-core processor. It is demonstrated how the potential of the Cell Broadband Engine can be utilized to the fullest by employing the new algorithmic approach and successfully exploiting the capabilities of the chip in terms of single-instruction-multiple-data parallelism, instruction-level parallelism and thread-level parallelism.
35

Sriram, Vinay, and David Kearney. "Towards A Multi-FPGA Infrared Simulator". Journal of Defense Modeling and Simulation: Applications, Methodology, Technology 4, no. 4 (October 2007): 343–55. http://dx.doi.org/10.1177/154851290700400404.

Abstract:
High-speed infrared (IR) scene simulation is used extensively in defense and homeland security to test the sensitivity of IR cameras and the accuracy of IR threat detection and tracking algorithms used commonly in IR missile approach warning systems (MAWS). A typical MAWS requires an input scene rate of over 100 scenes/second. On a Pentium 4 (2.8 GHz) dual-core processor, it typically takes 32 minutes to simulate a single IR scene that accounts for the effects of atmospheric turbulence, refraction, optical blurring and charge-coupled device (CCD) camera electronic noise [7]. Thus, in IR scene simulation, the processing power of modern computers is a limiting factor. In this paper we report our research on accelerating IR scene simulation using high-performance reconfigurable computing. We constructed a multi-FPGA (Field Programmable Gate Array) hardware acceleration platform and accelerated a key computationally intensive IR algorithm on it. We were successful in reducing the computation time of IR scene simulation by over 36%. This research serves as a case study in accelerating large-scale defense simulations using a high-performance multi-FPGA reconfigurable computer.
36

Al-Haddad, Luttfi A., and Alaa Abdulhady Jaber. "An Intelligent Fault Diagnosis Approach for Multirotor UAVs Based on Deep Neural Network of Multi-Resolution Transform Features". Drones 7, no. 2 (January 24, 2023): 82. http://dx.doi.org/10.3390/drones7020082.

Abstract:
As a modern technological trend, unmanned aerial vehicles (UAVs) are extensively employed in various applications. The core purpose of condition monitoring systems, proactive fault diagnosis, is essential in ensuring UAV safety in these applications. In this research, adaptive health monitoring systems perform blade balancing fault diagnosis and classification. There seems to be a bidirectional unpredictability within each, and this paper proposes a hybrid-based transformed discrete wavelet and a multi-hidden-layer deep neural network (DNN) scheme to compensate for it. Wide-scale, high-quality, and comprehensive soft-labeled data are extracted from a selected hovering quad-copter incorporated with an accelerometer sensor via experimental work. A data-driven intelligent diagnostic strategy was investigated. Statistical characteristics of non-stationary six-leveled multi-resolution analysis in three axes are acquired. Two important feature selection methods were adopted to minimize computing time and improve classification accuracy when progressed into an artificial intelligence (AI) model for fault diagnosis. The suggested approach offers exceptional potential: the fault detection system identifies and predicts faults accurately as the resulting 91% classification accuracy exceeds current state-of-the-art fault diagnosis strategies. The proposed model demonstrated operational applicability on any multirotor UAV of choice.
37

Chatzakis, Manos, Panagiota Fatourou, Eleftherios Kosmas, Themis Palpanas, and Botao Peng. "Odyssey: A Journey in the Land of Distributed Data Series Similarity Search". Proceedings of the VLDB Endowment 16, no. 5 (January 2023): 1140–53. http://dx.doi.org/10.14778/3579075.3579087.

Abstract:
This paper presents Odyssey, a novel distributed data-series processing framework that efficiently addresses the critical challenges of achieving good speedup and high scalability in data series processing by exploiting the full computational capacity of modern distributed systems composed of multi-core servers. Odyssey addresses a number of challenges in designing an efficient and highly scalable distributed data series index, including efficient scheduling and load-balancing, without paying the prohibitive cost of moving data around. It also supports a flexible partial replication scheme, which enables Odyssey to navigate a fundamental trade-off between data scalability and good query-answering performance. Through a wide range of configurations and using several real and synthetic datasets, our experimental analysis demonstrates that Odyssey achieves these challenging goals.
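The bare distribute-and-reduce pattern underlying distributed similarity search can be sketched as follows (Odyssey's indexing, scheduling, and partial replication go far beyond this illustration):

```python
# Sketch: partition the series across workers, find each partition's
# nearest neighbor to the query, then reduce to the global best.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def local_best(args):
    partition, query = args
    d = np.linalg.norm(partition - query, axis=1)   # Euclidean distances
    i = int(np.argmin(d))
    return float(d[i]), i                            # (distance, local index)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    data = rng.standard_normal((8000, 256))          # 8000 series, length 256
    query = rng.standard_normal(256)
    parts = np.array_split(data, 4)                  # one partition per worker
    with ProcessPoolExecutor(max_workers=4) as ex:
        results = list(ex.map(local_best, [(p, query) for p in parts]))
    print("global 1-NN distance:", min(results)[0])
```
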
38

Flores-Vergara, Abraham, Everardo Inzunza-González, Enrique García-Guerrero, Oscar López-Bonilla, Eduardo Rodríguez-Orozco, Juan Hernández-Ontiveros, José Cárdenas-Valdez, and Esteban Tlelo-Cuautle. "Implementing a Chaotic Cryptosystem by Performing Parallel Computing on Embedded Systems with Multiprocessors". Entropy 21, no. 3 (9.03.2019): 268. http://dx.doi.org/10.3390/e21030268.

Abstract:
Profiling and parallel computing techniques in a cluster of six embedded systems with multiprocessors are introduced herein to implement a chaotic cryptosystem for digital color images. The proposed encryption method is based on stream encryption using a pseudo-random number generator with high-precision arithmetic, together with parallel data processing and collective communication. The profiling and parallel computing techniques allow the optimal number of processors needed to improve the efficiency of the cryptosystem to be discovered; that is, parallelization reduces the time required to generate the chaotic sequences and to execute the encryption algorithm. In addition, the high numerical precision reduces digital degradation in the chaotic system and increases the security level of the cryptosystem. The security analysis confirms that the proposed cryptosystem is secure and robust against the different attacks widely reported in the literature. Accordingly, we highlight that the proposed encryption method is feasible for practical applications such as modern telecommunication devices employing multiprocessors, e.g., smartphones and tablets, and any embedded system with multi-core hardware.
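The stream-encryption shape described above can be sketched with a toy logistic-map keystream (the paper's generator, arithmetic precision, and parallel decomposition are not reproduced here):

```python
# Shape-only sketch of chaotic stream encryption: iterate a chaotic map,
# quantize its state into key bytes, and XOR them with the plaintext.
import numpy as np

def logistic_keystream(x0, r, n):
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)           # logistic map iteration
        out[i] = int(x * 256) & 0xFF    # quantize state to a key byte
    return out

pixels = np.arange(16, dtype=np.uint8)          # stand-in for image bytes
ks = logistic_keystream(x0=0.4142, r=3.99, n=pixels.size)
cipher = pixels ^ ks
assert np.array_equal(cipher ^ ks, pixels)      # XOR stream is symmetric
```
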
39

BALABAN, Anton, Andrei NEAMTU, Sorin BERBENTE, Gabriela-Liliana STROE, Irina-Beatrice STEFANESCU, Emil COSTEA, Irina-Carmen ANDREI, and Ionel POPESCU. "Considerations regarding the composition of the cockpit view for a modern simulator". INCAS BULLETIN 15, no. 2 (9.06.2023): 3–10. http://dx.doi.org/10.13111/2066-8201.2023.15.2.1.

Abstract:
This study shows how images are generated and composed in a modern simulator room by running multiple concurrent work sessions, so that all active server stations, and the station to be connected to, are continuously displayed. Each server handles specific functions of the simulation process and runs a dedicated software application for them. For a supervised flight simulator, the following functions need dedicated applications: 1) Flight Dynamics Simulation, 2) Graphical projection, 3) Cockpit and Flight instrumentation simulation and integration, 4) Supervisor station. Of these, Graphical projection and Cockpit and Flight instrumentation simulation and integration require the most computational resources. Simulators need to present the view from a cockpit, which requires a field of view of at least 180 degrees and therefore at least three displays, usually in the form of projectors. In legacy implementations, computational bottlenecks meant that each projector needed a dedicated computer. Similarly, the Cockpit and Flight instrumentation simulation usually requires upwards of 100 flight-instrument simulations and embedded processors to be managed; in legacy implementations, one computer was needed for each piloting station. In recent implementations, thanks to the increased performance of multi-core processors, many of these functions can be handled by single computers: the flight dynamics simulation and graphical projection functions can currently be handled by one computer, and all Cockpit and Flight instrumentation simulation and integration by another. Thus, a minimum of three servers is required to ensure full functionality of supervised simulation using modern computing systems.
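The described function-to-server mapping can be summarized in a small configuration sketch (host names are invented for illustration):

```python
# Illustrative mapping of simulator functions to the three servers
# described above; the host names are hypothetical.
SERVERS = {
    "sim-host-1": ("flight dynamics simulation", "graphical projection"),
    "sim-host-2": ("cockpit and flight instrumentation simulation and integration",),
    "sim-host-3": ("supervisor station",),
}

for host, functions in SERVERS.items():
    print(f"{host}: {', '.join(functions)}")
```
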
40

Hu, Tianxun, Tianzheng Wang, and Qingqing Zhou. "Online schema evolution is (almost) free for snapshot databases". Proceedings of the VLDB Endowment 16, no. 2 (October 2022): 140–53. http://dx.doi.org/10.14778/3565816.3565818.

Abstract:
Modern database applications often change their schemas to keep up with changing requirements. However, support for online and transactional schema evolution remains challenging in existing database systems. Specifically, prior work often takes ad hoc approaches to schema evolution, with "patches" applied to existing systems, leading to many corner cases and often incomplete functionality. Applications therefore often have to carefully schedule downtimes for schema changes, sacrificing availability. This paper presents Tesseract, a new approach to online and transactional schema evolution without the aforementioned drawbacks. We design Tesseract based on a key observation: in widely used multi-versioned database systems, schema evolution can be modeled as data modification operations that change the entire table, i.e., data-definition-as-modification (DDaM). This allows us to support schema evolution almost "for free" by leveraging the concurrency control protocol. With simple tweaks to existing snapshot isolation protocols, we show on a 40-core server that, under a variety of workloads, Tesseract is able to provide online, transactional schema evolution without service downtime and retain high application performance while schema evolution is in progress.
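A toy sketch of the DDaM observation follows: in a multi-versioned store, a schema change is simply a new committed version of the whole table, and snapshot readers keep seeing the version visible at their timestamp (this illustrates the idea only, not Tesseract's implementation):

```python
# Toy multi-versioned table: schema evolution as a whole-table write.
class VersionedTable:
    def __init__(self):
        self.versions = []                        # (commit_ts, schema, rows)

    def commit(self, ts, schema, rows):
        self.versions.append((ts, schema, rows))  # append-only, multi-versioned

    def read(self, snapshot_ts):
        # Newest version committed at or before the reader's snapshot.
        visible = [v for v in self.versions if v[0] <= snapshot_ts]
        return max(visible, key=lambda v: v[0])

t = VersionedTable()
t.commit(10, ("id", "name"), [(1, "a"), (2, "b")])
# "ALTER TABLE ... ADD COLUMN" becomes a transactional rewrite of the table:
t.commit(20, ("id", "name", "email"), [(1, "a", ""), (2, "b", "")])

print(t.read(15)[1])   # snapshot before the change: ('id', 'name')
print(t.read(25)[1])   # later snapshot: ('id', 'name', 'email')
```
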
41

Yang, Renchi, Jieming Shi, Xiaokui Xiao, Yin Yang, Sourav S. Bhowmick, and Juncheng Liu. "No PANE, No Gain". ACM SIGMOD Record 51, no. 1 (31.05.2022): 42–49. http://dx.doi.org/10.1145/3542700.3542711.

Abstract:
Given a graph G where each node is associated with a set of attributes, attributed network embedding (ANE) maps each node v ∈ G to a compact vector X_v, which can be used in downstream machine learning tasks in a variety of applications. Existing ANE solutions do not scale to massive graphs due to prohibitive computation costs or generation of low-quality embeddings. This paper proposes PANE, an effective and scalable approach to ANE computation for massive graphs in a single server that achieves state-of-the-art result quality on multiple benchmark datasets for two common prediction tasks: link prediction and node classification. Under the hood, PANE takes inspiration from well-established data management techniques to scale up ANE in a single server. Specifically, it exploits a carefully formulated problem based on a novel random walk model, a highly efficient solver, and non-trivial parallelization by utilizing modern multi-core CPUs. Extensive experiments demonstrate that PANE consistently outperforms all existing methods in terms of result quality, while being orders of magnitude faster.
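A rough sketch of the named ingredients, a random-walk affinity between nodes and attributes factorized into embeddings, is given below; PANE's exact objective, solver, and parallelization are not reproduced:

```python
# Sketch: truncated random-walk-with-restart affinity from nodes to
# attributes, factorized by SVD into node embeddings.
import numpy as np

rng = np.random.default_rng(4)
A = (rng.random((100, 100)) < 0.05).astype(float)    # adjacency matrix
X = (rng.random((100, 20)) < 0.2).astype(float)      # node attributes
P = A / np.maximum(A.sum(axis=1, keepdims=True), 1)  # row-normalized walk

alpha, M, walk = 0.15, np.zeros_like(X), np.eye(100)
for k in range(5):                                   # truncated RWR series
    M += alpha * (1 - alpha) ** k * (walk @ X)
    walk = walk @ P

U, s, Vt = np.linalg.svd(M, full_matrices=False)
emb = U[:, :16] * np.sqrt(s[:16])                    # 16-dim node embeddings
print(emb.shape)
```
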
42

Marques, Diogo, Aleksandar Ilic, and Leonel Sousa. "Mansard Roofline Model: Reinforcing the Accuracy of the Roofs". ACM Transactions on Modeling and Performance Evaluation of Computing Systems 6, no. 2 (30.06.2021): 1–23. http://dx.doi.org/10.1145/3475866.

Abstract:
Continuous enhancements and diversity in modern multi-core hardware, such as wider and deeper core pipelines and memory subsystems, bring to practice a set of hard-to-solve challenges when modeling upper-bound capabilities and identifying the main application bottlenecks. Insightful roofline models are widely used for this purpose, but the existing approaches overly abstract the micro-architecture complexity, thus providing unrealistic performance bounds that lead to a misleading characterization of real-world applications. To address this problem, the Mansard Roofline Model (MaRM), proposed in this work, uncovers a minimum set of architectural features that must be considered to provide insightful, yet accurate and realistic, modeling of performance upper bounds for modern processors. By encapsulating the retirement constraints due to the number of retirement slots and the Reorder Buffer and Physical Register File sizes, the proposed model accurately captures the capabilities of a real platform (average rRMSE of 5.4%) and characterizes 12 application kernels from standard benchmark suites. By following the proposed MaRM interpretation methodology and guidelines, speed-ups of up to 5× are obtained when optimizing a real-world bioinformatics application, as well as a super-linear speedup of 18.5× when it is parallelized.
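For reference, the classic roofline bound that MaRM refines caps attainable performance by either peak compute or memory bandwidth times arithmetic intensity; the peak numbers in this sketch are invented:

```python
# Classic roofline: performance is bounded by the lower of the compute
# "roof" and the memory roof (bandwidth x arithmetic intensity).
def roofline_bound(ai_flops_per_byte, peak_gflops=200.0, peak_gbps=50.0):
    return min(peak_gflops, peak_gbps * ai_flops_per_byte)

for ai in (0.25, 1.0, 4.0, 16.0):
    print(f"AI = {ai:5.2f} flop/byte -> bound = {roofline_bound(ai):6.1f} GFLOP/s")
```

MaRM's contribution is to lower and reshape these roofs with retirement-related micro-architectural limits, which the classic formula above ignores.
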
43

Gräßler, Iris, and Jens Pottebaum. "Generic Product Lifecycle Model: A Holistic and Adaptable Approach for Multi-Disciplinary Product–Service Systems". Applied Sciences 11, no. 10 (15.05.2021): 4516. http://dx.doi.org/10.3390/app11104516.

Abstract:
The linear economic model behind contemporary product lifecycle representations contradicts planetary boundaries and the idea of sustainability. At the same time, Circular Economy (CE) driven models lack consideration of profound technological insights. Based on observations from research and application projects in different industries, a quantitative and qualitative literature analysis is applied to identify both the strengths and the shortcomings of current lifecycle models. These findings are used to create lifecycle model portfolios and to derive a generic Product Lifecycle model (gPLC). The gPLC is validated through three industrial cases based on collaborative research projects. In practice, resource and energy consumption as well as waste production and emissions can be minimized with the help of established methods, not only by economists but also by engineers. Transparency of material and information circularity implies, in practice, the opportunity to implement, for instance, Minimum Viable Products and DevOps approaches. The originality of the gPLC is characterized by three main aspects: first, material and information flows of multi-disciplinary product–service systems are recognized as the foundation for a modern CE; second, a differentiation between product classes and instances is elaborated to stimulate sustainable design of material core products and digital CE business models; and third, the stakeholder perspective is expanded from manufacturer and consumer/user to further perspectives, such as recycler and society.
44

Wedlich-Söldner, Roland, and Timo Betz. "Self-organization: the fundament of cell biology". Philosophical Transactions of the Royal Society B: Biological Sciences 373, no. 1747 (9.04.2018): 20170103. http://dx.doi.org/10.1098/rstb.2017.0103.

Abstract:
Self-organization refers to the emergence of an overall order in time and space of a given system that results from the collective interactions of its individual components. This concept has been widely recognized as a core principle in pattern formation for multi-component systems of the physical, chemical and biological world. It can be distinguished from self-assembly by the constant input of energy required to maintain order—and self-organization therefore typically occurs in non-equilibrium or dissipative systems. Cells, with their constant energy consumption and myriads of local interactions between distinct proteins, lipids, carbohydrates and nucleic acids, represent the perfect playground for self-organization. It therefore comes as no surprise that many properties and features of self-organized systems, such as spontaneous formation of patterns, nonlinear coupling of reactions, bi-stable switches, waves and oscillations, are found in all aspects of modern cell biology. Ultimately, self-organization lies at the heart of the robustness and adaptability found in cellular and organismal organization, and hence constitutes a fundamental basis for natural selection and evolution. This article is part of the theme issue ‘Self-organization in cell biology’.
45

Mendon, Ashwin A., Andrew G. Schmidt, and Ron Sass. "A Hardware Filesystem Implementation with Multidisk Support". International Journal of Reconfigurable Computing 2009 (2009): 1–13. http://dx.doi.org/10.1155/2009/572860.

Abstract:
Modern High-End Computing systems frequently include FPGAs as compute accelerators. These programmable logic devices now support disk controller IP cores, which offer the ability to introduce new, innovative functionalities that were previously impractical. This article describes one such innovation: a filesystem implemented in hardware. It has the potential to improve the performance of data-intensive applications by connecting secondary storage directly to FPGA compute accelerators. To test the feasibility of this idea, a Hardware Filesystem was designed with four basic operations (open, read, write, and delete). Furthermore, multi-disk and RAID-0 (striping) support has been implemented as an option in the filesystem. A RAM Disk core was created to emulate a SATA disk drive so that results on running FPGA systems could be readily measured. Varying the block size from 64 to 4096 bytes showed that 1024 bytes gave the best performance, while using a very modest 7% of a Xilinx XC4VFX60's slices and only four of the 232 available BRAM blocks.
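A software sketch of the four operations over an emulated block device is shown below (the actual work implements them in FPGA logic; the allocation policy here is deliberately naive):

```python
# Toy block-based filesystem over a RAM disk, mirroring the four
# operations named above. BLOCK matches the best-performing size found.
BLOCK = 1024

class RamDiskFS:
    def __init__(self, blocks=1024):
        self.disk = bytearray(blocks * BLOCK)    # emulated disk (RAM Disk)
        self.files, self.next_block = {}, 0      # name -> (start_block, length)

    def write(self, name, data):
        start = self.next_block                  # naive sequential allocation
        self.disk[start * BLOCK:start * BLOCK + len(data)] = data
        self.next_block += -(-len(data) // BLOCK)  # ceiling division
        self.files[name] = (start, len(data))

    def open(self, name):
        return self.files[name]                  # returns the file's extent

    def read(self, name):
        start, length = self.files[name]
        return bytes(self.disk[start * BLOCK:start * BLOCK + length])

    def delete(self, name):
        del self.files[name]                     # blocks simply orphaned here

fs = RamDiskFS()
fs.write("telemetry.bin", b"\x01" * 3000)
assert fs.read("telemetry.bin") == b"\x01" * 3000
fs.delete("telemetry.bin")
```
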
46

Chakraborty, Shounak, and Magnus Själander. "WaFFLe". ACM Transactions on Architecture and Code Optimization 18, no. 4 (31.12.2021): 1–25. http://dx.doi.org/10.1145/3471908.

Abstract:
Managing thermal imbalance in contemporary chip multi-processors (CMPs) is crucial to assuring the functional correctness of modern mobile and server systems. Localized regions with high activity, e.g., register files, ALUs, and FPUs, experience higher temperatures than the chip-wide average and are commonly referred to as hotspots. Hotspots affect the functional correctness of the underlying circuitry and cause a noticeable increase in leakage power, which in turn generates heat in a self-reinforcing cycle. Techniques that reduce the severity of, or completely eliminate, hotspots can maintain functional correctness while improving CMP performance. Conventional dynamic thermal management targets the cores to reduce hotspots but often ignores caches, which are known for their high leakage power consumption. This article presents WaFFLe, an approach that targets the leakage power of the last-level cache (LLC) and hotspots occurring at the cores. WaFFLe turns off LLC ways to reduce leakage power and to generate on-chip thermal buffers. In addition, fine-grained DVFS is applied during long LLC-miss-induced stalls to reduce core temperature. Our results show that WaFFLe reduces the peak and average temperature of a 16-core homogeneous tiled CMP by up to 8.4 °C and 6.2 °C, respectively, with an average performance degradation of only 2.5%. We also show that WaFFLe outperforms a state-of-the-art cache-based technique and a greedy DVFS policy.
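The two mechanisms can be caricatured as a control loop like the following (thresholds and the thermal response are invented; the published policy is more refined):

```python
# Control-loop caricature of WaFFLe's two levers: LLC way shut-off and
# fine-grained DVFS during long LLC-miss stalls.
def waffle_step(core_temp, llc_ways_on, in_long_miss_stall,
                hot_threshold=80.0, min_ways=4):
    # DVFS lever: drop frequency while the core waits on a long LLC miss.
    freq_level = "low" if in_long_miss_stall else "nominal"
    # Cache lever: powering down an LLC way cuts leakage and leaves
    # dark silicon that acts as an on-chip thermal buffer.
    if core_temp > hot_threshold and llc_ways_on > min_ways:
        llc_ways_on -= 1
    return llc_ways_on, freq_level

ways, temp = 16, 85.0
for stall in (False, True, False):
    ways, freq = waffle_step(temp, ways, stall)
    temp -= 0.5                      # toy thermal response
    print(ways, freq)
```
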
47

Wang, Jinpeng, Ye Zhengpeng, Jeremy Gillbanks, Tarun M. Sanders, and Nianyu Zou. "A Power Control Algorithm Based on Chicken Game Theory in Multi-Hop Networks". Symmetry 11, no. 5 (27.05.2019): 718. http://dx.doi.org/10.3390/sym11050718.

Abstract:
With the development of modern society, wireless communication systems carry not only many voice calls but also a great deal of demand for data services. The general public increasingly demands data, especially high-speed services at Gbps levels. As is well known, higher transmit power is needed as data rates increase. To solve this problem, virtual cellular networks (VCNs) can be employed to reduce these peaks in sending power. If a VCN works well, mobile ports receive their own wireless signals via individual cells, and the signals then access core networks with the help of a central terminal. Power control can improve the power capacity of multi-hop networks; however, it can also have a negative impact on network connectivity, delay, and capacity. To address this problem, this paper compares specific control methods and capacities in multi-hop networks. Distributed chicken game algorithm power control (DCGAPC) methods are presented to reach acceptable minimum levels of network delay and maximum levels of network capacity and connectivity. Finally, a computer simulation is implemented and the results are presented.
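The chicken-game intuition behind such power control can be illustrated with a two-node toy game (payoff values are invented, not the paper's utilities):

```python
# Toy chicken game: two interfering nodes each pick HIGH or LOW transmit
# power; mutual HIGH is the "crash" outcome where interference ruins both.
import itertools

PAYOFF = {                       # (row action, col action) -> (row, col) payoffs
    ("LOW", "LOW"):   (3, 3),
    ("LOW", "HIGH"):  (2, 4),
    ("HIGH", "LOW"):  (4, 2),
    ("HIGH", "HIGH"): (0, 0),    # both blast power: worst case for both
}

def best_response(opponent_action):
    # The game is symmetric, so one best-response function serves both players.
    return max(("LOW", "HIGH"), key=lambda a: PAYOFF[(a, opponent_action)][0])

# Pure Nash equilibria: each player best-responds to the other.
for a, b in itertools.product(("LOW", "HIGH"), repeat=2):
    if best_response(b) == a and best_response(a) == b:
        print("equilibrium:", a, b)   # prints (LOW, HIGH) and (HIGH, LOW)
```
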
48

Loss, Cristiano, Stefano Pacchioli, Andrea Polastri, Daniele Casagrande, Luca Pozza, and Ian Smith. "Numerical Study of Alternative Seismic-Resisting Systems for CLT Buildings". Buildings 8, no. 11 (16.11.2018): 162. http://dx.doi.org/10.3390/buildings8110162.

Abstract:
Changes to building codes that enable the use of materials such as cross-laminated timber (CLT) in mid- and high-rise construction are facilitating sustainable urban development in various parts of the world. Key to this are the transition to multi-performance-based design approaches and fewer limitations on heights or the number of storeys in superstructures constructed from combustible materials. Architects and engineers have increased freedom to apply new design and construction concepts and methods, as well as to combine timber with other structural materials. They have also started to develop wall arrangements that optimise interior space layouts and take advantage of the unique characteristics of CLT. This paper discusses the seismic response of multi-storey buildings braced with a CLT core and perimeter shear walls anchored to foundations and floor platforms using modern high-capacity angle brackets and hold-downs, or X-Rad connectors. Linear dynamic finite element (FE) models of the seismic responses of superstructures of various heights are presented, based on experimentally determined characteristics of wall anchor connections. Particular attention is given to fundamental vibration periods, base shear and uplift forces on walls, as well as inter-storey drift. Discussion of the FE model results focuses on structural engineering implications and the advantages of using CLT to create shear walls, with emphasis on how the choice of wall anchoring connections affects the possible number of storeys and configurations of superstructures. Employing CLT shear walls with X-Rad or other types of high-capacity anchoring connections makes it possible to create building superstructures of eight and potentially more storeys even in high-seismicity regions. However, proper selection of suitable shear wall arrangements for CLT buildings depends on accurate representation of the semi-rigid behaviour of the anchoring connections. The linear dynamic analyses presented here demonstrate the need, in engineering seismic design practice, to avoid FE or other design models that do not explicitly incorporate connection flexibilities when estimating parameters such as fundamental periods, base shear and uplift forces, and inter-storey drift.
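As a reminder of where fundamental periods come from in such linear dynamic analyses, the sketch below solves the generalized eigenproblem K v = ω² M v for a toy three-storey shear building (stiffness and mass values are invented, not taken from the paper):

```python
# Natural periods of a toy 3-storey shear building from K v = w^2 M v.
import numpy as np
from scipy.linalg import eigh

k, m = 2.0e7, 1.5e5                      # storey stiffness (N/m) and mass (kg)
K = k * np.array([[ 2, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  1]], dtype=float)
M = m * np.eye(3)

w2, _ = eigh(K, M)                       # eigenvalues = squared frequencies
T = 2 * np.pi / np.sqrt(w2)              # natural periods; longest = fundamental
print("fundamental period ~ %.2f s" % T.max())
```
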
49

Bazhenov, V. E., A. V. Koltygin, A. A. Nikitina, V. D. Belov, and E. A. Lazarev. "The efficiency of multithreaded computing in casting simulation software". Izvestiya. Non-Ferrous Metallurgy, no. 3 (17.06.2023): 38–53. http://dx.doi.org/10.17073/0021-3438-2023-3-38-53.

Abstract:
The use of computer simulation software for casting processes is becoming essential to the advancement of casting technology in aviation and other high-tech engineering fields. With the increase in the number of computational cores in modern CPUs, multi-threaded computation is becoming increasingly relevant. In this study, the efficiency of multi-threaded computation in modeling casting processes was evaluated using the finite-element casting simulation packages ProCast and PoligonSoft, which use parallel computing architectures with distributed (DMP) and shared (SMP) memory, respectively. Computations were performed on Intel- and AMD-based computers, varying the number of computational threads from 4 to 32. Calculation efficiency was evaluated by measuring the speed-up in simulating the filling and solidification of GP25 castings made of ML10 alloy, as well as in the more complex task of modeling the filling and solidification of nickel-superalloy casing castings with radiation heat transfer. The results indicate that the minimum computation time in ProCast is achieved with 16 computational threads. This pattern holds for both computing systems (Intel and AMD processors), and increasing the number of threads beyond this point makes no practical difference. The performance decrease beyond that point can be attributed to the low-performance energy-efficient cores in the Intel-based systems, and to the decrease in core frequency once all physical cores are fully loaded in the AMD-based systems. Multi-threading in PoligonSoft is less efficient than in ProCast, a consequence of PoligonSoft's shared-memory architecture. Despite the significant difference in parallel efficiency, the GP25 solidification task is solved by PoligonSoft and ProCast in sufficiently similar times.
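A simple Amdahl-style model shows why speedup can flatten near 16 threads; the serial fraction below is illustrative, not a measured property of either package:

```python
# Amdahl's law: a serial fraction s caps the parallel speedup at 1/s,
# so adding threads past a certain point yields diminishing returns.
def speedup(threads, serial_fraction=0.05):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / threads)

for n in (4, 8, 16, 32):
    print(f"{n:2d} threads -> x{speedup(n):.2f}")
```
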
50

Wang, Xianxun, Lihua Chen, Qijuan Chen, Yadong Mei, and Hao Wang. "Model and Analysis of Integrating Wind and PV Power in Remote and Core Areas with Small Hydropower and Pumped Hydropower Storage". Energies 11, no. 12 (10.12.2018): 3459. http://dx.doi.org/10.3390/en11123459.

Abstract:
Small hydropower (SHP) and pumped hydropower storage (PHS) are well suited to integrating intermittent power production from wind and PV facilities into modern power systems with high renewable penetration. Due to the limited capacity of SHP and the geographic restrictions on PHS, these power sources have not been adequately utilized in multi-energy integration. On the one hand, rapidly growing wind/PV capacity is mostly situated in remote areas (i.e., mountain and rural areas) and is delivered to core areas (i.e., manufacturing bases and cities) for environmental protection and economic profit. On the other hand, SHP is commonly dispersed in remote areas while PHS is usually located in core areas. This paper proposes a strategy to take advantage of the distribution and regulation features of these renewable energy sources by presenting two models: a remote power system model that explores the potential of SHP to smooth the short-term fluctuations of wind and PV power by minimizing output fluctuations, and a core power system model that employs PHS to shift surplus power to the peak period by maximizing the income from selling regenerated power and minimizing output fluctuations. In the first model, the cooperative regulation not only dispatches SHP with an output shape reciprocal to the wind/PV output to smooth the fluctuations, but also operates the reservoir to meet the scheduled total power production by adjusting its output in parallel. The results of a case study based on a municipal power system in Southwestern China show that, with the proposed method, SHP can successfully smooth the short-term fluctuations of wind and PV power without influencing daily total power production. Additionally, SHP can replace thermal power production with renewable production, smooth the thermal output, and further reduce the operating costs of thermal power. By storing surplus power in the upper reservoir and regenerating it during the peak period, PHS obtains not only the economic benefit of selling power at high prices but also the environmental benefit of replacing non-renewable power with renewable power. This study provides a feasible approach to exploring the potential of SHP and PHS in multi-energy integration applications.
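The first model's smoothing idea can be caricatured in a few lines: dispatch hydro with a shape reciprocal to wind+PV while keeping the scheduled daily hydro energy fixed (all numbers are synthetic):

```python
# Toy smoothing dispatch: hydro output mirrors wind+PV deviations so the
# combined profile is flat, with daily hydro energy held constant.
import numpy as np

rng = np.random.default_rng(5)
wind_pv = 50 + 15 * rng.standard_normal(24)       # hourly wind+PV (MW)
hydro_energy = 24 * 30.0                           # scheduled daily MWh
h_max = 60.0                                       # SHP capacity (MW)

target = (wind_pv.sum() + hydro_energy) / 24       # flat combined level
hydro = np.clip(target - wind_pv, 0.0, h_max)      # reciprocal shape
hydro *= hydro_energy / hydro.sum()                # keep daily production

print("std before: %.1f MW, after: %.1f MW"
      % (wind_pv.std(), (wind_pv + hydro).std()))
```
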