Journal articles on the topic "Translation Lookaside Buffers"

To see other types of publications on this topic, follow the link: Translation Lookaside Buffers.

Consult the top 25 journal articles for your research on the topic "Translation Lookaside Buffers".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen source in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the online abstract, whenever such details are provided in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

SAGAHYROON, ASSIM, and AHMED H. MOHAMED. "RESIZABLE TRANSLATION STORAGE BUFFERS." Journal of Circuits, Systems and Computers 15, no. 02 (April 2006): 169–81. http://dx.doi.org/10.1142/s0218126606003027.

Abstract:
A translation lookaside buffer (TLB) is a high-speed associative cache of recently used virtual-to-physical address translations. The operating system can deal with a TLB miss through software trap handling. A possible technique for such software trap handling uses translation storage buffers (TSBs). A TSB is an operating system data structure that caches the most recent address translations. On a TLB miss, the TLB trap handler searches the TSB for the missing address translation. If the search generates a hit, the address mapping is added to the TLB. In current implementations, TSBs are organized in a variety of ways: a global TSB for all CPUs within the system, a per-processor TSB, or a per-process TSB. All of the aforementioned techniques have various limitations that will be addressed in this paper. In this work, we propose a new framework for TSB generation and allocation. In the proposed approach, a policy of resizing and dynamically allocating TSBs for the different processes is used. This dynamic policy allows the system to adapt to different workloads while achieving a low TSB context invalidation overhead. In addition, with the ability to assign a separate TSB to each process, thrashing is practically eliminated. Implementation and experimental results of the proposed scheme are reported. Comparisons against existing implementations confirmed the expected performance enhancement.
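The miss-handling path this abstract describes can be sketched as a small software model. This is a minimal sketch under assumptions: the direct-mapped organization, the sizes, and all names (`TSB`, `handle_tlb_miss`) are illustrative, not details taken from the paper.

```python
# Hedged sketch: a software TLB-miss trap handler consulting a per-process
# translation storage buffer (TSB), modeled here as a direct-mapped cache.
# Sizes and names are illustrative assumptions, not the paper's design.

PAGE_SHIFT = 12  # assume 4 KiB pages

class TSB:
    """Direct-mapped software cache of virtual-to-physical translations."""
    def __init__(self, num_entries=512):
        self.num_entries = num_entries
        self.entries = [None] * num_entries  # each slot: (vpn, pfn) or None

    def insert(self, vpn, pfn):
        self.entries[vpn % self.num_entries] = (vpn, pfn)

    def lookup(self, vpn):
        entry = self.entries[vpn % self.num_entries]
        if entry is not None and entry[0] == vpn:
            return entry[1]  # hit: return the physical frame number
        return None          # miss: caller falls back to a page-table walk

def handle_tlb_miss(tsb, page_table, vaddr, tlb):
    """Software trap handler: try the TSB first, then the page table."""
    vpn = vaddr >> PAGE_SHIFT
    pfn = tsb.lookup(vpn)
    if pfn is None:
        pfn = page_table[vpn]   # slow path: full translation
        tsb.insert(vpn, pfn)    # cache it for the next miss
    tlb[vpn] = pfn              # refill the hardware TLB
    return pfn
```

On a real system this search runs inside the trap handler; resizing the per-process TSB, as the paper proposes, would amount to reallocating `entries` when a process's miss behaviour changes.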
2

Li, Yang, Rami Melhem, and Alex K. Jones. "Leveraging Sharing in Second Level Translation-Lookaside Buffers for Chip Multiprocessors." IEEE Computer Architecture Letters 11, no. 2 (July 2012): 49–52. http://dx.doi.org/10.1109/l-ca.2011.35.

3

Haigh, Jonathan R., and Lawrence T. Clark. "High performance set associative translation lookaside buffers for low power microprocessors." Integration 41, no. 4 (July 2008): 509–23. http://dx.doi.org/10.1016/j.vlsi.2007.11.003.

4

Chang, Xiaotao, Hubertus Franke, Yi Ge, Tao Liu, Kun Wang, Jimi Xenidis, Fei Chen, and Yu Zhang. "Improving virtualization in the presence of software managed translation lookaside buffers." ACM SIGARCH Computer Architecture News 41, no. 3 (June 26, 2013): 120–29. http://dx.doi.org/10.1145/2508148.2485933.

5

Jaleel, A., and B. Jacob. "In-line interrupt handling and lock-up free translation lookaside buffers (TLBs)." IEEE Transactions on Computers 55, no. 5 (May 2006): 559–74. http://dx.doi.org/10.1109/tc.2006.77.

6

Klimiankou, Y. I. "Translation lookaside buffer management." System Analysis and Applied Information Science, no. 4 (December 30, 2019): 20–24. http://dx.doi.org/10.21122/2309-4923-2019-4-20-24.

Abstract:
This paper focuses on Translation Lookaside Buffer (TLB) management as part of memory management. The TLB is an associative cache in advanced processors that reduces the overhead of virtual-to-physical address translation. We consider challenges related to the design of the TLB management subsystem of the OS kernel, using the IA-32 platform as an example, and propose a simple model of a complete and consistent TLB management policy. This model can be used as a foundation for memory management subsystem design and verification.
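The kind of complete-and-consistent policy this abstract refers to can be illustrated with a toy model: every page-table update is paired with a targeted invalidation, and an address-space switch discards all non-global entries, echoing IA-32's `invlpg` and CR3-reload behaviour in miniature. The structure below is an assumption for illustration, not the paper's formalism.

```python
# Toy model of a consistent TLB management policy (illustrative only).

class TLBModel:
    def __init__(self):
        self.entries = {}  # vpn -> (pfn, is_global)

    def fill(self, vpn, pfn, is_global=False):
        self.entries[vpn] = (pfn, is_global)

    def invlpg(self, vpn):
        # Targeted invalidation of one cached translation
        self.entries.pop(vpn, None)

    def switch_address_space(self):
        # CR3-reload analogue: keep only global mappings (e.g., kernel pages)
        self.entries = {v: e for v, e in self.entries.items() if e[1]}

def update_mapping(page_table, tlb, vpn, new_pfn):
    """Consistency rule: never change a PTE without invalidating the TLB."""
    page_table[vpn] = new_pfn
    tlb.invlpg(vpn)
```

The single invariant enforced here, that no stale translation survives a mapping change or an address-space switch, is the essence of what a complete and consistent policy has to guarantee.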
7

Teller, P. J. "Translation-lookaside buffer consistency." Computer 23, no. 6 (June 1990): 26–36. http://dx.doi.org/10.1109/2.55498.

8

Tamura, L. R., T. S. Yang, D. E. Wingard, M. A. Horowitz, and B. A. Wolley. "A 4-ns BiCMOS translation-lookaside buffer." IEEE Journal of Solid-State Circuits 25, no. 5 (1990): 1093–101. http://dx.doi.org/10.1109/4.62129.

9

Lee, Jung-Hoon, Seh-Woong Jeong, Shin-Dug Kim, and Charles Weems. "A banked-promotion translation lookaside buffer system." Journal of Systems Architecture 47, no. 14-15 (August 2002): 1065–78. http://dx.doi.org/10.1016/s1383-7621(02)00057-7.

10

Black, D. L., R. F. Rashid, D. B. Golub, and C. R. Hill. "Translation lookaside buffer consistency: a software approach." ACM SIGARCH Computer Architecture News 17, no. 2 (April 1989): 113–22. http://dx.doi.org/10.1145/68182.68193.

11

Cruz, Eduardo H. M., Matthias Diener, and Philippe O. A. Navaux. "Communication-aware thread mapping using the translation lookaside buffer." Concurrency and Computation: Practice and Experience 27, no. 17 (April 29, 2015): 4970–92. http://dx.doi.org/10.1002/cpe.3487.

12

Stenin, Vladimir, Artem Antonyuk, Yuri Katunin, and Pavel Stepanov. "Translation Lookaside buffer on the 65-NM STG dice hardened elements." Telfor Journal 10, no. 1 (2018): 50–55. http://dx.doi.org/10.5937/telfor1801050s.

13

Farrens, Matthew, Arvin Park, Rob Fanfelle, Pius Ng, and Gary Tyson. "A partitioned translation lookaside buffer approach to reducing address bandwidth (abstract)." ACM SIGARCH Computer Architecture News 20, no. 2 (May 1992): 435. http://dx.doi.org/10.1145/146628.140546.

14

Rosenburg, B. "Low-synchronization translation lookaside buffer consistency in large-scale shared-memory multiprocessors." ACM SIGOPS Operating Systems Review 23, no. 5 (November 1989): 137–46. http://dx.doi.org/10.1145/74851.74864.

15

REZA, SAJJID, and GREGORY T. BYRD. "REDUCING MIGRATION-INDUCED MISSES IN AN OVER-SUBSCRIBED MULTIPROCESSOR SYSTEM." Parallel Processing Letters 23, no. 01 (March 2013): 1350006. http://dx.doi.org/10.1142/s0129626413500060.

Abstract:
In a large multiprocessor server platform using multicore chips, the scheduler often migrates a thread or process, in order to achieve better load balancing or ensure fairness among competing scheduling entities. Each migration incurs a severe performance impact from the loss of cache and Translation Lookaside Buffer (TLB) footprints and subsequent higher cache misses and page walks. Such impact is likely to be more severe in virtualized environments, where high over-subscription of CPUs is very common for server consolidation workloads or virtual desktop infrastructure deployment, causing frequent migrations and context switches. We demonstrate the performance benefit of preserving a portion of L2 cache—in particular, MRU cache lines—and warming the destination L2 cache by prefetching those cache lines under different migration scenarios. We observed a 1.5-27% reduction in CPI (cycles per instruction) following a migration. We also study the effectiveness of preserving TLB entries over a context switch or migration.
16

Eswer, Varuna, and Sanket S. Naik Dessai. "Processor performance metrics analysis and implementation for MIPS using an open source OS." International Journal of Reconfigurable and Embedded Systems (IJRES) 10, no. 2 (July 1, 2021): 137. http://dx.doi.org/10.11591/ijres.v10.i2.pp137-148.

Abstract:
Processor efficiency is important in embedded systems. The efficiency of the processor depends on the L1 cache and translation lookaside buffer (TLB). It is necessary to understand L1 cache and TLB performance during varied load on the processor; hence this paper studies performance under varying load, together with the caches, on MIPS and an operating system (OS). The implementation method proposed in the paper counts the instruction executions for the respective cache and TLB management routines, and the events are measured using dedicated counters in software. Software counters are used because of the limitations of the hardware counters in the MIPS32. Twenty-seven metrics are considered for analysis and proper identification, and implemented for the performance measurement of the L1 cache and TLB on the MIPS32 processor. The generated data helps future research in compiler tuning, memory management design for the OS, analysing architectural issues, system benchmarking, scalability, address space analysis, studies of bus communication among processors and their workload-sharing characterisation, and kernel profiling.
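The software event counters this abstract describes can be sketched as follows. The decorator, counter names, and stub routines below are hypothetical stand-ins for kernel-level instrumentation, shown only to illustrate the counting technique.

```python
# Hypothetical sketch of software event counters for cache/TLB events:
# in the absence of hardware counters, each management routine
# increments a named counter on every execution.

from collections import Counter

perf_counters = Counter()

def count_event(name):
    """Decorator: count every execution of a cache/TLB management routine."""
    def wrap(fn):
        def inner(*args, **kwargs):
            perf_counters[name] += 1
            return fn(*args, **kwargs)
        return inner
    return wrap

@count_event("tlb_refill")
def tlb_refill(vpn):
    pass  # placeholder for the actual refill handler

@count_event("icache_flush")
def icache_flush():
    pass  # placeholder for the actual flush routine
```

Reading out `perf_counters` after a workload run yields per-event histograms of the kind the paper derives its twenty-seven metrics from.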
17

Wang, Baokang. "Design and Implementation of Cache Memory with Dual Unit Tile/Line Accessibility." Mathematical Problems in Engineering 2019 (April 1, 2019): 1–12. http://dx.doi.org/10.1155/2019/9601961.

Abstract:
In recent years, the increasing disparity between the data access speed of cache and processing speeds of processors has caused a major bottleneck in achieving high-performance 2-dimensional (2D) data processing, such as that in scientific computing and image processing. To solve this problem, this paper proposes new dual unit tile/line access cache memory based on a hierarchical hybrid Z-ordering data layout and multibank cache organization supporting skewed storage schemes. The proposed layout improves 2D data locality and reduces L1 cache misses and Translation Lookaside Buffer (TLB) misses efficiently and it is transformed from conventional raster layout by a simple hardware-based address translation unit. In addition, we proposed an aligned tile set replacement algorithm (ATSRA) for reduction of the hardware overhead in the tag memory of the proposed cache. Simulation results using Matrix Multiplication (MM) illustrated that the proposed cache with parallel unit tile/line accessibility can reduce both the L1 cache and TLB misses considerably as compared with conventional raster layout and Z-Morton order layout. The number of parallel load instructions for parallel unit tile/line access was reduced to only about one-fourth of the conventional load instruction. The execution time for parallel load instruction was reduced to about one-third of that required for conventional load instruction. By using 40 nm Complementary Metal-Oxide-Semiconductor (CMOS) technology, we combined the proposed cache with a SIMD-based data path and designed a 5 × 5 mm2 Large-Scale Integration (LSI) chip. The entire hardware overhead of the proposed ATSRA-cache was reduced to only 105% of that required for a conventional cache by using the ATSRA method.
18

Di, Bang, Daokun Hu, Zhen Xie, Jianhua Sun, Hao Chen, Jinkui Ren, and Dong Li. "TLB-pilot: Mitigating TLB Contention Attack on GPUs with Microarchitecture-Aware Scheduling." ACM Transactions on Architecture and Code Optimization 19, no. 1 (March 31, 2022): 1–23. http://dx.doi.org/10.1145/3491218.

Abstract:
Co-running GPU kernels on a single GPU can provide high system throughput and improve hardware utilization, but this raises concerns about application security. We reveal that a translation lookaside buffer (TLB) attack, one of the common attacks on CPUs, can happen on a GPU when multiple GPU kernels co-run. We investigate conditions or principles under which a TLB attack can take effect, including the awareness of GPU TLB microarchitecture, being lightweight, and bypassing existing software and hardware mechanisms. This TLB-based attack can be leveraged to conduct Denial-of-Service (or Degradation-of-Service) attacks. Furthermore, we propose a solution to mitigate TLB attacks. In particular, based on the microarchitecture properties of GPU, we introduce a software-based system, TLB-pilot, that binds thread blocks of different kernels to different groups of streaming multiprocessors by considering hardware isolation of last-level TLBs and the application’s resource requirement. TLB-pilot employs lightweight online profiling to collect kernel information before kernel launches. By coordinating software- and hardware-based scheduling and employing a kernel splitting scheme to reduce load imbalance, TLB-pilot effectively mitigates TLB attacks. The result shows that when under TLB attack, TLB-pilot mitigates the attack and provides on average 56.2% and 60.6% improvement in average normalized turnaround times and overall system throughput, respectively, compared to the traditional Multi-Process Service based co-running solution. When under TLB attack, TLB-pilot also provides up to 47.3% and 64.3% improvement (41% and 42.9% on average) in average normalized turnaround times and overall system throughput, respectively, compared to a state-of-the-art co-running solution for efficient scheduling of thread blocks.
19

Naik Dessai, Sanket Suresh, and Varuna Eswer. "Embedded Software Testing to Determine BCM5354 Processor Performance." International Journal of Software Engineering and Technologies (IJSET) 1, no. 3 (December 1, 2016): 121. http://dx.doi.org/10.11591/ijset.v1i3.4577.

Abstract:
Efficiency of a processor is a critical factor for an embedded system. One of the deciding factors for efficiency is the functioning of the L1 cache and Translation Lookaside Buffer (TLB). Certain processors have the L1 cache and TLB managed by the operating system; MIPS32 is one such processor. The performance of the L1 cache and TLB necessitates a detailed study to understand their management during varied load on the processor. This paper presents an implementation of an embedded testing procedure to analyse the performance of MIPS32 processor L1 cache and TLB management by the operating system (OS). The implementation proposed for embedded testing in the paper considers the counting of the respective cache and TLB management instruction executions, an event that is measurable with the use of dedicated counters. The lack of hardware counters in the MIPS32 processor results in the usage of software-based event counters that are defined in the kernel. This paper implements an embedded testbed with a subset of MIPS32 processor performance measurement metrics using software-based counters. Techniques were developed to overcome the challenges posed by the kernel source code. To facilitate better understanding of the testbed implementation procedure of the software-based processor performance counters, use-case analysis diagrams, flow charts, screen shots, and knowledge nuggets are supplemented along with histograms of the cache and TLB event data generated by the proposed implementation. In this testbed twenty-seven metrics have been identified and implemented to provide data related to the events of the L1 cache and TLB on the MIPS32 processor. The generated data can be used in tuning of compilers, OS memory management design, system benchmarking, scalability, analysing architectural issues, address space analysis, understanding bus communication, kernel profiling, and workload characterisation.
20

Eswer, Varuna, and Sanket Suresh Naik Dessai. "Embedded Software Engineering Approach to Implement BCM5354 Processor Performance." International Journal of Software Engineering and Technologies (IJSET) 1, no. 1 (April 1, 2016): 41. http://dx.doi.org/10.11591/ijset.v1i1.4568.

Abstract:
Efficiency of a processor is a critical factor for an embedded system. One of the deciding factors for efficiency is the functioning of the L1 cache and Translation Lookaside Buffer (TLB). Certain processors have the L1 cache and TLB managed by the operating system; MIPS32 is one such processor. The performance of the L1 cache and TLB necessitates a detailed study to understand their management during varied load on the processor. This paper presents an implementation to analyse the performance of MIPS32 processor L1 cache and TLB management by the operating system (OS) using a software engineering approach. Software engineering provides better clarity for system development and its performance analysis: if, at the initial stage, the requirement analysis for the performance measurement is sorted out clearly, the methodologies for the implementation become very economical, without any ambiguity. In this paper an implementation is proposed to determine the processor performance metrics using a software engineering approach, counting the executions of the respective cache and TLB management instructions, events that are measurable with the use of dedicated counters. The lack of hardware counters in the MIPS32 processor results in the usage of software-based event counters that are defined in the kernel. This paper implements a subset of MIPS32 processor performance measurement metrics using software-based counters. Techniques were developed to overcome the challenges posed by the kernel source code. To facilitate better understanding of the implementation procedure of the software-based processor performance counters, use-case analysis diagrams, flow charts, screen shots, and knowledge nuggets are supplemented along with histograms of the cache and TLB event data generated by the proposed implementation. Twenty-seven metrics have been identified and implemented to provide data related to the events of the L1 cache and TLB on the MIPS32 processor. The generated data can be used in tuning of compilers, OS memory management design, system benchmarking, scalability, analysing architectural issues, address space analysis, understanding bus communication, kernel profiling, and workload characterisation.
21

Zhou, Yufeng, Alan L. Cox, Sandhya Dwarkadas, and Xiaowan Dong. "The Impact of Page Size and Microarchitecture on Instruction Address Translation Overhead." ACM Transactions on Architecture and Code Optimization, May 27, 2023. http://dx.doi.org/10.1145/3600089.

Abstract:
As the volume of data processed by applications has increased, considerable attention has been paid to data address translation overheads, leading to the widespread use of larger page sizes (“superpages”) and multi-level translation lookaside buffers (TLBs). However, far less attention has been paid to instruction address translation and its relation to TLB and pipeline structure. In prior work, we quantified the impact of using code superpages on a variety of widely used applications, ranging from compilers to web user-interface frameworks, and the impact of sharing page table pages for executables and shared libraries. Within this paper, we augment those results by first uncovering the effects that microarchitectural differences between Intel Skylake and AMD Zen+, particularly, their different TLB organizations, have on instruction address translation overhead. This analysis provides some key insights into the microarchitectural design decisions that impact the cost of instruction address translation. First, a lower-level (L2) TLB that has both instruction and data mappings competing for space within the same structure allows better overall performance and utilization when using code superpages. Code superpages not only reduce instruction address translation overhead, but also indirectly reduce data address translation overhead. In fact, for a few applications, the use of just a few code superpages has a larger impact on overall performance than the use of a much larger number of data superpages. Second, a level 1 (L1) TLB with separate structures for different page sizes may require careful tuning of the superpage promotion policy for code, and a correspondingly suboptimal utilization of the L2 TLB. In particular, increasing the number of superpages when the size of the L1 superpage structure is small, may result in more L1 TLB misses for some applications. 
Moreover, on some microarchitectures, the cost of these misses can be highly variable, because replacement is delayed until all of the in-flight instructions mapped by the victim entry are retired. Hence, more superpage promotions can result in a performance regression. Finally, our findings also make a case for first-class OS support for superpages on ordinary files containing executables and shared libraries, as well as a more aggressive superpage policy for code.
22

Yan, Jing, Yujuan Tan, Zhulin Ma, Jingcheng Liu, Xianzhang Chen, and Chengliang Wang. "LPE: Locality-based Dead Prediction in Exclusive TLB for Large Coverage." Journal of Circuits, Systems and Computers, June 28, 2021, 2150292. http://dx.doi.org/10.1142/s0218126621502923.

Abstract:
Translation lookaside buffer (TLB) is critical to modern multi-level memory systems’ performance. However, due to the limited size of the TLB itself, its address coverage is limited. Adopting a two-level exclusive TLB hierarchy can increase the coverage [M. Swanson, L. Stoller and J. Carter, Increasing TLB reach using superpages backed by shadow memory, 25th Annual Int. Symp. Computer Architecture (1998); H.P. Chang, T. Heo, J. Jeong and J. Huh Hybrid TLB coalescing: Improving TLB translation coverage under diverse fragmented memory allocations, ACM SIGARCH Comput. Arch. News 45 (2017) 444–456] to improve memory performance. However, after analyzing the existing two-level exclusive TLBs, we find that a large number of “dead” entries (they will have no further use) exist in the last-level TLB (LLT) for a long time, which occupy much cache space and result in low TLB hit-rate. Based on this observation, we first propose exploiting temporal and spatial locality to predict and identify dead entries in the exclusive LLT and remove them as soon as possible to leave room for more valid data to increase the TLB hit rates. Extensive experiments show that our method increases the average hit rate by 8.67%, to a maximum of 19.95%, and reduces total latency by an average of 9.82%, up to 24.41%.
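The idea of predicting dead entries from temporal locality can be sketched as a toy last-level TLB: entries untouched for longer than a threshold are predicted dead and become preferred eviction victims. The threshold rule and the parameters below are illustrative assumptions, not the paper's LPE algorithm.

```python
# Minimal sketch (assumed design) of locality-based dead-entry prediction
# in a last-level TLB: entries not touched for `dead_after` accesses are
# predicted dead and evicted first, freeing room for live translations.

class LastLevelTLB:
    def __init__(self, capacity=4, dead_after=8):
        self.capacity = capacity
        self.dead_after = dead_after
        self.entries = {}   # vpn -> pfn
        self.last_use = {}  # vpn -> logical time of last access
        self.clock = 0

    def lookup(self, vpn):
        self.clock += 1
        if vpn in self.entries:
            self.last_use[vpn] = self.clock
            return self.entries[vpn]
        return None

    def predicted_dead(self):
        # Entries idle for dead_after accesses are predicted dead
        return [v for v, t in self.last_use.items()
                if self.clock - t >= self.dead_after]

    def insert(self, vpn, pfn):
        self.clock += 1
        if len(self.entries) >= self.capacity:
            dead = self.predicted_dead()
            # Prefer a predicted-dead victim; otherwise fall back to LRU
            victim = dead[0] if dead else min(self.last_use, key=self.last_use.get)
            del self.entries[victim]
            del self.last_use[victim]
        self.entries[vpn] = pfn
        self.last_use[vpn] = self.clock
```

Removing predicted-dead entries early is what raises the effective coverage: live translations are kept in the limited capacity instead of long-idle ones.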
23

Stolz, Florian, Jan Philipp Thoma, Pascal Sasdrich, and Tim Güneysu. "Risky Translations: Securing TLBs against Timing Side Channels." IACR Transactions on Cryptographic Hardware and Embedded Systems, November 29, 2022, 1–31. http://dx.doi.org/10.46586/tches.v2023.i1.1-31.

Abstract:
Microarchitectural side-channel vulnerabilities in modern processors are known to be a powerful attack vector that can be utilized to bypass common security boundaries like memory isolation. As shown by recent variants of transient execution attacks related to Spectre and Meltdown, those side channels allow leaking data from the microarchitecture to the observable architectural state. The vast majority of attacks currently build on the cache-timing side channel, since it is easy to exploit and provides a reliable, fine-grained communication channel. Therefore, many proposals for side-channel secure cache architectures have been made. However, caches are not the only source of side-channel leakage in modern processors and mitigating the cache side channel will inevitably lead to attacks exploiting other side channels. In this work, we focus on defeating side-channel attacks based on page translations. It has been shown that the Translation Lookaside Buffer (TLB) can be exploited in a very similar fashion to caches. Since the main caches and the TLB share many features in their architectural design, the question arises whether existing countermeasures against cache-timing attacks can be used to secure the TLB. We analyze state-of-the-art proposals for side-channel secure cache architectures and investigate their applicability to TLB side channels. We find that those cache countermeasures are not directly applicable to TLBs, and propose TLBcoat, a new side-channel secure TLB architecture. We provide evidence of TLB side-channel leakage on RISC-V-based Linux systems, and demonstrate that TLBcoat prevents this leakage. We implement TLBcoat using the gem5 simulator and evaluate its performance using the PARSEC benchmark suite.
24

Kumar, Krishan, and Renu. "A Multithreading Based Enhanced Process Scheduling Technique for Heterogeneous Distributed Environment." International Journal of Scientific Research in Computer Science, Engineering and Information Technology, October 10, 2021, 125–29. http://dx.doi.org/10.32628/cseit217543.

Abstract:
Multithreading is the ability of a central processing unit (CPU), or a single core within a multi-core processor, to execute multiple processes or threads concurrently, appropriately supported by the operating system. This approach differs from multiprocessing: with multithreading, processes and threads share the resources of a single core or of multiple cores, namely the computing units, CPU caches, and translation lookaside buffer (TLB). Whereas multiprocessing systems include multiple complete processing units, multithreading aims to increase utilization of a single core by using thread-level as well as instruction-level parallelism. The objective of the research is to increase the efficiency of scheduling dependent tasks using enhanced multithreading. Gang scheduling of parallel implicit-deadline periodic task systems upon identical multiprocessor platforms is considered. In this scheduling problem, parallel tasks use several processors simultaneously. The first algorithm is based on linear programming and is the first to be proved optimal for the considered gang scheduling problem. Furthermore, it runs in polynomial time for a fixed number m of processors, and an efficient implementation is fully detailed. The second algorithm is an approximation algorithm based on a fixed-priority rule that is competitive under resource augmentation analysis in order to compute an optimal schedule pattern. Precisely, its speedup factor is bounded by (2 − 1/m). Both algorithms are also evaluated through intensive numerical experiments. In our research we have enhanced the capability of gang scheduling by integrating a multi-core processor and cache, and simulated the performance in MATLAB.
25

Ulfat Altaf and Deepinder Kaur. "Enhancement of Resource Scheduling on Gui Based Operating System." International Journal of Scientific Research in Computer Science, Engineering and Information Technology, January 1, 2022, 28–31. http://dx.doi.org/10.32628/cseit2176111.

Abstract:
In computer architecture, multithreading is the ability of a central processing unit (CPU), or a single core within a multi-core processor, to execute multiple processes or threads concurrently, appropriately supported by the operating system. This approach differs from multiprocessing: with multithreading, processes and threads share the resources of a single core or of multiple cores, namely the computing units, CPU caches, and translation lookaside buffer (TLB). Whereas multiprocessing systems include multiple complete processing units, multithreading aims to increase utilization of a single core by using thread-level as well as instruction-level parallelism. The objective of the research is to increase the efficiency of scheduling dependent tasks using enhanced multithreading. Gang scheduling of parallel implicit-deadline periodic task systems upon identical multiprocessor platforms is considered. In this scheduling problem, parallel tasks use several processors simultaneously. The first algorithm is based on linear programming and is the first to be proved optimal for the considered gang scheduling problem. Furthermore, it runs in polynomial time for a fixed number m of processors, and an efficient implementation is fully detailed. The second algorithm is an approximation algorithm based on a fixed-priority rule that is competitive under resource augmentation analysis in order to compute an optimal schedule pattern. Precisely, its speedup factor is bounded by (2 − 1/m). Both algorithms are also evaluated through intensive numerical experiments. In our research we have enhanced the capability of gang scheduling by integrating a multi-core processor and cache, and simulated the performance in MATLAB.