Journal articles on the topic "Translation Lookaside Buffers"

Follow this link to see other types of publications on the topic: Translation Lookaside Buffers.

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 25 journal articles for your research on the topic "Translation Lookaside Buffers".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Sagahyroon, Assim, and Ahmed H. Mohamed. "Resizable Translation Storage Buffers". Journal of Circuits, Systems and Computers 15, no. 02 (April 2006): 169–81. http://dx.doi.org/10.1142/s0218126606003027.

Abstract
A translation lookaside buffer (TLB) is a high-speed associative cache of recently used virtual-to-physical address translations. The operating system can deal with a TLB miss through software trap handling. A possible technique for such software trap handling uses translation storage buffers (TSBs). A TSB is an operating system data structure that caches the most recent address translations. On a TLB miss, the TLB trap handler searches the TSB for the missing address translation. If the search generates a hit, the address mapping is added to the TLB. In current implementations, TSBs are organized in a variety of ways: a global TSB for all CPUs within the system, a per-processor TSB, or a per-process TSB. All of the aforementioned techniques have various limitations that will be addressed in this paper. In this work, we propose a new framework for TSB generation and allocation. In the proposed approach, a policy of resizing and dynamically allocating TSBs for the different processes is used. This dynamic policy allows the system to adapt to different workloads while achieving a low TSB context invalidation overhead. In addition, with the ability to assign a separate TSB to each process, thrashing is practically eliminated. Implementation and experimental results of the proposed scheme are reported. Comparisons against existing implementations confirmed the expected performance enhancement.
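The miss path this abstract describes (TLB miss → trap → TSB probe → fall back to a page-table walk) can be sketched as a toy model. Everything below — the names, the direct-mapped organization, the sizes — is an illustrative assumption, not the paper's implementation:

```python
class TSB:
    """Direct-mapped software cache of recent virtual-to-physical translations."""

    def __init__(self, num_entries=512):
        self.size = num_entries
        self.entries = [None] * num_entries  # each slot: (vpn, pfn) or None

    def lookup(self, vpn):
        slot = self.entries[vpn % self.size]
        if slot is not None and slot[0] == vpn:
            return slot[1]      # TSB hit: return the physical frame number
        return None             # TSB miss: caller must walk the page table

    def insert(self, vpn, pfn):
        self.entries[vpn % self.size] = (vpn, pfn)


def handle_tlb_miss(vpn, tsb, page_table, tlb):
    """Trap handler sketch: probe the TSB first, fall back to the page table."""
    pfn = tsb.lookup(vpn)
    if pfn is None:
        pfn = page_table[vpn]   # slow path: full page-table walk
        tsb.insert(vpn, pfn)    # cache the translation for the next miss
    tlb[vpn] = pfn              # refill the hardware TLB
    return pfn
```

A per-process policy, as proposed in the paper, would amount to giving each process its own `TSB` instance (possibly resized over time) rather than sharing one table system-wide.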
2

Li, Yang, Rami Melhem, and Alex K. Jones. "Leveraging Sharing in Second Level Translation-Lookaside Buffers for Chip Multiprocessors". IEEE Computer Architecture Letters 11, no. 2 (July 2012): 49–52. http://dx.doi.org/10.1109/l-ca.2011.35.

3

Haigh, Jonathan R., and Lawrence T. Clark. "High performance set associative translation lookaside buffers for low power microprocessors". Integration 41, no. 4 (July 2008): 509–23. http://dx.doi.org/10.1016/j.vlsi.2007.11.003.

4

Chang, Xiaotao, Hubertus Franke, Yi Ge, Tao Liu, Kun Wang, Jimi Xenidis, Fei Chen, and Yu Zhang. "Improving virtualization in the presence of software managed translation lookaside buffers". ACM SIGARCH Computer Architecture News 41, no. 3 (June 26, 2013): 120–29. http://dx.doi.org/10.1145/2508148.2485933.

5

Jaleel, A., and B. Jacob. "In-line interrupt handling and lock-up free translation lookaside buffers (TLBs)". IEEE Transactions on Computers 55, no. 5 (May 2006): 559–74. http://dx.doi.org/10.1109/tc.2006.77.

6

Klimiankou, Y. I. "Translation lookaside buffer management". System Analysis and Applied Information Science, no. 4 (December 30, 2019): 20–24. http://dx.doi.org/10.21122/2309-4923-2019-4-20-24.

Abstract
This paper focuses on Translation Lookaside Buffer (TLB) management as part of memory management. The TLB is an associative cache in advanced processors that reduces the overhead of virtual-to-physical address translations. We consider challenges related to the design of the TLB-management subsystem of the OS kernel, using the IA-32 platform as an example, and propose a simple model of a complete and consistent TLB-management policy. This model can be used as a foundation for the design and verification of memory-management subsystems.
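As a rough illustration of the kind of invariant such a policy must maintain (my sketch, not the paper's model): every page-table update that removes or narrows a mapping must also invalidate any stale copy cached in the TLB, while a TLB miss may lazily refill from the page table:

```python
class KernelMM:
    """Toy memory-management subsystem keeping a simulated TLB consistent
    with its page table. All names are invented for illustration."""

    def __init__(self):
        self.page_table = {}   # vpn -> pfn (the authoritative mapping)
        self.tlb = {}          # cached subset of page_table

    def map_page(self, vpn, pfn):
        # Adding a mapping is safe without a flush: a later TLB miss
        # simply refills from the updated page table.
        self.page_table[vpn] = pfn

    def unmap_page(self, vpn):
        del self.page_table[vpn]
        # Required for consistency: flush the stale TLB entry, otherwise
        # translate() could keep returning a frame that is no longer mapped.
        self.tlb.pop(vpn, None)

    def translate(self, vpn):
        if vpn in self.tlb:
            return self.tlb[vpn]           # TLB hit
        pfn = self.page_table[vpn]         # miss: raises KeyError on a true fault
        self.tlb[vpn] = pfn                # refill
        return pfn
```

On real hardware the flush in `unmap_page` corresponds to an invalidation instruction (e.g. `INVLPG` on IA-32); forgetting it is exactly the inconsistency a complete TLB-management policy rules out.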
7

Teller, P. J. "Translation-lookaside buffer consistency". Computer 23, no. 6 (June 1990): 26–36. http://dx.doi.org/10.1109/2.55498.

8

Tamura, L. R., T. S. Yang, D. E. Wingard, M. A. Horowitz, and B. A. Wooley. "A 4-ns BiCMOS translation-lookaside buffer". IEEE Journal of Solid-State Circuits 25, no. 5 (1990): 1093–101. http://dx.doi.org/10.1109/4.62129.

9

Lee, Jung-Hoon, Seh-Woong Jeong, Shin-Dug Kim, and Charles Weems. "A banked-promotion translation lookaside buffer system". Journal of Systems Architecture 47, no. 14-15 (August 2002): 1065–78. http://dx.doi.org/10.1016/s1383-7621(02)00057-7.

10

Black, D. L., R. F. Rashid, D. B. Golub, and C. R. Hill. "Translation lookaside buffer consistency: a software approach". ACM SIGARCH Computer Architecture News 17, no. 2 (April 1989): 113–22. http://dx.doi.org/10.1145/68182.68193.

11

Cruz, Eduardo H. M., Matthias Diener, and Philippe O. A. Navaux. "Communication-aware thread mapping using the translation lookaside buffer". Concurrency and Computation: Practice and Experience 27, no. 17 (April 29, 2015): 4970–92. http://dx.doi.org/10.1002/cpe.3487.

12

Stenin, Vladimir, Artem Antonyuk, Yuri Katunin, and Pavel Stepanov. "Translation Lookaside Buffer on the 65-nm STG DICE Hardened Elements". Telfor Journal 10, no. 1 (2018): 50–55. http://dx.doi.org/10.5937/telfor1801050s.

13

Farrens, Matthew, Arvin Park, Rob Fanfelle, Pius Ng, and Gary Tyson. "A partitioned translation lookaside buffer approach to reducing address bandwidth (abstract)". ACM SIGARCH Computer Architecture News 20, no. 2 (May 1992): 435. http://dx.doi.org/10.1145/146628.140546.

14

Rosenburg, B. "Low-synchronization translation lookaside buffer consistency in large-scale shared-memory multiprocessors". ACM SIGOPS Operating Systems Review 23, no. 5 (November 1989): 137–46. http://dx.doi.org/10.1145/74851.74864.

15

Reza, Sajjid, and Gregory T. Byrd. "Reducing Migration-Induced Misses in an Over-Subscribed Multiprocessor System". Parallel Processing Letters 23, no. 01 (March 2013): 1350006. http://dx.doi.org/10.1142/s0129626413500060.

Abstract
In a large multiprocessor server platform using multicore chips, the scheduler often migrates a thread or process, in order to achieve better load balancing or ensure fairness among competing scheduling entities. Each migration incurs a severe performance impact from the loss of cache and Translation Lookaside Buffer (TLB) footprints and subsequent higher cache misses and page walks. Such impact is likely to be more severe in virtualized environments, where high over-subscription of CPUs is very common for server consolidation workloads or virtual desktop infrastructure deployment, causing frequent migrations and context switches. We demonstrate the performance benefit of preserving a portion of L2 cache—in particular, MRU cache lines—and warming the destination L2 cache by prefetching those cache lines under different migration scenarios. We observed a 1.5-27% reduction in CPI (cycles per instruction) following a migration. We also study the effectiveness of preserving TLB entries over a context switch or migration.
16

Eswer, Varuna, and Sanket S. Naik Dessai. "Processor performance metrics analysis and implementation for MIPS using an open source OS". International Journal of Reconfigurable and Embedded Systems (IJRES) 10, no. 2 (July 1, 2021): 137. http://dx.doi.org/10.11591/ijres.v10.i2.pp137-148.

Abstract
Processor efficiency is important in an embedded system, and the efficiency of the processor depends on the L1 cache and translation lookaside buffer (TLB). It is necessary to understand L1 cache and TLB performance during varied load on the processor; hence this paper studies cache and TLB performance under varying load on MIPS with an operating system (OS). The proposed implementation counts the instruction executions for the respective cache and TLB management operations, and the events are measured using dedicated counters in software. Software counters are used because of the limitations of hardware counters in the MIPS32. Twenty-seven metrics are identified and implemented for the performance measurement of the L1 cache and TLB on the MIPS32 processor. The generated data supports future research in compiler tuning, OS memory-management design, analysis of architectural issues, system benchmarking, scalability, address-space analysis, studies of bus communication, workload characterisation, and kernel profiling.
17

Wang, Baokang. "Design and Implementation of Cache Memory with Dual Unit Tile/Line Accessibility". Mathematical Problems in Engineering 2019 (April 1, 2019): 1–12. http://dx.doi.org/10.1155/2019/9601961.

Abstract
In recent years, the increasing disparity between the data access speed of caches and the processing speeds of processors has caused a major bottleneck in achieving high-performance 2-dimensional (2D) data processing, such as that in scientific computing and image processing. To solve this problem, this paper proposes a new dual unit tile/line access cache memory based on a hierarchical hybrid Z-ordering data layout and a multibank cache organization supporting skewed storage schemes. The proposed layout improves 2D data locality and efficiently reduces L1 cache misses and Translation Lookaside Buffer (TLB) misses; it is obtained from the conventional raster layout by a simple hardware-based address translation unit. In addition, we propose an aligned tile set replacement algorithm (ATSRA) to reduce the hardware overhead in the tag memory of the proposed cache. Simulation results using Matrix Multiplication (MM) illustrate that the proposed cache with parallel unit tile/line accessibility can reduce both L1 cache and TLB misses considerably compared with the conventional raster layout and the Z-Morton order layout. The number of parallel load instructions for parallel unit tile/line access was reduced to only about one-fourth of the conventional load instructions. The execution time for parallel load instructions was reduced to about one-third of that required for conventional load instructions. Using 40 nm Complementary Metal-Oxide-Semiconductor (CMOS) technology, we combined the proposed cache with a SIMD-based data path and designed a 5 × 5 mm2 Large-Scale Integration (LSI) chip. The entire hardware overhead of the proposed ATSRA-cache was reduced to only 105% of that required for a conventional cache by using the ATSRA method.
18

Di, Bang, Daokun Hu, Zhen Xie, Jianhua Sun, Hao Chen, Jinkui Ren, and Dong Li. "TLB-pilot: Mitigating TLB Contention Attack on GPUs with Microarchitecture-Aware Scheduling". ACM Transactions on Architecture and Code Optimization 19, no. 1 (March 31, 2022): 1–23. http://dx.doi.org/10.1145/3491218.

Abstract
Co-running GPU kernels on a single GPU can provide high system throughput and improve hardware utilization, but this raises concerns about application security. We reveal that the translation lookaside buffer (TLB) attack, one of the common attacks on CPUs, can happen on GPUs when multiple GPU kernels co-run. We investigate the conditions and principles under which a TLB attack can take effect, including awareness of the GPU TLB microarchitecture, being lightweight, and bypassing existing software and hardware mechanisms. This TLB-based attack can be leveraged to conduct Denial-of-Service (or Degradation-of-Service) attacks. Furthermore, we propose a solution to mitigate TLB attacks. In particular, based on the microarchitectural properties of the GPU, we introduce a software-based system, TLB-pilot, that binds thread blocks of different kernels to different groups of streaming multiprocessors by considering hardware isolation of last-level TLBs and the application's resource requirements. TLB-pilot employs lightweight online profiling to collect kernel information before kernel launches. By coordinating software- and hardware-based scheduling and employing a kernel splitting scheme to reduce load imbalance, TLB-pilot effectively mitigates TLB attacks. The results show that when under TLB attack, TLB-pilot mitigates the attack and provides on average 56.2% and 60.6% improvement in average normalized turnaround times and overall system throughput, respectively, compared to the traditional Multi-Process Service based co-running solution. When under TLB attack, TLB-pilot also provides up to 47.3% and 64.3% improvement (41% and 42.9% on average) in average normalized turnaround times and overall system throughput, respectively, compared to a state-of-the-art co-running solution for efficient scheduling of thread blocks.
19

Naik Dessai, Sanket Suresh, and Varuna Eswer. "Embedded Software Testing to Determine BCM5354 Processor Performance". International Journal of Software Engineering and Technologies (IJSET) 1, no. 3 (December 1, 2016): 121. http://dx.doi.org/10.11591/ijset.v1i3.4577.

Abstract
Efficiency of a processor is a critical factor for an embedded system. One of the deciding factors for efficiency is the functioning of the L1 cache and Translation Lookaside Buffer (TLB). Certain processors have the L1 cache and TLB managed by the operating system; MIPS32 is one such processor. The performance of the L1 cache and TLB necessitates a detailed study to understand its management during varied load on the processor. This paper presents an implementation of an embedded testing procedure to analyse the performance of the MIPS32 processor's L1 cache and TLB management by the operating system (OS). The implementation proposed for embedded testing counts executions of the respective cache and TLB management instructions, events that are measurable with the use of dedicated counters. The lack of hardware counters in the MIPS32 processor results in the usage of software-based event counters defined in the kernel. This paper implements an embedded testbed with a subset of MIPS32 processor performance-measurement metrics using software-based counters. Techniques were developed to overcome the challenges posed by the kernel source code. To facilitate better understanding of the testbed implementation procedure for the software-based processor performance counters, use-case analysis diagrams, flow charts, screen shots, and knowledge nuggets are supplemented, along with histograms of the cache and TLB event data generated by the proposed implementation. In this testbed, twenty-seven metrics have been identified and implemented to provide data related to the events of the L1 cache and TLB on the MIPS32 processor. The generated data can be used in compiler tuning, OS memory-management design, system benchmarking, scalability, analysing architectural issues, address-space analysis, understanding bus communication, kernel profiling, and workload characterisation.
20

Eswer, Varuna, and Sanket Suresh Naik Dessai. "Embedded Software Engineering Approach to Implement BCM5354 Processor Performance". International Journal of Software Engineering and Technologies (IJSET) 1, no. 1 (April 1, 2016): 41. http://dx.doi.org/10.11591/ijset.v1i1.4568.

Abstract
Efficiency of a processor is a critical factor for an embedded system. One of the deciding factors for efficiency is the functioning of the L1 cache and Translation Lookaside Buffer (TLB). Certain processors have the L1 cache and TLB managed by the operating system; MIPS32 is one such processor. The performance of the L1 cache and TLB necessitates a detailed study to understand its management during varied load on the processor. This paper presents an implementation to analyse the performance of the MIPS32 processor's L1 cache and TLB management by the operating system (OS) using a software-engineering approach. Software engineering provides better clarity for system development and its performance analysis: when the requirement analysis for the performance measurement is sorted out clearly at the initial stage, the implementation methodology becomes economical and unambiguous. In this paper an implementation is proposed to determine the processor performance metrics using a software-engineering approach, counting executions of the respective cache and TLB management instructions, events that are measurable with the use of dedicated counters. The lack of hardware counters in the MIPS32 processor results in the usage of software-based event counters defined in the kernel. This paper implements a subset of MIPS32 processor performance-measurement metrics using software-based counters. Techniques were developed to overcome the challenges posed by the kernel source code. To facilitate better understanding of the implementation procedure for the software-based processor performance counters, use-case analysis diagrams, flow charts, screen shots, and knowledge nuggets are supplemented, along with histograms of the cache and TLB event data generated by the proposed implementation. Twenty-seven metrics have been identified and implemented to provide data related to the events of the L1 cache and TLB on the MIPS32 processor. The generated data can be used in compiler tuning, OS memory-management design, system benchmarking, scalability, analysing architectural issues, address-space analysis, understanding bus communication, kernel profiling, and workload characterisation.
21

Zhou, Yufeng, Alan L. Cox, Sandhya Dwarkadas, and Xiaowan Dong. "The Impact of Page Size and Microarchitecture on Instruction Address Translation Overhead". ACM Transactions on Architecture and Code Optimization, May 27, 2023. http://dx.doi.org/10.1145/3600089.

Abstract
As the volume of data processed by applications has increased, considerable attention has been paid to data address translation overheads, leading to the widespread use of larger page sizes ("superpages") and multi-level translation lookaside buffers (TLBs). However, far less attention has been paid to instruction address translation and its relation to TLB and pipeline structure. In prior work, we quantified the impact of using code superpages on a variety of widely used applications, ranging from compilers to web user-interface frameworks, and the impact of sharing page table pages for executables and shared libraries. Within this paper, we augment those results by first uncovering the effects that microarchitectural differences between Intel Skylake and AMD Zen+, particularly their different TLB organizations, have on instruction address translation overhead. This analysis provides some key insights into the microarchitectural design decisions that impact the cost of instruction address translation. First, a lower-level (L2) TLB that has both instruction and data mappings competing for space within the same structure allows better overall performance and utilization when using code superpages. Code superpages not only reduce instruction address translation overhead, but also indirectly reduce data address translation overhead. In fact, for a few applications, the use of just a few code superpages has a larger impact on overall performance than the use of a much larger number of data superpages. Second, a level 1 (L1) TLB with separate structures for different page sizes may require careful tuning of the superpage promotion policy for code, and may lead to a correspondingly suboptimal utilization of the L2 TLB. In particular, increasing the number of superpages when the size of the L1 superpage structure is small may result in more L1 TLB misses for some applications. Moreover, on some microarchitectures, the cost of these misses can be highly variable, because replacement is delayed until all of the in-flight instructions mapped by the victim entry are retired. Hence, more superpage promotions can result in a performance regression. Finally, our findings also make a case for first-class OS support for superpages on ordinary files containing executables and shared libraries, as well as a more aggressive superpage policy for code.
22

Yan, Jing, Yujuan Tan, Zhulin Ma, Jingcheng Liu, Xianzhang Chen, and Chengliang Wang. "LPE: Locality-based Dead Prediction in Exclusive TLB for Large Coverage". Journal of Circuits, Systems and Computers, June 28, 2021, 2150292. http://dx.doi.org/10.1142/s0218126621502923.

Abstract
The translation lookaside buffer (TLB) is critical to the performance of modern multi-level memory systems. However, due to the limited size of the TLB itself, its address coverage is limited. Adopting a two-level exclusive TLB hierarchy can increase the coverage [M. Swanson, L. Stoller and J. Carter, Increasing TLB reach using superpages backed by shadow memory, 25th Annual Int. Symp. Computer Architecture (1998); H. P. Chang, T. Heo, J. Jeong and J. Huh, Hybrid TLB coalescing: Improving TLB translation coverage under diverse fragmented memory allocations, ACM SIGARCH Comput. Arch. News 45 (2017) 444–456] to improve memory performance. However, after analyzing existing two-level exclusive TLBs, we find that a large number of "dead" entries (entries that will have no further use) exist in the last-level TLB (LLT) for a long time, occupying much cache space and resulting in a low TLB hit rate. Based on this observation, we propose exploiting temporal and spatial locality to predict and identify dead entries in the exclusive LLT and remove them as soon as possible, leaving room for more valid data and increasing TLB hit rates. Extensive experiments show that our method increases the average hit rate by 8.67%, to a maximum of 19.95%, and reduces total latency by an average of 9.82%, up to 24.41%.
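The general idea — predict an LLT entry dead when temporal and spatial locality both suggest it will not be reused, and prefer it for eviction — can be caricatured in a few lines. The predictor below is a hypothetical stand-in, not the authors' LPE algorithm, and all parameters (`window`, `neighborhood`, the timestamp scheme) are invented:

```python
class DeadPredictLLT:
    """Toy last-level TLB that evicts entries predicted dead first."""

    def __init__(self, capacity=8, window=16, neighborhood=1):
        self.entries = {}            # vpn -> last-access "time"
        self.capacity = capacity
        self.window = window         # temporal locality horizon (in accesses)
        self.neighborhood = neighborhood  # spatial locality radius (in pages)
        self.clock = 0

    def _is_dead(self, vpn):
        # Predicted dead if vpn AND all its spatial neighbors have been
        # cold for at least `window` accesses.
        for v in range(vpn - self.neighborhood, vpn + self.neighborhood + 1):
            ts = self.entries.get(v)
            if ts is not None and self.clock - ts < self.window:
                return False
        return True

    def insert(self, vpn):
        self.clock += 1
        if vpn in self.entries:           # re-reference: refresh timestamp
            self.entries[vpn] = self.clock
            return
        if len(self.entries) >= self.capacity:
            # Prefer a predicted-dead victim; fall back to the oldest entry.
            victim = next((v for v in self.entries if self._is_dead(v)),
                          min(self.entries, key=self.entries.get))
            del self.entries[victim]
        self.entries[vpn] = self.clock
```

The point of the sketch is the eviction preference: a plain LRU would keep a long-dead entry until it aged out naturally, while the dead-prediction pass reclaims it as soon as the locality signals go cold.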
23

Stolz, Florian, Jan Philipp Thoma, Pascal Sasdrich, and Tim Güneysu. "Risky Translations: Securing TLBs against Timing Side Channels". IACR Transactions on Cryptographic Hardware and Embedded Systems, November 29, 2022, 1–31. http://dx.doi.org/10.46586/tches.v2023.i1.1-31.

Abstract
Microarchitectural side-channel vulnerabilities in modern processors are known to be a powerful attack vector that can be utilized to bypass common security boundaries like memory isolation. As shown by recent variants of transient execution attacks related to Spectre and Meltdown, those side channels allow data to be leaked from the microarchitecture to the observable architectural state. The vast majority of attacks currently build on the cache-timing side channel, since it is easy to exploit and provides a reliable, fine-grained communication channel. Therefore, many proposals for side-channel secure cache architectures have been made. However, caches are not the only source of side-channel leakage in modern processors, and mitigating the cache side channel will inevitably lead to attacks exploiting other side channels. In this work, we focus on defeating side-channel attacks based on page translations. It has been shown that the Translation Lookaside Buffer (TLB) can be exploited in a very similar fashion to caches. Since the main caches and the TLB share many features in their architectural design, the question arises whether existing countermeasures against cache-timing attacks can be used to secure the TLB. We analyze state-of-the-art proposals for side-channel secure cache architectures and investigate their applicability to TLB side channels. We find that those cache countermeasures are not directly applicable to TLBs, and propose TLBcoat, a new side-channel secure TLB architecture. We provide evidence of TLB side-channel leakage on RISC-V-based Linux systems, and demonstrate that TLBcoat prevents this leakage. We implement TLBcoat using the gem5 simulator and evaluate its performance using the PARSEC benchmark suite.
24

Kumar, Krishan, and Renu. "A Multithreading Based Enhanced Process Scheduling Technique for Heterogeneous Distributed Environment". International Journal of Scientific Research in Computer Science, Engineering and Information Technology, October 10, 2021, 125–29. http://dx.doi.org/10.32628/cseit217543.

Abstract
Multithreading is the ability of a central processing unit (CPU), or a single core within a multi-core processor, to execute multiple processes or threads concurrently, appropriately supported by the operating system. This approach differs from multiprocessing: with multithreading, processes and threads share the resources of one or more cores, namely the computing units, CPU caches, and translation lookaside buffer (TLB). Whereas multiprocessing systems include multiple complete processing units, multithreading aims to increase the utilization of a single core by using thread-level as well as instruction-level parallelism. The objective of this research is to increase the efficiency of scheduling dependent tasks using enhanced multithreading. Gang scheduling of parallel implicit-deadline periodic task systems upon identical multiprocessor platforms is considered; in this scheduling problem, parallel tasks use several processors simultaneously. The first algorithm is based on linear programming and is the first to be proved optimal for the considered gang scheduling problem. Furthermore, it runs in polynomial time for a fixed number m of processors, and an efficient implementation is fully detailed. The second algorithm is an approximation algorithm based on a fixed-priority rule that is competitive under resource-augmentation analysis in order to compute an optimal schedule pattern; precisely, its speedup factor is bounded by (2 − 1/m). Both algorithms are also evaluated through intensive numerical experiments. In our research we have enhanced the capability of gang scheduling by integrating a multi-core processor and cache, and we simulate the performance in MATLAB.
25

Ulfat Altaf and Deepinder Kaur. "Enhancement of Resource Scheduling on Gui Based Operating System". International Journal of Scientific Research in Computer Science, Engineering and Information Technology, January 1, 2022, 28–31. http://dx.doi.org/10.32628/cseit2176111.

Abstract
In computer architecture, multithreading is the ability of a central processing unit (CPU), or a single core within a multi-core processor, to execute multiple processes or threads concurrently, appropriately supported by the operating system. This approach differs from multiprocessing: with multithreading, processes and threads share the resources of one or more cores, namely the computing units, CPU caches, and translation lookaside buffer (TLB). Whereas multiprocessing systems include multiple complete processing units, multithreading aims to increase the utilization of a single core by using thread-level as well as instruction-level parallelism. The objective of this research is to increase the efficiency of scheduling dependent tasks using enhanced multithreading. Gang scheduling of parallel implicit-deadline periodic task systems upon identical multiprocessor platforms is considered; in this scheduling problem, parallel tasks use several processors simultaneously. The first algorithm is based on linear programming and is the first to be proved optimal for the considered gang scheduling problem. Furthermore, it runs in polynomial time for a fixed number m of processors, and an efficient implementation is fully detailed. The second algorithm is an approximation algorithm based on a fixed-priority rule that is competitive under resource-augmentation analysis in order to compute an optimal schedule pattern; precisely, its speedup factor is bounded by (2 − 1/m). Both algorithms are also evaluated through intensive numerical experiments. In our research we have enhanced the capability of gang scheduling by integrating a multi-core processor and cache, and we simulate the performance in MATLAB.
