Academic literature on the topic 'CPU throttling'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'CPU throttling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "CPU throttling"

1

Owahid, Abdullah A., and Eugene B. John. "Wasted dynamic power and correlation to instruction set architecture for CPU throttling." Journal of Supercomputing 75, no. 5 (October 11, 2018): 2436–54. http://dx.doi.org/10.1007/s11227-018-2637-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Benoit-Cattin, Théo, Delia Velasco-Montero, and Jorge Fernández-Berni. "Impact of Thermal Throttling on Long-Term Visual Inference in a CPU-Based Edge Device." Electronics 9, no. 12 (December 10, 2020): 2106. http://dx.doi.org/10.3390/electronics9122106.

Full text
Abstract:
Many application scenarios of edge visual inference, e.g., robotics or environmental monitoring, eventually require long periods of continuous operation. In such periods, the processor temperature plays a critical role in keeping a prescribed frame rate. Particularly, the heavy computational load of convolutional neural networks (CNNs) may lead to thermal throttling and hence performance degradation in a few seconds. In this paper, we report and analyze the long-term performance of 80 different cases resulting from running five CNN models on four software frameworks and two operating systems without and with active cooling. This comprehensive study was conducted on a low-cost edge platform, namely Raspberry Pi 4B (RPi4B), under stable indoor conditions. The results show that hysteresis-based active cooling prevented thermal throttling in all cases, thereby improving the throughput up to approximately 90% versus no cooling. Interestingly, the range of fan usage during active cooling varied from 33% to 65%. Given the impact of the fan on the power consumption of the system as a whole, these results stress the importance of a suitable selection of CNN model and software components. To assess the performance in outdoor applications, we integrated an external temperature sensor with the RPi4B and conducted a set of experiments with no active cooling in a wide interval of ambient temperature, ranging from 22 °C to 36 °C. Variations up to 27.7% were measured with respect to the maximum throughput achieved in that interval. This demonstrates that ambient temperature is a critical parameter in case active cooling cannot be applied.
APA, Harvard, Vancouver, ISO, and other styles
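To make the hysteresis idea in the abstract above concrete, here is a minimal Python sketch of hysteresis-based fan control on a Linux single-board computer. It is only an illustration: the sysfs path, the temperature thresholds, and the fan interface are assumptions, not values or code from the paper.

```python
# Minimal sketch of hysteresis-based fan control (illustrative values only).
import time

FAN_ON_C = 70.0    # turn the fan on at or above this temperature
FAN_OFF_C = 60.0   # turn it off again only at or below this temperature

def read_cpu_temp_c() -> float:
    # Linux thermal sysfs reports millidegrees Celsius.
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0

def set_fan(on: bool) -> None:
    # Placeholder: drive a GPIO pin or PWM fan controller here.
    print("fan", "ON" if on else "OFF")

def control_loop(period_s: float = 2.0) -> None:
    fan_on = False
    while True:
        temp = read_cpu_temp_c()
        if not fan_on and temp >= FAN_ON_C:
            fan_on = True
            set_fan(True)
        elif fan_on and temp <= FAN_OFF_C:
            fan_on = False
            set_fan(False)
        time.sleep(period_s)

if __name__ == "__main__":
    control_loop()
```

The two thresholds create a dead band, so the fan does not oscillate rapidly around a single trigger temperature; that dead band is the essential property of hysteresis control.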
3

Kwon, Ohchul, Wonjae Jang, Giyeon Kim, and Chang-Gun Lee. "Optimal Planning of Dynamic Thermal Management for NANS (N-App N-Screen) Services." Electronics 7, no. 11 (November 8, 2018): 311. http://dx.doi.org/10.3390/electronics7110311.

Full text
Abstract:
Existing multi-screening technologies have been limited to mirroring the current screen of the smartphone onto all the connected external display devices. In contrast, NANS (N-App N-Screen) technology is able to display different applications (N-App) on different multiple display devices (N-Screen) using only a smartphone. For such NANS services, this paper empirically shows that the thermal violation constraint is more critical than the battery life constraint. For preventing the thermal violation, the existing DTM (Dynamic Thermal Management) techniques cannot be used since they consider thermal violations as abnormal, and hence prevent them by severely throttling CPU frequencies, resulting in serious QoS degradation. In NANS service scenarios, it is normal to operate in high temperature ranges to continue services with acceptable QoS. Targeting such scenarios, we first propose a novel thermal prediction method specially designed for NANS services. Based on the novel thermal prediction method, we then propose a novel DTM technique called "thermal planning" to provide sustainable NANS services with sufficiently high QoS without thermal violations.
APA, Harvard, Vancouver, ISO, and other styles
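The paper above proposes a thermal prediction method tailored to NANS services. As a generic illustration of temperature forecasting (not the authors' model), the sketch below integrates a first-order lumped RC thermal model over a planned power trace; the thermal constants are made up.

```python
# First-order (lumped RC) thermal forecast over a planned power trace.
# R_TH, C_TH and the ambient temperature are made-up illustrative constants.
R_TH = 2.0        # thermal resistance, K/W
C_TH = 5.0        # thermal capacitance, J/K
T_AMBIENT = 25.0  # ambient temperature, degrees C

def predict_temperature(t_now: float, power_trace: list[float], dt: float = 1.0) -> list[float]:
    """Forecast die temperature (degC) for each step of a planned power trace (W)."""
    temps, t = [], t_now
    for p in power_trace:
        # dT/dt = (P - (T - T_amb) / R) / C, integrated with forward Euler
        t += dt * (p - (t - T_AMBIENT) / R_TH) / C_TH
        temps.append(t)
    return temps

# Example: does a sustained 8 W plan stay under an 85 degC limit for 60 s?
forecast = predict_temperature(t_now=55.0, power_trace=[8.0] * 60)
print(f"peak predicted temperature: {max(forecast):.1f} degC")
```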
4

Chen, Jing, Madhavan Manivannan, Mustafa Abduljabbar, and Miquel Pericàs. "ERASE: Energy Efficient Task Mapping and Resource Management for Work Stealing Runtimes." ACM Transactions on Architecture and Code Optimization 19, no. 2 (June 30, 2022): 1–29. http://dx.doi.org/10.1145/3510422.

Full text
Abstract:
Parallel applications often rely on work stealing schedulers in combination with fine-grained tasking to achieve high performance and scalability. However, reducing the total energy consumption in the context of work stealing runtimes is still challenging, particularly when using asymmetric architectures with different types of CPU cores. A common approach for energy savings involves dynamic voltage and frequency scaling (DVFS) wherein throttling is carried out based on factors like task parallelism, stealing relations, and task criticality. This article makes the following observations: (i) leveraging DVFS on a per-task basis is impractical when using fine-grained tasking and in environments with cluster/chip-level DVFS; (ii) task moldability, wherein a single task can execute on multiple threads/cores via work-sharing, can help to reduce energy consumption; and (iii) mismatch between tasks and assigned resources (i.e., core type and number of cores) can detrimentally impact energy consumption. In this article, we propose EneRgy Aware SchedulEr (ERASE), an intra-application task scheduler on top of work stealing runtimes that aims to reduce the total energy consumption of parallel applications. It achieves energy savings by guiding scheduling decisions based on per-task energy consumption predictions of different resource configurations. In addition, ERASE is capable of adapting to both given static frequency settings and externally controlled DVFS. Overall, ERASE achieves up to 31% energy savings and improves performance by 44% on average, compared to the state-of-the-art DVFS-based schedulers.
APA, Harvard, Vancouver, ISO, and other styles
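The core idea of ERASE, per the abstract above, is to pick the resource configuration with the lowest predicted energy for each task. The sketch below shows only that selection step, with a deliberately toy prediction model; the core types, counts, and cost formula are illustrative assumptions, not the scheduler's actual predictor.

```python
# Toy illustration of energy-guided task mapping: choose the (core type,
# core count) configuration with the lowest *predicted* energy for a task.
# The core types, counts, and cost model are invented for the example.
from itertools import product

CORE_TYPES = ["big", "little"]
CORE_COUNTS = [1, 2, 4]

def predict_energy_joules(task_work: float, core_type: str, cores: int) -> float:
    # Execution time shrinks sub-linearly with core count; big cores are
    # faster but draw more power per core.
    speed = {"big": 2.0, "little": 1.0}[core_type]
    power_per_core = {"big": 1.5, "little": 0.6}[core_type]
    exec_time = task_work / (speed * cores ** 0.8)
    return exec_time * power_per_core * cores

def choose_config(task_work: float) -> tuple[str, int]:
    return min(product(CORE_TYPES, CORE_COUNTS),
               key=lambda cfg: predict_energy_joules(task_work, *cfg))

print(choose_config(task_work=100.0))  # lowest-energy configuration under this toy model
```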
5

Kirov, Denis E., Natalia V. Toutova, Anatoly S. Vorozhtsov, and Iliya A. Andreev. "Feature Selection for Predicting Live Migration Characteristics of Virtual Machines." T-Comm 15, no. 7 (2021): 62–70. http://dx.doi.org/10.36724/2072-8735-2021-15-7-62-70.

Full text
Abstract:
Virtual machine migration is widely used in cloud data centers to scale and maintain the stability of cloud services. However, the performance metrics of virtual machine (VM) applications during migration that are set in the Service Level Agreements may deteriorate. Before starting a migration, it is necessary to evaluate the migration characteristics that affect the quality of service. These characteristics are the total migration time and virtual machine downtime, which are random variables that depend on a variety of factors. The prediction is based on the VM monitoring data. In this paper, we select the most suitable factors for forecasting five types of migrations: precopy migration, postcopy migration, and modifications of precopy migration such as CPU throttling, data compression, and delta compression of modified memory pages. To do this, we analyzed a dataset that includes data on five types of migrations, approximately 8000 records of each type. Using correlation analysis, the factors that most affect the total migration time and the VM downtime are chosen. These characteristics are predicted using machine learning methods such as linear regression and the support vector machine. It is shown that the number of factors can be reduced almost twice with the same quality of the forecast. In general, linear regression provides relatively high accuracy in predicting the total migration time and the duration of virtual machine downtime. At the same time, the observed nonlinearity in the correlations shows that it is advisable to use the support vector machine to improve the quality of the forecast.
APA, Harvard, Vancouver, ISO, and other styles
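The workflow described in this abstract (rank monitoring features by correlation with the migration characteristic, keep the strongest, then fit a regression) can be sketched as follows. The data is synthetic and the feature names and cutoff are illustrative; the paper's dataset and selection thresholds differ.

```python
# Sketch: correlation-based feature selection followed by linear regression.
# Synthetic data; feature names and the 0.1 correlation cutoff are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
features = {
    "memory_size_mb":  rng.uniform(1024, 16384, n),
    "dirty_page_rate": rng.uniform(0, 5000, n),
    "cpu_utilization": rng.uniform(0, 100, n),
    "network_bw_mbps": rng.uniform(100, 10000, n),
}
X = np.column_stack(list(features.values()))
# Synthetic target: migration time driven by memory size, dirty-page rate and bandwidth.
y = (X[:, 0] / X[:, 3]) * 8 + 0.01 * X[:, 1] + rng.normal(0, 1, n)

# Rank features by absolute Pearson correlation with the target.
corrs = {name: abs(np.corrcoef(X[:, i], y)[0, 1]) for i, name in enumerate(features)}
selected = [name for name, c in sorted(corrs.items(), key=lambda kv: -kv[1]) if c > 0.1]
print("selected features:", selected)

idx = [list(features).index(name) for name in selected]
model = LinearRegression().fit(X[:, idx], y)
print("R^2 on training data:", round(model.score(X[:, idx], y), 3))
```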
6

Alkharabsheh, Sami, Udaya L. N. Puvvadi, Bharath Ramakrishnan, Kanad Ghose, and Bahgat Sammakia. "Failure Analysis of Direct Liquid Cooling System in Data Centers." Journal of Electronic Packaging 140, no. 2 (May 9, 2018). http://dx.doi.org/10.1115/1.4039137.

Full text
Abstract:
In this paper, the impact of direct liquid cooling (DLC) system failure on the information technology (IT) equipment is studied experimentally. The main factors that are anticipated to affect the IT equipment response during failure are the central processing unit (CPU) utilization, coolant set point temperature (SPT), and the server type. These factors are varied experimentally and the IT equipment response is studied in terms of chip temperature and power, CPU utilization, and total server power. It was found that failure of this cooling system is hazardous and can lead to data center shutdown in less than a minute. Additionally, the CPU frequency throttling mechanism was found to be vital to understand the change in chip temperature, power, and utilization. Other mechanisms associated with high temperatures were also observed such as the leakage power and the fans' speed change. Finally, possible remedies are proposed to reduce the probability and the consequences of the cooling system failure.
APA, Harvard, Vancouver, ISO, and other styles
7

Brilli, Gianluca, Roberto Cavicchioli, Marco Solieri, Paolo Valente, and Andrea Marongiu. "Evaluating Controlled Memory Request Injection for Efficient Bandwidth Utilization and Predictable Execution in Heterogeneous SoCs." ACM Transactions on Embedded Computing Systems, September 19, 2022. http://dx.doi.org/10.1145/3548773.

Full text
Abstract:
High-performance embedded platforms are increasingly adopting heterogeneous systems-on-chip (HeSoC) that couple multi-core CPUs with accelerators such as GPU, FPGA or AI engines. Adopting HeSoCs in the context of real-time workloads is not immediately possible, though, as contention on shared resources like the memory hierarchy – and in particular the main memory (DRAM) – causes unpredictable latency increase. To tackle this problem, both the research community and certification authorities mandate (i) that accesses from parallel threads to the shared system resources (typically, main memory) happen in a mutually exclusive manner by design, or (ii) that per-thread bandwidth regulation is enforced. Such arbitration schemes provide timing guarantees, but make poor use of the memory bandwidth available in a modern HeSoC. Controlled Memory Request Injection (CMRI) is a recently-proposed bandwidth limitation concept that builds on top of a mutually-exclusive schedule but still allows the threads currently not entitled to access memory to use as much of the unused bandwidth as possible without losing the timing guarantee. CMRI has been discussed in the context of a multi-core CPU, but the same principle applies also to a more complex system such as an HeSoC. In this paper we introduce two CMRI schemes suitable for HeSoCs: Voluntary Throttling via code refactoring and Bandwidth Regulation via dynamic throttling. We extensively characterize a proof-of-concept incarnation of both schemes on two HeSoCs: an NVIDIA Tegra TX2 and a Xilinx UltraScale+, highlighting the benefits and the costs of CMRI for synthetic workloads that model worst-case DRAM access. We also test the effectiveness of CMRI with real benchmarks, studying the effect of interference among the host CPU and the accelerators.
APA, Harvard, Vancouver, ISO, and other styles
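Bandwidth regulation via dynamic throttling, as discussed above, is typically realized with hardware counters or runtime support; purely as a software analogy, the token-bucket sketch below caps how quickly a thread may issue memory-intensive work while letting it use spare budget. Rates and sizes are illustrative assumptions, not the paper's mechanism.

```python
# Token-bucket style bandwidth regulator: a thread "charges" each chunk of
# memory-intensive work against a budget that refills at a fixed rate.
# Rates and sizes below are illustrative.
import time

class BandwidthRegulator:
    def __init__(self, bytes_per_second: float, burst_bytes: float):
        self.rate = bytes_per_second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def acquire(self, nbytes: int) -> None:
        """Block until the caller is entitled to transfer nbytes."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            # Not entitled to the bandwidth yet: back off instead of hammering DRAM.
            time.sleep((nbytes - self.tokens) / self.rate)

# Example: cap a background thread at ~200 MB/s with 4 MB bursts.
regulator = BandwidthRegulator(bytes_per_second=200e6, burst_bytes=4e6)
for _ in range(3):
    regulator.acquire(1_000_000)  # charge a 1 MB chunk before copying it
    # ... perform the 1 MB memory transfer here ...
```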
8

"Enhancement of Plant Disease Detection Framework using Cloud Computing and GPU Computing." International Journal of Engineering and Advanced Technology 9, no. 1 (October 30, 2019): 3139–41. http://dx.doi.org/10.35940/ijeat.a9541.109119.

Full text
Abstract:
GPUs are very useful in high-performance computing, and cloud environments with GPU instances are now gaining popularity in many real-time applications. However, GPUs in a cloud environment still have a long way to go: making them a shared resource in the cloud is still at a preliminary level, and their use remains limited for many real-life problems such as plant disease detection at the grass-roots level. Timely information about diseases is still a great bottleneck for farmers. Because of this, farmers apply many rounds of pesticides to protect their crops from diseases. Owing to the lack of proper ICT, farmers are not informed properly about their crop diseases, which results in heavy losses to the ecological balance and the community. Thus, to solve the problem of delayed disease updates to farmers, GPU processing is used instead of normal CPU processing on the cloud. This paper focuses on the applications of GPU computing within the cloud and studies the performance of GPU image processing versus normal CPU image processing for a plant disease detection framework. The paper also addresses the problem of throttling in a normal CPU when used for large datasets. The GPU processors showed a four-fold increase in performance as compared to normal datasets, and GPU results were 63 times faster than the normal CPU for analyzing 52,486 images of healthy and diseased leaves, covering 16 plant types and 55 leaf diseases.
APA, Harvard, Vancouver, ISO, and other styles
9

Maity, Srijeeta, Anirban Majumder, Rudrajyoti Roy, Ashish Hota, and Soumyajit Dey. "Harnessing Machine Learning in Dynamic Thermal Management in Embedded CPU-GPU Platforms." ACM Transactions on Design Automation of Electronic Systems, December 20, 2024. https://doi.org/10.1145/3708890.

Full text
Abstract:
With increasing transistor density, modern heterogeneous embedded processors often exhibit high temperature gradients due to complex application scheduling scenarios which may have missed design considerations. In many use cases, off-chip "active" cooling solutions are considered prohibitive in such reduced form factors. Core frequency throttling by existing dynamic thermal management techniques often compromises the Quality-of-Service (QoS) and violates real-time deadlines. This necessitates the adoption of intelligent resource management that simultaneously manages both thermal and latency performance. Coupled with the complexity of modern heterogeneous multi-cores, the periodic application updates that cater to ever-changing user requirements often render model-driven thermal-aware resource allocation approaches unsuitable for heterogeneous multi-core systems. For such application-architecture scenarios, we propose a novel self-learning based resource manager using Reinforcement Learning that intelligently manipulates core frequencies and task set mappings to fulfil thermal and latency objectives. Our framework employs a data-driven system modeling technique using Gaussian Process Regression to enable efficient offline training of this learning-based resource manager to avoid challenges associated with initial online training. We evaluate the approach on a heterogeneous embedded CPU-GPU platform with real workloads and observe a significant reduction in peak operating temperature when compared to the default onboard frequency governor as well as other learning-based state-of-the-art approaches.
APA, Harvard, Vancouver, ISO, and other styles
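The abstract mentions Gaussian Process Regression for data-driven system modeling. As a stand-in illustration (synthetic data, assumed features and kernel, not the paper's model), a GPR-based temperature model can be fitted and queried like this:

```python
# Stand-in sketch: fit a Gaussian Process Regression model that maps operating
# conditions to temperature, then query it with an uncertainty estimate.
# Features, kernel, and data are synthetic and illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
# Features: [CPU frequency (GHz), utilization (0-1)]; target: temperature (degC)
X = np.column_stack([rng.uniform(0.6, 2.0, 200), rng.uniform(0.0, 1.0, 200)])
y = 35 + 18 * X[:, 0] * X[:, 1] + rng.normal(0, 0.5, 200)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(X, y)
mean, std = gpr.predict(np.array([[1.5, 0.8]]), return_std=True)
print(f"predicted temperature: {mean[0]:.1f} +/- {std[0]:.1f} degC")
```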
10

Pandey, Shailja, Lokesh Siddhu, and Preeti Ranjan Panda. "NeuroCool: Dynamic Thermal Management of 3D DRAM for Deep Neural Networks through Customized Prefetching." ACM Transactions on Design Automation of Electronic Systems, October 23, 2023. http://dx.doi.org/10.1145/3630012.

Full text
Abstract:
Deep neural network (DNN) implementations are typically characterized by huge data sets and concurrent computation, resulting in a demand for high memory bandwidth due to intensive data movement between processors and off-chip memory. Performing DNN inference on general-purpose cores/edge is gaining traction to enhance user experience and reduce latency. The mismatch in the CPU and conventional DRAM speed leads to underutilization of the compute capabilities, causing increased inference time. 3D DRAM is a promising solution to effectively fulfill the bandwidth requirement of high-throughput DNNs. However, due to high power density in stacked architectures, 3D DRAMs need dynamic thermal management (DTM), resulting in performance overhead due to memory-induced CPU throttling. We study the thermal impact of DNN applications running on a 3D DRAM system, and make a case for a memory temperature-aware customized prefetch mechanism to reduce DTM overheads and significantly improve performance. In our proposed NeuroCool DTM policy, we intelligently place either DRAM ranks or tiers in low power state, using the DNN layer characteristics and access rate. We establish the generalization of our approach through training and test data sets comprising diverse data points from widely used DNN applications. Experimental results on popular DNNs show that NeuroCool results in an average performance gain of 44% (as high as 52%) and memory energy improvement of 43% (as high as 69%) over general-purpose DTM policies.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "CPU throttling"

1

Perera, Jayasuriya Kuranage Menuka. "AI-driven Zero-Touch solutions for resource management in cloud-native 5G networks." Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2024. http://www.theses.fr/2024IMTA0427.

Full text
Abstract:
The deployment of 5G networks has introduced cloud-native architectures and automated management systems, offering communication service providers scalable, flexible, and agile infrastructure. These advancements enable dynamic resource allocation, scaling resources up during high demand and down during low usage, optimizing CapEx and OpEx. However, limited observability and poor workload characterization hinder resource management. Overprovisioning during off-peak periods raises costs, while underprovisioning during peak demand degrades QoS. Despite industry solutions, the trade-off between cost efficiency and QoS remains unresolved. This thesis addresses these challenges by proposing proactive autoscaling solutions for network functions in cloud-native 5G. It focuses on accurately forecasting resource usage, intelligently differentiating scaling events (scaling up, down, or none), and optimizing timing to achieve a balance between cost and QoS. Additionally, CPU throttling, a significant barrier to this balance, is mitigated through a novel approach. The developed framework ensures efficient resource allocation, reducing operational costs while maintaining high QoS. These contributions establish a foundation for sustainable and efficient 5G network operations, setting a benchmark for future cloud-native architectures.
APA, Harvard, Vancouver, ISO, and other styles
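Since the thesis treats CPU throttling as a key obstacle to the cost/QoS balance, it may help to see where throttling statistics come from in a cloud-native setting. The sketch below reads the cgroup v2 cpu.stat counters through which Kubernetes CPU limits are enforced; the path and the 20% decision threshold are illustrative assumptions, not the thesis' algorithm.

```python
# Reading CPU throttling counters from the cgroup v2 cpu.stat interface that
# Kubernetes CPU limits are enforced through. The path and the 20% threshold
# are illustrative; the thesis' forecasting and scaling logic is separate.
CPU_STAT = "/sys/fs/cgroup/cpu.stat"

def read_cpu_stat(path: str = CPU_STAT) -> dict[str, int]:
    stats = {}
    with open(path) as f:
        for line in f:
            key, value = line.split()
            stats[key] = int(value)
    return stats

def throttling_ratio(prev: dict[str, int], curr: dict[str, int]) -> float:
    """Fraction of CFS enforcement periods in which the workload was throttled."""
    periods = curr["nr_periods"] - prev["nr_periods"]
    throttled = curr["nr_throttled"] - prev["nr_throttled"]
    return throttled / periods if periods else 0.0

# Illustrative decision rule: ask for a scale-up when more than 20% of recent
# periods were throttled, e.g.
#   if throttling_ratio(prev_sample, curr_sample) > 0.2: request_scale_up()
```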

Conference papers on the topic "CPU throttling"

1

Kuranage, Menuka Perera Jayasuriya, Elisabeth Hanser, Ahmed Bouabdallah, Loutfi Nuaymi, and Philippe Bertin. "CPU throttling-aware AI-based autoscaling for Kubernetes." In 2024 IEEE 35th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), 1–7. IEEE, 2024. https://doi.org/10.1109/pimrc59610.2024.10817283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Knorst, Tiago, Michael Guilherme Jordan, Guilherme Korol, Mateus Beck Rutzig, and Antonio Carlos Schneider Beck. "An Automatic Framework for Collaborative CPU Thread Throttling and FPGA HLS-Versioning." In 2024 XIV Brazilian Symposium on Computing Systems Engineering (SBESC), 1–6. IEEE, 2024. https://doi.org/10.1109/sbesc65055.2024.10771920.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Knorst, Tiago, Michael G. Jordan, Arthur F. Lorenzon, Mateus Beck Rutzig, and Antonio Carlos Schneider Beck. "ETCG: Energy-Aware CPU Thread Throttling for CPU-GPU Collaborative Environments." In 2021 34th SBC/SBMicro/IEEE/ACM Symposium on Integrated Circuits and Systems Design (SBCCI). IEEE, 2021. http://dx.doi.org/10.1109/sbcci53441.2021.9529986.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rai, Siddharth, and Mainak Chaudhuri. "Improving CPU Performance Through Dynamic GPU Access Throttling in CPU-GPU Heterogeneous Processors." In 2017 IEEE International Parallel and Distributed Processing Symposium: Workshops (IPDPSW). IEEE, 2017. http://dx.doi.org/10.1109/ipdpsw.2017.37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Knorst, Tiago, Michael G. Jordan, Arthur F. Lorenzon, Mateus Beck Rutzig, and Antonio Carlos Schneider Beck. "ETCF – Energy-Aware CPU Thread Throttling and Workload Balancing Framework for CPU-FPGA Collaborative Environments." In 2021 XI Brazilian Symposium on Computing Systems Engineering (SBESC). IEEE, 2021. http://dx.doi.org/10.1109/sbesc53686.2021.9628345.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Owahid, Abdullah A., and Eugene B. John. "RTL Level Instruction Profiling for CPU Throttling to Reduce Wasted Dynamic Power." In 2017 International Conference on Computational Science and Computational Intelligence (CSCI). IEEE, 2017. http://dx.doi.org/10.1109/csci.2017.281.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Knorst, Tiago, Guilherme Korol, Michael Guilherme Jordan, Julio Costella Vicenzi, Arthur Lorenzon, Mateus Beck Rutzig, and Antonio Carlos Schneider Beck. "On the benefits of Collaborative Thread Throttling and HLS-Versioning in CPU-FPGA Environments." In 2022 35th SBC/SBMicro/IEEE/ACM Symposium on Integrated Circuits and Systems Design (SBCCI). IEEE, 2022. http://dx.doi.org/10.1109/sbcci55532.2022.9893223.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Alkharabsheh, Sami, Bharath Ramakrishnan, and Bahgat Sammakia. "Failure Analysis of Direct Liquid Cooling System in Data Centers." In ASME 2017 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems collocated with the ASME 2017 Conference on Information Storage and Processing Systems. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/ipack2017-74174.

Full text
Abstract:
In this paper, the impact of direct liquid cooling (DLC) system failure on the IT equipment is studied experimentally. The main factors that are anticipated to affect the IT equipment response during failure are the CPU utilization, coolant set point temperature (SPT) and the server type. These factors are varied experimentally and the IT equipment response is studied in terms of chip temperature and power, CPU utilization and total server power. It was found that failure of the cooling system is hazardous and can lead to data center shutdown in less than a minute. Additionally, the CPU frequency throttling mechanism was found to be vital to understand the change in chip temperature, power, and utilization. Other mechanisms associated with high temperatures were also observed such as the leakage power and the fans speed change. Finally, possible remedies are proposed to reduce the probability and the consequences of the cooling system failure.
APA, Harvard, Vancouver, ISO, and other styles
9

Dawson, Michael K., and Jeffrey W. Herrmann. "Metareasoning Approaches for Thermal Management During Image Processing." In ASME 2022 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/detc2022-88459.

Full text
Abstract:
Resource-constrained electronic systems are present in many semi- and fully-autonomous systems and are tasked with computationally heavy tasks such as neural network image processing. Without sufficient cooling, these tasks often increase device temperature up to a predetermined maximum, beyond which the task is slowed by the device firmware to maintain the maximum. This is done to avoid decreased processor lifespan due to thermal fatigue or catastrophic processor failure due to thermal overstress. This paper describes a study that evaluated how well metareasoning can manage a Raspberry Pi 4B’s central processing unit (CPU) temperature while it is performing image processing (object detection and classification) on the Common Objects in Context (COCO) dataset. We developed and tested two metareasoning approaches: the first maintains constant image throughput, and the second maintains constant expected detection accuracy. The first approach switched between the InceptionV2 and MobileNetV2 image classification networks with a Single Shot Multibox Detector (SSD) attached. The second approach was tested on each network for a range of parameter values. The study also considered cases that used the system’s built-in throttling method to control the temperature. Both metareasoning approaches were able to stabilize the device temperature without relying on throttling.
APA, Harvard, Vancouver, ISO, and other styles
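The first metareasoning approach described above switches between a heavier and a lighter detection network to manage CPU temperature. A minimal sketch of that switching loop is given below; the temperature thresholds and the model-loading details are assumptions, while the network names come from the abstract.

```python
# Sketch of temperature-driven model switching: use the heavier network while
# the CPU is cool, fall back to the lighter one before throttling kicks in.
# Thresholds and the inference stub are assumptions.
import subprocess

SWITCH_DOWN_C = 75.0  # above this, fall back to the lighter model
SWITCH_UP_C = 65.0    # at or below this, restore the heavier model

def read_cpu_temp_c() -> float:
    # Raspberry Pi firmware tool; output looks like "temp=48.3'C".
    out = subprocess.run(["vcgencmd", "measure_temp"], capture_output=True, text=True).stdout
    return float(out.split("=")[1].split("'")[0])

def run_inference(model_name: str, frame) -> None:
    # Placeholder for SSD inference with the chosen backbone
    # ("ssd_inceptionv2" or "ssd_mobilenetv2").
    pass

def metareasoning_loop(frames) -> None:
    model = "ssd_inceptionv2"          # start with the heavier, more accurate network
    for frame in frames:
        temp = read_cpu_temp_c()
        if model == "ssd_inceptionv2" and temp >= SWITCH_DOWN_C:
            model = "ssd_mobilenetv2"  # shed load before firmware throttling engages
        elif model == "ssd_mobilenetv2" and temp <= SWITCH_UP_C:
            model = "ssd_inceptionv2"
        run_inference(model, frame)
```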
10

Heydari, Ali, and Kathy Russell. "Miniature Vapor Compression Refrigeration Systems for Active Cooling of High Performance Computers." In ASME 2001 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2001. http://dx.doi.org/10.1115/imece2001/epp-24710.

Full text
Abstract:
A small refrigeration system for cooling of computer system components is evaluated. A thermodynamic model describing the performance of the cycle along with a computer simulation program is developed to evaluate its performance. The refrigeration system makes use of a miniature reciprocating vapor compression compressor. Due to space limitations in some high performance computer servers, a miniature refrigeration system composed of a compressor, capillary tube, a compact condenser, and a cold-plate evaporator heat exchanger is used. Mathematical multi-zone formulations for modeling the thermal-hydraulic performance of the condenser and evaporator heat exchangers are presented. The throttling device is a capillary tube, and a mathematical formulation for predicting refrigerant mass flow rate through the throttling device is presented. A physically based efficiency formulation for simulating the performance of the miniature compressor is used. An efficient iterative numerical scheme with allowance for utilization of various refrigerants is developed to solve the governing system of equations. Using the simulation program, the effects of parameters such as the choice of working refrigerant and the evaporating and condensing temperatures on system components and overall efficiency of the system are studied. In addition, a RAS (reliability, availability and serviceability) discussion of the proposed CPU-cooling refrigeration solution is presented. The results of analysis show that the new technology not only overcomes many shortcomings of the traditional fan-cooled systems, but also has the capacity of increasing the cooling system’s coefficient of performance.
APA, Harvard, Vancouver, ISO, and other styles