Ready-made bibliography on the topic "In-memory compute"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles

See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "In-memory compute".

Next to every work listed in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, if the relevant parameters are provided in the publication's metadata.

Journal articles on the topic "In-memory compute"

1

Varnava, Christiana. "Photonic devices compute in memory". Nature Electronics 2, no. 3 (March 2019): 91. http://dx.doi.org/10.1038/s41928-019-0226-1.
2

John-Africa, Elijah, and Victor T. Emmah. "Performance Evaluation of LSTM and RNN Models in the Detection of Email Spam Messages". European Journal of Information Technologies and Computer Science 2, no. 6 (November 26, 2022): 24–30. http://dx.doi.org/10.24018/compute.2022.2.6.80.

Abstract:
Email spam is an unwanted bulk message that is sent to a recipient’s email address without explicit consent from the recipient. This is usually considered a means of advertising and maximizing profit, especially with the increase in the usage of the internet for social networking, but can also be very frustrating and annoying to the recipients of these messages. Recent research has shown that about 14.7 billion spam messages are sent out every single day, of which more than 45% are promotional sales content that the recipient did not specifically opt in to. This has gotten the attention of many researchers in the area of natural language processing. In this paper, we used Long Short-Term Memory (LSTM) for classification tasks between spam and ham messages. The performance of LSTM is compared with that of a Recurrent Neural Network (RNN), which can also be used for a classification task of this nature but suffers from short-term memory and tends to leave out important information from earlier time steps to later ones in terms of prediction. The evaluation of the result shows that LSTM achieved 97% accuracy with both the Adam and RMSprop optimizers, compared to RNN with an accuracy of 94% with RMSprop and 87% with the Adam optimizer.
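
The abstract compares an LSTM classifier against a plain RNN for binary spam/ham classification. Purely as an illustration (not the authors' code), a minimal Keras sketch of such an LSTM text classifier might look like the following; the vocabulary size, sequence length, and layer sizes are assumptions.

```python
# Minimal sketch of an LSTM spam/ham classifier (illustrative only; hyperparameters are assumptions).
import tensorflow as tf

VOCAB_SIZE = 20_000   # assumed vocabulary size after tokenization
MAX_LEN = 200         # assumed padded sequence length

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=VOCAB_SIZE, output_dim=64),
    tf.keras.layers.LSTM(64),                        # swap for SimpleRNN(64) to reproduce the RNN baseline
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary output: spam vs. ham
])

model.compile(optimizer="adam",                      # the paper also evaluates RMSprop
              loss="binary_crossentropy",
              metrics=["accuracy"])

# model.fit(x_train, y_train, validation_split=0.1, epochs=5)  # x_train: padded token ids, y_train: 0/1 labels
```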
3

Zhao, Dongyan, Yubo Wang, Jin Shao, Yanning Chen, Zhiwang Guo, Cheng Pan, Guangzhi Dong, et al. "Compute-in-Memory for Numerical Computations". Micromachines 13, no. 5 (May 2, 2022): 731. http://dx.doi.org/10.3390/mi13050731.

Abstract:
In recent years, compute-in-memory (CIM) has been extensively studied to improve the energy efficiency of computing by reducing data movement. At present, CIM is frequently used in data-intensive computing. Data-intensive computing applications, such as all kinds of neural networks (NNs) in machine learning (ML), are regarded as ‘soft’ computing tasks. The ‘soft’ computing tasks are computations that can tolerate low computing precision with little accuracy degradation. However, ‘hard’ tasks aimed at numerical computations require high-precision computing and are also accompanied by energy efficiency problems. Numerical computations exist in lots of applications, including partial differential equations (PDEs) and large-scale matrix multiplication. Therefore, it is necessary to study CIM for numerical computations. This article reviews the recent developments of CIM for numerical computations. The different kinds of numerical methods solving partial differential equations and the transformation of matrixes are deduced in detail. This paper also discusses the iterative computation of a large-scale matrix, which tremendously affects the efficiency of numerical computations. The working procedure of the ReRAM-based partial differential equation solver is emphatically introduced. Moreover, other PDEs solvers, and other research about CIM for numerical computations, are also summarized. Finally, prospects and the future of CIM for numerical computations with high accuracy are discussed.
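
A resistive crossbar performs a matrix-vector product in the analog domain: the matrix is programmed as conductances, the input vector is applied as voltages, and each column current sums to G·v by Kirchhoff's current law. The following NumPy sketch is an idealized software model of that idea (not code from the paper); the matrix size and number of conductance levels are assumptions.

```python
# Idealized model of an analog crossbar matrix-vector multiply (illustrative assumptions throughout).
import numpy as np

def crossbar_mvm(a: np.ndarray, x: np.ndarray, levels: int = 16) -> np.ndarray:
    """Approximate y = A @ x with conductances quantized to a fixed number of levels."""
    g_max = np.abs(a).max()
    # Map matrix entries to discrete conductance levels (signed values map to differential columns in practice).
    g = np.round(a / g_max * (levels - 1)) / (levels - 1) * g_max
    # Applied voltages are the input vector; column currents sum the products (Kirchhoff's current law).
    return g @ x

a = np.random.randn(8, 8)
x = np.random.randn(8)
print("exact    :", a @ x)
print("crossbar :", crossbar_mvm(a, x))
print("max error:", np.abs(a @ x - crossbar_mvm(a, x)).max())
```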
4

Handy, Jim, and Tom Coughlin. "Semiconductor Architectures Enable Compute in Memory". Computer 56, no. 5 (May 2023): 126–29. http://dx.doi.org/10.1109/mc.2023.3252099.
5

Miller, Ethan, Achilles Benetopoulos, George Neville-Neil, Pankaj Mehra, and Daniel Bittman. "Pointers in Far Memory". Queue 21, no. 3 (June 23, 2023): 75–93. http://dx.doi.org/10.1145/3606029.

Abstract:
Effectively exploiting emerging far-memory technology requires consideration of operating on richly connected data outside the context of the parent process. Operating-system technology in development offers help by exposing abstractions such as memory objects and globally invariant pointers that can be traversed by devices and newly instantiated compute. Such ideas will allow applications running on future heterogeneous distributed systems with disaggregated memory nodes to exploit near-memory processing for higher performance and to independently scale their memory and compute resources for lower cost.
6

Wan, Weier, Rajkumar Kubendran, Clemens Schaefer, Sukru Burc Eryilmaz, Wenqiang Zhang, Dabin Wu, Stephen Deiss, et al. "A compute-in-memory chip based on resistive random-access memory". Nature 608, no. 7923 (August 17, 2022): 504–12. http://dx.doi.org/10.1038/s41586-022-04992-8.

Abstract:
Realizing increasingly complex artificial intelligence (AI) functionalities directly on edge devices calls for unprecedented energy efficiency of edge hardware. Compute-in-memory (CIM) based on resistive random-access memory (RRAM) promises to meet such demand by storing AI model weights in dense, analogue and non-volatile RRAM devices, and by performing AI computation directly within RRAM, thus eliminating power-hungry data movement between separate compute and memory. Although recent studies have demonstrated in-memory matrix-vector multiplication on fully integrated RRAM-CIM hardware, it remains a goal for a RRAM-CIM chip to simultaneously deliver high energy efficiency, versatility to support diverse models and software-comparable accuracy. Although efficiency, versatility and accuracy are all indispensable for broad adoption of the technology, the inter-related trade-offs among them cannot be addressed by isolated improvements on any single abstraction level of the design. Here, by co-optimizing across all hierarchies of the design from algorithms and architecture to circuits and devices, we present NeuRRAM, a RRAM-based CIM chip that simultaneously delivers versatility in reconfiguring CIM cores for diverse model architectures, energy efficiency that is two-times better than previous state-of-the-art RRAM-CIM chips across various computational bit-precisions, and inference accuracy comparable to software models quantized to four-bit weights across various AI tasks, including accuracy of 99.0 percent on MNIST and 85.7 percent on CIFAR-10 image classification, 84.7-percent accuracy on Google speech command recognition, and a 70-percent reduction in image-reconstruction error on a Bayesian image-recovery task.
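
NeuRRAM reports software-comparable accuracy with models quantized to 4-bit weights. A generic, purely illustrative way to emulate such weight quantization in software, before weights are mapped to RRAM conductances, is symmetric uniform quantization; the tensor shape and per-tensor scaling below are assumptions, not the NeuRRAM tool flow.

```python
# Generic symmetric 4-bit weight quantization (an emulation sketch, not the NeuRRAM mapping flow).
import numpy as np

def quantize_weights(w: np.ndarray, bits: int = 4) -> np.ndarray:
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for 4-bit signed weights
    scale = np.abs(w).max() / qmax        # per-tensor scale (per-channel scales are also common)
    return np.clip(np.round(w / scale), -qmax, qmax) * scale

w = np.random.randn(128, 128).astype(np.float32)
w_q = quantize_weights(w, bits=4)
print("mean abs quantization error:", np.abs(w - w_q).mean())
```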
7

Wang, Ruihong, Jianguo Wang, Stratos Idreos, M. Tamer Özsu, and Walid G. Aref. "The case for distributed shared-memory databases with RDMA-enabled memory disaggregation". Proceedings of the VLDB Endowment 16, no. 1 (September 2022): 15–22. http://dx.doi.org/10.14778/3561261.3561263.

Abstract:
Memory disaggregation (MD) allows for scalable and elastic data center design by separating compute (CPU) from memory. With MD, compute and memory are no longer coupled into the same server box. Instead, they are connected to each other via ultra-fast networking such as RDMA. MD can bring many advantages, e.g., higher memory utilization, better independent scaling (of compute and memory), and lower cost of ownership. This paper makes the case that MD can fuel the next wave of innovation on database systems. We observe that MD revives the great debate of "shared what" in the database community. We envision that distributed shared-memory databases (DSM-DB, for short) - that have not received much attention before - can be promising in the future with MD. We present a list of challenges and opportunities that can inspire next steps in system design making the case for DSM-DB.
8

Yu, Shimeng, Wonbo Shim, Xiaochen Peng, and Yandong Luo. "RRAM for Compute-in-Memory: From Inference to Training". IEEE Transactions on Circuits and Systems I: Regular Papers 68, no. 7 (July 2021): 2753–65. http://dx.doi.org/10.1109/tcsi.2021.3072200.
9

Alam, Shamiul, Md Mazharul Islam, Md Shafayat Hossain, Akhilesh Jaiswal, and Ahmedullah Aziz. "CryoCiM: Cryogenic compute-in-memory based on the quantum anomalous Hall effect". Applied Physics Letters 120, no. 14 (April 4, 2022): 144102. http://dx.doi.org/10.1063/5.0092169.

Abstract:
The scaling of the already matured complementary metal-oxide-semiconductor technology is steadily approaching its physical limit, motivating the quest for a suitable alternative. Cryogenic operation offers a promising pathway toward continued improvement in computing speed and energy efficiency without aggressive scaling. However, the memory wall bottleneck of the traditional von-Neumann architecture persists even at cryogenic temperature. That is where a compute-in-memory (CiM) architecture, which embeds computing within the memory unit, comes into play. Computations within the memory unit help to reduce the expensive data transfer between the memory and the computing units. Therefore, CiM provides extreme energy efficiency that can enable lower cooling cost at cryogenic temperature. In this work, we demonstrate CryoCiM, a cryogenic compute-in-memory framework utilizing a nonvolatile memory system based on the quantum anomalous Hall effect (QAHE). Our design can perform memory read/write and universal binary logic operations (NAND, NOR, and XOR). We custom design a peripheral circuit assembly that can perform the read/write and single-cycle in-memory logic operations. The utilization of a QAHE-based memory system promises robustness against process variations, through the usage of topologically protected resistive states for data storage. CryoCiM is a major step toward utilizing exclusively cryogenic phenomena to serve the dual purpose of storage and computation with ultra-low power (∼nano-watts) operations.
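
CryoCiM realizes NAND, NOR, and XOR directly on stored resistive states. Purely as a behavioural illustration (not the QAHE circuit itself), in-memory logic of this kind can be modelled as reading two stored bits and thresholding the combined sense quantity:

```python
# Behavioural sketch of single-cycle in-memory logic on two stored bits (an illustrative model only).
def in_memory_logic(bit_a: int, bit_b: int, op: str) -> int:
    """Model a sense-amplifier decision over two cells storing bit_a and bit_b."""
    combined = bit_a + bit_b              # stands in for the summed read quantity of the two cells
    if op == "NAND":
        return 0 if combined == 2 else 1  # output low only when both cells store 1
    if op == "NOR":
        return 1 if combined == 0 else 0  # output high only when both cells store 0
    if op == "XOR":
        return combined % 2               # odd parity of the two stored bits
    raise ValueError(f"unsupported op: {op}")

for a in (0, 1):
    for b in (0, 1):
        print(a, b, [in_memory_logic(a, b, op) for op in ("NAND", "NOR", "XOR")])
```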
10

Redwan, Sadi M., Md Rashed-Al-Mahfuz, and Md Ekramul Hamid. "Recognizing Command Words using Deep Recurrent Neural Network for Both Acoustic and Throat Speech". European Journal of Information Technologies and Computer Science 3, no. 2 (May 22, 2023): 7–13. http://dx.doi.org/10.24018/compute.2023.3.2.88.

Abstract:
The importance of speech command recognition in human-machine interaction systems has increased in recent years. In this study, we propose a deep neural network-based system for acoustic and throat command speech recognition. We apply a preprocessing pipeline to create the input of the deep learning model. Firstly, speech commands are decomposed into components using well-known signal decomposition techniques. The Mel-frequency cepstral coefficients (MFCC) feature extraction method is applied to each component of the speech commands to obtain the feature inputs for the recognition system. At this stage, we apply and compare performance using different speech decomposition techniques, such as wavelet packet decomposition (WPD), continuous wavelet transform (CWT), and empirical mode decomposition (EMD), in order to find the best technique for our model. We observe that WPD shows the best performance in terms of classification accuracy. This paper investigates a long short-term memory (LSTM)-based recurrent neural network (RNN), which is trained using the extracted MFCC features. The proposed neural network is trained and tested using acoustic speech commands. Moreover, we also train and test the proposed model using throat-microphone speech commands. Lastly, the transfer learning technique is employed to increase the test accuracy for throat speech recognition: the weights of the model trained with the acoustic signal are used to initialize the model used for throat speech recognition. Overall, we have found significant classification accuracy for both acoustic and throat command speech. We find that LSTM performs much better than the GMM-HMM model, convolutional neural networks such as CNN-tpool2, and residual networks such as res15 and res26, with an accuracy score of over 97% on Google’s Speech Commands dataset, and we achieve 95.35% accuracy on our throat speech data set using the transfer learning technique.
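
The pipeline described above extracts MFCC features from the (decomposed) command audio and feeds them to an LSTM. A minimal feature-extraction sketch using librosa is shown below; the sampling rate, number of coefficients, and file name are assumptions, and the signal-decomposition step (WPD/CWT/EMD) is omitted.

```python
# Minimal MFCC feature extraction for a command-word classifier (parameters are assumptions).
import librosa
import numpy as np

def extract_mfcc(path: str, sr: int = 16_000, n_mfcc: int = 13) -> np.ndarray:
    y, sr = librosa.load(path, sr=sr)                      # load and resample the command recording
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T                                          # shape (time_frames, n_mfcc), ready for an LSTM

# features = extract_mfcc("command.wav")   # hypothetical file name; an LSTM then consumes such frame sequences
```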

Doctoral dissertations on the topic "In-memory compute"

1

Scrbak, Marko. "Methodical Evaluation of Processing-in-Memory Alternatives". Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1505199/.

Abstract:
In this work, I characterized a series of potential application kernels using a set of architectural and non-architectural metrics, and performed a comparison of four different alternatives for processing-in-memory cores (PIMs): ARM cores, GPGPUs, coarse-grained reconfigurable dataflow (DF-PIM), and a domain specific architecture using SIMD PIM engine consisting of a series of multiply-accumulate circuits (MACs). For each PIM alternative I investigated how performance and energy efficiency changes with respect to a series of system parameters, such as memory bandwidth and latency, number of PIM cores, DVFS states, cache architecture, etc. In addition, I compared the PIM core choices for a subset of applications and discussed how the application characteristics correlate to the achieved performance and energy efficiency. Furthermore, I compared the PIM alternatives to a host-centric solution that uses a traditional server-class CPU core or PIM-like cores acting as host-side accelerators instead of being part of 3D-stacked memories. Such insights can expose the achievable performance limits and shortcomings of certain PIM designs and show sensitivity to a series of system parameters (available memory bandwidth, application latency and bandwidth sensitivity, etc.). In addition, identifying the common application characteristics for PIM kernels provides opportunity to identify similar types of computation patterns in other applications and allows us to create a set of applications which can then be used as benchmarks for evaluating future PIM design alternatives.
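
The dissertation studies how PIM performance tracks system parameters such as memory bandwidth. A simple roofline-style estimate of attainable throughput for a kernel of a given arithmetic intensity illustrates the kind of sensitivity analysis involved; the peak numbers below are placeholders, not figures from the thesis.

```python
# Roofline-style estimate of attainable performance (placeholder peak values, not from the dissertation).
def attainable_gflops(intensity_flops_per_byte: float,
                      peak_gflops: float = 100.0,
                      bandwidth_gbs: float = 320.0) -> float:
    """Return min(compute roof, bandwidth roof) for a kernel of the given arithmetic intensity."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

for intensity in (0.1, 0.25, 1.0, 4.0):
    print(f"intensity {intensity:>4} flop/byte -> {attainable_gflops(intensity):6.1f} GFLOP/s")
```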
2

Thomas, Jonathan. "Asynchronous Validity Resolution in Sequentially Consistent Shared Virtual Memory". Fogler Library, University of Maine, 2001. http://www.library.umaine.edu/theses/pdf/Thomas.pdf.

3

Jiang, Song. "Efficient caching algorithms for memory management in computer systems". W&M ScholarWorks, 2004. https://scholarworks.wm.edu/etd/1539623446.

Abstract:
As disk performance continues to lag behind that of memory systems and processors, fully utilizing memory to reduce disk accesses is a highly effective effort to improve the entire system performance. Furthermore, to serve the applications running on a computer in distributed systems, not only the local memory but also the memory on remote servers must be effectively managed to minimize I/O operations. The critical challenges in effective memory cache management include: (1) Insightfully understanding and quantifying the locality inherent in the memory access requests; (2) Effectively utilizing the locality information in replacement algorithms; (3) Intelligently placing and replacing data in the multi-level caches of a distributed system; (4) Ensuring that the overheads of the proposed schemes are acceptable. This dissertation provides solutions and makes unique and novel contributions in application locality quantification, general replacement algorithms, low-cost replacement policy, thrashing protection, as well as multi-level cache management in a distributed system. First, the dissertation proposes a new method to quantify locality strength and to accurately identify the data with strong locality. It also provides a new replacement algorithm, which significantly outperforms existing algorithms. Second, considering the extremely low-cost requirements on replacement policies in virtual memory management, the dissertation proposes a policy meeting the requirements and considerably exceeding the performance of existing policies. Third, the dissertation provides an effective scheme to protect the system from thrashing when running memory-intensive applications. Finally, the dissertation provides a multi-level block placement and replacement protocol in a distributed client-server environment, exploiting non-uniform locality strengths in the I/O access requests. The methodology used in this study includes careful application behavior characterization, system requirement analysis, algorithm designs, trace-driven simulation, and system implementations. A main conclusion of the work is that there is still much room for innovation and significant performance improvement for the seemingly mature and stable policies that have been broadly used in the current operating system design.
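
For readers unfamiliar with replacement policies, the classic baseline such work improves upon is LRU. A compact LRU cache (a generic textbook sketch, not the dissertation's proposed algorithm) can be written with an ordered dictionary:

```python
# Textbook LRU replacement policy (a generic baseline, not the dissertation's algorithm).
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.entries = OrderedDict()          # key -> cached block, ordered by recency of use

    def get(self, key):
        if key not in self.entries:
            return None                       # miss: the caller fetches the block from disk
        self.entries.move_to_end(key)         # hit: mark the block as most recently used
        return self.entries[key]

    def put(self, key, block) -> None:
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = block
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used block

cache = LRUCache(capacity=2)
cache.put("a", b"...")
cache.put("b", b"...")
cache.get("a")
cache.put("c", b"...")                        # evicts "b", the least recently used entry
print(list(cache.entries))                    # ['a', 'c']
```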
4

Squillante, Mark S. "Issues in shared-memory multiprocessor scheduling : a performance evaluation /". Thesis, Connect to this title online; UW restricted, 1990. http://hdl.handle.net/1773/6858.

5

Sperens, Martin. "Dynamic Memory Managment in C++". Thesis, Luleå tekniska universitet, Datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-76611.

Abstract:
Memory allocation is an important part of program optimization as well as of computer architecture. This thesis examines some of the concepts of memory allocation and tries to implement overrides for the standard new and delete functions in the C++ library using memory pools combined with other techniques. The overrides are tested against the standard new and delete as well as a custom memory pool with a perfect size for the allocations. The study finds that the overrides are slightly faster on a single thread but not on multiple threads. The study also finds that the biggest gain in performance comes from creating custom memory pools specific to the program's needs. Lastly, the study also lists a number of ways in which the library could be improved.
6

Chan, Chun Keung. "A study on non-volatile memory scaling in the sub-100nm regime /". View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202005%20CHAN.

7

Zeffer, Håkan. "Hardware–Software Tradeoffs in Shared-Memory Implementations". Licentiate thesis, Uppsala universitet, Avdelningen för datorteknik, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-86369.

Abstract:
Shared-memory architectures represent a class of parallel computer systems commonly used in the commercial and technical market. While shared-memory servers typically come in a large variety of configurations and sizes, the advance in semiconductor technology have set the trend towards multiple cores per die and multiple threads per core. Software-based distributed shared-memory proposals were given much attention in the 90s. But their promise of short time to market and low cost could not make up for their unstable performance. Hence, these systems seldom made it to the market. However, with the trend towards chip multiprocessors, multiple hardware threads per core and increased cost of connecting multiple chips together to form large-scale machines, software coherence in one form or another might be a good intra-chip coherence solution. This thesis shows that data locality, software flexibility and minimal processor support for read and write coherence traps can offer good performance, while removing the hard limit of scalability. Our aggressive fine-grained software-only distributed shared-memory system exploits key application properties, such as locality and sharing patterns, to outperform a hardware-only machine on some benchmarks. On average, the software system is 11 percent slower than the hardware system when run on identical node and interconnect hardware. A detailed full-system simulation study of dual core CMPs, with multiple hardware threads per core and minimal processor support for coherence traps is on average one percent slower than its hardware-only counterpart when some flexibility is taken into account. Finally, a functional full-system simulation study of an adaptive coherence-batching scheme shows that the number of coherence misses can be reduced with up to 60 percent and bandwidth consumption reduced with up to 22 percent for both commercial and scientific applications.
8

McDonald, Ian Lindsay. "Memory management in a distributed system of single address space operating systems supporting quality of service". Thesis, University of Glasgow, 2001. http://theses.gla.ac.uk/5427/.

Abstract:
The choices provided by an operating system to the application developer for managing memory came in two forms: no choice at all, with the operating system making all decisions about managing memory; or the choice to implement virtual memory management specific to the individual application. The second of these choices is, for all intents and purposes, the same as the first: no choice at all. For many application developers, the cost of implementing a customised virtual memory management system is just too high. The results is that, regardless of the level of flexibility available, the developer ends up using the system-provided default. Further exacerbating the problem is the tendency for operating system developers to be extremely unimaginative when providing that same default. Advancements in virtual memory techniques such as prefetching, remote paging, compressed caching, and user-level page replacement coupled with the provision of user-level virtual memory management should have heralded a new era of choice and an application-centric approach to memory management. Unfortunately, this has failed to materialise. This dissertation describes the design and implementation of the Heracles virtual memory management system. The Heracles approach is one of inclusion rather than exclusion. The main goal of Heracles is to provide an extensible environment that is configurable to the extent of providing application-centric memory management without the need for application developers to implement their own. However, should the application developer wish to provide a more specialised implementation for all or any part of Heracles, the system is constructed around well-defined interfaces that allow new implementations to be "plugged in" where required. The result is a virtual memory management hierarchy that is highly configurable, highly flexible, and can be adapted at run-time to meet new phases in the application's behaviour. Furthermore, different parts of an application's address space can have different hierarchies associated with managing its memory.
9

Bearpark, Keith. "Learning and memory in genetic programming". Thesis, University of Southampton, 2000. https://eprints.soton.ac.uk/45930/.

Abstract:
Genetic Programming is a form of Evolutionary Computation in which computer programs are evolved by methods based on simulating the natural evolution of biological species. A new generation of a species acquires the characteristics of previous generations through the inheritance of genes by sexual reproduction and through random changes in alleles by random mutation. The new generation may enhance its ability to survive by the acquisition of cultural knowledge through learning processes. This thesis combines the transfer of knowledge by genetic means with the transfer of knowledge by cultural means. In particular, it introduces a new evolutionary operator, memory operator. In conventional genetic programming systems, a new generation is formed from a mating pool whose members are selected from the fittest members of previous generation. The new generation is produced by the exchange of genes between members of the mating pool and the random replacement of genes in the offspring. The new generation may or may not be able to survive better than its predecessor in a given environment. The memory operator augments the evolutionary process by inserting into new chromosomes genetic material known to often result in fitness improvements. This material is acquired through a learning process in which the system is required to evolve generations that survive in a less demanding environment. The cultural knowledge acquired in this learning process is applied as an intelligent form of mutation to aid survival in a more demanding environment.
10

Olson, Julius, and Emma Södergren. "Long Term Memory in Conversational Robots". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260316.

Abstract:
This study discusses an implementation of a long term memory in the robot Furhat. The idea was to find a way to prevent identical and very similar questions from being asked several times and to store the information of which questions have already been asked in a document database. The project encompasses tf-idf, as well as a small-scale test with Word2Vec, to find a vector representation of all questions from Furhat’s database and then clustering these questions with the k-means method. The tests resulted in high scores on all the evaluation metrics used, which is promising for implementation into the actual Furhat robot, as well as further research on similar implementations of long term memory functions in chatbots.
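
The thesis clusters tf-idf vectors of Furhat's questions with k-means to detect near-duplicate questions. A small scikit-learn sketch of that pipeline is shown below; the example questions and the cluster count are assumptions, not data from the thesis.

```python
# tf-idf vectorization + k-means clustering of dialogue questions (example data and k are assumptions).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

questions = [
    "What is your favourite food?",
    "Which food do you like the most?",
    "Where did you grow up?",
    "What city are you from?",
]

vectors = TfidfVectorizer().fit_transform(questions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for question, label in zip(questions, labels):
    print(label, question)   # questions sharing a cluster label are treated as "already asked"
```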

Books on the topic "In-memory compute"

1

Memory storage patterns in parallel processing. Boston: Kluwer Academic, 1987.

2

Miller, R. K. Neural networks: Implementing associative memory models in neurocomputers. Lilburn, GA: Fairmont Press, 1990.

3

Miller, Richard Kendall. Neural networks: Implementing associative memory models in neurocomputers. Lilburn, GA: Fairmont Press, 1990.

4

Kanerva, Pentti. Parallel structures in human and computer memory. Moffett Field, Calif: Research Institute for Advanced Computer Science, 1986.

5

Donovan, Kevin. Performance of shared memory in a parallel computer. New York: Courant Institute of Mathematical Sciences, New York University, 1990.

6

Mace, Mary E. Memory Storage Patterns in Parallel Processing. Boston, MA: Springer US, 1987.

7

Hutchison, David. Mathematical Methods in Computer Science: Essays in Memory of Thomas Beth. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008.

8

Dickey, Susan. Designing VLSI network nodes to reduce memory traffic in a shared memory parallel computer. New York: Courant Institute of Mathematical Sciences, New York University, 1986.

9

Cheriton, David R. The unified management of memory in the V distributed system. Stanford, Calif: Dept. of Computer Science, Stanford University, 1988.

10

Dutt, Nikil, and Alexandru Nicolau, eds. Memory issues in embedded systems-on-chip: Optimizations and exploration. Boston: Kluwer Academic, 1999.

Book chapters on the topic "In-memory compute"

1

Jiang, Hongwu, Shanshi Huang, and Shimeng Yu. "Compute-in-Memory Architecture". In Handbook of Computer Architecture, 1–40. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-15-6401-7_62-1.
2

Karam, Robert, Somnath Paul, and Swarup Bhunia. "Compute-in-Memory Architecture for Data-Intensive Kernels". In Emerging Technology and Architecture for Big-data Analytics, 77–101. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54840-1_4.
3

Ristov, Sasko, Goran Velkoski, Marjan Gusev, and Kiril Kjiroski. "Compute and Memory Intensive Web Service Performance in the Cloud". In ICT Innovations 2012, 215–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37169-1_21.
4

Crafton, Brian, Samuel Spetalnick, Gauthaman Murali, Tushar Krishna, Sung-Kyu Lim, and Arijit Raychowdhury. "Statistical Array Allocation and Partitioning for Compute In-Memory Fabrics". In VLSI-SoC: Design Trends, 323–41. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81641-4_15.
5

Simner, Ben, Alasdair Armstrong, Jean Pichon-Pharabod, Christopher Pulte, Richard Grisenthwaite, and Peter Sewell. "Relaxed virtual memory in Armv8-A". In Programming Languages and Systems, 143–73. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99336-8_6.

Abstract:
Virtual memory is an essential mechanism for enforcing security boundaries, but its relaxed-memory concurrency semantics has not previously been investigated in detail. The concurrent systems code managing virtual memory has been left on an entirely informal basis, and OS and hypervisor verification has had to make major simplifying assumptions. We explore the design space for relaxed virtual memory semantics in the Armv8-A architecture, to support future system-software verification. We identify many design questions, in discussion with Arm; develop a test suite, including use cases from the pKVM production hypervisor under development by Google; delimit the design space with axiomatic-style concurrency models; prove that under simple stable configurations our architectural model collapses to previous “user” models; develop tooling to compute allowed behaviours in the model integrated with the full Armv8-A ISA semantics; and develop a hardware test harness. This lays out some of the main issues in relaxed virtual memory, bringing these security-critical systems phenomena into the domain of programming-language semantics and verification with foundational architecture semantics.
6

Natale, Emanuele, and Iliad Ramezani. "On the Necessary Memory to Compute the Plurality in Multi-agent Systems". In Lecture Notes in Computer Science, 323–38. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-17402-6_27.
7

Liao, Xiangke, Canqun Yang, Zhe Quan, Tao Tang, and Cheng Chen. "An Efficient Clique-Based Algorithm of Compute Nodes Allocation for In-memory Checkpoint System". In Lecture Notes in Computer Science, 197–211. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20119-1_15.
8

Finkel, Alain, Serge Haddad, and Igor Khmelnitsky. "Minimal Coverability Tree Construction Made Complete and Efficient". In Lecture Notes in Computer Science, 237–56. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45231-5_13.

Abstract:
Downward closures of Petri net reachability sets can be finitely represented by their set of maximal elements, called the minimal coverability set or Clover. Many properties (coverability, boundedness, ...) can be decided using Clover, in a time proportional to the size of Clover. So it is crucial to design algorithms that compute it efficiently. We present a simple modification of the original but incomplete Minimal Coverability Tree algorithm (MCT), computing Clover, which makes it complete: it memorizes accelerations and fires them as ordinary transitions. Contrary to the other alternative algorithms, for which no bound on the size of the required additional memory is known, we establish that the additional space of our algorithm is at most doubly exponential. Furthermore, we have implemented a prototype which is already very competitive: on benchmarks it uses less space than all the other tools and its execution time is close to that of the fastest tool.
9

Tsugane, Keisuke, Taisuke Boku, Hitoshi Murai, Mitsuhisa Sato, William Tang, and Bei Wang. "Hybrid-View Programming of Nuclear Fusion Simulation Code in XcalableMP". In XcalableMP PGAS Programming Language, 181–203. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-7683-6_7.

Abstract:
XcalableMP (XMP) supports a global-view model that allows programmers to define global data and to map them to a set of processors, which execute the distributed global data as a single thread. In XMP, the concept of a coarray is also employed for local-view programming. In this study, we port Gyrokinetic Toroidal Code - Princeton (GTC-P), which is a three-dimensional gyrokinetic PIC code developed at Princeton University to study the microturbulence phenomenon in magnetically confined fusion plasmas, to XMP as an example of hybrid memory model coding with the global-view and local-view programming models. In local-view programming, the coarray notation is simple and intuitive compared with Message Passing Interface (MPI) programming, while the performance is comparable to that of the MPI version. Thus, because the global-view programming model is suitable for expressing the data parallelism for a field of grid space data, we implement a hybrid-view version using a global-view programming model to compute the field and a local-view programming model to compute the movement of particles. The performance is degraded by 20% compared with the original MPI version, but the hybrid-view version facilitates more natural data expression for static grid space data (in the global-view model) and dynamic particle data (in the local-view model), and it also increases the readability of the code for higher productivity.
10

Beyer, Dirk. "Competition on Software Verification and Witness Validation: SV-COMP 2023". In Tools and Algorithms for the Construction and Analysis of Systems, 495–522. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30820-8_29.

Abstract:
The 12th edition of the Competition on Software Verification (SV-COMP 2023) is again the largest overview of tools for software verification, evaluating 52 verification systems from 34 teams from 10 countries. Besides providing an overview of the state of the art in automatic software verification, the goal of the competition is to establish standards, provide a platform for exchange to developers of such tools, educate PhD students on reproducibility approaches and benchmarking, and provide computing resources to developers that do not have access to compute clusters. The competition consisted of 23,805 verification tasks for C programs and 586 verification tasks for Java programs. The specifications include reachability, memory safety, overflows, and termination. This year, the competition introduced a new competition track on witness validation, where validators for verification witnesses are evaluated with respect to their quality.

Conference papers on the topic "In-memory compute"

1

Fick, Laura, and Dave Fick. "Introduction to Compute-in-Memory". In 2019 IEEE Custom Integrated Circuits Conference (CICC). IEEE, 2019. http://dx.doi.org/10.1109/cicc.2019.8780261.
2

Salahuddin, Saveef, Ava Tan, Suraj Cheema, Nirmaan Shanker, Michael Hoffmann, and J. H. Bae. "FeFETs for Near-Memory and In-Memory Compute". In 2021 IEEE International Electron Devices Meeting (IEDM). IEEE, 2021. http://dx.doi.org/10.1109/iedm19574.2021.9720622.
3

"Session 7 Overview: SRAM Compute-In-Memory: Memory Subcommittee". W 2023 IEEE International Solid- State Circuits Conference (ISSCC). IEEE, 2023. http://dx.doi.org/10.1109/isscc42615.2023.10067261.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
4

Wang, C. T., W. L. Chang, C. Y. Chen, and Douglas Yu. "Immersion in Memory Compute (ImMC) Technology". In 2020 IEEE Symposium on VLSI Technology. IEEE, 2020. http://dx.doi.org/10.1109/vlsitechnology18217.2020.9265019.
5

"Session 15 Overview: SRAM & Compute-In-Memory: Memory Subcommittee". W 2020 IEEE International Solid- State Circuits Conference - (ISSCC). IEEE, 2020. http://dx.doi.org/10.1109/isscc19947.2020.9063041.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
6

Patricia, A. Trephena, S. Ewins Pon Pushpa, and A. Diana Andrushia. "Analysis of Cache Memory Circuit for Compute In Memory Applications". In 2023 4th International Conference on Signal Processing and Communication (ICSPC). IEEE, 2023. http://dx.doi.org/10.1109/icspc57692.2023.10125285.
7

"Session 33 Overview: Non-Volatile Memory and Compute-In-Memory". W 2023 IEEE International Solid- State Circuits Conference (ISSCC). IEEE, 2023. http://dx.doi.org/10.1109/isscc42615.2023.10067750.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
8

Dutt, Shantanu. "Session details: Compute-in-Memory and Design of Structured Compute Arrays". In ICCAD '22: IEEE/ACM International Conference on Computer-Aided Design. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3582551.
9

Jao, Nicholas, Akshay Krishna Ramanathan, Srivatsa Srinivasa, Sumitha George, John Sampson, and Vijaykrishnan Narayanan. "Harnessing Emerging Technology for Compute-in-Memory Support". In 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). IEEE, 2018. http://dx.doi.org/10.1109/isvlsi.2018.00087.
10

Fick, D. "Analog Compute-in-Memory For AI Edge Inference". In 2022 IEEE International Electron Devices Meeting (IEDM). IEEE, 2022. http://dx.doi.org/10.1109/iedm45625.2022.10019367.

Organizational reports on the topic "In-memory compute"

1

Mayas, Magda. Creating with timbre. Norges Musikkhøgskole, August 2018. http://dx.doi.org/10.22501/nmh-ar.686088.

Abstract:
Unfolding processes of timbre and memory in improvisational piano performance.

This exposition is an introduction to my research and practice as a pianist, in which I unfold processes of timbre and memory in improvised music from a performer’s perspective. Timbre is often understood as a purely sonic perceptual phenomenon. However, this is not in accordance with a site-specific improvisational practice with changing spatial circumstances impacting the listening experience, nor does it take into account the agency of the instrument and objects used or the performer’s movements and gestures. In my practice, I have found a concept as part of the creating process in improvised music which has compelling potential: Timbre orchestration. My research takes the many and complex aspects of a performance environment into account and offers an extended understanding of timbre, which embraces spatial, material and bodily aspects of sound in improvised music performance. The investigative projects described in this exposition offer a methodology to explore timbral improvisational processes integrated into my practice, which is further extended through collaborations with sound engineers, an instrument builder and a choreographer:

- experiments in amplification and recording, resulting in Memory piece, a series of works for amplified piano and multichannel playback;
- Piano mapping, a performance approach, with a custom-built device for live spatialization as means to expand and deepen spatio-timbral relationships;
- Accretion, a project with choreographer Toby Kassell for three grand pianos and a pianist, where gestural approaches are used to activate and compose timbre in space.

Together, the projects explore memory as a structural, reflective and performative tool and the creation of performing and listening modes as integrated parts of timbre orchestration. Orchestration and choreography of timbre turn into an open and hybrid compositional approach, which can be applied to various contexts, engaging with dynamic relationships and re-configuring them.
2

Bartlett, Roscoe Ainsworth. Teuchos C++ memory management classes, idioms, and related topics, the complete reference: A comprehensive strategy for safe and efficient memory management in C++ for high performance computing. Office of Scientific and Technical Information (OSTI), May 2010. http://dx.doi.org/10.2172/992335.
3

Castro, Carolina Robledo, Piedad Rocio Lerma-Castaño, and Luis Gerardo Pachón-Ospina. Rehabilitation programs based on computational systems: effects in the executive functions in young and middle adulthood: A scoping review. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, October 2022. http://dx.doi.org/10.37766/inplasy2022.10.0052.

Abstract:
Review question / Objective: To identify empirical studies that measured the feasibility and effect of computer-based executive function stimulation and rehabilitation programs in the young and middle adult population. Background: Reviews that evaluate the effectiveness of computerized cognitive training programs on executive functions in different population groups have shown contradictory results, to a certain extent associated with the methodological characteristics of said studies (Gates et al., 2019; 2020); most of these reviews have focused on older adults (Ten Brinke et al., 2020; Yoo et al., 2015), adults with stroke sequelae, and adults with cognitive impairment. These studies have found improvements in general cognitive function in older adults (Ten Brinke et al., 2020); however, the effect on executive functions has not been studied. Only one review was carried out on the average adult (Gates et al., 2019); the authors restricted the search to interventions longer than 12 weeks and found only one article meeting the eligibility criteria. Their work concluded that computerized cognitive training in midlife demonstrated lasting effects on general cognitive function after 12 weeks of training and on memory after 24 weeks of training.
4

Amela, R., R. Badia, S. Böhm, R. Tosi, C. Soriano, and R. Rossi. D4.2 Profiling report of the partner’s tools, complete with performance suggestions. Scipedia, 2021. http://dx.doi.org/10.23967/exaqute.2021.2.023.

Abstract:
This deliverable focuses on the profiling activities developed in the project with the partner’s applications. To perform these profiling activities, a couple of benchmarks were defined in collaboration with WP5. The first benchmark is an embarrassingly parallel benchmark that performs a read and then multiple writes of the same object, with the objective of stressing the memory and storage systems and of evaluating the overhead when these reads and writes are performed in parallel. A second benchmark is defined based on the Continuation Multi Level Monte Carlo (C-MLMC) algorithm. While this algorithm is normally executed using multiple levels, for the profiling and performance analysis objectives, the execution of a single level was enough since the forthcoming levels have similar performance characteristics. Additionally, while the simulation tasks can be executed as parallel (multi-threaded) tasks, in the benchmark, single-threaded tasks were executed to increase the number of simulations to be scheduled and stress the scheduling engines. A set of experiments based on these two benchmarks has been executed on the MareNostrum 4 supercomputer, using PyCOMPSs as the underlying programming model and dynamic scheduler of the tasks involved in the executions. While the first benchmark was executed several times in a single iteration, the second benchmark was executed in an iterative manner, with cycles of 1) execution and trace generation; 2) performance analysis; 3) improvements. This enabled several improvements to be made in the benchmark and in the scheduler of PyCOMPSs. The initial iterations focused on the C-MLMC structure itself, performing re-factors of the code to remove fine-grain and sequential tasks and merging them into larger-granularity tasks. The next iterations focused on improving the PyCOMPSs scheduler, removing existent bottlenecks and increasing its performance by making the scheduler a multithreaded engine. While the results can still be improved, we are satisfied with them, since the granularity of the simulations run in this evaluation step is much finer than the one that will be used for the real scenarios. The deliverable finishes with some recommendations that should be followed along the project in order to obtain good performance in the execution of the project codes.
5

van der Mensbrugghe, Dominique. A Summary Guide to the Latin Hypercube Sampling (LHS) Utility. GTAP Working Paper, May 2023. http://dx.doi.org/10.21642/gtap.wp94.

Abstract:
Latin Hypercube Sampling (LHS) is one method of Monte Carlo-type sampling, which is useful for limiting sample size yet maximizing the range of sampling of the underlying distributions. The LHS utility, for which this document describes the usage, also allows for user-specified correlations between two or more of the sampled distributions. The LHS utility described herein is a full re-coding using C/C++ of the original LHS utility—developed at Sandia National Labs (Swiler and Wyss (2004)), written in FORTRAN and freely available. The re-coding hones close to the original FORTRAN code, but allows for significantly more flexibility. For example, dynamic memory allocation is used for all internal variables and hence there are no pre-determined dimensions. The new utility has additional features compared to the original FORTRAN code: (1) it includes 10 new statistical distributions; (2) it has four additional output formats; and (3) it has an alternative random number generator. This guide provides a summary of the full features of the LHS utility. For a complete reference, with the exception of the new features, as well as a description of the intuition behind the LHS algorithm users are referred to Swiler and Wyss (2004)
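
As an aside for readers who want to experiment with Latin Hypercube Sampling without the C/C++ utility described here, SciPy ships a generic LHS engine. The dimension, sample count, and the mapping to a normal marginal below are illustrative choices and are unrelated to the GTAP utility's own options.

```python
# Generic Latin Hypercube Sampling with SciPy (illustrative; not the GTAP LHS utility).
from scipy.stats import qmc, norm

sampler = qmc.LatinHypercube(d=2, seed=42)   # two dimensions for illustration
unit_samples = sampler.random(n=10)          # 10 points in the unit hypercube, one per stratum per dimension

# Map the uniform strata onto a target marginal, e.g. a standard normal for the first variable
normal_samples = norm.ppf(unit_samples[:, 0])
print(unit_samples)
print(normal_samples)
```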
6

Tan, Peng, and Nicholas Sitar. Parallel Level-Set DEM (LS-DEM) Development and Application to the Study of Deformation and Flow of Granular Media. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, March 2023. http://dx.doi.org/10.55461/kmiz5819.

Abstract:
We present a systematic investigation of computational approaches to the modeling of granular materials. Granular materials are ubiquitous in everyday life and in a variety of engineering and industrial applications. Despite the apparent simplicity of the laws governing particle-scale interactions, predicting the continuum mechanical response of granular materials still poses extraordinary challenges. This is largely due to the complex history dependence resulting from continuous rearrangement of the microstructure of granular material, as well as the mechanical interlocking due to grain morphology and surface roughness. X-Ray Computed Tomography (XRCT) is used to characterize the grain morphology and the fabric of the granular media, naturally deposited sand in this study. The Level-Set based Discrete Element Method (LS-DEM) is then used to bridge the granular behavior gap between the micro and macro scale. The LS-DEM establishes a one-to-one correspondence between granular objects and numerical avatars and captures the details of grain morphology and surface roughness. However, the high-fidelity representation significantly increases the demands on computational resources. To this end a parallel version of LS-DEM is introduced to significantly decrease the computational demands. The code employs a binning algorithm, which reduces the search complexity of contact detection from O(n²) to O(n), and a domain decomposition strategy is used to elicit parallel computing in a memory- and communication-efficient manner. The parallel implementation shows good scalability and efficiency. High fidelity LS avatars obtained from XRCT images of naturally deposited sand are then used to replicate the results of triaxial tests using the new, parallel LS-DEM code. The results show that both micro- and macro-mechanical behavior of natural material is well captured and is consistent with experimental data, confirming experimental observation that the primary source of peak strength of sand is the mechanical interlocking between irregularly shaped grains. Specifically, triaxial test simulations with a flexible membrane produce a very good match to experimentally observed relationships between deviatoric stress and mobilized friction angle for naturally deposited sand. We then explore the viability of modeling dynamic problems with a new formulation of an impulse-based LS-DEM. The new formulation is stable, fast, and energy conservative. However, it can be numerically stiff when the assembly has substantial mass differences between particles. We also demonstrate the feasibility of modeling deformable structures in the rigid body framework and propose several enhancements to improve the convergence of collision resolution, including a hybrid time integration scheme to separately handle at rest contacts and dynamic collisions. Finally, we extend the impulse-based LS-DEM to include arbitrarily shaped topographic surfaces and exploit its algorithmic advantages to demonstrate the feasibility of modeling realistic behavior of granular flows. The novel formulation significantly improves performance of dynamic simulations by allowing larger time steps, which is advantageous for observing the full development of physical phenomena such as rock avalanches, which we present as an illustrative example.
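
The report credits a binning (cell-list) algorithm for reducing contact-detection complexity from O(n²) to roughly O(n). A generic cell-list sketch in Python (not the parallel LS-DEM implementation) shows the idea: particles are hashed into grid cells and only neighbouring cells are searched for contact candidates. The cell size and particle count are arbitrary illustrative values.

```python
# Generic cell-list (binning) neighbour search, the idea behind O(n) contact detection (not LS-DEM code).
from collections import defaultdict
from itertools import product
import numpy as np

def candidate_pairs(positions: np.ndarray, cell_size: float):
    """Yield particle pairs that share a grid cell or sit in neighbouring cells."""
    cells = defaultdict(list)
    for i, p in enumerate(positions):
        cells[tuple((p // cell_size).astype(int))].append(i)   # hash each particle into its grid cell

    for cell, members in cells.items():
        for offset in product((-1, 0, 1), repeat=positions.shape[1]):
            neighbour = tuple(c + o for c, o in zip(cell, offset))
            for i in members:
                for j in cells.get(neighbour, []):
                    if i < j:                                   # report each candidate pair once
                        yield i, j

positions = np.random.rand(1000, 3)
pairs = list(candidate_pairs(positions, cell_size=0.1))        # far fewer than the ~500k brute-force pairs
print(len(pairs))
```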
7

Palmer, Guy, Varda Shkap, Wendy Brown, and Thea Molad. Control of bovine anaplasmosis: cytokine enhancement of vaccine efficacy. United States Department of Agriculture, March 2007. http://dx.doi.org/10.32747/2007.7695879.bard.

Abstract:
Anaplasmosis is an arthropod-borne disease of cattle caused by the rickettsia Anaplasma marginale and is an impediment to efficient production of healthy livestock in both Israel and the United States. Currently the only effective vaccines are derived from the blood of infected cattle. The risk of widespread transmission of both known and newly emergent pathogens has prevented licensure of live blood-based vaccines in the U.S. and is a major concern for their continued use in Israel. Consequently development of a safe, effective vaccine is a high priority. In this collaborative project we focused on two approaches to vaccine development. The first focused on improving antigen delivery to livestock and specifically examined how DNA vaccines could be improved to enhance priming and expansion of the immune response. This research resulted in development and testing of two novel vaccine delivery systems: one that targeted antigen spread among dendritic cells (the key cell in priming immune responses) and a follow-on construct that also specifically targeted antigen to the endosomal-lysosomal compartment (the processing organelle within the dendritic cell that directs vaccine antigen to the MHC class II-CD4+ T cell priming pathway). The optimized construct targeting vaccine antigen to the dendritic cell MHC class II pathway was tested for ability to prime A. marginale specific immune responses in outbred cattle. The results demonstrated statistically significant effects of priming with a single immunization, continued expansion of the primary immune response including development of high affinity IgG antibodies, and rapid recall of the memory response following antigen challenge. This portion of the study represented a significant advance in vaccine delivery for livestock. Importantly, the impact of these studies is not limited to A. marginale, as the targeting motifs are optimized for cattle and can be adapted to other cattle vaccinations by inserting a relevant pathogen-specific antigen. The second approach (which represented an addition to the project for which approval was requested as part of the first annual report) was a comparative approach between A. marginale and the Israel A. centrale vaccine strain. This addition was requested as studies on Major Surface Protein (MSP)-2 have shown that this antigen is highly antigenically variable and, presented solely as a "static vaccine" antigen, does not give cross-strain immunity. In contrast, A. centrale is an effective vaccine which the Kimron Veterinary Institute has used in the field in Israel for over 50 years. Taking advantage of this expertise, a broad comparison of wild-type A. marginale and the vaccine strain was initiated. These studies revealed three primary findings: i) use of the vaccine is associated with superinfection, but absence of clinical disease upon superinfection with A. marginale; ii) the A. centrale vaccine strain is not only less virulent but is also transmission incompetent in Dermacentor spp. ticks; and iii) some but not all MSPs are conserved in basic orthologous structure, but there are significant polymorphisms among the strains. These studies clearly indicated that there are statistically significant differences in biology (virulence and transmission) and provide a clear path for mapping of biology with the genomes.
Based on these findings, we initiated complete genome sequencing of the Israel vaccine strain (although not currently funded by BARD) and plan to proceed with a comparative genomics approach using the already sequenced wild-type A. marginale. These findings and ongoing collaborative research tie together field vaccine experience with new genomic data, providing a new approach to vaccine development against a complex pathogen.
8

DESIGN OF THE DEPLOYABLE-FOLDABLE ACTUATOR AND VIBRATION CONTROL DEVICE BASED ON THE SHAPE MEMORY ALLOYS WITH A TWO-WAY EFFECT. The Hong Kong Institute of Steel Construction, August 2022. http://dx.doi.org/10.18057/icass2020.p.306.

Abstract:
The paper aims at the design method of the space deployable-foldable actuator and vibration control device, and the selected material is the shape memory alloy. These devices can repeatedly adjust the deploy and fold states by changing the temperature, and also present a large energy dissipation to keep the stability of the structures in the vibration control. It can be observed that the fabricated two-way shape memory alloy actuator can present steady fold-deploy procedures more than five times, in which the recoverable rate is higher than 95.83%, and the required time in the complete deploying process is 15 s. Meanwhile, the vibration control device based on the shape memory alloys also gives an excellent performance, the lightweight device is only 315 g, and the vibration in the vertical direction can be limited to the millimeter-level (0.917 mm), it can also endure the repeated loadings in the applications and keep a good operating condition.