Journal articles on the topic "Memory (Artificial)"

To see the other types of publications on this topic, follow the link: Memory (Artificial).

Format your source in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Memory (Artificial)".

Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these are available in the metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

Huang, Guang-qiu. "Artificial memory optimization." Applied Soft Computing 61 (December 2017): 497–526. http://dx.doi.org/10.1016/j.asoc.2017.08.021.

2

Wan, Changjin, Pingqiang Cai, Ming Wang, Yan Qian, Wei Huang, and Xiaodong Chen. "Artificial Sensory Memory." Advanced Materials 32, no. 15 (July 30, 2019): 1902434. http://dx.doi.org/10.1002/adma.201902434.

3

Kim, Dongshin, and Jang-Sik Lee. "Liquid-based memory and artificial synapse." Nanoscale 11, no. 19 (2019): 9726–32. http://dx.doi.org/10.1039/c9nr02767j.

4

Park, Youngjun, Min-Kyu Kim, and Jang-Sik Lee. "Emerging memory devices for artificial synapses." Journal of Materials Chemistry C 8, no. 27 (2020): 9163–83. http://dx.doi.org/10.1039/d0tc01500h.

Abstract:
This paper reviews recent developments in artificial synapses that exploit various emerging memory devices. The emulation of synaptic plasticity and operation mechanism of artificial synapses using various materials and structures are presented.
5

Welberg, Leonie. "Artificial activation of a memory trace." Nature Reviews Neuroscience 13, no. 5 (April 13, 2012): 287. http://dx.doi.org/10.1038/nrn3242.

6

Li, Xianneng, and Guangfei Yang. "Artificial bee colony algorithm with memory." Applied Soft Computing 41 (April 2016): 362–72. http://dx.doi.org/10.1016/j.asoc.2015.12.046.

7

Chen, Yujie, Chi Chen, Hafeez Ur Rehman, Xu Zheng, Hua Li, Hezhou Liu, and Mikael S. Hedenqvist. "Shape-Memory Polymeric Artificial Muscles: Mechanisms, Applications and Challenges." Molecules 25, no. 18 (September 16, 2020): 4246. http://dx.doi.org/10.3390/molecules25184246.

Abstract:
Shape-memory materials are smart materials that can remember an original shape and return to their unique state from a deformed secondary shape in the presence of an appropriate stimulus. This property allows these materials to be used as shape-memory artificial muscles, which form a subclass of artificial muscles. The shape-memory artificial muscles are fabricated from shape-memory polymers (SMPs) by twist insertion, shape fixation via Tm or Tg, or by liquid crystal elastomers (LCEs). The prepared SMP artificial muscles can be used in a wide range of applications, from biomimetic and soft robotics to actuators, because they can be operated without sophisticated linkage design and can achieve complex final shapes. Recently, significant achievements have been made in fabrication, modelling, and manipulation of SMP-based artificial muscles. This paper presents a review of the recent progress in shape-memory polymer-based artificial muscles. Here we focus on the mechanisms of SMPs, applications of SMPs as artificial muscles, and the challenges they face concerning actuation. While shape-memory behavior has been demonstrated in several stimulated environments, our focus is on thermal-, photo-, and electrical-actuated SMP artificial muscles.
8

Caravelli, Francesco, Gia-Wei Chern, and Cristiano Nisoli. "Artificial spin ice phase-change memory resistors." New Journal of Physics 24, no. 2 (February 1, 2022): 023020. http://dx.doi.org/10.1088/1367-2630/ac4c0a.

Abstract:
We present a proposal for realization of an electrical memory reminiscent of a memristor in connected Kagome artificial spin ice. We show that current flowing through the system alters the magnetic ensemble, which in turn controls the overall resistance, thus leaving memory of current passage in the system. This introduces a current-dependent effect for a dynamic resistive state. We simulate a spin-induced thermal phase-change mechanism, and an athermal domain-wall spin inversion. In both cases we observe electrical memory behavior with an I–V hysteretic pinched loop, typical of memristors. These results can be extended to the more complex geometries in which artificial spin ice can be designed to engineer the hysteresis curve.
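The pinched I–V hysteresis loop the abstract identifies as the memristor signature can be reproduced with a generic linear-drift memristor model (an illustrative sketch only, not the paper's spin-ice mechanism; all parameter values are invented):

```python
import numpy as np

# Linear-drift memristor sketch: resistance interpolates between Ron and Roff
# according to an internal state x in [0, 1] that drifts with the current.
Ron, Roff, k = 100.0, 16e3, 2000.0   # illustrative parameter values
x = 0.1                              # initial internal state
t = np.linspace(0.0, 2 * np.pi, 4000)
dt = t[1] - t[0]
v = np.sin(t)                        # one period of sinusoidal drive
i_trace = []
for vt in v:
    R = Ron * x + Roff * (1.0 - x)   # state-dependent resistance
    i = vt / R
    x = min(max(x + k * i * dt, 0.0), 1.0)  # state drifts with charge flow
    i_trace.append(i)
i_trace = np.array(i_trace)
# Current is zero whenever the voltage is zero, yet the rising and falling
# branches of the I-V curve differ: a hysteresis loop pinched at the origin.
```

Because the state only grows while current flows, the same voltage on the falling branch meets a lower resistance than on the rising branch, which is exactly the memory-of-current-passage effect the abstract describes.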
9

Izawa, Hideki, Yukio Sekiguchi, and Yasuhito Shiota. "The artificial muscle from shape memory alloy." Journal of Life Support Engineering 17, Supplement (2005): 124. http://dx.doi.org/10.5136/lifesupport.17.supplement_124.

10

Querlioz, Damien. "(Invited) Memory-Centric Artificial Intelligence with Nanodevices." ECS Meeting Abstracts MA2020-01, no. 24 (May 1, 2020): 1387. http://dx.doi.org/10.1149/ma2020-01241387mtgabs.

11

Takashima, Kazuto, Jonathan Rossiter, and Toshiharu Mukai. "McKibben artificial muscle using shape-memory polymer." Sensors and Actuators A: Physical 164, no. 1-2 (November 2010): 116–24. http://dx.doi.org/10.1016/j.sna.2010.09.010.

12

Lo, Shih-Hsiang, Che-Rung Lee, Quey-Liang Kao, I-Hsin Chung, and Yeh-Ching Chung. "Improving GPU Memory Performance with Artificial Barrier Synchronization." IEEE Transactions on Parallel and Distributed Systems 25, no. 9 (September 2014): 2342–52. http://dx.doi.org/10.1109/tpds.2013.133.

13

Buddefeld, J., and K. E. Grosspietsch. "Intelligent-memory architecture for artificial neural networks." IEEE Micro 22, no. 3 (May 2002): 32–40. http://dx.doi.org/10.1109/mm.2002.1013302.

14

Saraf, Santosh. "Associative Memory Implementation with Artificial Neural Networks." International Journal of Research in Engineering and Technology 03, no. 15 (May 25, 2014): 152–54. http://dx.doi.org/10.15623/ijret.2014.0315028.

15

ISHIKAWA, Toshiya, and Takeshi NAKADA. "Shape Memory Alloy Actuator for Artificial Muscle." Journal of Environment and Engineering 5, no. 1 (2010): 105–13. http://dx.doi.org/10.1299/jee.5.105.

16

M, Robert. "A Successful Artificial Memory Has Been Created." Scientific American 30, no. 6 (November 2019): 12. http://dx.doi.org/10.1038/scientificamericanmind1119-12.

17

Manca, Vincenzo. "Artificial Neural Network Learning, Attention, and Memory." Information 15, no. 7 (July 2, 2024): 387. http://dx.doi.org/10.3390/info15070387.

Abstract:
The learning equations of an ANN are presented, giving an extremely concise derivation based on the principle of backpropagation through the descendent gradient. Then, a dual network is outlined acting between synapses of a basic ANN, which controls the learning process and coordinates the subnetworks selected by attention mechanisms toward purposeful behaviors. Mechanisms of memory and their affinity with comprehension are considered, by emphasizing the common role of abstraction and the interplay between assimilation and accommodation, in the spirit of Piaget’s analysis of psychological acquisition and genetic epistemology. Learning, comprehension, and knowledge are expressed as different levels of organization of informational processes inside cognitive systems. It is argued that formal analyses of cognitive artificial systems could shed new light on typical mechanisms of “natural intelligence” and, in a specular way, that models of natural cognition processes could promote further developments of ANN models. Finally, new possibilities of chatbot interaction are briefly discussed.
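The backpropagation-through-gradient-descent derivation the abstract refers to can be sketched for a minimal one-hidden-layer network (an illustrative NumPy sketch, not the paper's own formulation; the data, dimensions, and learning rate are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 2))                           # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # separable toy target
W1 = rng.normal(size=(2, 4))                          # hidden-layer weights
W2 = rng.normal(size=(4, 1))                          # output-layer weights

def forward(X):
    h = np.tanh(X @ W1)                               # hidden activations
    return h, 1 / (1 + np.exp(-h @ W2))               # sigmoid output

mse_before = float(np.mean((forward(X)[1] - y) ** 2))
lr = 0.1
for _ in range(1000):
    h, out = forward(X)
    # backward pass: chain rule from the squared error to each weight matrix
    d_out = (out - y) * out * (1 - out)               # error * sigmoid derivative
    d_h = (d_out @ W2.T) * (1 - h ** 2)               # propagate through tanh
    W2 -= lr * h.T @ d_out                            # descend the gradient
    W1 -= lr * X.T @ d_h
mse_after = float(np.mean((forward(X)[1] - y) ** 2))
```

Each update moves the weights a small step against the error gradient, so the mean squared error shrinks over the iterations; this is the learning mechanism whose concise derivation the paper presents.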
18

Dushkin, R. V., V. A. Lelekova, V. Y. Stepankov, and S. Fadeeva. "Structure of associative heterarchical memory." Russian Technological Journal 10, no. 5 (October 20, 2022): 7–15. http://dx.doi.org/10.32362/2500-316x-2022-10-5-7-15.

Abstract:
Objectives. Since the 20th century, artificial intelligence methods can be divided into two paradigms: top-down and bottom-up. While the methods of the ascending paradigm are difficult to interpret as natural language outputs, those applied according to the descending paradigm make it difficult to actualize information. Thus, natural language processing (NLP) by artificial intelligence remains a pressing problem of our time. The main task of NLP is to create applications that can process and understand natural languages. According to the presented approach to the construction of artificial intelligence agents (AI-agents), processing of natural language should be conducted at two levels: at the bottom, methods of the ascending paradigm are employed, while symbolic methods associated with the descending paradigm are used at the top. To solve these problems, the authors of the present paper propose a new mathematical formalism: associative heterarchical memory (AH-memory), whose structure and functionality are based both on bionic principles and on the achievements of top-down and bottom-up artificial intelligence paradigms. Methods. Natural language recognition algorithms were used in conjunction with various artificial intelligence methods. Results. The problem of character binding as applied to AH-memory was explored by the research group in earlier research. Here, abstract symbol binding was performed using multi-serial integration, eventually converting the primary symbols produced by the program into integrated abstract symbols. The present paper provides a comprehensive description of AH-memory in the form of formulas, along with their explanations and corresponding schemes. Conclusions. The most universal structure of AH-memory is presented. When working with AH-memory, a developer should select from a variety of possible module sets those AH-memory components that support the most successful and efficient functioning of the AI-agent.
19

Green, HS, and T. Triffet. "Quantum Mechanics, Real and Artificial Intelligence." Australian Journal of Physics 44, no. 3 (1991): 323. http://dx.doi.org/10.1071/ph910323.

Abstract:
Some incompletely resolved problems in the quantal theories of measurement and observation are discussed with reference to Schrodinger's 'cat paradox' and the paradox of Wigner's friend. A simple version of the theory of measurement is presented, which does not completely resolve these paradoxes but suggests the need for an objective quantal description of the process of observation, and the formation of memory, of an event originating at the microscopic level, by an animal or artificial intelligence. A quantised model is then developed to simulate the function of the cerebral cortex in the formation of memory of sensory impressions, with macroscopic observables expressed in terms of parafermion operators of very large order. A letter from Schrodinger, which corrects some published versions of his paradox, is presented as well as a short account of the simulated formation of long-term memory by the model in an appendix.
20

Li, Xuesong, Pan Zeng, Feilong Wang, Dai Zhang, Yi Zhou, Rongqing Liang, Qiongrong Ou, Xiang Wu, and Shuyu Zhang. "A nanoimprinted artificial engram device." Nanoscale Horizons 6, no. 9 (2021): 718–28. http://dx.doi.org/10.1039/d1nh00064k.

Abstract:
A nanoimprinted artificial engram device is presented, which meets all the requirements for engrams including synaptic plasticity, long memory storage time, asymmetric memorizing-forgetting behavior and measurable changes and responses.
21

Kulyukin, Vladimir A. "On Correspondences between Feedforward Artificial Neural Networks on Finite Memory Automata and Classes of Primitive Recursive Functions." Mathematics 11, no. 12 (June 8, 2023): 2620. http://dx.doi.org/10.3390/math11122620.

Abstract:
When realized on computational devices with finite quantities of memory, feedforward artificial neural networks and the functions they compute cease being abstract mathematical objects and turn into executable programs generating concrete computations. To differentiate between feedforward artificial neural networks and their functions as abstract mathematical objects and the realizations of these networks and functions on finite memory devices, we introduce the categories of general and actual computabilities and show that there exist correspondences, i.e., bijections, between functions computable by trained feedforward artificial neural networks on finite memory automata and classes of primitive recursive functions.
22

Widrow, Bernard, and Juan Carlos Aragon. "Cognitive memory." Neural Networks 41 (May 2013): 3–14. http://dx.doi.org/10.1016/j.neunet.2013.01.016.

23

Park, Taewon, Inchul Choi, and Minho Lee. "Distributed associative memory network with memory refreshing loss." Neural Networks 144 (December 2021): 33–48. http://dx.doi.org/10.1016/j.neunet.2021.07.030.

24

Meshref, Hossam. "Novel Implementation of the Artificial Immune System Memory." International Journal of Computer Applications 155, no. 13 (December 15, 2016): 36–41. http://dx.doi.org/10.5120/ijca2016912585.

25

TAKAGI, Toshiyuki, Yun LUO, Shinya HARA, Tomoyuki YAMABE, Shintaro AMAE, Motoki WADA, and Hirokazu NAKAMURA. "An artificial sphincter using shape memory alloy actuators." Journal of Advanced Science 12, no. 3 (2000): 337–42. http://dx.doi.org/10.2978/jsas.12.337.

26

TAKAGI, Toshiyuki, Yun LUO, Hirokazu NAKAMURA, Shintaro AMAE, Tomoyuki YAMBE, Takamichi KAMIYAMA, Motoki WADA, Shinya Hara, Jun Makino, and Kiyoshi Yamauchi. "Application of Shape Memory Alloys in Artificial Sphincters." Proceedings of the JSME annual meeting 2000.1 (2000): 55–56. http://dx.doi.org/10.1299/jsmemecjo.2000.1.0_55.

27

Patnaik, L. M., and J. Mohan Kumar. "Distributed memory systems for simulating artificial neural networks." Computers & Electrical Engineering 19, no. 6 (November 1993): 431–43. http://dx.doi.org/10.1016/0045-7906(93)90019-n.

28

Kunze, Donald. "Chiasmus, Artificial Memory, and the Arts of Arrangement." Nexus Network Journal 12, no. 3 (September 15, 2010): 377–88. http://dx.doi.org/10.1007/s00004-010-0041-5.

29

BROM, CYRIL, JIŘÍ LUKAVSKÝ, and RUDOLF KADLEC. "EPISODIC MEMORY FOR HUMAN-LIKE AGENTS AND HUMAN-LIKE AGENTS FOR EPISODIC MEMORY." International Journal of Machine Consciousness 02, no. 02 (December 2010): 227–44. http://dx.doi.org/10.1142/s1793843010000461.

30

Munir, Aumm-e.-hani, and Wajahat Mahmood Qazi. "Artificial Subjectivity: Personal Semantic Memory Model for Cognitive Agents." Applied Sciences 12, no. 4 (February 11, 2022): 1903. http://dx.doi.org/10.3390/app12041903.

Abstract:
Personal semantic memory is a way of inducing subjectivity in intelligent agents. Personal semantic memory has knowledge related to personal beliefs, self-knowledge, preferences, and perspectives in humans. Modeling this cognitive feature in the intelligent agent can help them in perception, learning, reasoning, and judgments. This paper presents a methodology for the development of personal semantic memory in response to external information. The main contribution of the work is to propose and implement the computational version of personal semantic memory. The proposed model has modules for perception, learning, sentiment analysis, knowledge representation, and personal semantic construction. These modules work in synergy for personal semantic knowledge formulation, learning, and storage. Personal semantics are added to the existing body of knowledge qualitatively and quantitatively. We performed multiple experiments where the agent had conversations with the humans. Results show an increase in personal semantic knowledge in the agent’s memory during conversations with an F1 score of 0.86. These personal semantics evolved qualitatively and quantitatively with time during experiments. Results demonstrated that agents with the given personal semantics architecture possessed personal semantics that can help the agent to produce some sort of subjectivity in the future.
31

Bhatnagar, Priyanka, Thanh Tai Nguyen, Sangho Kim, Ji Heun Seo, Malkeshkumar Patel, and Joondong Kim. "Transparent photovoltaic memory for neuromorphic device." Nanoscale 13, no. 10 (2021): 5243–50. http://dx.doi.org/10.1039/d0nr08966d.

32

Qin, Ling, Siqi Cheng, Bingyang Xie, Xianhua Wei, and Wenjing Jie. "Co-existence of bipolar nonvolatile and volatile resistive switching based on WO3 nanowire for applications in neuromorphic computing and selective memory." Applied Physics Letters 121, no. 9 (August 29, 2022): 093502. http://dx.doi.org/10.1063/5.0113433.

Abstract:
A two-terminal memristor can be used for information memory and logic operation as well as serving as an artificial synapse for neuromorphic computing. Selective memory, in which some desirable information is remembered and other information is screened out, can be emulated by an artificial synapse. In this work, a memristor based on a single WO3 nanowire is constructed, which demonstrates the co-existence of bipolar nonvolatile and volatile resistive switching (RS) behaviors that can be tuned by the amplitude of the operation voltage. For small operation voltages (2 V), the device demonstrates nonvolatile analog RS, which can be utilized as an artificial synapse with long- and short-term plasticity. The learning–forgetting experience of humans can be emulated based on the artificial synapse. Moreover, the artificial synapse can be used for image recognition, with recognition accuracy up to 94% for small hand-written images. On the other hand, volatile RS can be observed with large operation voltages (6 V). Furthermore, based on the diverse nonvolatile and volatile RS behaviors, selective memory can be emulated. Our fabricated memristor can be used as an artificial synapse to achieve image recognition and to emulate selective memory, which paves the way to construct smart neuromorphic systems facing complex information.
33

Hochreiter, Sepp, and Jürgen Schmidhuber. "Long Short-Term Memory." Neural Computation 9, no. 8 (November 1, 1997): 1735–80. http://dx.doi.org/10.1162/neco.1997.9.8.1735.

Abstract:
Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
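The gating and constant-error-carousel mechanism the abstract describes can be sketched as a single LSTM step (a minimal NumPy illustration in the modern formulation, which includes a forget gate that the original 1997 architecture did not yet have; all sizes and names are invented):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step: multiplicative gates control access to the cell
    state c, whose additive update keeps error flow nearly constant."""
    z = W @ np.concatenate([x, h_prev]) + b       # all four pre-activations
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input, forget, output gates
    c = f * c_prev + i * np.tanh(g)               # additive memory: the carousel
    h = o * np.tanh(c)                            # gated output
    return h, c

# Tiny usage example with hypothetical sizes
n_in, n_hid = 3, 4
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for _ in range(5):                                # run a short input sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
```

The cell state is updated additively rather than by repeated matrix multiplication, which is why gradients passed through it neither vanish nor explode over long time lags.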
34

Molas, Gabriel, and Etienne Nowak. "Advances in Emerging Memory Technologies: From Data Storage to Artificial Intelligence." Applied Sciences 11, no. 23 (November 27, 2021): 11254. http://dx.doi.org/10.3390/app112311254.

Abstract:
This paper presents an overview of emerging memory technologies. It begins with the presentation of stand-alone and embedded memory technology evolution, since the appearance of Flash memory in the 1980s. Then, the progress of emerging memory technologies (based on filamentary, phase change, magnetic, and ferroelectric mechanisms) is presented with a review of the major demonstrations in the literature. The potential of these technologies for storage applications addressing various markets and products is discussed. Finally, we discuss how the rise of artificial intelligence and bio-inspired circuits offers an opportunity for emerging memory technology and shifts the application from pure data storage to storage and computing tasks, and also enlarges the range of required specifications at the device level due to the exponential number of new systems and architectures.
35

Maua, D. D., C. P. De Campos, and M. Zaffalon. "Solving Limited Memory Influence Diagrams." Journal of Artificial Intelligence Research 44 (May 21, 2012): 97–140. http://dx.doi.org/10.1613/jair.3625.

Abstract:
We present a new algorithm for exactly solving decision making problems represented as influence diagrams. We do not require the usual assumptions of no forgetting and regularity; this allows us to solve problems with simultaneous decisions and limited information. The algorithm is empirically shown to outperform a state-of-the-art algorithm on randomly generated problems of up to 150 variables and 10^64 solutions. We show that these problems are NP-hard even if the underlying graph structure of the problem has low treewidth and the variables take on a bounded number of states, and that they admit no provably good approximation if variables can take on an arbitrary number of states.
36

Zigunovs, Maksims. "THE ALZHEIMER’S DISEASE IMPACT ON ARTIFICIAL NEURAL NETWORKS." ENVIRONMENT. TECHNOLOGIES. RESOURCES. Proceedings of the International Scientific and Practical Conference 2 (June 17, 2021): 205–9. http://dx.doi.org/10.17770/etr2021vol2.6632.

Abstract:
The main impact of Alzheimer's Disease on the brain is memory loss. In the "neuron world" this disorders signal impulses and disconnects neurons, causing neuron death and memory loss. The main aim of the research is to determine the average loss of signal and to develop memory-loss prediction models for an artificial neural network. The Izhikevich neural network model is often used for modeling neural electrical signals. The model's signal rhythm and spikes are used as neuron characteristics for understanding whether the system is stable at a certain moment in time. In addition, the electrical signal parameters are used in a similar way as in a biological brain. During the research, the neural network's initial conditions are assumed to be randomly selected within the specified range of the working neuron's average sigma and I parameters.
37

Bahr, Gisela Susanne, William H. Allen, Philip J. Bernhard, and Stephen Wood. "The Artificial Memory of Mr. Polly: Memory Simulation in Databases and the Emergence of Knowledge." Leonardo 52, no. 3 (June 2019): 300–304. http://dx.doi.org/10.1162/leon_a_01441.

Abstract:
Human memory may be characterized by five dimensions: (1) large capacity; (2) associativity; (3) diversity of memory systems; (4) change over time; and (5) a unified memory experience. The organization and multidimensionality underlying memory can be represented with set theory. This offers a new mathematical perspective, which is the foundation for the cognitive memory architecture Ardemia. The authors present a relational database implementation of Ardemia that supports the creation of the artificial memory of Mr. Polly, the main character in H.G. Wells’s novel The History of Mr. Polly. In addition to the implementation of Mr. Polly’s artificial memory using TimeGlue, his memory is probed with a collection of everyday memory queries that are related to temporal and schema knowledge. The investigation of Mr. Polly’s knowledge suggests an alternative representation of schemas; rather than fixed structures or explicit associations, it is possible to model schemas as the results of the interaction between existing knowledge and remembering.
38

Ramos Grané, Marta. "Manipulating artificial memory: An example of mistake in recalling." Fortunatae. Revista Canaria de Filología, Cultura y Humanidades Clásicas 38, no. 2 (2023): 59–69. http://dx.doi.org/10.25145/j.fortunat.2023.38.04.

Abstract:
In late medieval Scholastic thought, the sensory experience from the Aristotelian tradition is essential for the knowledge of reality. In the case of the arts of memory, such as Romberch’s Congestorium (Venice, 1520), theories of perception are applied to the mental formation of places and images, key elements of memory systems «per locos et imagines». Through an example taken from the Congestorium, in which the author acknowledges his error in the process of imagining places, one can appreciate how theories of perception are applied to mental processes. Thus, after a detailed description of the rules for the formation of mental places, Romberch defends the importance of experimentation and one’s own (sensory) experience to generate effective artificial memory systems.
39

Mikki, Said. "Generalized Neuromorphism and Artificial Intelligence: Dynamics in Memory Space." Symmetry 16, no. 4 (April 18, 2024): 492. http://dx.doi.org/10.3390/sym16040492.

Abstract:
This paper introduces a multidisciplinary conceptual perspective encompassing artificial intelligence (AI), artificial general intelligence (AGI), and cybernetics, framed within what we call the formalism of generalized neuromorphism. Drawing from recent advancements in computing, such as neuromorphic computing and spiking neural networks, as well as principles from the theory of open dynamical systems and stochastic classical and quantum dynamics, this formalism is tailored to model generic networks comprising abstract processing events. A pivotal aspect of our approach is the incorporation of the memory space and the intrinsic non-Markovian nature of the abstract generalized neuromorphic system. We envision future computations taking place within an expanded space (memory space) and leveraging memory states. Positioned at a high abstract level, generalized neuromorphism facilitates multidisciplinary applications across various approaches within the AI community.
40

Volodina, O. V., A. A. Skvortsov, and V. K. Nikolaev. "Memory computing based on thermal memory elements." E3S Web of Conferences 458 (2023): 01009. http://dx.doi.org/10.1051/e3sconf/202345801009.

Abstract:
The article analyses the possibility of using elements of thermal memory to create a system that allows calculations to be performed in memory. Such a computing system is built on devices that are used simultaneously for storing input data, performing a logical operation, and storing the output result. The authors conclude that it is possible to emulate this behaviour by using thermal memory elements with a dielectric (SiO2) layer of thermal insulation. Special attention is paid to the logic gates of computing systems and their realisation on the basis of thermal memory elements. Simulation modelling of the work of such elements is carried out on the ANSYS Workbench platform using the Transient Thermal module for non-stationary thermal calculations. On the basis of the modelling, the possibility of creating two basic logic gates, "AND" and "OR", is established. The results can be used to create more integrated structures, such as artificial neural networks.
41

Dushkin, R. V., V. A. Lelecova, and K. Yu Eidemiller. "System of Operations on Associative Heterarchical Memory." Programmnaya Ingeneria 14, no. 2 (February 8, 2023): 69–76. http://dx.doi.org/10.17587/prin.14.69-76.

Abstract:
Text processing in natural language remains an important task for the development of artificial intelligence methods and tools. Since the twentieth century, artificial intelligence methods have been divided into two paradigms: top-down and bottom-up. The methods of the ascending paradigm are difficult to interpret as natural language output, while with the methods of the descending paradigm it is difficult to actualize information. Following the authors' approach to the construction of artificial intelligence agents, the processing of natural language must be performed on two levels: on the lower level using methods of the bottom-up paradigm, and on the upper level using symbolic methods of the top-down paradigm. The authors of the article have already introduced a new mathematical formalism based on the notion of a hypergraph: associative heterarchical memory (AH-memory). Such memory should simplify the process of natural language processing with new technologies. Earlier, the authors' group thoroughly analyzed the problem of symbol binding as applied to AH-memory and its structure. In the first paper, abstract symbol binding was performed using multi-serial integration, eventually converting the primary symbols received by the program into integrated abstract symbols. The second paper provided a comprehensive description of the AH-memory in the form of formulas, explanations of them, and their corresponding diagrams. Although there are many possible modules to use, the developer working with AH-memory should choose those parts of AH-memory which are required for successful and efficient functioning of the AI agent. The article will be of interest to developers of artificial intelligence methods and tools, mathematicians, and specialists in natural language processing.
42

Wu, Jiann-Ming, Pei-Hsun Hsu, and Cheng-Yuan Liou. "Sudoku associative memory." Neural Networks 57 (September 2014): 112–27. http://dx.doi.org/10.1016/j.neunet.2014.05.023.

43

Chen, Chun-Hsien, and Vasant Honavar. "Associative Memory Content-ADDRESSED Memory Fault-TOLERANCE Multiple Associative Recall Adjustable-PRECISION Memory Binary Mappings Information Retrieval Address-BASED Memory High-CAPACITY Memory Interference." Connection Science 7, no. 3-4 (September 1995): 281–300. http://dx.doi.org/10.1080/09540099509696194.

44

Wang, Chen, Bingchun Liu, Jiali Chen, and Xiaogang Yu. "Air Quality Index Prediction Based on a Long Short-Term Memory Artificial Neural Network Model." 電腦學刊 34, no. 2 (April 2023): 069–79. http://dx.doi.org/10.53106/199115992023043402006.

Abstract:
Air pollution has become one of the major challenges constraining the sustainable development of cities, so accurate prediction of the Air Quality Index (AQI) is of great significance. Long Short-Term Memory (LSTM) is a deep learning method suited to learning from time series data. Given its strength in processing time series, this study established an LSTM forecasting model for air quality index prediction. First, the feature metrics of the model input are optimized through Information Gain (IG). Second, the prediction results of the LSTM model are compared with those of other machine learning models, and the time step of the LSTM model is chosen through selective experiments to ensure that model validation works properly. The results show that, compared with other machine learning models, the LSTM model constructed in this paper is better suited to air quality index prediction.
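The time-step selection the abstract mentions amounts to framing the series as fixed-width windows before training. A minimal sketch of that framing (the `make_windows` helper and the AQI values are ours, purely for illustration, not from the paper):

```python
def make_windows(series, time_steps):
    """Slice a series into (input window, next value) pairs: the
    supervised form an LSTM-style model is trained on."""
    X = [series[i:i + time_steps] for i in range(len(series) - time_steps)]
    y = [series[i + time_steps] for i in range(len(series) - time_steps)]
    return X, y

# Hypothetical daily AQI readings; a time step of 3 means the model
# predicts each day from the previous three.
aqi = [52.0, 61.0, 58.0, 70.0, 66.0, 75.0, 80.0]
X, y = make_windows(aqi, time_steps=3)
print(len(X), len(y))  # 4 4
```

A larger time step gives each training sample more context but leaves fewer samples, which is exactly the trade-off the selective experiments in the paper explore.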
45

Li, Jun, Shengkai Wen, Dongliang Jiang, Linkang Li, and Jianhua Zhang. "Fully solution-processed InSnO/HfGdOx thin-film transistor for light-stimulated artificial synapse." Flexible and Printed Electronics 7, no. 1 (February 2, 2022): 014006. http://dx.doi.org/10.1088/2058-8585/ac4bb2.

Abstract:
In recent years, research interest in brain-inspired, light-stimulated artificial synaptic electronic devices has greatly increased due to their potential for constructing low-power, high-efficiency, high-speed neuromorphic computing systems. However, in the field of electronic synaptic device simulation, the development of three-terminal synaptic transistors with low manufacturing cost and excellent memory function still faces huge challenges. Here, a fully solution-processed InSnO/HfGdOx thin-film transistor (TFT) is fabricated by a simple and convenient solution process to verify the feasibility of light-stimulated artificial synapses. The experiment investigated the electrical and synaptic properties of the device under light stimulation. The device successfully achieved important synaptic properties such as paired-pulse facilitation, excitatory postsynaptic current, and the transition from short-term to long-term memory. In addition, the device exhibits brain-like memory and learning behaviors under stimulation by different colors of light. This work provides an important strategy for realizing light-stimulated artificial synapses and may find application in light-signal-driven artificial neuromorphic computing.
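Paired-pulse facilitation, one of the synaptic behaviours listed above, is usually reported as the ratio of the second response to the first, decaying with the inter-pulse interval. A sketch of that standard phenomenological model (the constants `c` and `tau_ms` are illustrative, not measured values from this device):

```python
import math

def ppf_index(dt_ms, c=0.8, tau_ms=50.0):
    """Ratio A2/A1 of the second postsynaptic response to the first:
    facilitation decays exponentially with the inter-pulse interval dt_ms."""
    return 1.0 + c * math.exp(-dt_ms / tau_ms)

# Shorter intervals between the paired light pulses give stronger facilitation.
print(ppf_index(10.0) > ppf_index(100.0))  # True
```

Fitting `c` and `tau_ms` to the measured ratios is how such devices are typically characterized.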
46

Porras, Eva R., and M. Guadalupe Sánchez-Escribano. "Decentralized Blockchain for Autobiographical Memory in Cognitive Robotics." AI, Computer Science and Robotics Technology 2022 (March 28, 2022): 1–19. http://dx.doi.org/10.5772/acrt.04.

Abstract:
Memory in biological beings is as complex as the rational complexity of the particular being requires. Clearly, memory helps to form knowledge bases that serve the needs of the specific natural being. To analogize from Robotics concepts, it seems that the degrees of freedom in a biological being's memory are higher or lower depending upon the rationality of each living being. Robots and artificial systems appear to require analogous structures. That is, to build a reactive system, the requirement on memory is not highly demanding with respect to degrees of freedom. However, the required degrees of freedom seem to grow as the ability of the artificial system to deliberate increases. Consequently, to design artificial systems that implement cognitive abilities, memory structures must be rethought. When designing a Cognitive Artificial System, memory systems should be thought of as highly accessible discrete units. These systems would require designs in the form of distributed architectures with non-linear features, such as those of human thought, and they should allow for complex mixed data types (text, images, time, and so on). Blockchain has attracted great interest for a few years now, especially since the appearance of Bitcoin. A blockchain is a distributed ledger that combines an append-only data structure designed to be resistant to modifications with a consensus protocol [1, 2]. This innovation can be thought of as a sequence of containers, the blocks, that store two things: the information of a "system" and the "service" that the system provides [2], and it offers an interesting starting point for rethinking memory systems in robots.
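The append-only, modification-resistant structure the authors borrow from blockchain can be illustrated with a minimal hash chain; this sketch omits the consensus protocol and keeps only the tamper-evident linkage (the function names and stored "memories" are hypothetical):

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a block whose hash commits to its data and its predecessor."""
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def chain_is_valid(chain):
    """Recomputing each hash detects any modification of stored memories."""
    for i, block in enumerate(chain):
        payload = json.dumps({"data": block["data"], "prev": block["prev"]},
                             sort_keys=True)
        if block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("robot boots", prev_hash="0")
chain = [genesis, make_block("saw a red ball", genesis["hash"])]
print(chain_is_valid(chain))  # True
chain[0]["data"] = "tampered memory"
print(chain_is_valid(chain))  # False
```

Because each block's hash covers its predecessor's hash, rewriting any past "memory" invalidates every later block, which is the resistance-to-modification property the abstract refers to.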
47

Ikenson, Ben. "Pioneering Major Advances in Brain-Inspired Artificial Intelligence." Scilight 2022, no. 25 (June 24, 2022): 251104. http://dx.doi.org/10.1063/10.0011814.

48

He, Ruiquan, Haihua Hu, Chunru Xiong, and Guojun Han. "Artificial Neural Network Assisted Error Correction for MLC NAND Flash Memory." Micromachines 12, no. 8 (July 27, 2021): 879. http://dx.doi.org/10.3390/mi12080879.

Abstract:
Multilevel-per-cell technology and continued process scaling significantly improve the storage density of NAND flash memory, but they also degrade data reliability because of severe noise. To ensure data reliability, many noise mitigation technologies have been proposed; however, each mitigates only one of the noise sources of the NAND flash memory channel. In this paper, we consider all the main noise sources and present a novel neural network-assisted error correction (ANNAEC) scheme to increase the reliability of multi-level cell (MLC) NAND flash memory. To avoid using retention time as an input parameter of the neural network, we propose a relative log-likelihood ratio (LLR) to estimate the actual LLR. We then transform bit detection into a clustering problem and employ a neural network to learn the error characteristics of the NAND flash memory channel, so that the trained network achieves optimized bit-error detection. Simulation results show that the proposed scheme significantly improves bit-error detection and increases the endurance of NAND flash memory.
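As a point of reference for the LLR estimation above: under the common equal-variance Gaussian read-voltage model, the exact per-bit LLR reduces to a quadratic difference. This generic sketch is not the paper's relative LLR; the reference means and noise level are illustrative assumptions:

```python
def gaussian_llr(voltage, mean0, mean1, sigma):
    """log p(v | bit=0) - log p(v | bit=1) for equal-variance Gaussians:
    positive values favour bit 0, negative values favour bit 1."""
    return ((voltage - mean1) ** 2 - (voltage - mean0) ** 2) / (2 * sigma ** 2)

# A read voltage near the bit-0 reference level gives a positive LLR.
print(gaussian_llr(0.1, mean0=0.0, mean1=1.0, sigma=0.5) > 0)  # True
```

The paper's contribution is, in effect, estimating such LLRs without knowing the retention-dependent parameters, by letting a neural network learn the channel's error characteristics instead.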
49

Martin, Katherine I., and Nick C. Ellis. "THE ROLES OF PHONOLOGICAL SHORT-TERM MEMORY AND WORKING MEMORY IN L2 GRAMMAR AND VOCABULARY LEARNING." Studies in Second Language Acquisition 34, no. 3 (August 15, 2012): 379–413. http://dx.doi.org/10.1017/s0272263112000125.

Abstract:
This study analyzed phonological short-term memory (PSTM) and working memory (WM) and their relationship with vocabulary and grammar learning in an artificial foreign language. Nonword repetition, nonword recognition, and listening span were used as memory measures. Participants learned the singular forms of vocabulary for an artificial foreign language before being exposed to plural forms in sentence contexts. Participants were tested on their ability to induce the grammatical forms and to generalize the forms to novel utterances. Individual differences in final abilities in vocabulary and grammar correlated between 0.44 and 0.76, depending on the measure. Despite these strong associations, the results demonstrated significant independent effects of PSTM and WM on L2 vocabulary learning and on L2 grammar learning, some of which were mediated by vocabulary and some of which were direct effects.
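The 0.44 to 0.76 figures above are ordinary Pearson correlations between memory scores and learning outcomes. For reference, the computation looks like this (the score lists are made up for illustration):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pstm = [12, 15, 9, 18, 14]    # hypothetical nonword-repetition scores
vocab = [55, 63, 48, 70, 60]  # hypothetical vocabulary test scores
print(round(pearson_r(pstm, vocab), 2))
```

A strong correlation alone cannot separate the independent contributions of PSTM and WM; that is why the study also fits models with both predictors entered together.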
50

Chakraborty, Prabuddha, and Swarup Bhunia. "BINGO: brain-inspired learning memory." Neural Computing and Applications 34, no. 4 (October 20, 2021): 3223–47. http://dx.doi.org/10.1007/s00521-021-06484-8.

Abstract:
Storage and retrieval of data in a computer memory play a major role in system performance. Traditionally, computer memory organization is ‘static’—i.e. it does not change based on the application-specific characteristics in memory access behaviour during system operation. Specifically, in the case of a content-operated memory (COM), the association of a data block with a search pattern (or cues) and the granularity (details) of a stored data do not evolve. Such a static nature of computer memory, we observe, not only limits the amount of data we can store in a given physical storage, but it also misses the opportunity for performance improvement in various applications. On the contrary, human memory is characterized by seemingly infinite plasticity in storing and retrieving data—as well as dynamically creating/updating the associations between data and corresponding cues. In this paper, we introduce BINGO, a brain-inspired learning memory paradigm that organizes the memory as a flexible neural memory network. In BINGO, the network structure, strength of associations, and granularity of the data adjust continuously during system operation, providing unprecedented plasticity and performance benefits. We present the associated storage/retrieval/retention algorithms in BINGO, which integrate a formalized learning process. Using an operational model, we demonstrate that BINGO achieves an order of magnitude improvement in memory access times and effective storage capacity using the CIFAR-10 dataset and the wildlife surveillance dataset when compared to traditional content-operated memory.
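The plasticity BINGO describes, with cue-data associations that strengthen through use, can be caricatured in a few lines. This toy store is only an illustration of the idea, not the paper's storage, retrieval, or retention algorithms:

```python
class CueMemory:
    """Toy content-operated store: each cue maps to candidate data blocks
    with association strengths that grow on every write and retrieval."""

    def __init__(self):
        self.store = {}  # cue -> {data block: association strength}

    def write(self, cue, data):
        slot = self.store.setdefault(cue, {})
        slot[data] = slot.get(data, 0) + 1

    def read(self, cue):
        slot = self.store.get(cue)
        if not slot:
            return None
        best = max(slot, key=slot.get)
        slot[best] += 1  # retrieval reinforces the winning association
        return best

m = CueMemory()
m.write("stripes", "zebra")
m.write("stripes", "tiger")
m.write("stripes", "tiger")
print(m.read("stripes"))  # tiger
```

Unlike this sketch, BINGO also adjusts the network structure and the granularity of the stored data themselves, which is where its capacity gains come from.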
