To see the other types of publications on this topic, follow the link: Neural language models.

Books on the topic 'Neural language models'

Consult the top 43 books for your research on the topic 'Neural language models.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse books from a wide variety of disciplines and organise your bibliography correctly.

1

Houghton, George, ed. Connectionist models in cognitive psychology. Hove: Psychology Press, 2004.

2

Miikkulainen, Risto. Subsymbolic natural language processing: An integrated model of scripts, lexicon, and memory. Cambridge, Mass: MIT Press, 1993.

3

Bavaeva, Ol'ga. Metaphorical parallels of the neutral nomination "man" in modern English. INFRA-M Academic Publishing LLC, 2022. http://dx.doi.org/10.12737/1858259.

Abstract:
The monograph is devoted to a multidimensional analysis of metaphor in modern English as a parallel nomination that exists along with a neutral equivalent denoting a person. The problem of determining the essence of metaphorical names and their role in the language has attracted the attention of many foreign and domestic linguists on the material of various languages, but until now the fact of the parallel existence of metaphors and neutral nominations has not been emphasized. The research is in line with modern problems of linguistics related to the relationship of language, thinking and reflection of the surrounding reality. All these problems are integrated and resolved within the framework of linguistic semantics, in particular in the semantics of metaphor. Multilevel study of language material based on semantic, component, etymological analysis methods contributed to a systematic and comprehensive description of this most important part of the lexical system of the English language. Metaphorical parallels are considered as the result of the interaction of three complexes, which allows us to identify their associative-figurative base, as well as the types of metaphorical parallels, depending on the nature of the connection between direct and figurative meaning. Based on the analysis of various human character traits and behavior that evoke associations with animals, birds, objects, zoomorphic, artifact, somatic, floral and anthropomorphic metaphorical parallels of the neutral nomination "man" are distinguished. The social aspect of metaphorical parallels is also investigated as a reflection of gender, status and age characteristics of a person. It can be used in the training of philologists and translators when reading theoretical courses on lexicology, stylistics, word formation of the English language, as well as in practical classes, in lexicographic practice.
4

Arbib, Michael. Neural Models of Language Processes. Elsevier Science & Technology Books, 2012.

5

Cairns, Paul, Joseph P. Levy, Dimitrios Bairaktaris, and John A. Bullinaria. Connectionist Models of Memory and Language. Taylor & Francis Group, 2015.

6

Houghton, George. Connectionist Models in Cognitive Psychology. Taylor & Francis Group, 2004.

7

Houghton, George. Connectionist Models in Cognitive Psychology. Taylor & Francis Group, 2004.

8

Houghton, George. Connectionist Models in Cognitive Psychology. Taylor & Francis Group, 2004.

9

Houghton, George. Connectionist Models in Cognitive Psychology. Taylor & Francis Group, 2004.

10

Connectionist Models in Cognitive Psychology. Taylor & Francis Group, 2014.

11

Computational Neuroscience: Trends in Research, 1997 (Language of Science). Springer, 1997.

12

Houghton, George. Connectionist Models in Cognitive Psychology. Taylor & Francis Group, 2004.

13

Gomez-Perez, Jose Manuel, Ronald Denaux, and Andres Garcia-Silva. Practical Guide to Hybrid Natural Language Processing: Combining Neural Models and Knowledge Graphs for NLP. Springer International Publishing AG, 2021.

14

Gomez-Perez, Jose Manuel, Ronald Denaux, and Andres Garcia-Silva. A Practical Guide to Hybrid Natural Language Processing: Combining Neural Models and Knowledge Graphs for NLP. Springer, 2020.

15

Kumar, Rahul, Matthew Lamons, and Abhishek Nagaraja. Python Deep Learning Projects: 9 projects demystifying neural network and deep learning models for building intelligent systems. Packt Publishing, 2018.

16

Neural Control of Speech. MIT Press, 2016.

17

Mishra, Pradeepta. PyTorch Recipes: A Problem-Solution Approach to Build, Train and Deploy Neural Network Models. Apress L. P., 2022.

18

Bali, Raghav, Dipanjan Sarkar, and Tamoghna Ghosh. Hands-On Transfer Learning with Python: Implement advanced deep learning and neural network models using TensorFlow and Keras. Packt Publishing, 2018.

19

Reese, Richard M., and Ashish Singh Bhatia. Natural Language Processing with Java: Techniques for building machine learning and neural network models for NLP, 2nd Edition. Packt Publishing, 2018.

20

Kalin, Josh. Generative Adversarial Networks Cookbook: Over 100 recipes to build generative models using Python, TensorFlow, and Keras. Packt Publishing, 2018.

21

Julian, David. Deep Learning with Pytorch Quick Start Guide: Learn to Train and Deploy Neural Network Models in Python. Packt Publishing, Limited, 2018.

22

Whitenack, Daniel. Machine Learning With Go: Implement Regression, Classification, Clustering, Time-series Models, Neural Networks, and More using the Go Programming Language. Packt Publishing, 2017.

23

Hwang, Yoon Hyup. C# Machine Learning Projects: Nine real-world projects to build robust and high-performing machine learning models with C#. Packt Publishing, 2018.

24

McNamara, Patrick, and Magda Giordano. Cognitive Neuroscience and Religious Language. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190636647.003.0005.

Abstract:
Communication between deities and human beings rests on the use of language. Religious language has peculiarities such as the use of a formal voice, reductions in first-person and elevation of third-person pronoun use, archaistic elements, and an abundance of speech acts—features that reflect and facilitate the binding of the individual to conceived ultimate reality and value, decentering the Self while focusing on the deity. Explorations of the neurologic correlates of these cognitive and linguistic processes may be useful to identify constraints on neurocognitive models of religious language, and metaphor. The key brain regions that may mediate religious language include neural networks known to be involved in computational assessments of value, future-oriented simulations, Self-agency, Self-reflection, and attributing intentionality of goals to others. Studies indicate that some of the areas involved in those processes are active during personal prayer, whereas brain regions related to habit formation appear active during formal prayer. By examining religious language, and the brain areas engaged by it, we aim to develop more comprehensive neurocognitive models of religious cognition.
25

R Deep Learning Essentials: A step-by-step guide to building deep learning models using TensorFlow, Keras, and MXNet, 2nd Edition. Packt Publishing, 2018.

26

Strevens, Michael. The Whole Story. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199685509.003.0005.

Abstract:
Causal explanations in the high-level sciences typically black-box the low-level details of the causal mechanisms that they invoke to account for their explananda: economists’ black-box psychological processes, psychologists’ black-box neural processes, and so on. Are these black-boxing explanatory models complete explanations of the phenomena in question, or are they just sketches of or templates for the whole explanatory story? This chapter poses a focused version of the question in the context of convergent evolution, the existence of which appears to show that underlying mechanisms are completely irrelevant to the explanation of high-level biological features, including perhaps thought and language—in which case a black-boxing model would be a complete explanation of such features rather than a mere sketch. Arguments for and against such a model’s explanatory completeness are considered; the chapter comes down tentatively against.
27

Zerilli, John. The Adaptable Mind. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780190067885.001.0001.

Abstract:
What conception of mental architecture can survive the evidence of neuroplasticity and neural reuse in the human brain? In particular, what sorts of modules are compatible with this evidence? This book shows how developmental and adult neuroplasticity, as well as evidence of pervasive neural reuse, force a revision to the standard conceptions of modularity and spell the end of a hardwired and dedicated language module. It argues from principles of both neural reuse and neural redundancy that language is facilitated by a composite of modules (or module-like entities), few if any of which are likely to be linguistically special, and that neuroplasticity provides evidence that (in key respects and to an appreciable extent) few if any of them ought to be considered developmentally robust, though their development does seem to be constrained by features intrinsic to particular regions of cortex (manifesting as domain-specific predispositions or acquisition biases). In the course of doing so, the book articulates a schematically and neurobiologically precise framework for understanding modules and their supramodular interactions.
28

Ratcliff, Roger, and Philip Smith. Modeling Simple Decisions and Applications Using a Diffusion Model. Edited by Jerome R. Busemeyer, Zheng Wang, James T. Townsend, and Ami Eidels. Oxford University Press, 2015. http://dx.doi.org/10.1093/oxfordhb/9780199957996.013.3.

Abstract:
The diffusion model is one of the major sequential-sampling models for two-choice decision-making and choice response time in psychology. The model conceives of decision-making as a process in which noisy evidence is accumulated until one of two response criteria is reached and the associated response is made. The criteria represent the amount of evidence needed to make each decision and reflect the decision maker’s response biases and speed-accuracy trade-off settings. In this chapter we examine the application of the diffusion model in a variety of different settings. We discuss the optimality of the model and review its applications to a number of cognitive tasks, including perception, memory, and language tasks. We also consider its applications to normal and special populations, to the cognitive foundations of individual differences, to value-based decisions, and its role in understanding the neural basis of decision-making.
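For readers new to sequential-sampling models, the accumulation-to-bound process described in this abstract can be sketched in a few lines of Python. This is a minimal illustration only, not the authors' implementation; the drift rate, noise level, boundary, and non-decision time below are arbitrary values chosen for demonstration.

```python
import random

def simulate_ddm_trial(drift=0.3, noise=1.0, boundary=1.0, dt=0.001, t_nondecision=0.3):
    """One two-choice trial: noisy evidence starts at 0 and accumulates
    until it crosses +boundary (choice A) or -boundary (choice B)."""
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary:
        # Euler step of a diffusion process: deterministic drift plus Gaussian noise
        evidence += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
        t += dt
    choice = "A" if evidence > 0 else "B"
    return choice, t + t_nondecision  # response time includes non-decision time

if __name__ == "__main__":
    trials = [simulate_ddm_trial() for _ in range(1000)]
    p_a = sum(choice == "A" for choice, _ in trials) / len(trials)
    mean_rt = sum(rt for _, rt in trials) / len(trials)
    print(f"P(choice A) = {p_a:.2f}, mean RT = {mean_rt:.3f} s")
```

Raising the drift rate in this sketch increases both accuracy and speed, while widening the boundary trades speed for accuracy, mirroring the speed-accuracy trade-off settings mentioned in the abstract.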
29

Papanicolaou, Andrew C., and Marina Kilintari. Imaging the Networks of Language. Edited by Andrew C. Papanicolaou. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199764228.013.15.

Abstract:
Among the “higher” functions, language and its cerebral networks is the most intensively explored through behavioral or clinical studies and, more recently, through functional neuroimaging. From the former studies, several models (only partially congruent) have emerged during the past three centuries regarding the organization and topography of the brain mechanisms of the acoustic, phonological, semantic, syntactic, and pragmatic operations in which psycholinguists have divided the language function. The main task of this chapter is to extract from the vast functional neuroimaging literature of language reliable evidence that would be used to disconfirm the various hypotheses comprising the current language models. Most of these hypotheses concern the anatomical structures that could be considered nodes or hubs of the neuronal networks mediating the above-mentioned linguistic operations. Using the same criteria, the authors present neuroimaging evidence relevant to the issue of the neuronal mediation of sign languages, reading, and dyslexia.
30

Bergen, Benjamin, and Nancy Chang. Embodied Construction Grammar. Edited by Thomas Hoffmann and Graeme Trousdale. Oxford University Press, 2013. http://dx.doi.org/10.1093/oxfordhb/9780195396683.013.0010.

Abstract:
This chapter focuses on Embodied Construction Grammar (ECG), another computational implementation of Construction Grammar. It points out that the driving question of this framework is how language is used in actual physical and social contexts, and explains that ECG is an attempt to computationally model the cognitive and neural mechanisms that underlie human linguistic behavior. The chapter evaluates the role of mental simulation in processing and outlines how language can be seen as in interface to simulation. It also shows how constructions are represented in ECG and describes an ECG-based model of language comprehension.
31

Seneque, Gareth, and Darrell Chua. Hands-On Deep Learning with Go: A Practical Guide to Building and Implementing Neural Network Models Using Go. Packt Publishing, Limited, 2019.

32

Butz, Martin V., and Esther F. Kutter. Language, Concepts, and Abstract Thought. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780198739692.003.0013.

Abstract:
Language is probably the most complex form of universal communication. A finite set of words enables us to express a mere infinite number of thoughts and ideas, which we set together by obeying grammatical rules and compositional, semantic knowledge. This chapter addresses how human language abilities have evolved and how they develop. A short introduction to linguistics covers the most important conceptualized aspects, including language production, phonology, syntax, semantics, and pragmatics. The brain considers these linguistic aspects seemingly in parallel when producing and comprehending sentences. The brain develops some dedicated language modules, which strongly interact with other modules. Evolution appears to have recruited prelinguistic developmental neural structures and modified them into maximally language-suitable structures. Moreover, evolution has most likely evolved language to further facilitate social cooperation and coordination, including the further development of theories of the minds of others. Language develops in a human child building on prelinguistic concepts, which are based on motor control-oriented structures detailed in the previous chapter. A final look at actual linguistic communication emphasizes that an imaginary common ground and individual private grounds unfold between speaker and listener, characterizing what is actually commonly and privately communicated and understood.
33

Wiley, Joshua F., Yuxi (Hayden) Liu, Pablo Maldonado, and Mark Hodnett. Deep Learning with R for Beginners: Design Neural Network Models in R 3.5 Using TensorFlow, Keras, and MXNet. Packt Publishing, Limited, 2019.

34

Sangeetha, V., and S. Kevin Andrews. Introduction to Artificial Intelligence and Neural Networks. Magestic Technology Solutions (P) Ltd, Chennai, Tamil Nadu, India, 2023. http://dx.doi.org/10.47716/mts/978-93-92090-24-0.

Abstract:
Artificial Intelligence (AI) has emerged as a defining force in the current era, shaping the contours of technology and deeply permeating our everyday lives. From autonomous vehicles to predictive analytics and personalized recommendations, AI continues to revolutionize various facets of human existence, progressively becoming the invisible hand guiding our decisions. Simultaneously, its growing influence necessitates the need for a nuanced understanding of AI, thereby providing the impetus for this book, “Introduction to Artificial Intelligence and Neural Networks.” This book aims to equip its readers with a comprehensive understanding of AI and its subsets, machine learning and deep learning, with a particular emphasis on neural networks. It is designed for novices venturing into the field, as well as experienced learners who desire to solidify their knowledge base or delve deeper into advanced topics. In Chapter 1, we provide a thorough introduction to the world of AI, exploring its definition, historical trajectory, and categories. We delve into the applications of AI, and underscore the ethical implications associated with its proliferation. Chapter 2 introduces machine learning, elucidating its types and basic algorithms. We examine the practical applications of machine learning and delve into challenges such as overfitting, underfitting, and model validation. Deep learning and neural networks, an integral part of AI, form the crux of Chapter 3. We provide a lucid introduction to deep learning, describe the structure of neural networks, and explore forward and backward propagation. This chapter also delves into the specifics of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). In Chapter 4, we outline the steps to train neural networks, including data preprocessing, cost functions, gradient descent, and various optimizers. We also delve into regularization techniques and methods for evaluating a neural network model. Chapter 5 focuses on specialized topics in neural networks such as autoencoders, Generative Adversarial Networks (GANs), Long Short-Term Memory Networks (LSTMs), and Neural Architecture Search (NAS). In Chapter 6, we illustrate the practical applications of neural networks, examining their role in computer vision, natural language processing, predictive analytics, autonomous vehicles, and the healthcare industry. Chapter 7 gazes into the future of AI and neural networks. It discusses the current challenges in these fields, emerging trends, and future ethical considerations. It also examines the potential impacts of AI and neural networks on society. Finally, Chapter 8 concludes the book with a recap of key learnings, implications for readers, and resources for further study. This book aims not only to provide a robust theoretical foundation but also to kindle a sense of curiosity and excitement about the endless possibilities AI and neural networks offer. The journ
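As a minimal illustration of the gradient-descent procedure listed among Chapter 4's topics, the sketch below minimises an invented one-parameter quadratic cost; the cost function, learning rate, and step count are assumptions made purely for demonstration and are not taken from the book.

```python
def gradient_descent(grad, x0, lr=0.1, steps=50):
    """Plain gradient descent: repeatedly step against the gradient of the cost."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # move downhill by a fraction of the local slope
    return x

if __name__ == "__main__":
    # Cost J(w) = (w - 3)^2 has gradient dJ/dw = 2 * (w - 3); its minimum is at w = 3.
    w_opt = gradient_descent(lambda w: 2 * (w - 3), x0=0.0)
    print(f"learned w = {w_opt:.4f}")  # approaches 3.0
```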
35

Butz, Martin V., and Esther F. Kutter. How the Mind Comes into Being. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780198739692.001.0001.

Abstract:
For more than 2000 years Greek philosophers have thought about the puzzling introspectively assessed dichotomy between our physical bodies and our seemingly non-physical minds. How is it that we can think highly abstract thoughts, seemingly fully detached from actual, physical reality? Despite the obvious interactions between mind and body (we get tired, we are hungry, we stay up late despite being tired, etc.), until today it remains puzzling how our mind controls our body, and vice versa, how our body shapes our mind. Despite a big movement towards embodied cognitive science over the last 20 years or so, introductory books with a functional and computational perspective on how human thought and language capabilities may actually have come about – and are coming about over and over again – are missing. This book fills that gap. Starting with a historical background on traditional cognitive science and resulting fundamental challenges that have not been resolved, embodied cognitive science is introduced and its implications for how human minds have come and continue to come into being are detailed. In particular, the book shows that evolution has produced biological bodies that provide “morphologically intelligent” structures, which foster the development of suitable behavioral and cognitive capabilities. While these capabilities can be modified and optimized given positive and negative reward as feedback, to reach abstract cognitive capabilities, evolution has furthermore produced particular anticipatory control-oriented mechanisms, which cause the development of particular types of predictive encodings, modularizations, and abstractions. Coupled with an embodied motivational system, versatile, goal-directed, self-motivated behavior, learning becomes possible. These lines of thought are introduced and detailed from interdisciplinary, evolutionary, ontogenetic, reinforcement learning, and anticipatory predictive encoding perspectives in the first part of the book. A short excursus then provides an introduction to neuroscience, including general knowledge about brain anatomy, and basic neural and brain functionality, as well as the main research methodologies. With reference to this knowledge, the subsequent chapters then focus on how the human brain manages to develop abstract thought and language. Sensory systems, motor systems, and their predictive, control-oriented interactions are detailed from a functional and computational perspective. Bayesian information processing is introduced along these lines as are generative models. Moreover, it is shown how particular modularizations can develop. When control and attention come into play, these structures develop also dependent on the available motor capabilities. Vice versa, the development of more versatile motor capabilities depends on structural development. Event-oriented abstractions enable conceptualizations and behavioral compositions, paving the path towards abstract thought and language. Also evolutionary drives towards social interactions play a crucial role. Based on the developing sensorimotor- and socially-grounded structures, the human mind becomes language ready. The development of language in each human child then further facilitates the self-motivated generation of abstract, compositional, highly flexible thought about the present, past, and future, as well as about others. 
In conclusion, the book gives an overview over how the human mind comes into being – sketching out a developmental pathway towards the mastery of abstract and reflective thought, while detailing the critical body and neural functionalities, and computational mechanisms, which enable this development.
36

Bourhis, Richard Y., and Annie Montreuil. Acculturation, Vitality, and Bilingual Healthcare. Edited by Seth J. Schwartz and Jennifer Unger. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780190215217.013.27.

Abstract:
This chapter provides a conceptual framework for examining the delivery of bilingual healthcare for linguistic minorities in Canada’s Bilingual Belt. First, the chapter provides an overview of the ethnolinguistic vitality framework accounting for the sociostructural factors affecting the strength of minority and majority language communities within multilingual countries. Second, the interactive acculturation model (IAM) helps account for relations between high- and low-vitality group speakers whose intercultural relations may be harmonious, problematic, or conflictual. Third, the chapter provides a case study of a pluralist setting that offers three distinct bilingual healthcare systems for French and English communities in Canada’s Bilingual Belt. While the delivery of bilingual healthcare is cost neutral relative to unilingual healthcare systems, at issue is whether minority language patients achieve better health outcomes when they are cared for in their own language than in the language of the dominant majority.
37

Mariani, Giorgio. The Rhetorical Equivalent of War. University of Illinois Press, 2017. http://dx.doi.org/10.5406/illinois/9780252039751.003.0003.

Abstract:
This chapter examines how the rhetoric of war may be turned against war by focusing on the views of William James, Kenneth Burke, and Stephen Crane. Recent literary criticism has suggested that, far from being powerless or simply neutral vis-à-vis the armed conflicts it seeks to represent, language is complicit with violence. This understanding of the relationship between language and violence has been filed by James Dawes under the rubric of “the disciplinary model”—a model that conceives language and violence “as mutually constitutive.” This chapter first considers the ways in which the hard facts of war and violence may be both acknowledged and worked through before discussing Burke's template for understanding the tension as well as the cooperation between war and peace. It also analyzes James's “The Moral Equivalent of War” and concludes by testing the usefulness of some of Burke's recommendations for literary studies through a reading of Crane's “A Mystery of Heroism.”
38

van der Hulst, Harry. Palatal harmony. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198813576.003.0004.

Abstract:
This chapter applies the model that was developed in Chapters 2 and 3 to some well-known and well-studied cases of palatal vowel harmony. Palatal harmony is common in, and almost confined to, Finno-Ugric and Altaic languages. In the latter, palatal harmony often co-occurs with labial harmony. The chapter first discusses variation in the behavior of neutral vowels in Balto-Finnic languages and some special cases in this family. It then analyzes palatal harmony systems in Hungarian and considers the systems of several other languages. The focus is on asymmetries in vowel harmony involving disharmonic suffixes, anti-harmonic roots, disharmonic roots, and non-alternating suffixes.
39

Nolte, David D. Introduction to Modern Dynamics. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198844624.001.0001.

Abstract:
Introduction to Modern Dynamics: Chaos, Networks, Space and Time (2nd Edition) combines the topics of modern dynamics—chaos theory, dynamics on complex networks and the geometry of dynamical spaces—into a coherent framework. This text is divided into four parts: Geometric Mechanics, Nonlinear Dynamics, Complex Systems, and Relativity. These topics share a common and simple mathematical language that helps students gain a unified physical intuition. Geometric mechanics lays the foundation and sets the tone for the rest of the book by emphasizing dynamical spaces, like state space and phase space, whose geometric properties define the set of all trajectories through those spaces. The section on nonlinear dynamics has chapters on chaos theory, synchronization, and networks. Chaos theory provides the language and tools to understand nonlinear systems, introducing fixed points that are classified through stability analysis and nullclines that shepherd system trajectories. Synchronization and networks are central paradigms in this book because they demonstrate how collective behavior emerges from the interactions of many individual nonlinear elements. The section on complex systems contains chapters on neural dynamics, evolutionary dynamics, and economic dynamics. The final section contains chapters on metric spaces and the special and general theories of relativity. In the second edition, sections on conventional topics, like applications of Lagrangians, have been strengthened, as well as being updated to provide a modern perspective. Several of the introductory chapters have been rearranged for improved logical flow and there are expanded homework problems at the end of each chapter.
40

Barañano, Kristin W. Angelman Syndrome. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780199937837.003.0055.

Abstract:
Angelman syndrome (AS) is a severe neurodevelopmental disorder caused by maternal deficiency of the epigenetically imprinted gene UBE3A. It is characterized by severe developmental delay, an ataxic gait disorder, an apparent happy demeanor with frequent smiling or laughing, and severe expressive language impairments. Understanding the neurobiology of AS has focused on understanding how UBE3A is regulated by neuronal activity, as well as the targets of its ubiquitin E3 ligase activity. This has led to a model of the role of UBE3A in the regulation of experience-dependent sculpting of synaptic circuits. At this time, treatment is largely supportive, but efforts directed toward reversing the epigenetic silencing machinery may lead to improved synaptic function in AS patients.
41

Chilton, Paul, and David Cram. Hoc est corpus. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190636647.003.0016.

Abstract:
This chapter, which has both a historical and an analytic dimension, concerns the ritual of the ‘Eucharist’ or ‘mass’, best known in the Catholic variant of Christianity. The first part of the paper outlines the part of the ritual’s complex history that is concerned with various theological attempts to explain or justify particular interpretations of the ritual that have been the subject of conflict. In particular, it outlines the intellectual history of efforts to apply sophisticated theories of language developed in the medieval period and the early modern period. These approaches already involved a theory of deixis that foreshadows modern theories in linguistics that are entirely non-theological. It is a recent linguistic theory, Deictic Space Theory, that is outlined and applied in second part of the paper. This is a cognitive approach to core aspects of linguistic meaning that are grounded in spatial cognition. The overall aim is to investigate, in context, the possible cognitive and emotional effects that may be brought about by the interaction among linguistic formulae and other features of the ritual. Close linguistic and multimodal analysis of the crucial and most controversial moment of the Eucharist is speculatively linked with known psychological, cognitive, and neural processes.
42

Bencke, Ida, and Jørgen Bruhn, eds. Multispecies Storytelling in Intermedial Practices. punctum books, 2022. http://dx.doi.org/10.53288/0338.1.00.

Abstract:
Multispecies Storytelling in Intermedial Practices is a speculative endeavor asking how we may represent, relay, and read worlds differently by seeing other species as protagonists in their own rights. What other stories are to be invented and told from within those many-tongued chatters of multispecies collectives? Could such stories teach us how to become human otherwise? Often, the human is defined as the sole creature who holds language, and consequently is capable of articulating, representing, and reflecting upon the world. And yet, the world is made and remade by ongoing and many-tongued conversations between various organisms reverberating with sound, movement, gestures, hormones, and electrical signals. Everywhere, life is making itself known, heard, and understood in a wide variety of media and modalities. Some of these registers are available to our human senses, while some are not. Facing a not-so-distant future catastrophe, which in many ways and for many of us is already here, it is becoming painstakingly clear that our imaginaries are in dire need of corrections and replacements. How do we cultivate and share other kinds of stories and visions of the world that may hold promises of modest, yet radical hope? If we keep reproducing the same kind of languages, the same kinds of scientific gatekeeping, the same kinds of stories about “our” place in nature, we remain numb in the face of collapse. Multispecies Storytelling in Intermedial Practices offers steps toward a (self)critical multispecies philosophy which interrogates and qualifies the broad and seemingly neutral concept of humanity utilized in and around conversations grounded within Western science and academia. Artists, activists, writers, and scientists give a myriad of different interpretations of how to tell our worlds using different media – and possibly gives hints as to how to change it, too.
43

Hilgurt, S. Ya, and O. A. Chemerys. Reconfigurable signature-based information security tools of computer systems. PH “Akademperiodyka”, 2022. http://dx.doi.org/10.15407/akademperiodyka.458.297.

Abstract:
The book is devoted to the research and development of methods for combining computational structures for reconfigurable signature-based information protection tools for computer systems and networks in order to increase their efficiency. Network security tools based, among others, on such AI-based approaches as deep neural networking, despite the great progress shown in recent years, still suffer from nonzero recognition error probability. Even a low probability of such an error in a critical infrastructure can be disastrous. Therefore, signature-based recognition methods with their theoretically exact matching feature are still relevant when creating information security systems such as network intrusion detection systems, antivirus, anti-spam, and wormcontainment systems. The real time multi-pattern string matching task has been a major performance bottleneck in such systems. To speed up the recognition process, developers use a reconfigurable hardware platform based on FPGA devices. Such platform provides almost software flexibility and near-ASIC performance. The most important component of a signature-based information security system in terms of efficiency is the recognition module, in which the multipattern matching task is directly solved. It must not only check each byte of input data at speeds of tens and hundreds of gigabits/sec against hundreds of thousand or even millions patterns of signature database, but also change its structure every time a new signature appears or the operating conditions of the protected system change. As a result of the analysis of numerous examples of the development of reconfigurable information security systems, three most promising approaches to the construction of hardware circuits of recognition modules were identified, namely, content-addressable memory based on digital comparators, Bloom filter and Aho–Corasick finite automata. A method for fast quantification of components of recognition module and the entire system was proposed. The method makes it possible to exclude resource-intensive procedures for synthesizing digital circuits on FPGAs when building complex reconfigurable information security systems and their components. To improve the efficiency of the systems under study, structural-level combinational methods are proposed, which allow combining into single recognition device several matching schemes built on different approaches and their modifications, in such a way that their advantages are enhanced and disadvantages are eliminated. In order to achieve the maximum efficiency of combining methods, optimization methods are used. The methods of: parallel combining, sequential cascading and vertical junction have been formulated and investigated. The principle of multi-level combining of combining methods is also considered and researched. Algorithms for the implementation of the proposed combining methods have been developed. Software has been created that allows to conduct experiments with the developed methods and tools. Quantitative estimates are obtained for increasing the efficiency of constructing recognition modules as a result of using combination methods. The issue of optimization of reconfigurable devices presented in hardware description languages is considered. A modification of the method of affine transformations, which allows parallelizing such cycles that cannot be optimized by other methods, was presented. 
In order to facilitate the practical application of the developed methods and tools, a web service using high-performance computer technologies of grid and cloud computing was considered. The proposed methods to increase efficiency of matching procedure can also be used to solve important problems in other fields of science as data mining, analysis of DNA molecules, etc. Keywords: information security, signature, multi-pattern matching, FPGA, structural combining, efficiency, optimization, hardware description language.
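The multi-pattern matching task at the centre of this book can be illustrated with a small software sketch of an Aho–Corasick automaton, one of the three approaches named in the abstract. This is a toy Python analogue for intuition, not the authors' FPGA design; the example signatures and input text are invented.

```python
from collections import deque

def build_automaton(patterns):
    """Build an Aho-Corasick automaton: a goto trie, failure links, and output sets."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({}); fail.append(0); out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(pat)
    queue = deque(goto[0].values())          # breadth-first over the trie
    while queue:
        s = queue.popleft()
        for ch, nxt in goto[s].items():
            queue.append(nxt)
            f = fail[s]
            while f and ch not in goto[f]:   # follow failure links of the parent
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            out[nxt] |= out[fail[nxt]]       # inherit matches ending at the fallback state
    return goto, fail, out

def search(text, automaton):
    """Scan the text once, reporting (end_index, pattern) for every signature hit."""
    goto, fail, out = automaton
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        hits.extend((i, pat) for pat in out[state])
    return hits

if __name__ == "__main__":
    ac = build_automaton(["he", "she", "his", "hers"])
    print(search("ushers", ac))  # hits for 'she', 'he', and 'hers' in a single pass
```

A single pass over the input reports every signature ending at each position, which is why hardware variants of this automaton can keep checking every byte of traffic against very large signature databases.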
