Scientific literature on the topic "Language Representation"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Language Representation".

Next to each source in the reference list there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online when this information is included in the metadata.

Journal articles on the topic "Language Representation"

1

Andrade, Cláudia Braga de. "The Specificity of Language in Psychoanalysis." Ágora: Estudos em Teoria Psicanalítica 19, no. 2 (August 2016): 279–94. http://dx.doi.org/10.1590/s1516-14982016002009.

Full text
Abstract:
The following article aims to discuss the conceptions of language proposed in Freud's work, considering the specificity of the notion of representation, and to point out its consequences for clinical practice. Within representation theory, it is possible to conceive a notion of language characterized by heterogeneity (representation of the word and representation of the thing), whose effect of meaning is produced through a complex of associations and through their linkage to an intensive dimension (the instinctual representative). From this model, it is assumed that language exceeds its semantic function and that the representational system is not restricted to the field of rationality and verbalization.
APA, Harvard, Vancouver, ISO, and other styles
2

Rutten, Geert-Jan, and Nick Ramsey. "Language Representation." Journal of Neurosurgery 106, no. 4 (April 2007): 726–27. http://dx.doi.org/10.3171/jns.2007.106.4.726.

Full text
Abstract:
Dissociated language functions are largely invalidated by standard techniques such as the amobarbital test and cortical stimulation. Language studies in which magnetoencephalography (MEG) and functional magnetic resonance (fMR) imaging are used to record data while the patient performs lexicosemantic tasks have enabled researchers to perform independent brain mapping for temporal and frontal language functions (MEG is used for temporal and fMR imaging for frontal functions). In this case report, the authors describe a right-handed patient in whom a right-sided insular glioma was diagnosed. The patient had a right-lateralized receptive language area, but expressive language function was identified in the left hemisphere on fMR imaging– and MEG-based mapping. Examinations were performed in 20 right-handed patients with low-grade gliomas (control group) for careful comparison with and interpretation of this patient's results. In these tests, all patients were asked to generate verbs related to acoustically presented nouns (verb generation) for fMR imaging, and to categorize as abstract or concrete a set of visually presented words consisting of three Japanese letters for fMR imaging and MEG. The most prominent display of fMR imaging activation by the verb-generation task was observed in the left inferior and middle frontal gyri in all participants, including the patient presented here. Estimated dipoles identified with the abstract/concrete categorization task were concentrated in the superior temporal and supra-marginal gyri in the left hemisphere in all control patients. In this patient, however, the right superior temporal region demonstrated significantly stronger activations on MEG and fMR imaging with the abstract/concrete categorization task. Suspected dissociation of the language functions was successfully mapped with these two imaging modalities and was validated by the modified amobarbital test and the postoperative neurological status. 
The authors describe detailed functional profiles obtained in this patient and review the cases of four previously described patients in whom dissociated language functions were found.
APA, Harvard, Vancouver, ISO, and other styles
3

Tomasello, Michael. "Language and Representation." Contemporary Psychology: A Journal of Reviews 42, no. 12 (December 1997): 1080–83. http://dx.doi.org/10.1037/000637.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jing, Chenchen, Yuwei Wu, Xiaoxun Zhang, Yunde Jia, and Qi Wu. "Overcoming Language Priors in VQA via Decomposed Linguistic Representations." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11181–88. http://dx.doi.org/10.1609/aaai.v34i07.6776.

Full text
Abstract:
Most existing Visual Question Answering (VQA) models overly rely on language priors between questions and answers. In this paper, we present a novel method of language attention-based VQA that learns decomposed linguistic representations of questions and utilizes the representations to infer answers for overcoming language priors. We introduce a modular language attention mechanism to parse a question into three phrase representations: type representation, object representation, and concept representation. We use the type representation to identify the question type and the possible answer set (yes/no or specific concepts such as colors or numbers), and the object representation to focus on the relevant region of an image. The concept representation is verified with the attended region to infer the final answer. The proposed method decouples the language-based concept discovery and vision-based concept verification in the process of answer inference to prevent language priors from dominating the answering process. Experiments on the VQA-CP dataset demonstrate the effectiveness of our method.
APA, Harvard, Vancouver, ISO, and other styles
5

Ben-Yami, Hanoch. "Word, Sign and Representation in Descartes." Journal of Early Modern Studies 10, no. 1 (2021): 29–46. http://dx.doi.org/10.5840/jems20211012.

Full text
Abstract:
In the first chapter of his The World, Descartes compares light to words and discusses signs and ideas. This made scholars read into that passage our views of language as a representational medium and consider it Descartes’ model for representation in perception. I show, by contrast, that Descartes does not ascribe there any representational role to language; that to be a sign is for him to have a kind of causal role; and that he is concerned there only with the cause’s lack of resemblance to its effect, not with the representation’s lack of resemblance to what it represents. I support this interpretation by comparisons with other places in Descartes’ corpus and with earlier authors, Descartes’ likely sources. This interpretation may shed light both on Descartes’ understanding of the functioning of language and on the development of his theory of representation in perception.
APA, Harvard, Vancouver, ISO, and other styles
6

최숙이. "Language as Volition and Representation: Representation of Volition in Language." Journal of Japanese Language and Culture, no. 26 (December 2013): 321–45. http://dx.doi.org/10.17314/jjlc.2013..26.016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Navigli, Roberto, Rexhina Blloshmi, and Abelardo Carlos Martínez Lorenzo. "BabelNet Meaning Representation: A Fully Semantic Formalism to Overcome Language Barriers." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12274–79. http://dx.doi.org/10.1609/aaai.v36i11.21490.

Full text
Abstract:
Conceptual representations of meaning have long been the general focus of Artificial Intelligence (AI) towards the fundamental goal of machine understanding, with innumerable efforts made in Knowledge Representation, Speech and Natural Language Processing, Computer Vision, inter alia. Even today, at the core of Natural Language Understanding lies the task of Semantic Parsing, the objective of which is to convert natural sentences into machine-readable representations. Through this paper, we aim to revamp the historical dream of AI, by putting forward a novel, all-embracing, fully semantic meaning representation, that goes beyond the many existing formalisms. Indeed, we tackle their key limits by fully abstracting text into meaning and introducing language-independent concepts and semantic relations, in order to obtain an interlingual representation. Our proposal aims to overcome the language barrier, and connect not only texts across languages, but also images, videos, speech and sound, and logical formulas, across many fields of AI.
APA, Harvard, Vancouver, ISO, and other styles
8

Inozemtsev, V. A. "Deductive logic in solving computer knowledge representation." Izvestiya MGTU MAMI 8, no. 1-5 (September 10, 2014): 121–26. http://dx.doi.org/10.17816/2074-0530-67477.

Full text
Abstract:
The article develops the concept of computer representology: the philosophical and methodological analysis of deductive models of knowledge representation. These models are one variety of logical models of knowledge representation, which, together with logical languages, form an important class of computer knowledge representation: the logical one. Concepts of computer knowledge representation are understood here as aggregates of computer models representing domain knowledge of reality, together with the language means corresponding to these models, as developed in artificial intelligence. These concepts are different ways of solving the problems of computer knowledge representation.
APA, Harvard, Vancouver, ISO, and other styles
9

Gilbert, Stephen B., and Whitman Richards. "The Classification of Representational Forms." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (November 2019): 2244–48. http://dx.doi.org/10.1177/1071181319631530.

Full text
Abstract:
Knowledge access and ease of problem-solving, using technology or not, depends upon our choice of representation. Because of our unique facility with language and pictures, these two descriptions are often used to characterize most representational forms, or their combinations, such as flow charts, tables, trees, graphs, or lists. Such a characterization suggests that language and pictures are the principal underlying cognitive dimensions for representational forms. However, we show that when similarity-based scaling methods (multidimensional scaling, hierarchical clustering, and trajectory mapping) are used to relate user tasks that are supported by different representations, then a new categorization appears, namely, tables, trees, and procedures. This new arrangement of knowledge representations may aid interface designers in choosing an appropriate representation for their users' tasks.
APA, Harvard, Vancouver, ISO, and other styles
10

Miller, R. A., R. H. Baud, J. R. Scherrer, and A. M. Rassinoux. "Modeling Concepts in Medicine for Medical Language Understanding." Methods of Information in Medicine 37, no. 04/05 (October 1998): 361–72. http://dx.doi.org/10.1055/s-0038-1634561.

Full text
Abstract:
Over the past two decades, the construction of models for medical concept representation and for understanding the deep meaning of medical narrative texts has been a challenging area of medical informatics research. This review highlights how these two inter-related domains have evolved, emphasizing aspects of medical modeling as a tool for medical language understanding. A representation schema that balances partial but accurate representations against complete but complex representations of domain-specific knowledge must be developed to facilitate language understanding. Representative examples are drawn from two major independent efforts undertaken by the authors: the elaboration and subsequent adjustment of the RECIT multilingual analyzer to include a robust medical concept model, and the recasting of a frame-based interlingua system, originally developed to map equivalent concepts between controlled clinical vocabularies, to invoke a similar concept model.
APA, Harvard, Vancouver, ISO, and other styles

Theses on the topic "Language Representation"

1

Sukkarieh, Jana Zuheir. "Natural language for knowledge representation." Thesis, University of Cambridge, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.620452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wilhelmson, Mika. "Representations of culture in EIL: Cultural representation in Swedish EFL textbooks." Thesis, Högskolan Dalarna, Engelska, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:du-21120.

Full text
Abstract:
The English language has become an international language and is globally used as a lingua franca. Therefore, there has been a shift in English-language education toward teaching English as an international language (EIL). Teaching from the EIL paradigm means that English is seen as an international language used in communication by people from different linguistic and cultural backgrounds. As the approach to English-language education changes from the traditional native-speaker, target-country context, so does the role of culture within English-language teaching. The aim of this thesis is to investigate and analyse cultural representations in two Swedish EFL textbooks used in upper-secondary school to see how they correspond with the EIL paradigm. This is done by focusing on the geographical origin of the cultural content as well as looking at what kinds of culture are represented in the textbooks. A content analysis of the textbooks is conducted, using Kachru's Concentric Circles of English as the model for the analysis of the geographical origin. Horibe's model of the three different kinds of culture in EIL is the model used for coding the second part of the analysis. The results of the analysis show that culture of target countries and "Culture as social custom" dominate the cultural content of the textbooks. Thus, although there are some indications that the EIL paradigm has influenced the textbooks, the traditional approach to culture in language teaching still prevails in the analysed textbooks. Because of the relatively small sample included in the thesis, further studies need to be conducted in order to draw conclusions regarding the Swedish context as a whole.
APA, Harvard, Vancouver, ISO, and other styles
3

Luebbering, Candice Rae. "The Cartographic Representation of Language: Understanding language map construction and visualizing language diversity." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/37543.

Full text
Abstract:
Language maps provide illustrations of linguistic and cultural diversity and distribution, appearing in outlets ranging from textbooks and news articles to websites and wall maps. They are valuable visual aids that accompany discussions of our cultural climate. Despite the prevalent use of language maps as educational tools, little recent research addresses the difficult task of map construction for this fluid cultural characteristic. The display and analysis capabilities of current geographic information systems (GIS) provide a new opportunity for revisiting and challenging the issues of language mapping. In an effort to renew language mapping research and explore the potential of GIS, this dissertation is composed of three studies that collectively present a progressive work on language mapping. The first study summarizes the language mapping literature, addressing the difficulties and limitations of assigning language to space before describing contemporary language mapping projects as well as future research possibilities with current technology. In an effort to identify common language mapping practices, the second study is a map survey documenting the cartographic characteristics of existing language maps. The survey not only consistently categorizes language map symbology, it also captures unique strategies observed for handling locations with linguistic plurality as well as representing language data uncertainty. A new typology of language map symbology is compiled based on the map survey results. Finally, the third study specifically addresses two gaps in the language mapping literature: the issue of visualizing linguistic diversity and the scarcity of GIS applications in language mapping research. The study uses census data for the Washington, D.C. Metropolitan Statistical Area to explore visualization possibilities for representing the linguistic diversity. 
After recreating mapping strategies already in use for showing linguistic diversity, the study applies an existing statistic (a linguistic diversity index) as a new mapping variable to generate a new visualization type: a linguistic diversity surface. The overall goal of this dissertation is to provide the impetus for continued language mapping research and contribute to the understanding and creation of language maps in education, research, politics, and other venues.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
4

Perfors, Amy (Amy Francesca). "Learnability, representation, and language: a Bayesian approach." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45601.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2008.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 225-243).
Within the metaphor of the "mind as a computation device" that dominates cognitive science, understanding human cognition means understanding learnability: not only what (and how) the brain learns, but also what data is available to it from the world. Ideal learnability arguments seek to characterize what knowledge is in theory possible for an ideal reasoner to acquire, which illuminates the path towards understanding what human reasoners actually do acquire. The goal of this thesis is to exploit recent advances in machine learning to revisit three common learnability arguments in language acquisition. By formalizing them in Bayesian terms and evaluating them against realistic, real-world datasets, we achieve insight into what must be assumed about a child's representational capacity, learning mechanism, and cognitive biases. Exploring learnability in the context of an ideal learner but realistic (rather than ideal) datasets enables us to investigate what could be learned in practice rather than noting what is impossible in theory. Understanding how higher-order inductive constraints can themselves be learned permits us to reconsider inferences about innate inductive constraints in a new light. And realizing how a learner who evaluates theories based on a simplicity/goodness-of-fit tradeoff can handle sparse evidence may lead to a new perspective on how humans reason from the noisy and impoverished data in the world. The learnability arguments I consider all ultimately stem from the impoverishment of the input: either because it lacks negative evidence, it lacks a certain essential kind of positive evidence, or it lacks a sufficient quantity of evidence necessary for choosing from an infinite set of possible generalizations. I focus on these learnability arguments in the context of three major topics in language acquisition: the acquisition of abstract linguistic knowledge about hierarchical phrase structure, the acquisition of verb argument structures, and the acquisition of word learning biases.
by Amy Perfors.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
5

Nayak, Sunita. "Representation and learning for sign language recognition." [Tampa, Fla.]: University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002362.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jarosiewicz, Eugenio. "Natural language parsing and representation in XML." [Gainesville, Fla.]: University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0000707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Dawborn, Timothy James. "DOCREP: Document Representation for Natural Language Processing." Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/14767.

Full text
Abstract:
The field of natural language processing (NLP) revolves around the computational interpretation and generation of natural language. The language typically processed in NLP occurs in paragraphs or documents rather than in single isolated sentences. Despite this, most NLP tools operate over one sentence at a time, not utilising the context outside of the sentence nor any of the metadata associated with the underlying document. One pragmatic reason for this disparity is that representing documents and their annotations through an NLP pipeline is difficult with existing infrastructure. Representing linguistic annotations for a text document using a plain text markup-based format is not sufficient to capture arbitrarily nested and overlapping annotations. Despite this, most linguistic text corpora and NLP tools still operate in this fashion. A document representation framework (DRF) supports the creation of linguistic annotations stored separately to the original document, overcoming this nesting and overlapping annotations problem. Despite the prevalence of pipelines in NLP, there is little published work on, or implementations of, DRFs. The main DRFs, GATE and UIMA, exhibit usability issues which have limited their uptake by the NLP community. This thesis aims to solve this problem through a novel, modern DRF, DOCREP; a portmanteau of "document representation". DOCREP is designed to be efficient, programming language and environment agnostic, and most importantly, easy to use. We want DOCREP to be powerful and simple enough to use that NLP researchers and language technology application developers would even use it in their own small projects instead of developing their own ad hoc solution. This thesis begins by presenting the design criteria for our new DRF, extending upon existing requirements from the literature with additional usability and efficiency requirements that should lead to greater use of DRFs.
We outline how our new DRF, DOCREP, differs from existing DRFs in terms of the data model, serialisation strategy, developer interactions, support for rapid prototyping, and the expected runtime and environment requirements. We then describe our provided implementations of DOCREP in Python, C++, and Java, the most common languages in NLP; outlining their efficiency, idiomaticity, and the ways in which these implementations satisfy our design requirements. We then present two different evaluations of DOCREP. First, we evaluate its ability to model complex linguistic corpora through the conversion of the OntoNotes 5 corpus to DOCREP and UIMA, outlining the differences in modelling approaches required and efficiency when using these two DRFs. Second, we evaluate DOCREP against our usability requirements from the perspective of a computational linguist who is new to DOCREP. We walk through a number of common use cases for working with text corpora and contrast traditional approaches against their DOCREP counterparts. These two evaluations conclude that DOCREP satisfies our outlined design requirements and outperforms existing DRFs in terms of efficiency, and most importantly, usability. With DOCREP designed and evaluated, we then show how NLP applications can harness document structure. We present a novel document structure-aware tokenization framework for the first stage of full-stack NLP systems. We then present a new structure-aware NER system which achieves state-of-the-art results on multiple standard NER evaluations. The tokenization framework produces its tokenization, sentence boundary, and document structure annotations as native DOCREP annotations. The NER system consumes DOCREP annotations and utilises many components of the DOCREP runtime.
We believe that the adoption of DOCREP throughout the NLP community will assist in the reproducibility of results, substitutability of components, and overall quality assurance of NLP systems and corpora, all of which are problematic areas within NLP research and applications. This adoption will make developing and combining NLP components into applications faster, more efficient, and more reliable.
APA, Harvard, Vancouver, ISO, and other styles
8

Ramos González, Juan José. "PML - A Modeling Language for Physical Knowledge Representation." Doctoral thesis, Universitat Autònoma de Barcelona, 2003. http://hdl.handle.net/10803/5801.

Full text
Abstract:
The topic of this thesis is the automated modeling of physical systems. Modeling automation has been a common objective of many present modeling tools. Reuse of predefined models is probably the main approach adopted by many of them in order to reduce the modeling burden. However, reuse is difficult to facilitate and, as is discussed thoroughly in the thesis, the reusability of models cannot be assured when they are predefined to represent the system dynamics in a particular physical context. In order to avoid the reuse constraints due to the formulation of the system dynamics, a modeling language should be defined with a clear separation between the physical behaviour representation aspects (declarative physical knowledge) and the computational aspects concerning model simulation (procedural computational knowledge). The physical knowledge represents the system behaviour and supports the analysis of the model's reuse context in order to set the formulation of the system dynamics.
The aim of this work is the design of a modeling language, PML, able to automate the modeling process by assuring the reusability of ready-made models independently of the physical context where they were defined. The reuse of a predefined model covers both the construction of new models (structured modeling) and the model's use for different experimentation purposes. New models are constructed by coupling predefined models according to the physical system topology. Such structured models are manipulated in order to obtain the representation of the system dynamics that are of interest for the experimentation purposes.
PML is an object-oriented modeling language designed to represent system behaviour by means of modular structures (modeling classes). The PML modeling classes describe physical concepts familiar to the modeller. The physical knowledge declared by the modeling classes is used to analyze structured models in order to generate the mathematical representation of the system dynamics automatically. The simulation model is obtained by means of an equation-based object-oriented modeling language.
APA, Harvard, Vancouver, ISO, and other styles
9

Stephens, Robert Andrew. "Representation and knowledge acquisition: the problem of language." Thesis, University of the West of England, Bristol, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321831.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Rocktaschel, Tim. "Combining representation learning with logic for language processing." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10040845/.

Full text
Abstract:
The current state-of-the-art in many natural language processing and automated knowledge base completion tasks is held by representation learning methods which learn distributed vector representations of symbols via gradient based optimization. They require little or no hand-crafted features, thus avoiding the need for most preprocessing steps and task-specific assumptions. However, in many cases representation learning requires a large amount of annotated training data to generalize well to unseen data. Such labeled training data is provided by human annotators who often use formal logic as the language for specifying annotations. This thesis investigates different combinations of representation learning methods with logic for reducing the need for annotated training data, and for improving generalization. We introduce a mapping of function-free first-order logic rules to loss functions that we combine with neural link prediction models. Using this method, logical prior knowledge is directly embedded in vector representations of predicates and constants. We find that this method learns accurate predicate representations for which no or little training data is available, while at the same time generalizing to other predicates not explicitly stated in rules. However, this method relies on grounding first-order logic rules, which does not scale to large rule sets. To overcome this limitation, we propose a scalable method for embedding implications in a vector space by only regularizing predicate representations. Subsequently, we explore a tighter integration of representation learning and logical deduction. We introduce an end-to-end differentiable prover – a neural network that is recursively constructed from Prolog’s backward chaining algorithm. The constructed network allows us to calculate the gradient of proofs with respect to symbol representations and to learn these representations from proving facts in a knowledge base. 
In addition to incorporating complex first-order rules, it induces interpretable logic programs via gradient descent. Lastly, we propose recurrent neural networks with conditional encoding and a neural attention mechanism for determining the logical relationship between two natural language sentences.
Styles APA, Harvard, Vancouver, ISO, etc.
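The thesis abstract above describes mapping first-order rules to loss functions over predicate embeddings. As an illustrative sketch only (the toy DistMult-style scorer, the predicate and entity names, and the halfway update below are invented for this example, not taken from the thesis), an implication can be grounded into a hinge loss that penalizes entity pairs where the rule body scores higher than its head:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy DistMult-style link predictor: score(r, s, o) = <r, s * o>.
predicates = {"isMarriedTo": rng.normal(size=dim), "spouseOf": rng.normal(size=dim)}
entities = {e: rng.normal(size=dim) for e in ["anna", "ben", "carla", "dan"]}

def score(pred, subj, obj):
    return float(predicates[pred] @ (entities[subj] * entities[obj]))

def rule_loss(body, head, pairs):
    """Hinge loss for the implication body(X,Y) => head(X,Y), grounded
    over sampled entity pairs: penalize every grounding where the body
    scores higher than the head."""
    return sum(max(0.0, score(body, s, o) - score(head, s, o)) for s, o in pairs)

pairs = [(s, o) for s in entities for o in entities if s != o]
loss = rule_loss("isMarriedTo", "spouseOf", pairs)

# One crude, gradient-free update: move the head predicate halfway toward
# the body predicate. Because scores are linear in the predicate vector,
# this halves every per-pair margin, so the grounded loss cannot increase.
predicates["spouseOf"] += 0.5 * (predicates["isMarriedTo"] - predicates["spouseOf"])
assert rule_loss("isMarriedTo", "spouseOf", pairs) <= loss
```

In the thesis itself the loss is minimized jointly with the link-prediction objective by gradient descent; the point of the sketch is only how a symbolic rule becomes a differentiable penalty on embeddings.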

Books on the topic "Language Representation"

1

Language, thought, and representation. Chichester: J. Wiley & Sons, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Larrazabal, Jesús M., and Luis A. Pérez Miranda, eds. Language, Knowledge, and Representation. Dordrecht: Springer Netherlands, 2004. http://dx.doi.org/10.1007/978-1-4020-2783-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Larrazabal, Jesús M., and Luis A. Pérez Miranda, eds. Language, knowledge, and representation. Boston, Mass.: Kluwer Academic Publishers, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Crumplin, Mary-Ann. Problems of democracy: Language and speaking. Freeland, Oxfordshire: Inter-Disciplinary Press, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Problems of democracy: Language and speaking. Freeland, Oxfordshire: Inter-Disciplinary Press, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Divjak, Dagmar, and Stefan Th. Gries, eds. Frequency Effects in Language Representation. Berlin, Boston: De Gruyter, 2012. http://dx.doi.org/10.1515/9783110274073.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

MacGregor, R. The Loom Knowledge representation language. Marina del Rey: University of Southern California, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Frequency effects in language representation. Berlin: De Gruyter Mouton, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Robinson, Peter, Nicholas O. Jungheim, and Pacific Second Language Research Forum, eds. Representation and process. Tokyo [Japan]: Pacific Second Language Research Forum, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Liu, Zhiyuan, Yankai Lin, and Maosong Sun. Representation Learning for Natural Language Processing. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5573-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Language Representation"

1

Buhmann, M. D., Prem Melville, Vikas Sindhwani, Novi Quadrianto, Wray L. Buntine, Luís Torgo, Xinhua Zhang, et al. "Representation Language." In Encyclopedia of Machine Learning, 863. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_725.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Simchen, Ori. "Semantics and Ordinary Language." In Philosophical Representation, 61–80. New York: Routledge, 2023. http://dx.doi.org/10.4324/9781003306443-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Coupland, Nikolas. "'Other' representation." In Society and Language Use, 241–60. Amsterdam: John Benjamins Publishing Company, 2010. http://dx.doi.org/10.1075/hoph.7.16cou.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Johnson, Michael L. "Form, Representation, Presence." In Mind, Language, Machine, 80–85. London: Palgrave Macmillan UK, 1988. http://dx.doi.org/10.1007/978-1-349-19404-9_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bosch, Peter. "Indexicality and representation." In Natural Language and Logic, 50–61. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/3-540-53082-7_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Zhiyuan, Yankai Lin, and Maosong Sun. "Word Representation." In Representation Learning for Natural Language Processing, 13–41. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5573-2_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Zhiyuan, Yankai Lin, and Maosong Sun. "Sentence Representation." In Representation Learning for Natural Language Processing, 59–89. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5573-2_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Liu, Zhiyuan, Yankai Lin, and Maosong Sun. "Document Representation." In Representation Learning for Natural Language Processing, 91–123. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5573-2_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Liu, Zhiyuan, Yankai Lin, and Maosong Sun. "Network Representation." In Representation Learning for Natural Language Processing, 217–83. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5573-2_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Scott, Bernard. "The SAL Representation Language." In Translation, Brains and the Computer, 205–41. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-76629-4_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Language Representation"

1

Chen, Zhenpeng, Sheng Shen, Ziniu Hu, Xuan Lu, Qiaozhu Mei, and Xuanzhe Liu. "Emoji-Powered Representation Learning for Cross-Lingual Sentiment Classification (Extended Abstract)." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/649.

Full text
Abstract:
Sentiment classification typically relies on a large amount of labeled data. In practice, the availability of labels is highly imbalanced among different languages. To tackle this problem, cross-lingual sentiment classification approaches aim to transfer knowledge learned from one language that has abundant labeled examples (i.e., the source language, usually English) to another language with fewer labels (i.e., the target language). The source and the target languages are usually bridged through off-the-shelf machine translation tools. Through such a channel, cross-language sentiment patterns can be successfully learned from English and transferred into the target languages. This approach, however, often fails to capture sentiment knowledge specific to the target language. In this paper, we employ emojis, which are widely available in many languages, as a new channel to learn both the cross-language and the language-specific sentiment patterns. We propose a novel representation learning method that uses emoji prediction as an instrument to learn respective sentiment-aware representations for each language. The learned representations are then integrated to facilitate cross-lingual sentiment classification.
APA, Harvard, Vancouver, ISO, and other styles
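The abstract above uses emoji prediction as a cheap, language-agnostic supervision signal for sentiment. A minimal stdlib-only sketch of that idea (the toy corpus, labels, and count-based scoring are invented for illustration; the paper's actual method learns neural representations from large-scale emoji prediction):

```python
from collections import Counter

# Emoji-labeled texts are easy to harvest in any language; here a toy corpus.
emoji_corpus = [
    ("love this movie", "😊"), ("what a great day", "😊"),
    ("this is awful", "😠"), ("so sad and boring", "😠"),
    ("happy happy joy", "😊"), ("terrible waste of time", "😠"),
]

# "Representation" of a word = its co-occurrence counts with each emoji.
word_emoji = {}
for text, emo in emoji_corpus:
    for w in text.split():
        word_emoji.setdefault(w, Counter())[emo] += 1

def emoji_score(text):
    """Sentiment-aware feature: net 😊-vs-😠 evidence over the words."""
    c = Counter()
    for w in text.split():
        c.update(word_emoji.get(w, Counter()))
    return c["😊"] - c["😠"]

def classify(text):
    # Transfer step: the emoji-derived feature stands in for scarce
    # sentiment labels in the target language.
    return "pos" if emoji_score(text) > 0 else "neg"

print(classify("love this great day"))
print(classify("sad terrible time"))
```

The sketch captures only the supervision-transfer idea: no sentiment label is ever used, yet the emoji-derived score separates the two test sentences.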
2

Cheng, Nancy Yen-wen. "Teaching CAD with Language Learning Methods." In ACADIA 1997: Representation and Design. ACADIA, 1997. http://dx.doi.org/10.52842/conf.acadia.1997.173.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Achsas, Sanae, and El Habib Nfaoui. "Language representation learning models." In SITA'20: Theories and Applications. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3419604.3419773.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Muji, Muji. "Language: Representation of Mind." In Proceedings of the 1st Konferensi Internasional Berbahasa Indonesia Universitas Indraprasta PGRI, KIBAR 2020, 28 October 2020, Jakarta, Indonesia. EAI, 2022. http://dx.doi.org/10.4108/eai.28-10-2020.2315327.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Levialdi, S., and C. E. Bernardelli. "Representation: Relationship between Language and Image." In Conference on Representation: Relationship between Language and Image. WORLD SCIENTIFIC, 1994. http://dx.doi.org/10.1142/9789814534659.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kountchev, R., Vl. Todorov, and R. Kountcheva. "Efficient sign language video representation." In 2008 International Conference on Systems, Signals and Image Processing (IWSSIP). IEEE, 2008. http://dx.doi.org/10.1109/iwssip.2008.4604396.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Yian, and Hai Zhao. "Pre-training Universal Language Representation." In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.acl-long.398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kollar, Thomas, Danielle Berry, Lauren Stuart, Karolina Owczarzak, Tagyoung Chung, Lambert Mathias, Michael Kayser, Bradford Snow, and Spyros Matsoukas. "The Alexa Meaning Representation Language." In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/n18-3022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Brown, Paul C. "A Concept Representation Language (CRL)." In 2018 IEEE 12th International Conference on Semantic Computing (ICSC). IEEE, 2018. http://dx.doi.org/10.1109/icsc.2018.00010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Neville, Dorothy, and Leo Joskowicz. "A Representation Language for Mechanical Behavior." In ASME 1993 Design Technical Conferences. American Society of Mechanical Engineers, 1993. http://dx.doi.org/10.1115/detc1993-0001.

Full text
Abstract:
Automating mechanism design requires developing a representation language for describing mechanism behavior. The language is necessary to specify design requirements, to describe existing mechanisms, and to catalog them for design reuse. This paper presents a simple and expressive language for describing the behavior of fixed-axes mechanisms. The language uses predicates and algebraic relations to describe the positions and motions of each part of the mechanism and the relationships between them. It allows both accurate and complete descriptions and partial, abstract, and under-specified descriptions. We show that the language is computationally viable by describing how to automatically derive behavioral descriptions stated in the language from the mechanism structure. To test its usefulness, we describe a design validation algorithm that determines if a given mechanism structure can produce desired behaviors stated in the language.
APA, Harvard, Vancouver, ISO, and other styles

Organization reports on the topic "Language Representation"

1

Moore, Robert C. Knowledge Representation and Natural-Language Semantics. Fort Belvoir, VA: Defense Technical Information Center, November 1986. http://dx.doi.org/10.21236/ada181422.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Moore, Robert C. Knowledge Representation and Natural-Language Semantics. Fort Belvoir, VA: Defense Technical Information Center, August 1985. http://dx.doi.org/10.21236/ada162389.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Allen, James F. Natural Language, Knowledge Representation, and Logical Form. Fort Belvoir, VA: Defense Technical Information Center, January 1991. http://dx.doi.org/10.21236/ada247389.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sidner, C. Research in Knowledge Representation for Natural Language Understanding. Fort Belvoir, VA: Defense Technical Information Center, February 1985. http://dx.doi.org/10.21236/ada152260.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Delugach, Harry S., Lissa C. Cox, and David J. Skipper. Dependency Language Representation Using Conceptual Graphs. Autonomic Information Systems. Fort Belvoir, VA: Defense Technical Information Center, August 2001. http://dx.doi.org/10.21236/ada399504.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kuehne, Sven E. On the Representation of Physical Quantities in Natural Language Text. Fort Belvoir, VA: Defense Technical Information Center, January 2004. http://dx.doi.org/10.21236/ada465872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Birkholz, H., C. Vigano, and C. Bormann. Concise Data Definition Language (CDDL): A Notational Convention to Express Concise Binary Object Representation (CBOR) and JSON Data Structures. RFC Editor, June 2019. http://dx.doi.org/10.17487/rfc8610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zelenskyi, Arkadii A. Relevance of research of programs for semantic analysis of texts and review of methods of their realization. [n.p.], December 2018. http://dx.doi.org/10.31812/123456789/2884.

Full text
Abstract:
One of the main tasks of applied linguistics is high-quality automated processing of natural language. The most effective approaches to extracting and representing the semantics of natural-language text are systems that combine linguistic analysis technologies with statistical analysis methods. Among existing methods for analyzing text data, a well-established approach relies on the vector space model. Another effective and relevant means of extracting and representing the semantics of a text is latent semantic analysis (LSA). The LSA method has been tested and has confirmed its effectiveness in such areas of natural language processing as modeling human conceptual knowledge and information retrieval, where LSA shows much better results than conventional vector methods.
APA, Harvard, Vancouver, ISO, and other styles
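The abstract above contrasts plain vector-space methods with latent semantic analysis. The core step of LSA is a truncated SVD of a term-document matrix; a minimal sketch (the toy matrix, vocabulary, and rank below are invented for illustration):

```python
import numpy as np

# Toy term-document matrix (rows: terms, columns: documents).
terms = ["car", "engine", "wheel", "apple", "fruit", "juice"]
docs = np.array([
    [2, 1, 0],  # car
    [1, 2, 0],  # engine
    [1, 1, 0],  # wheel
    [0, 0, 2],  # apple
    [0, 1, 1],  # fruit
    [0, 0, 1],  # juice
], dtype=float)

# LSA: a truncated SVD keeps only the k strongest latent dimensions.
U, s, Vt = np.linalg.svd(docs, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]  # term representations in the latent space

def sim(a, b):
    """Cosine similarity between two terms in the latent space."""
    va, vb = term_vecs[terms.index(a)], term_vecs[terms.index(b)]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# Terms that share contexts end up close even without direct co-occurrence.
assert sim("car", "engine") > sim("car", "apple")
```

This is why the abstract credits LSA with better results than raw vector methods: the truncation merges terms by shared latent context rather than by exact word overlap.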
9

Tarasenko, Rostyslav O., Svitlana M. Amelina, Yuliya M. Kazhan, and Olga V. Bondarenko. The use of AR elements in the study of foreign languages at the university. CEUR Workshop Proceedings, November 2020. http://dx.doi.org/10.31812/123456789/4421.

Full text
Abstract:
The article analyzes the impact of using AR technology in the study of a foreign language by university students. It is argued that AR technology can be a good tool for learning a foreign language. The use of AR elements in foreign language study, in particular in the form of virtual excursions, is proposed. Advantages of using AR technology in the study of German are identified, namely: the engagement of different channels of information perception, an integrated representation of the studied object, faster and better memorization of new vocabulary, and the development of communicative foreign language skills. The ease and accessibility of using QR codes to obtain information about the object of study from open Internet sources is shown. The results of a survey of students after virtual tours are presented. A reorientation of methodological support for foreign language study at universities is proposed. Attention is drawn to the use of AR elements to support students with different learning styles (audio, visual, kinesthetic).
APA, Harvard, Vancouver, ISO, and other styles