To view other types of publications on this topic, follow the link: Language Representation.

Journal articles on the topic "Language Representation"

Format your source in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Language Representation".

Next to each source in the list of references, there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen source in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, whenever such data are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Andrade, Cláudia Braga de. "The Specificity of Language in Psychoanalysis." Ágora: Estudos em Teoria Psicanalítica 19, no. 2 (August 2016): 279–94. http://dx.doi.org/10.1590/s1516-14982016002009.

Abstract:
The article discusses the conceptions of language proposed in Freud's work, considering the specificity of the notion of representation, and points out its consequences for clinical practice. Within the theory of representation, it is possible to conceive a notion of language characterized by heterogeneity (representation of the word and representation of the thing), whose effect of meaning is produced through a complex of associations and their linkage to an intensive dimension (the instinctual representative). From this model, it is assumed that language exceeds its semantic function and that the representational system is not restricted to the field of rationality and verbalization.
2

Rutten, Geert-Jan, and Nick Ramsey. "Language Representation." Journal of Neurosurgery 106, no. 4 (April 2007): 726–27. http://dx.doi.org/10.3171/jns.2007.106.4.726.

Abstract:
Dissociated language functions are largely invalidated by standard techniques such as the amobarbital test and cortical stimulation. Language studies in which magnetoencephalography (MEG) and functional magnetic resonance (fMR) imaging are used to record data while the patient performs lexicosemantic tasks have enabled researchers to perform independent brain mapping for temporal and frontal language functions (MEG is used for temporal and fMR imaging for frontal functions). In this case report, the authors describe a right-handed patient in whom a right-sided insular glioma was diagnosed. The patient had a right-lateralized receptive language area, but expressive language function was identified in the left hemisphere on fMR imaging– and MEG-based mapping. Examinations were performed in 20 right-handed patients with low-grade gliomas (control group) for careful comparison with and interpretation of this patient's results. In these tests, all patients were asked to generate verbs related to acoustically presented nouns (verb generation) for fMR imaging, and to categorize as abstract or concrete a set of visually presented words consisting of three Japanese letters for fMR imaging and MEG. The most prominent display of fMR imaging activation by the verb-generation task was observed in the left inferior and middle frontal gyri in all participants, including the patient presented here. Estimated dipoles identified with the abstract/concrete categorization task were concentrated in the superior temporal and supra-marginal gyri in the left hemisphere in all control patients. In this patient, however, the right superior temporal region demonstrated significantly stronger activations on MEG and fMR imaging with the abstract/concrete categorization task. Suspected dissociation of the language functions was successfully mapped with these two imaging modalities and was validated by the modified amobarbital test and the postoperative neurological status. The authors describe detailed functional profiles obtained in this patient and review the cases of four previously described patients in whom dissociated language functions were found.
3

Tomasello, Michael. "Language and Representation." Contemporary Psychology: A Journal of Reviews 42, no. 12 (December 1997): 1080–83. http://dx.doi.org/10.1037/000637.

4

Jing, Chenchen, Yuwei Wu, Xiaoxun Zhang, Yunde Jia, and Qi Wu. "Overcoming Language Priors in VQA via Decomposed Linguistic Representations." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11181–88. http://dx.doi.org/10.1609/aaai.v34i07.6776.

Abstract:
Most existing Visual Question Answering (VQA) models overly rely on language priors between questions and answers. In this paper, we present a novel method of language attention-based VQA that learns decomposed linguistic representations of questions and utilizes the representations to infer answers for overcoming language priors. We introduce a modular language attention mechanism to parse a question into three phrase representations: type representation, object representation, and concept representation. We use the type representation to identify the question type and the possible answer set (yes/no or specific concepts such as colors or numbers), and the object representation to focus on the relevant region of an image. The concept representation is verified with the attended region to infer the final answer. The proposed method decouples the language-based concept discovery and vision-based concept verification in the process of answer inference to prevent language priors from dominating the answering process. Experiments on the VQA-CP dataset demonstrate the effectiveness of our method.
5

Ben-Yami, Hanoch. "Word, Sign and Representation in Descartes." Journal of Early Modern Studies 10, no. 1 (2021): 29–46. http://dx.doi.org/10.5840/jems20211012.

Abstract:
In the first chapter of his The World, Descartes compares light to words and discusses signs and ideas. This made scholars read into that passage our views of language as a representational medium and consider it Descartes’ model for representation in perception. I show, by contrast, that Descartes does not ascribe there any representational role to language; that to be a sign is for him to have a kind of causal role; and that he is concerned there only with the cause’s lack of resemblance to its effect, not with the representation’s lack of resemblance to what it represents. I support this interpretation by comparisons with other places in Descartes’ corpus and with earlier authors, Descartes’ likely sources. This interpretation may shed light both on Descartes’ understanding of the functioning of language and on the development of his theory of representation in perception.
6

최숙이. "Language as Volition and Representation: Representation of Volition in Language." Journal of Japanese Language and Culture, no. 26 (December 2013): 321–45. http://dx.doi.org/10.17314/jjlc.2013..26.016.

7

Navigli, Roberto, Rexhina Blloshmi, and Abelardo Carlos Martínez Lorenzo. "BabelNet Meaning Representation: A Fully Semantic Formalism to Overcome Language Barriers." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12274–79. http://dx.doi.org/10.1609/aaai.v36i11.21490.

Abstract:
Conceptual representations of meaning have long been the general focus of Artificial Intelligence (AI) towards the fundamental goal of machine understanding, with innumerable efforts made in Knowledge Representation, Speech and Natural Language Processing, Computer Vision, inter alia. Even today, at the core of Natural Language Understanding lies the task of Semantic Parsing, the objective of which is to convert natural sentences into machine-readable representations. Through this paper, we aim to revamp the historical dream of AI, by putting forward a novel, all-embracing, fully semantic meaning representation, that goes beyond the many existing formalisms. Indeed, we tackle their key limits by fully abstracting text into meaning and introducing language-independent concepts and semantic relations, in order to obtain an interlingual representation. Our proposal aims to overcome the language barrier, and connect not only texts across languages, but also images, videos, speech and sound, and logical formulas, across many fields of AI.
8

Inozemtsev, V. A. "Deductive logic in solving computer knowledge representation." Izvestiya MGTU MAMI 8, no. 1-5 (September 10, 2014): 121–26. http://dx.doi.org/10.17816/2074-0530-67477.

Abstract:
The article develops the concept of computer representology, understood as the philosophical and methodological analysis of deductive models of knowledge representation. These models are one variety of logical models of knowledge representation, which, together with logical languages, form an important conception of computer knowledge representation: the logical one. Conceptions of computer knowledge representation are understood here as aggregates of computer models representing domain knowledge about reality, together with the language means corresponding to these models, as developed in artificial intelligence. Such conceptions are different ways of solving the problem of computer knowledge representation.
9

Gilbert, Stephen B., and Whitman Richards. "The Classification of Representational Forms." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (November 2019): 2244–48. http://dx.doi.org/10.1177/1071181319631530.

Abstract:
Knowledge access and ease of problem-solving, using technology or not, depends upon our choice of representation. Because of our unique facility with language and pictures, these two descriptions are often used to characterize most representational forms, or their combinations, such as flow charts, tables, trees, graphs, or lists. Such a characterization suggests that language and pictures are the principal underlying cognitive dimensions for representational forms. However, we show that when similarity-based scaling methods (multidimensional scaling, hierarchical clustering, and trajectory mapping) are used to relate user tasks that are supported by different representations, then a new categorization appears, namely, tables, trees, and procedures. This new arrangement of knowledge representations may aid interface designers in choosing an appropriate representation for their users' tasks.
10

Miller, R. A., R. H. Baud, J. R. Scherrer, and A. M. Rassinoux. "Modeling Concepts in Medicine for Medical Language Understanding." Methods of Information in Medicine 37, no. 04/05 (October 1998): 361–72. http://dx.doi.org/10.1055/s-0038-1634561.

Abstract:
Over the past two decades, the construction of models for medical concept representation and for understanding the deep meaning of medical narrative texts has been a challenging area of medical informatics research. This review highlights how these two interrelated domains have evolved, emphasizing aspects of medical modeling as a tool for medical language understanding. A representation schema that balances partial but accurate representations against complete but complex representations of domain-specific knowledge must be developed to facilitate language understanding. Representative examples are drawn from two major independent efforts undertaken by the authors: the elaboration and subsequent adjustment of the RECIT multilingual analyzer to include a robust medical concept model, and the recasting of a frame-based interlingua system, originally developed to map equivalent concepts between controlled clinical vocabularies, to invoke a similar concept model.
11

Pugh, Zachary H., and Douglas J. Gillan. "Nodes Afford Connection: A Pilot Study Examining the Design and Use of a Graphical Modeling Language." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 65, no. 1 (September 2021): 1024–28. http://dx.doi.org/10.1177/1071181321651150.

Abstract:
External representations such as diagrams facilitate reasoning. Many diagramming systems and notations are amenable to manipulation by actual or imagined intervention (e.g., transposing terms in an equation). Such manipulation is constrained by user-enforced constraints, including rules of syntax and semantics which help preserve the representation’s validity. We argue that the concepts of affordances and signifiers can be applied to understand such representations, and we suggest the term graphical affordance to refer to rule-constrained syntactic manipulation of an external representation. Following this argument, we examine a graphical modeling language in terms of these graphical affordances, and we present a pilot study examining how participants interact with the modeling language. Preliminary results suggest that using the modeling language, as opposed to prose representation, influences user behavior in a manner aligned with the graphical affordances and signifiers of the modeling language.
12

Ivascu, Laura. "The Perception of Representation in Visual Graphic Language." New Trends and Issues Proceedings on Humanities and Social Sciences 4, no. 11 (December 28, 2017): 249–58. http://dx.doi.org/10.18844/prosoc.v4i11.2881.

Abstract:
This paper addresses an interesting field of communication, namely visual graphic language. The principles of this special language of communication are presented in a case study that highlights the subjects' visual perception of several graphic representations explained in the article. The paper traces how the case-study subjects understood the proposed representations, illustrating the results through charts and percentages that show how the subjects perceived and interpreted these graphic representations. Future research will focus on the subjects' ability to produce new graphic representations of their own, starting from those proposed by the authors of this article. Keywords: Perception, visual graphic language, graphic comparative study, representation.
13

Bjerva, Johannes, Robert Östling, Maria Han Veiga, Jörg Tiedemann, and Isabelle Augenstein. "What Do Language Representations Really Represent?" Computational Linguistics 45, no. 2 (June 2019): 381–89. http://dx.doi.org/10.1162/coli_a_00351.

Abstract:
A neural language model trained on a text corpus can be used to induce distributed representations of words, such that similar words end up with similar representations. If the corpus is multilingual, the same model can be used to learn distributed representations of languages, such that similar languages end up with similar representations. We show that this holds even when the multilingual corpus has been translated into English, by picking up the faint signal left by the source languages. However, just as it is a thorny problem to separate semantic from syntactic similarity in word representations, it is not obvious what type of similarity is captured by language representations. We investigate correlations and causal relationships between language representations learned from translations on one hand, and genetic, geographical, and several levels of structural similarity between languages on the other. Of these, structural similarity is found to correlate most strongly with language representation similarity, whereas genetic relationships—a convenient benchmark used for evaluation in previous work—appear to be a confounding factor. Apart from implications about translation effects, we see this more generally as a case where NLP and linguistic typology can interact and benefit one another.
14

Sun, Jingyuan, Shaonan Wang, Jiajun Zhang, and Chengqing Zong. "Towards Sentence-Level Brain Decoding with Distributed Representations." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7047–54. http://dx.doi.org/10.1609/aaai.v33i01.33017047.

Abstract:
Decoding human brain activities based on linguistic representations has been actively studied in recent years. However, most previous studies exclusively focus on word-level representations, and little is learned about decoding whole sentences from brain activation patterns. This work is our effort to bridge that gap. In this paper, we build decoders to associate brain activities with sentence stimuli via distributed representations, the currently dominant sentence representation approach in natural language processing (NLP). We carry out a systematic evaluation, covering both widely-used baselines and state-of-the-art sentence representation models. We demonstrate how well different types of sentence representations decode the brain activation patterns and give empirical explanations of the performance difference. Moreover, to explore how sentences are neurally represented in the brain, we further compare the sentence representations' correspondence to different brain areas associated with high-level cognitive functions. We find that the supervised structured representation models most accurately probe the language atlas of the human brain. To the best of our knowledge, this work is the first comprehensive evaluation of distributed sentence representations for brain decoding. We hope this work can contribute to decoding brain activities with NLP representation models and to understanding how linguistic items are neurally represented.
15

Sunik, Boris. "Knowledge representation with T." Artificial Intelligence Research 7, no. 2 (December 5, 2018): 55. http://dx.doi.org/10.5430/air.v7n2p55.

Abstract:
The universal representation language T proposed in the article is a set of linguistic items employed in the manner of a natural language for the purpose of information exchange between various communicators. The language is not confined to any particular representation domain, implementation, communicator or discourse type. Assuming there is sufficient vocabulary, each text composed in any human language can be adequately translated to T in the same way as it can be translated to another human language. The semantics transmitted by T code consist of conventional knowledge regarding objects, actions, properties, states and so on. T allows the explicit expression of kinds of information traditionally considered inexpressible, such as tacit knowledge or even non-human knowledge.
16

Tsafnat, Guy. "The Field Representation Language." Journal of Biomedical Informatics 41, no. 1 (February 2008): 46–57. http://dx.doi.org/10.1016/j.jbi.2007.03.001.

17

Topp, L. "Architecture, Language, and Representation." Oxford Art Journal 32, no. 2 (June 1, 2009): 321–24. http://dx.doi.org/10.1093/oxartj/kcp016.

18

Hirnyi, Oleh Ihorovych. "The term “predstavlennia” in philosophical texts and in its translations." Filosofiya osvity. Philosophy of Education 26, no. 1 (December 24, 2020): 230–49. http://dx.doi.org/10.31874/2309-1606-2020-26-1-14.

Abstract:
The article is devoted to the terminological problem of an adequate Ukrainian translation of the Polish term "przedstawienie", a generic term for both visual and abstract products of human intelligence. In the available Ukrainian texts, visual products are usually denoted by the term "ujavlennia" and abstract ones by the term "poniattia"; Ukrainian usage, however, has not yet settled on a generic term that covers both. In general, there are two candidate Ukrainian translations of "przedstawienie": "predstavlennia" (representation) and "ujavlennia" (idea). From a formal-grammatical point of view, each option has advantages and disadvantages. Their comparative consideration, drawing on the experience of using these terms in Polish (for the translation of their English, German and other equivalents), forms the main content of this article. The causes and consequences of differences in the use of the term "representation" in Ukrainian dictionaries, encyclopedic as well as specialized, linguistic and psychological, are compared and analyzed. The author analyzes the philosophical aspects of the use of the term "representation" in its relation to representations as concrete (visual) and to concepts as abstract (non-visual) representations. Analyzing the influence of Polish and Russian on Ukrainian terminology, the author argues in favor of the genuinely Ukrainian etymology of "predstavlennia" as the generic term. Ukrainian usage often treats "predstavlennia" (representation) and "ujavlennia" (idea) interchangeably as generic terms. In recent years there has also been a discussion in Ukrainian scholarship about the possible use of a derivative of the term "image" in this role; the author shows that such usage rests mainly on Russian-language influence. Arguments are given for the priority of "representation" as the generic term in Ukrainian, which brings Ukrainian usage closer to Polish. The author presents the main arguments of the philosophical discussion of the term "representation" that took place in Polish philosophy.
19

Wolfe, Rosalee, John C. McDonald, Thomas Hanke, Sarah Ebling, Davy Van Landuyt, Frankie Picron, Verena Krausneker, Eleni Efthimiou, Evita Fotinea, and Annelies Braffort. "Sign Language Avatars: A Question of Representation." Information 13, no. 4 (April 18, 2022): 206. http://dx.doi.org/10.3390/info13040206.

Abstract:
Given the achievements in automatically translating text from one language to another, one would expect to see similar advancements in translating between signed and spoken languages. However, progress in this effort has lagged in comparison. Typically, machine translation consists of processing text from one language to produce text in another. Because signed languages have no generally-accepted written form, translating spoken to signed language requires the additional step of displaying the language visually as animation through the use of a three-dimensional (3D) virtual human commonly known as an avatar. Researchers have been grappling with this problem for over twenty years, and it is still an open question. With the goal of developing a deeper understanding of the challenges posed by this question, this article gives a summary overview of the unique aspects of signed languages, briefly surveys the technology underlying avatars and performs an in-depth analysis of the features in a textual representation for avatar display. It concludes with a comparison of these features and makes observations about future research directions.
20

Vehkavaara, Tommi. "Natural self-interest, interactive representation, and the emergence of objects and Umwelt: An outline of basic semiotic concepts for biosemiotics." Sign Systems Studies 31, no. 2 (December 31, 2003): 547–87. http://dx.doi.org/10.12697/sss.2003.31.2.14.

Abstract:
In biosemiotics, life and living phenomena are described by means of originally anthropomorphic semiotic concepts. This can be justified if we can show that living systems, as self-maintaining far-from-equilibrium systems, create and update some kind of representation of the conditions of their self-maintenance. The point of view is that of semiotic realism, where signs and representations are considered real and objective natural phenomena without any reference to the specifically human interpreter. It is argued that the most basic concept of representation must be forward-looking and that both C. Peirce's and J. v. Uexküll's concepts of sign assume an unnecessarily complex semiotic agent. The simplest representative systems do not have phenomenal objects or Umwelten at all. Instead, the minimal concept of representation and the source of normativity that is needed in its interpretation can be based on M. Bickhard's interactivism. The initial normativity or natural self-interest is based on the 'utility concept' of function: anything that contributes to the maintenance of a far-from-equilibrium system is functional to that system, and every self-maintaining far-from-equilibrium system has a minimal natural self-interest in serving that function; it is its existential precondition. Minimal interactive representation emerges when such systems become able to switch appropriately between two or more means of maintaining themselves. At the level of such representations, a potentiality to detect an error may develop, although no objects of representation are yet provided for the system. Phenomenal objects emerge in systems that are more complex. If a system creates a set of continually updated 'situation images' and can detect temporal invariances in the updating process, these invariances constitute objects for the system itself. With them, a representative system gains an Umwelt and becomes capable of experiencing triadic signs. The relation between a representation and its object is either iconic or indexical at this level. As in Peirce's semeiotic, symbolic signs appear only at a more developed stage: symbolic signs require a still more complex system.
21

Temple, Bogusia. "Nice and Tidy: Translation and Representation." Sociological Research Online 10, no. 2 (July 2005): 45–54. http://dx.doi.org/10.5153/sro.1058.

Abstract:
Across many disciplines, including sociology, anthropology, philosophy, cultural studies and sociolinguistics, writers and researchers are concerned with how language is used to construct representations of people in written and oral accounts. There is also increasing interest in cross-disciplinary approaches to language and representation in research. Within health, social care and housing research there is a rapidly growing volume of writing on, and sometimes with, people whose first language is not English. However, much empirical research in these fields remains at the level of ‘findings’ about groups of people with the issue of how they are represented remaining unexamined. In this article I discuss some of the different ways researchers have looked at issues of translation and representation across languages. As I show, some researchers have attempted to ignore or by-pass these issues in their research, some have given up the task as impossible and others have attempted the impossible. I argue that, although there can be no single ‘correct’ way for researchers to represent people who speak different languages, choices about how to do this have epistemological and ethical implications.
22

Lin, Zehao, Guodun Li, Jingfeng Zhang, Yue Deng, Xiangji Zeng, Yin Zhang, and Yao Wan. "XCode: Towards Cross-Language Code Representation with Large-Scale Pre-Training." ACM Transactions on Software Engineering and Methodology 31, no. 3 (July 31, 2022): 1–44. http://dx.doi.org/10.1145/3506696.

Abstract:
Source code representation learning is the basis of applying artificial intelligence to many software engineering tasks such as code clone detection, algorithm classification, and code summarization. Recently, many works have tried to improve the performance of source code representation from various perspectives, e.g., introducing the structural information of programs into latent representation. However, when dealing with rapidly expanded unlabeled cross-language source code datasets from the Internet, there are still two issues. Firstly, deep learning models for many code-specific tasks still suffer from the lack of high-quality labels. Secondly, the structural differences among programming languages make it more difficult to process multiple languages in a single neural architecture. To address these issues, in this article, we propose a novel Cross-language Code representation with a large-scale pre-training (XCode) method. Concretely, we propose to use several abstract syntax trees and ELMo-enhanced variational autoencoders to obtain multiple pre-trained source code language models trained on about 1.5 million code snippets. To fully utilize the knowledge across programming languages, we further propose a Shared Encoder-Decoder (SED) architecture which uses the multi-teacher single-student method to transfer knowledge from the aforementioned pre-trained models to the distilled SED. The pre-trained models and SED will cooperate to better represent the source code. For evaluation, we examine our approach on three typical downstream cross-language tasks, i.e., source code translation, code clone detection, and code-to-code search, on a real-world dataset composed of programming exercises with multiple solutions. Experimental results demonstrate the effectiveness of our proposed approach on cross-language code representations. Meanwhile, our approach performs significantly better than several code representation baselines on different downstream tasks in terms of multiple automatic evaluation metrics.
23

Burns, Tracey C., Katherine A. Yoshida, Karen Hill, and Janet F. Werker. "The development of phonetic representation in bilingual and monolingual infants." Applied Psycholinguistics 28, no. 3 (June 11, 2007): 455–74. http://dx.doi.org/10.1017/s0142716407070257.

Abstract:
The development of native language phonetic representations in bilingual infants was compared to that of monolingual infants. Infants (ages 6–8, 10–12, and 14–20 months) from English–French or English-only environments were tested on their ability to discriminate a French and an English voice onset time distinction. Although 6- to 8-month-olds responded similarly irrespective of language environment, by 10–12 months both groups of infants displayed language-specific perceptual abilities: the monolinguals demonstrated realignment to the native English boundary whereas the bilinguals began discriminating both native boundaries. This suggests that infants exposed to two languages from birth are equipped to phonetically process each as a native language and the development of phonetic representation is neither delayed nor compromised by additional languages.
24

Da Silva, Renato Caixeta. "Representações do aprender línguas em narrativas visuais: o que mostram as imagens de sites de escolas de inglês." Ilha do Desterro A Journal of English Language, Literatures in English and Cultural Studies 73, no. 1 (January 31, 2020): 129–52. http://dx.doi.org/10.5007/2175-8026.2020v73n1p129.

Abstract:
This article is about the discourse on learning languages construed and transmitted by images in websites of private schools specialized in foreign languages teaching, evidencing the representations construed by these establishments to society. The argument favors the consideration of images in language studies, and the theoretical basis adopted includes the socio-semiotic approach to language, the Grammar of Visual Design, and the concept of social representation. The corpus analysis evidences the representation of the learning in optimal conditions of comfort, homogeneity, with the presence of resources and diverse instruments, in which the student is an agent of processes related to linguistic reception. It is concluded that these visual narratives contribute to the idea widely spread in society that the ideal language learning happens in private specialized schools and is reserved for few people.
25

Clark, Stephen R. L. "The Evolution of Language: Truth and Lies." Philosophy 75, no. 3 (July 2000): 401–21. http://dx.doi.org/10.1017/s0031819100000474.

Abstract:
There is both theoretical and experimental reason to suppose that no-one could ever have learned to speak without an environment of language-users. How then did the first language-users learn? Animal communication systems provide no help, since human languages aren't constituted as a natural system of signs, and are essentially recursive and syntactic. Such languages aren't demanded by evolution, since most creatures, even intelligent creatures, manage very well without them. I propose that representations, and even public representations like sculptures, precede full languages, which were devised by the first human children as secret tongues to create fantasy realms inaccessible to their proto-human parents. Language, in brief, is not required for truth-telling or for the convenience of hunters. It is a peculiar modification of public representation, which permits us to construct new public worlds.
26

Ben-Sasson, Hillel. "Representation and Presence: Divine Names in Judaism and Islam." Harvard Theological Review 114, no. 2 (April 2021): 219–40. http://dx.doi.org/10.1017/s0017816021000158.

Abstract:
Divine names are linguistic objects that underlie the grammar of religious language. They serve as both representations and presentations of the divine. As representations, divine names carry information pertaining to God’s nature or actions, and his unique will, in a manner that adequately represents him. As presentations, divine names are believed to somehow effect divine presence in proximity to the believer, opening a path of direct connection to God. This paper seeks to analyze the interaction between presentation and representation concerning divine names in major trends within Judaism and Islam, from the Hebrew Bible and the Qur’an to medieval theological debates. It aims to demonstrate how central currents within both traditions shaped the intricate relation between divine presentation and representation through the prism of divine names. Whereas positions in philosophy of language focus on either the representational or the presentational functions of proper names, Jewish and Islamic theologies suggest ways to combine the two functions with regard to divine names.
27

Păun, Gheorghe, Mario J. Pérez-Jiménez, and Takashi Yokomori. "Representations and Characterizations of Languages in Chomsky Hierarchy by Means of Insertion-Deletion Systems." International Journal of Foundations of Computer Science 19, no. 04 (August 2008): 859–71. http://dx.doi.org/10.1142/s0129054108006005.

Abstract:
Insertion-deletion operations are much investigated in linguistics and in DNA computing and several characterizations of Turing computability and characterizations or representations of languages in Chomsky hierarchy were obtained in this framework. In this note we contribute to this research direction with a new characterization of this type, as well as with representations of regular and context-free languages, mainly starting from context-free insertion systems of as small as possible complexity. For instance, each recursively enumerable language L can be represented in a way similar to the celebrated Chomsky-Schützenberger representation of context-free languages, i.e., in the form L = h(L(γ) ∩ D), where γ is an insertion system of weight (3, 0) (at most three symbols are inserted in a context of length zero), h is a projection, and D is a Dyck language. A similar representation can be obtained for regular languages, involving insertion systems of weight (2,0) and star languages, as well as for context-free languages – this time using insertion systems of weight (3, 0) and star languages.
28

Joukhadar, Alaa, Nada Ghneim, and Ghaida Rebdawi. "Impact of Using Bidirectional Encoder Representations from Transformers (BERT) Models for Arabic Dialogue Acts Identification." Ingénierie des systèmes d'information 26, no. 5 (October 31, 2021): 469–75. http://dx.doi.org/10.18280/isi.260506.

Abstract:
In human-computer dialogue systems, the correct identification of the intent underlying a speaker's utterance is crucial to the success of a dialogue. Several studies have addressed the Dialogue Act Classification (DAC) task to identify Dialogue Acts (DA) in different languages. Recently, the emergence of Bidirectional Encoder Representations from Transformers (BERT) models enabled state-of-the-art results for a variety of natural language processing tasks in different languages. Very little research has been done on the Arabic dialogue act identification task, and the BERT representation model had not yet been studied for Arabic dialogue act detection. In this paper, we propose a model using BERT language representation to identify Arabic Dialogue Acts. We explore the impact of using different BERT models: AraBERT Original (v0.1, v1), AraBERT Base (v0.2, and v2) and AraBERT Large (v0.2, and v2), which are pretrained on different Arabic corpora (differing in size, morphological segmentation, language model window, …). The comparison was performed on two available Arabic datasets. Using the AraBERTv0.2-base model for dialogue representations outperformed all other pretrained models. Moreover, we compared the performance of the AraBERTv0.2-base model to the state-of-the-art approaches applied to the two datasets. The comparison showed that this representation model outperformed both state-of-the-art models.
29

Stevenson, Patrick. "Language, Migration, and Spaces of Representation." Journal of Germanic Linguistics 23, no. 4 (December 2011): 401–12. http://dx.doi.org/10.1017/s1470542711000213.

Abstract:
The study of relationships between language and place has a long tradition in the context of Germanic languages, from 19th century dialect geography to late 20th century contact linguistics. However, the contemporary processes of migration, coupled with the emergence of new communication technologies and structural changes in the economies of states and regions, have created challenges for the study of linguistic practices and their place in the lives of individuals and social groups. The preceding papers in this volume take these challenges as an opportunity to reflect in new ways on past migrations. This concluding paper discusses the contributions they make to the study of language, migration, and place in relation to (speakers of) Germanic language varieties in North America and suggests ways in which they open up different spaces of representation.
30

Van Gompel, Roger P. G., and Manabu Arai. "Structural priming in bilinguals." Bilingualism: Language and Cognition 21, no. 3 (October 5, 2017): 448–55. http://dx.doi.org/10.1017/s1366728917000542.

Abstract:
In this review, we examine how structural priming has been used to investigate the representation of first and second language syntactic structures in bilinguals. Most experiments suggest that structures that are identical in the first and second language have a single, shared mental representation. The results from structures that are similar but not fully identical are less clear, but they may be explained by assuming that first and second language representations are merely connected rather than fully shared. Some research has also used structural priming to investigate the representation of cognate words. We will also consider whether cross-linguistic structural priming taps into long-term implicit learning effects. Finally, we discuss recent research that has investigated how second language syntactic representations develop as learners’ proficiency increases.
31

Zhang, Zhuosheng, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, and Xiang Zhou. "Semantics-Aware BERT for Language Understanding." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9628–35. http://dx.doi.org/10.1609/aaai.v34i05.6510.

Abstract:
The latest work on language representations carefully integrates contextualized features into language model training, which has enabled a series of successes, especially in various machine reading comprehension and natural language inference tasks. However, the existing language representation models, including ELMo, GPT and BERT, only exploit plain context-sensitive features such as character or word embeddings. They rarely consider incorporating structured semantic information, which can provide rich semantics for language representation. To promote natural language understanding, we propose to incorporate explicit contextual semantics from pre-trained semantic role labeling, and introduce an improved language representation model, Semantics-aware BERT (SemBERT), which is capable of explicitly absorbing contextual semantics over a BERT backbone. SemBERT keeps the convenient usability of its BERT precursor in a light fine-tuning way without substantial task-specific modifications. Compared with BERT, semantics-aware BERT is just as simple in concept but more powerful. It obtains new state-of-the-art results or substantially improves on existing results on ten reading comprehension and language inference tasks.
32

Wang, William S. Y., and Jinyun Ke. "Language heterogeneity and self-organizing consciousness." Behavioral and Brain Sciences 25, no. 3 (June 2002): 358–59. http://dx.doi.org/10.1017/s0140525x02520069.

Abstract:
While the current generative paradigm in linguistics leans heavily toward computation, investigations on conscious representations are much welcome. The SOC model examines the acquisition of complex representations in individuals. We note that heterogeneity of representation in populations is a central issue that must be addressed as well. In addition to the self-organizing processes proposed for the individual, interactions among individuals must be incorporated in any comprehensive account of language.
33

Anthony, Jason L., Rachel G. Aghara, Emily J. Solari, Martha J. Dunkelberger, Jeffrey M. Williams, and Lan Liang. "Quantifying phonological representation abilities in Spanish-speaking preschool children." Applied Psycholinguistics 32, no. 1 (October 7, 2010): 19–49. http://dx.doi.org/10.1017/s0142716410000275.

Abstract:
Individual differences in abilities to form, access, and hone phonological representations of words are implicated in the development of oral and written language. This study addressed three important gaps in the literature concerning measurement of individual differences in phonological representation. First, we empirically examined the dimensionality of phonological representation abilities. Second, we empirically compared how well typical measures index various representation-related phonological processing abilities. Third, we supply data on the Spanish phonological representation abilities of incipient Spanish–English bilingual children to address the need for information on phonological representation across languages. Specifically, nine measures of accessibility to and precision of phonological representations were administered to 129 preschool children in the United States. Confirmatory factor analyses validated three separate but correlated a priori phonological processing abilities, that is, efficiency of accessing phonological codes, precision of phonological codes as reflected in speech production, and precision of phonological codes as reflected in speech perception. Most prototypic measures were strong indicators of their respective representation-related phonological ability. We discuss how the current data in Spanish compare to the limited data in English, and the implications for the organization of phonological representation abilities.
34

Cui, Zhan-Ling, and De-Qiang Wang. "Language Representation and Language Association of Ethnic Bilinguals." Advances in Psychological Science 20, no. 8 (June 7, 2013): 1222–28. http://dx.doi.org/10.3724/sp.j.1042.2012.01222.

35

Schomacker, Thorben, and Marina Tropmann-Frick. "Language Representation Models: An Overview." Entropy 23, no. 11 (October 28, 2021): 1422. http://dx.doi.org/10.3390/e23111422.

Abstract:
In the last few decades, text mining has been used to extract knowledge from free texts. Applying neural networks and deep learning to natural language processing (NLP) tasks has led to many accomplishments for real-world language problems over the years. The developments of the last five years have resulted in techniques that have allowed for the practical application of transfer learning in NLP. The advances in the field have been substantial, and the milestone of outperforming human baseline performance based on the general language understanding evaluation has been achieved. This paper implements a targeted literature review to outline, describe, explain, and put into context the crucial techniques that helped achieve this milestone. The research presented here is a targeted review of neural language models that present vital steps towards a general language representation model.
36

Barnes, Colin. "Disability, cultural representation and language." Critical Public Health 6, no. 2 (April 1995): 9–20. http://dx.doi.org/10.1080/09581599508409048.

37

Kopec, G. "The signal representation language SRL." IEEE Transactions on Acoustics, Speech, and Signal Processing 33, no. 4 (August 1985): 921–32. http://dx.doi.org/10.1109/tassp.1985.1164636.

38

Simos, Panagiotis G., Joshua I. Breier, William W. Maggio, William B. Gormley, George Zouridakis, L. James Willmore, James W. Wheless, Jules E. C. Constantinou, and Andrew C. Papanicolaou. "Atypical temporal lobe language representation." NeuroReport 10, no. 1 (January 1999): 139–42. http://dx.doi.org/10.1097/00001756-199901180-00026.

39

Cutler, Anne. "Representation of second language phonology." Applied Psycholinguistics 36, no. 1 (January 2015): 115–28. http://dx.doi.org/10.1017/s0142716414000459.

Abstract:
Orthographies encode phonological information only at the level of words (chiefly, the information encoded concerns phonetic segments; in some cases, tonal information or default stress may be encoded). Of primary interest to second language (L2) learners is whether orthography can assist in clarifying L2 phonological distinctions that are particularly difficult to perceive (e.g., where one native-language phonemic category captures two L2 categories). A review of spoken-word recognition evidence suggests that orthographic information can install knowledge of such a distinction in lexical representations but that this does not affect learners’ ability to perceive the phonemic distinction in speech. Words containing the difficult phonemes become even harder for L2 listeners to recognize, because perception maps less accurately to lexical content.
40

Thrane, Torben. "Symbolic Representation and Natural Language." Nordic Journal of Linguistics 11, no. 1-2 (June 1988): 151–73. http://dx.doi.org/10.1017/s0332586500001797.

Abstract:
The notion of symbolizability is taken as the second requisite of computation (the first being ‘algorithmizability’), and it is shown that symbols, qua symbols, are not symbolizable. This has far-reaching consequences for the computational study of language and for AI-research in language understanding. The representation hypothesis is formulated, and its various assumptions and goals are examined. A research strategy for the computational study of natural language understanding is outlined.
41

Gentilucci, Maurizio. "Object motor representation and language." Experimental Brain Research 153, no. 2 (November 1, 2003): 260–65. http://dx.doi.org/10.1007/s00221-003-1600-8.

42

Sukkarieh, J. Z. "Natural Language and Knowledge Representation." Journal of Logic and Computation 18, no. 3 (December 5, 2007): 319–21. http://dx.doi.org/10.1093/logcom/exm068.

43

Hamid, Nabeel. "Hume's (Berkeleyan) Language of Representation." Hume Studies 41, no. 2 (2015): 171–200. http://dx.doi.org/10.1353/hms.2015.0008.

44

Komaba, Yuichi, Michio Senda, Takahiro Mori, Kenji Ishii, Masahiro Mishina, Shin Kitamura, and Akiro Terashi. "Bilateral Representation of Language Function." Journal of Neuroimaging 8, no. 4 (October 1998): 246–49. http://dx.doi.org/10.1111/jon199884246.

45

Hayward, William G., and Michael J. Tarr. "Spatial language and spatial representation." Cognition 55, no. 1 (April 1995): 39–84. http://dx.doi.org/10.1016/0010-0277(94)00643-y.

46

Geminiani, Giuliano, Edoardo Bisiach, Anna Berti, and Maria Luisa Rusconi. "Analogical representation and language structure." Neuropsychologia 33, no. 11 (November 1995): 1565–74. http://dx.doi.org/10.1016/0028-3932(95)00081-d.

47

Qi, Xianglong, Yang Gao, Ruibin Wang, Minghua Zhao, Shengjia Cui, and Mohsen Mortazavi. "Learning High-Order Semantic Representation for Intent Classification and Slot Filling on Low-Resource Language via Hypergraph." Mathematical Problems in Engineering 2022 (September 16, 2022): 1–16. http://dx.doi.org/10.1155/2022/8407713.

Abstract:
Representation of language is the first and critical task for Natural Language Understanding (NLU) in a dialogue system. Pretraining, embedding models, and fine-tuning for intent classification and slot filling are popular and well-performing approaches but are time-consuming and inefficient for low-resource languages. Concretely, out-of-vocabulary words and transfer to different languages are two tough challenges for multilingual pretrained and cross-lingual transfer models. Furthermore, quality-proven parallel data are necessary for the current frameworks. To step over these challenges, and unlike the existing solutions, we propose a novel approach, the Hypergraph Transfer Encoding Network “HGTransEnNet”. The proposed model leverages off-the-shelf high-quality pretrained word embedding models of resource-rich languages to learn the high-order semantic representation of low-resource languages in a transductive clustering manner of hypergraph modeling, which does not need parallel data. The experiments show that the representations learned by “HGTransEnNet” for low-resource languages are more effective, in intent classification and slot-filling tasks on Indonesian and English datasets, than state-of-the-art language models pretrained on large-scale multilingual or monolingual corpora.
48

Gurunath Shivakumar, Prashanth, Panayiotis Georgiou, and Shrikanth Narayanan. "Confusion2Vec 2.0: Enriching ambiguous spoken language representations with subwords." PLOS ONE 17, no. 3 (March 4, 2022): e0264488. http://dx.doi.org/10.1371/journal.pone.0264488.

Abstract:
Word vector representations enable machines to encode human language for spoken language understanding and processing. Confusion2vec, motivated by human speech production and perception, is a word vector representation which encodes ambiguities present in human spoken language in addition to semantic and syntactic information. Confusion2vec provides a robust spoken language representation by considering inherent human language ambiguities. In this paper, we propose a novel word vector space estimation by unsupervised learning on lattices output by an automatic speech recognition (ASR) system. We encode each word in the Confusion2vec vector space by its constituent subword character n-grams. We show that the subword encoding helps better represent the acoustic perceptual ambiguities in human spoken language via information modeled on lattice-structured ASR output. The usefulness of the proposed Confusion2vec representation is evaluated using analogy and word similarity tasks designed for assessing semantic, syntactic and acoustic word relations. We also show the benefits of subword modeling for acoustic ambiguity representation on the task of spoken language intent detection. The results significantly outperform existing word vector representations when evaluated on erroneous ASR outputs, providing improvements of up to 13.12% relative to the previous state of the art in intent detection on the ATIS benchmark dataset. We demonstrate that Confusion2vec subword modeling eliminates the need for retraining/adapting the natural language understanding models on ASR transcripts.
49

Ok, Changwon, Geonseok Lee, and Kichun Lee. "Informative Language Encoding by Variational Autoencoders Using Transformer." Applied Sciences 12, no. 16 (August 9, 2022): 7968. http://dx.doi.org/10.3390/app12167968.

Abstract:
In natural language processing (NLP), Transformer is widely used and has reached the state-of-the-art level in numerous NLP tasks such as language modeling, summarization, and classification. Moreover, a variational autoencoder (VAE) is an efficient generative model in representation learning, combining deep learning with statistical inference in encoded representations. However, the use of VAE in natural language processing often brings forth practical difficulties such as a posterior collapse, also known as Kullback–Leibler (KL) vanishing. To mitigate this problem, while taking advantage of the parallelization of language data processing, we propose a new language representation model as the integration of two seemingly different deep learning models, which is a Transformer model solely coupled with a variational autoencoder. We compare the proposed model with previous works, such as a VAE connected with a recurrent neural network (RNN). Our experiments with four real-life datasets show that implementation with KL annealing mitigates posterior collapses. The results also show that the proposed Transformer model outperforms RNN-based models in reconstruction and representation learning, and that the encoded representations of the proposed model are more informative than other tested models.
50

Lima Júnior, Ronaldo Mangueira. "Complexity in second language phonology acquisition." Revista Brasileira de Linguística Aplicada 13, no. 2 (June 11, 2013): 549–76. http://dx.doi.org/10.1590/s1984-63982013005000006.

Abstract:
This paper aims at situating the representation and investigation of second language phonology acquisition in light of complexity theory. The first section presents a brief historical panorama of complexity and chaos theory on second language acquisition, followed by the possible phonological representations and analyses aligned with such perspective. Finally, the issue of second language phonology acquisition is revisited.
