Journal articles on the topic 'Computational linguistic models'

Consult the top 50 journal articles for your research on the topic 'Computational linguistic models.'

1

Phong, Phạm Hồng, and Bùi Công Cường. "Symbolic Computational Models for Intuitionistic Linguistic Information." Journal of Computer Science and Cybernetics 32, no. 1 (June 7, 2016): 31–45. http://dx.doi.org/10.15625/1813-9663/32/1/5984.

Abstract:
In earlier work [Cuong14, Phong14], we introduced the notion of intuitionistic linguistic labels. In this paper, we develop two symbolic computational models for intuitionistic linguistic labels (intuitionistic linguistic information). Various operators are proposed, and their properties are examined. An application to group decision making using intuitionistic linguistic preference relations is then discussed.
2

Hale, John T., Luca Campanelli, Jixing Li, Shohini Bhattasali, Christophe Pallier, and Jonathan R. Brennan. "Neurocomputational Models of Language Processing." Annual Review of Linguistics 8, no. 1 (January 14, 2022): 427–46. http://dx.doi.org/10.1146/annurev-linguistics-051421-020803.

Abstract:
Efforts to understand the brain bases of language face the Mapping Problem: At what level do linguistic computations and representations connect to human neurobiology? We review one approach to this problem that relies on rigorously defined computational models to specify the links between linguistic features and neural signals. Such tools can be used to estimate linguistic predictions, model linguistic features, and specify a sequence of processing steps that may be quantitatively fit to neural signals collected while participants use language. Progress has been helped by advances in machine learning, attention to linguistically interpretable models, and openly shared data sets that allow researchers to compare and contrast a variety of models. We describe one such data set in detail in the Supplemental Appendix.
3

Bosque-Gil, J., J. Gracia, E. Montiel-Ponsoda, and A. Gómez-Pérez. "Models to represent linguistic linked data." Natural Language Engineering 24, no. 6 (October 4, 2018): 811–59. http://dx.doi.org/10.1017/s1351324918000347.

Abstract:
As the interest of the Semantic Web and computational linguistics communities in linguistic linked data (LLD) keeps increasing and the number of contributions that dwell on LLD rapidly grows, scholars (and linguists in particular) interested in the development of LLD resources sometimes find it difficult to determine which mechanism is suitable for their needs and which challenges have already been addressed. This review seeks to present the state of the art on the models, ontologies and their extensions to represent language resources as LLD by focusing on the nature of the linguistic content they aim to encode. Four basic groups of models are distinguished in this work: models to represent the main elements of lexical resources (group 1), vocabularies developed as extensions to models in group 1 and ontologies that provide more granularity on specific levels of linguistic analysis (group 2), catalogues of linguistic data categories (group 3) and other models such as corpora models or service-oriented ones (group 4). Contributions encompassed in these four groups are described, highlighting their reuse by the community and the modelling challenges that are still to be faced.
4

Srihari, Rohini K. "Computational models for integrating linguistic and visual information: A survey." Artificial Intelligence Review 8, no. 5-6 (1995): 349–69. http://dx.doi.org/10.1007/bf00849725.

5

Martin, Andrea E. "A Compositional Neural Architecture for Language." Journal of Cognitive Neuroscience 32, no. 8 (August 2020): 1407–27. http://dx.doi.org/10.1162/jocn_a_01552.

Abstract:
Hierarchical structure and compositionality imbue human language with unparalleled expressive power and set it apart from other perception–action systems. However, neither formal nor neurobiological models account for how these defining computational properties might arise in a physiological system. I attempt to reconcile hierarchy and compositionality with principles from cell assembly computation in neuroscience; the result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de)compositional meaning. The model's architecture—a multidimensional coordinate system based on neurophysiological models of sensory processing—proposes that a manifold of neural trajectories encodes sensory, motor, and abstract linguistic states. Gain modulation, including inhibition, tunes the path in the manifold in accordance with behavior and is how latent structure is inferred. As a consequence, predictive information about upcoming sensory input during production and comprehension is available without a separate operation. The proposed processing mechanism is synthesized from current models of neural entrainment to speech, concepts from systems neuroscience and category theory, and a symbolic-connectionist computational model that uses time and rhythm to structure information. I build on evidence from cognitive neuroscience and computational modeling that suggests a formal and mechanistic alignment between structure building and neural oscillations, and moves toward unifying basic insights from linguistics and psycholinguistics with the currency of neural computation.
6

Hsieh, Chih Hsun. "Linguistic Inventory Problems." New Mathematics and Natural Computation 7, no. 1 (March 2011): 1–49. http://dx.doi.org/10.1142/s179300571100186x.

Abstract:
The work presented in this paper has been motivated primarily by Zadeh's idea of linguistic variables, intended to provide rigorous mathematical modeling of natural language and CWW (Computing With Words). This paper reports several models of linguistic inventory problems in which CWW has been implemented: linguistic production inventory, linguistic inventory models under linguistic demand and linguistic lead time, linguistic production inventory models based on the preference of a decision maker, and a linguistic inventory model for fuzzy reorder point and fuzzy safety stock. Focusing on CWW, two linguistic inventory models and two linguistic backorder inventory models are proposed, each combined with a heuristic fuzzy total inventory cost based on the preference of a decision maker. The heuristic fuzzy total inventory cost of each model is expressed through linguistic values in natural language, fuzzy numbers, and crisp real numbers. It is computed and defuzzified using fuzzy arithmetical operations by the Function Principle and the graded k-preference integration representation method, respectively. In addition, an extension of the Lagrangean method is used to solve inequality-constrained problems in the proposed linguistic inventory environments. Furthermore, the heuristic optimal solutions of the newly introduced linguistic inventory models reduce to those of the classical inventory models when all linguistic variables are crisp real numbers, as in previously proposed linguistic inventory models.
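As a rough illustration of the kind of fuzzy arithmetic and defuzzification such inventory models rely on, the sketch below implements triangular fuzzy numbers with a graded mean integration representation; the cost figures and the specific formulas are illustrative assumptions, not the paper's actual models.

```python
class TriFuzzy:
    """Triangular fuzzy number (a, b, c) with membership peaking at b."""

    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

    def __add__(self, other):
        # Fuzzy addition adds the corresponding components.
        return TriFuzzy(self.a + other.a, self.b + other.b, self.c + other.c)

    def scale(self, k):
        # Multiplication by a positive crisp number scales each component.
        return TriFuzzy(k * self.a, k * self.b, k * self.c)

    def graded_mean(self):
        """Graded mean integration representation: (a + 4b + c) / 6."""
        return (self.a + 4 * self.b + self.c) / 6


holding = TriFuzzy(1.8, 2.0, 2.3)      # "about 2" per unit holding cost
ordering = TriFuzzy(45, 50, 58)        # "about 50" per order
total = holding.scale(100) + ordering  # fuzzy cost for 100 units plus one order
print(total.graded_mean())             # defuzzified (crisp) total cost
```

When the three components coincide, the fuzzy number collapses to a crisp value, mirroring the paper's observation that the linguistic models specialize to classical ones.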
7

Paul, Michael, and Roxana Girju. "A Two-Dimensional Topic-Aspect Model for Discovering Multi-Faceted Topics." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 3, 2010): 545–50. http://dx.doi.org/10.1609/aaai.v24i1.7669.

Abstract:
This paper presents the Topic-Aspect Model (TAM), a Bayesian mixture model which jointly discovers topics and aspects. We broadly define an aspect of a document as a characteristic that spans the document, such as an underlying theme or perspective. Unlike previous models which cluster words by topic or aspect, our model can generate token assignments in both of these dimensions, rather than assuming words come from only one of two orthogonal models. We present two applications of the model. First, we model a corpus of computational linguistics abstracts, and find that the scientific topics identified in the data tend to include both a computational aspect and a linguistic aspect. For example, the computational aspect of GRAMMAR emphasizes parsing, whereas the linguistic aspect focuses on formal languages. Secondly, we show that the model can capture different viewpoints on a variety of topics in a corpus of editorials about the Israeli-Palestinian conflict. We show both qualitative and quantitative improvements in TAM over two other state-of-the-art topic models.
8

Gupta, Prashant K., Deepak Sharma, and Javier Andreu-Perez. "Enhanced linguistic computational models and their similarity with Yager’s computing with words." Information Sciences 574 (October 2021): 259–78. http://dx.doi.org/10.1016/j.ins.2021.05.038.

9

Goldstein, Ariel, Zaid Zada, Eliav Buchnik, Mariano Schain, Amy Price, Bobbi Aubrey, Samuel A. Nastase, et al. "Shared computational principles for language processing in humans and deep language models." Nature Neuroscience 25, no. 3 (March 2022): 369–80. http://dx.doi.org/10.1038/s41593-022-01026-4.

Abstract:
Departing from traditional linguistic models, advances in deep learning have resulted in a new type of predictive (autoregressive) deep language models (DLMs). Using a self-supervised next-word prediction task, these models generate appropriate linguistic responses in a given context. In the current study, nine participants listened to a 30-min podcast while their brain responses were recorded using electrocorticography (ECoG). We provide empirical evidence that the human brain and autoregressive DLMs share three fundamental computational principles as they process the same natural narrative: (1) both are engaged in continuous next-word prediction before word onset; (2) both match their pre-onset predictions to the incoming word to calculate post-onset surprise; (3) both rely on contextual embeddings to represent words in natural contexts. Together, our findings suggest that autoregressive DLMs provide a new and biologically feasible computational framework for studying the neural basis of language.
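The next-word prediction and post-onset surprise principles described in this abstract can be sketched with a toy add-alpha-smoothed bigram model; the corpus below is invented for illustration, and the study itself used a deep autoregressive model rather than bigrams.

```python
import math
from collections import Counter, defaultdict


def train_bigram(tokens, alpha=1.0):
    """Estimate add-alpha smoothed bigram probabilities P(next | prev)."""
    vocab = set(tokens)
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1

    def prob(prev, nxt):
        total = sum(counts[prev].values())
        return (counts[prev][nxt] + alpha) / (total + alpha * len(vocab))

    return prob


tokens = "the cat sat on the mat the cat ate the fish".split()
prob = train_bigram(tokens)


def surprisal(prev, nxt):
    """Post-onset surprise: -log2 P(incoming word | context)."""
    return -math.log2(prob(prev, nxt))


# "the" -> "cat" is frequent in the toy corpus, so its surprisal is low;
# "the" -> "on" never occurs after "the", so its surprisal is higher.
print(surprisal("the", "cat"), surprisal("the", "on"))
```

The same quantity, computed from a DLM's contextual predictions instead of bigram counts, is what such studies correlate with neural signals after word onset.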
10

Segers, Nicole, and Pierre Leclercq. "Computational linguistics for design, maintenance, and manufacturing." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 21, no. 2 (March 19, 2007): 99–101. http://dx.doi.org/10.1017/s0890060407070163.

Abstract:
Although graphic representations have proven to be of value in computer-aided support and have received much attention in both research and practice (Goldschmidt, 1991; Goel, 1995; Achten, 1997; Do, 2002), linguistic representations presently do not significantly contribute to improve the information handling related to the computer support of a design product. During its life cycle, engineers and designers make many representations of a product. The information and knowledge used to create the product are usually represented visually in sketches, models, (technical) drawings, and images. Linguistic information is complementary to graphic information and essential to create the corporate memory of products. Linguistic information (i.e., the use of words, abbreviations, vocal comments, annotations, notes, and reports) creates meaningful information for designers and engineers as well as for computers (Segers, 2004; Juchmes et al., 2005). Captions, plain text, and keyword indexing are now common to support the communication between design actors (Lawson & Loke, 1997; Wong & Kvan, 1999; Heylighen, 2001; Boujut, 2003). Nevertheless, it is currently scarcely used to its full potential in design, maintenance, and manufacturing.
11

Zorzi, Marco, and Gabriella Vigliocco. "Dissociation between regular and irregular in connectionist architectures: Two processes, but still no special linguistic rules." Behavioral and Brain Sciences 22, no. 6 (December 1999): 1045–46. http://dx.doi.org/10.1017/s0140525x99552229.

Abstract:
Dual-mechanism models of language maintain a distinction between a lexicon and a computational system of linguistic rules. In his target article, Clahsen provides support for such a distinction, presenting evidence from German inflections. He argues for a structured lexicon, going beyond the strict lexicon versus rules dichotomy. We agree with the author in assuming a dual mechanism; however, we argue that a next step must be taken, going beyond the notion of the computational system as specific rules applying to a linguistic domain. By assuming a richer lexicon, the computational system can be conceived as a more general binding process that applies to different linguistic levels: syntax, morphology, reading, and spelling.
12

Iomdin, Leonid. "Microsyntactic Annotation of Corpora and its Use in Computational Linguistics Tasks." Journal of Linguistics/Jazykovedný casopis 68, no. 2 (December 1, 2017): 169–78. http://dx.doi.org/10.1515/jazcas-2017-0027.

Abstract:
Microsyntax is a linguistic discipline dealing with idiomatic elements whose important properties are strongly related to syntax. In a way, these elements may be viewed as transitional entities between the lexicon and the grammar, which explains why they are often underrepresented in both of these resource types: the lexicographer fails to see such elements as full-fledged lexical units, while the grammarian finds them too specific to justify the creation of individual well-developed rules. As a result, such elements are poorly covered by linguistic models used in advanced modern computational linguistic tasks like high-quality machine translation or deep semantic analysis. A possible way to mend the situation and improve the coverage and adequate treatment of microsyntactic units in linguistic resources is to develop corpora with microsyntactic annotation, closely linked to specially designed lexicons. The paper shows how this task is solved in the deeply annotated corpus of Russian, SynTagRus.
13

Feltes, Heloísa Pedroso de Moraes. "Embodiment in cognitive linguistic: from experientialism to computational neuroscience." DELTA: Documentação de Estudos em Lingüística Teórica e Aplicada 26, spe (2010): 503–33. http://dx.doi.org/10.1590/s0102-44502010000300006.

Abstract:
The aim of this paper is to reflect on the character of embodiment in the framework of Cognitive Linguistics, based on Lakoff, his collaborators and interlocutors. Initially I characterize the embodied mind via cognitive experientialism. In these terms, the theory shapes how human beings build and process the knowledge structures that regulate their individual and collective lives. Next, I discuss the Neural Theory of Language, in which embodiment is rebuilt from a five-level paradigm where structured connectionism carries the burden of computational description and explanation. Starting from these assumptions, and from classical problems with computational implementations of models of natural language functioning as reductionist-physicalist approaches, I conclude that embodiment, as a phenomenon under investigation, should not be formulated in terms of levels but rather treated in terms of interfaces, such that: (a) the epistemological commitments should be synchronically sustained across all interfaces of the investigation paradigm; (b) the conventional computational level should be taken as one of the problems to be treated within the structured-connectionism plan; (c) the strategic level-reduction paradigm and the results obtained from it might imply a kind of modularization of the research program itself; and (d) the modules would be interdependent only as a result of the reductionist proposal. I therefore hold that it is possible to do Cognitive Linguistics without adhering to structured connectionism or to neurocomputational simulation, as long as one operates with interface constructions between domains of investigation rather than with a reductionist paradigm treated in terms of 'levels'.
14

Heuser, Annika, and Polina Tsvilodub. "Comparing Symbolic Models of Language via Bayesian Inference (Student Abstract)." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 18 (May 18, 2021): 15799–800. http://dx.doi.org/10.1609/aaai.v35i18.17896.

Abstract:
Given recurring interest in structured representations in computational cognitive models, we extend a Bayesian scoring procedure for comparing symbolic models of language grammar. We conduct a case-study of modeling syntactic principles in German, providing preliminary results consistent with linguistic theory. We also note that dataset and part-of-speech (POS) tagger quality should not be taken for granted.
15

Pavlick, Ellie. "Semantic Structure in Deep Learning." Annual Review of Linguistics 8, no. 1 (January 14, 2022): 447–71. http://dx.doi.org/10.1146/annurev-linguistics-031120-122924.

Abstract:
Deep learning has recently come to dominate computational linguistics, leading to claims of human-level performance in a range of language processing tasks. Like much previous computational work, deep learning–based linguistic representations adhere to the distributional meaning-in-use hypothesis, deriving semantic representations from word co-occurrence statistics. However, current deep learning methods entail fundamentally new models of lexical and compositional meaning that are ripe for theoretical analysis. Whereas traditional distributional semantics models take a bottom-up approach in which sentence meaning is characterized by explicit composition functions applied to word meanings, new approaches take a top-down approach in which sentence representations are treated as primary and representations of words and syntax are viewed as emergent. This article summarizes our current understanding of how well such representations capture lexical semantics, world knowledge, and composition. The goal is to foster increased collaboration on testing the implications of such representations as general-purpose models of semantics.
16

Lenci, Alessandro. "Spazi di parole: metafore e rappresentazioni semantiche." PARADIGMI, no. 1 (May 2009): 83–100. http://dx.doi.org/10.3280/para2009-001007.

Abstract:
The aim of this paper is to analyse the analogy of the lexicon with a space defined by words, which is common to a number of computational models of meaning in cognitive science. This can be regarded as a case of constitutive scientific metaphor in the sense of Boyd (1979) and is grounded in the so-called Distributional Hypothesis, stating that the semantic similarity between two words is a function of the similarity of the linguistic contexts in which they typically co-occur. The meaning of words is represented in terms of their topological relations in a high-dimensional space, defined by their combinatorial behaviour in texts. A key consequence of adopting the metaphor of word spaces is that semantic representations are modelled as highly context-sensitive entities. Moreover, word space models promise to open interesting perspectives for the study of metaphorical uses in language, as well as of lexical dynamics in general.
Keywords: Cognitive sciences, Computational linguistics, Distributional models of the lexicon, Metaphor, Semantics, Word spaces.
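The Distributional Hypothesis described in this abstract can be illustrated with a minimal word-space sketch: words become co-occurrence count vectors, and semantic similarity is the cosine of the angle between them. The toy corpus and window size are assumptions for illustration only.

```python
import math
from collections import Counter


def cooccurrence_vectors(corpus, window=2):
    """Build word vectors from co-occurrence counts within a context window."""
    vectors = {}
    for sentence in corpus:
        for i, word in enumerate(sentence):
            ctx = sentence[max(0, i - window):i] + sentence[i + 1:i + 1 + window]
            vectors.setdefault(word, Counter()).update(ctx)
    return vectors


def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    shared = set(u) & set(v)
    dot = sum(u[w] * v[w] for w in shared)
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0


corpus = [
    ["the", "cat", "drinks", "milk"],
    ["the", "dog", "drinks", "water"],
    ["the", "cat", "chases", "the", "dog"],
]
vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" occur in similar contexts, so they come out more similar
# to each other than "cat" is to "milk".
print(cosine(vecs["cat"], vecs["dog"]), cosine(vecs["cat"], vecs["milk"]))
```

The "topological relations in a high-dimensional space" of the abstract are exactly these angular distances, scaled up to real corpora and many thousands of dimensions.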
17

van Tiel, Bob, Michael Franke, and Uli Sauerland. "Probabilistic pragmatics explains gradience and focality in natural language quantification." Proceedings of the National Academy of Sciences 118, no. 9 (February 22, 2021): e2005453118. http://dx.doi.org/10.1073/pnas.2005453118.

Abstract:
An influential view in philosophy and linguistics equates the meaning of a sentence to the conditions under which it is true. But it has been argued that this truth-conditional view is too rigid and that meaning is inherently gradient and revolves around prototypes. Neither of these abstract semantic theories makes direct predictions about quantitative aspects of language use. Hence, we compare these semantic theories empirically by applying probabilistic pragmatic models as a link function connecting linguistic meaning and language use. We consider the use of quantity words (e.g., “some,” “all”), which are fundamental to human language and thought. Data from a large-scale production study suggest that quantity words are understood via prototypes. We formulate and compare computational models based on the two views on linguistic meaning. These models also take into account cognitive factors, such as salience and numerosity representation. Statistical and empirical model comparison show that the truth-conditional model explains the production data just as well as the prototype-based model, when the semantics are complemented by a pragmatic module that encodes probabilistic reasoning about the listener’s uptake.
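The "probabilistic pragmatic models as a link function" mentioned in this abstract are in the Rational Speech Acts tradition, which can be sketched minimally as below; the state space, the literal meanings of "none"/"some"/"all", and the rationality parameter alpha are illustrative assumptions, and the paper's models are considerably richer.

```python
import math

# Possible world states: how many of 3 objects have the property.
states = [0, 1, 2, 3]
# Truth-conditional literal meanings of each quantity word.
utterances = {
    "none": lambda s: s == 0,
    "some": lambda s: s >= 1,
    "all":  lambda s: s == 3,
}


def literal_listener(u):
    """P(state | utterance): uniform over states where u is true."""
    truth = [1.0 if utterances[u](s) else 0.0 for s in states]
    z = sum(truth)
    return [t / z for t in truth]


def speaker(s, alpha=4.0):
    """P(utterance | state): softmax over how well a literal listener recovers s."""
    scores = []
    for u in utterances:
        p = literal_listener(u)[s]
        scores.append(p ** alpha if p > 0 else 0.0)
    z = sum(scores)
    return {u: sc / z for u, sc in zip(utterances, scores)}


# For state 3 the speaker strongly prefers "all": it is true only of 3,
# so the literal listener recovers the state with certainty.
print(speaker(3))
```

Plugging prototype-weighted or truth-conditional semantics into this speaker model is, in miniature, the kind of model comparison the paper carries out against production data.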
18

Abutalebi, Jubin, and Harald Clahsen. "Computational approaches to word retrieval in bilinguals." Bilingualism: Language and Cognition 22, no. 04 (July 18, 2019): 655–56. http://dx.doi.org/10.1017/s1366728919000221.

Abstract:
The cognitive architecture of human language processing has been studied for decades, but using computational modeling for such studies is a relatively recent topic. Indeed, computational approaches to language processing have become increasingly popular in our field, mainly due to advances in computational modeling techniques and the availability of large collections of experimental data. Language learning, particularly child language learning, has been the subject of many computational models. By simulating the process of child language learning, computational models may indeed teach us which linguistic representations are learnable from the input that children have access to (and which are not), as well as which mechanisms yield the same patterns of behavior that are found in children's language performance.
19

Fang, GaoLin, Wen Gao, and ZhaoQi Wang. "Incorporating linguistic structure into maximum entropy language models." Journal of Computer Science and Technology 18, no. 1 (January 2003): 131–36. http://dx.doi.org/10.1007/bf02946662.

20

Ten Oever, Sanne, Karthikeya Kaushik, and Andrea E. Martin. "Inferring the nature of linguistic computations in the brain." PLOS Computational Biology 18, no. 7 (July 28, 2022): e1010269. http://dx.doi.org/10.1371/journal.pcbi.1010269.

Abstract:
Sentences contain structure that determines their meaning beyond that of individual words. An influential study by Ding and colleagues (2016) used frequency tagging of phrases and sentences to show that the human brain is sensitive to structure, finding peaks of neural power at the rate at which structures were presented. Since then, there has been a rich debate on how best to explain this pattern of results, with profound impact on the language sciences. Models that use hierarchical structure building, as well as models based on associative sequence processing, can predict the neural response, creating an inferential impasse as to which class of models explains the nature of the linguistic computations reflected in the neural readout. In the current manuscript, we discuss pitfalls and common fallacies seen in the conclusions drawn in the literature, illustrated by various simulations. We conclude that inferring the neural operations of sentence processing from these neural data alone, or any data like them, is insufficient. We discuss how best to evaluate models and how to approach the modeling of neural readouts of sentence processing in a manner that remains faithful to cognitive, neural, and linguistic principles.
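The frequency-tagging logic behind the debated result can be sketched with a short simulation: a synthetic "neural" signal tracking words, phrases, and sentences at fixed rates shows spectral power peaks at exactly those rates. The rates (4, 2, and 1 Hz), noise level, and duration below are assumptions chosen to mirror the design, not the study's actual data.

```python
import numpy as np

fs = 100                      # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)  # 10 seconds of signal
rng = np.random.default_rng(0)

# A readout tracking sentences (1 Hz), phrases (2 Hz), and words (4 Hz),
# plus Gaussian noise.
signal = (np.sin(2 * np.pi * 1 * t)
          + np.sin(2 * np.pi * 2 * t)
          + np.sin(2 * np.pi * 4 * t)
          + 0.5 * rng.standard_normal(t.size))

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Power peaks should appear at the tagged rates, well above the noise floor.
for f in (1.0, 2.0, 4.0):
    idx = np.argmin(np.abs(freqs - f))
    print(f, power[idx] > 50 * np.median(power))
```

The paper's point is precisely that such peaks alone do not discriminate between hierarchical and sequence-based generators, since either can produce periodicity at these rates.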
21

Yi, Irene. "Sociolinguistically-aware computational models of Mandarin-English codeswitching." Proceedings of the Linguistic Society of America 7, no. 1 (May 5, 2022): 5247. http://dx.doi.org/10.3765/plsa.v7i1.5247.

Abstract:
Current research on computational modeling of codeswitching has focused on the use of syntactic constraints as model predictors (Li & Fung 2014; Li & Vu 2019). However, proposed syntactic constraints (Poplack 1978; Poplack 1980; Myers-Scotton 1993; Belazi et al. 1994) are largely based around Spanish-English codeswitching, and are violated repeatedly (and potentially systematically) by codeswitching involving other languages. Thus, a computational model trained on these syntactic constraints, when applied to codeswitching involving languages that are not Spanish-English, may not capture the naturalistic patterns of those languages in codeswitching contexts. This paper demonstrates the value of sociolinguistic factors as predictors in training a Classification and Regression Tree (CART) model on novel Mandarin-English codeswitch data, which come from 12 bilingual speakers of two different generations from Grand Rapids, Michigan. Participants also answered metalinguistic questions about their own language practices and attitudes and completed a written Language History Questionnaire (LHQ) (Li et al. 2020), which asked for self-evaluations of language habits (proficiency, immersion, and dominance in the two languages). LHQ responses were then quantified into numerical scores serving as sociolinguistic predictors in the CART model. The model, which highlighted that age, L2 Dominance, and L1 Immersion were among the top predictors, achieved an accuracy of 0.804 with the area under its ROC curve being 0.692. This is comparable to, if not more powerful than, previous computational studies (e.g. Li & Fung 2014) that trained models using only proposed syntactic constraints as predictors. This paper shows the importance of sociolinguistic factors in computational research previously focused on syntactic constraints; the intersection of these methodologies could improve a cross-linguistic and computational understanding of codeswitching patterns.
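The CART approach used in this study can be illustrated without any library by finding the single split on one predictor that minimizes Gini impurity; the feature values and labels below are invented for illustration and are not the study's data, which used many predictors and a full tree.

```python
def gini(labels):
    """Gini impurity of a list of binary class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p1 = sum(labels) / n
    return 1.0 - p1 ** 2 - (1.0 - p1) ** 2


def best_split(xs, ys):
    """Threshold on a single predictor minimizing weighted Gini impurity."""
    best = (float("inf"), None)
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[0]:
            best = (score, t)
    return best


# Hypothetical data: an L1-immersion score per speaker vs. whether that
# speaker codeswitched (1) or not (0) in a given context.
l1_immersion = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]
codeswitched = [0, 0, 0, 1, 1, 1]
score, threshold = best_split(l1_immersion, codeswitched)
print(threshold)  # the split separating the two groups
```

A full CART model recursively applies this split search across all predictors, which is how the study ranked age, L2 dominance, and L1 immersion as top predictors.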
22

Costa, Lucio. "Towards a Lexically Oriented Automatic Parsing." Lingvisticæ Investigationes. International Journal of Linguistics and Language Resources 15, no. 1 (January 1, 1991): 1–40. http://dx.doi.org/10.1075/li.15.1.02cos.

Abstract:
In spite of the claim on the interactions between artificial intelligence (AI) and linguistics, AI research on natural language has developed independently from the work of linguists. On the one hand, computational models of the faculties of language have been worked out largely independently from the models developed in linguistics. On the other hand, AI system design has been oriented towards practical solutions, whose main motivations were to use context-free rules, to avoid an inverse transformational component, and to represent meanings by data structures. This paper draws on the linguistic works of Z. S. Harris and M. Gross to develop an automatic distributed-control parser that takes seriously into account the idiosyncratic behaviour of lexical items. The general framework for the discussion is the procedural nature of the denotational process.
23

Valenzuela, Javier, Joseph Hilferty, and Oscar Vilarroya. "Why are embodied experiments relevant to cognitive linguistics?" Computational Construction Grammar and Constructional Change 30 (December 19, 2016): 265–86. http://dx.doi.org/10.1075/bjl.30.12val.

Abstract:
Computational simulation models of cognitive linguistics are relatively scarce (cf. Valenzuela, 2010). This is due, among other things, to the inherent complexity of the movement's conception of language. Cognitive linguistics places great emphasis on the integration of language with sensorimotor and conceptual structure, as well as on the embodied nature of cognition and the perspective of language as a social construct. This has made it difficult for cognitive linguistics to take advantage of the benefits of computational simulation (cf. McClelland, 2009). The robotic paradigm of Luc Steels (Steels 1998, 2000, 2004, 2005) offers one of the most complete implementations of cognitive linguistics to date. In this paradigm, autonomous robotic agents play communication games in which linguistic information is represented by a version of construction grammar called "Fluid Construction Grammar". The present chapter explains how this simulation is a true implementation of the theoretical proposals made by cognitive linguistics. More specifically, we show how these proposals have been operationalized for their use in the system. Computational simulations like the one described here should be of great interest to any cognitive linguist. They provide an excellent testing ground for any theoretical proposal, bringing cognitive linguistics even closer to the cognitive-science enterprise.
24

Ito, Noriko, Toru Sugimoto, Yusuke Takahashi, Shino Iwashita, and Michio Sugeno. "Computational Models of Language Within Context and Context-Sensitive Language Understanding." Journal of Advanced Computational Intelligence and Intelligent Informatics 10, no. 6 (November 20, 2006): 782–90. http://dx.doi.org/10.20965/jaciii.2006.p0782.

Full text
Abstract:
We propose two computational models - one of a language within context based on systemic functional linguistic theory and one of context-sensitive language understanding. The model of a language within context called the Semiotic Base characterizes contextual, semantic, lexicogrammatical, and graphological aspects of input texts. The understanding process is divided into shallow and deep analyses. Shallow analysis consists of morphological and dependency analyses and word concept and case relation assignment, mainly by existing natural language processing tools and machine-readable dictionaries. Results are used to detect the contextual configuration of input text in contextual analysis. This is followed by deep analyses of lexicogrammar, semantics, and concepts, conducted by referencing a subset of resources related to the detected context. Our proposed models have been implemented in Java and verified by integrating them into such applications as dialog-based question-and-answer (Q&A).
APA, Harvard, Vancouver, ISO, and other styles
25

VAN HELL, JANET G. "Words only go so far: Linguistic context affects bilingual word processing." Bilingualism: Language and Cognition 22, no. 04 (July 3, 2018): 689–90. http://dx.doi.org/10.1017/s1366728918000706.

Full text
Abstract:
In their keynote paper, Dijkstra, Wahl, Buytenhuijs, van Halem, Al-jibouri, de Korte, and Rekké (2018) present a computational model of bilingual word recognition and translation, Multilink, that integrates and further refines the architecture and processing principles of two influential models of bilingual word processing: the Bilingual Interactive Activation model (BIA/BIA+) and the Revised Hierarchical Model (RHM). Unlike the earlier models, Multilink has been implemented as a computational model so its design principles and assumptions can be compared with human processing data in simulation studies – which is an important step forward in model development and refinement. But Multilink also leaves behind an important theoretical advancement that was touched upon in extending BIA to BIA+ (Dijkstra & Van Heuven, 2002): how linguistic context influences word processing. In their presentation of BIA+, Dijkstra and Van Heuven (2002) hypothesized that syntactic and semantic aspects of sentence context may affect the word identification system. Theoretically, this was an important step forward, as none of the bilingual word processing models (and few monolingual word processing models, for that matter) had incorporated linguistic context, and at that time only a handful of empirical studies had examined how linguistic context affects bilingual word processing. However, in the past 15 years a significant body of empirical work has been published that examines how semantic and syntactic information in sentences impacts word processing in bilinguals. These important insights are not incorporated in the Multilink model.
APA, Harvard, Vancouver, ISO, and other styles
26

Kirby, Simon. "Natural Language From Artificial Life." Artificial Life 8, no. 2 (April 2002): 185–215. http://dx.doi.org/10.1162/106454602320184248.

Full text
Abstract:
This article aims to show that linguistics, in particular the study of the lexico-syntactic aspects of language, provides fertile ground for artificial life modeling. A survey of the models that have been developed over the last decade and a half is presented to demonstrate that ALife techniques have a lot to offer an explanatory theory of language. It is argued that this is because much of the structure of language is determined by the interaction of three complex adaptive systems: learning, culture, and biological evolution. Computational simulation, informed by theoretical linguistics, is an appropriate response to the challenge of explaining real linguistic data in terms of the processes that underpin human language.
APA, Harvard, Vancouver, ISO, and other styles
27

Popescu, M., P. Gader, and J. M. Keller. "Fuzzy spatial pattern processing using linguistic hidden Markov models." IEEE Transactions on Fuzzy Systems 14, no. 1 (February 2006): 81–92. http://dx.doi.org/10.1109/tfuzz.2005.861615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Campello, R. J. G. B., and W. Caradori do Amaral. "Hierarchical fuzzy relational models: linguistic interpretation and universal approximation." IEEE Transactions on Fuzzy Systems 14, no. 3 (June 2006): 446–53. http://dx.doi.org/10.1109/tfuzz.2006.876365.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Beinborn, Lisa, and Rochelle Choenni. "Semantic Drift in Multilingual Representations." Computational Linguistics 46, no. 3 (November 2020): 571–603. http://dx.doi.org/10.1162/coli_a_00382.

Full text
Abstract:
Multilingual representations have mostly been evaluated based on their performance on specific tasks. In this article, we look beyond engineering goals and analyze the relations between languages in computational representations. We introduce a methodology for comparing languages based on their organization of semantic concepts. We propose to conduct an adapted version of representational similarity analysis of a selected set of concepts in computational multilingual representations. Using this analysis method, we can reconstruct a phylogenetic tree that closely resembles those assumed by linguistic experts. These results indicate that multilingual distributional representations that are only trained on monolingual text and bilingual dictionaries preserve relations between languages without the need for any etymological information. In addition, we propose a measure to identify semantic drift between language families. We perform experiments on word-based and sentence-based multilingual models and provide both quantitative results and qualitative examples. Analyses of semantic drift in multilingual representations can serve two purposes: They can indicate unwanted characteristics of the computational models and they provide a quantitative means to study linguistic phenomena across languages.
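The comparison method described in this abstract can be sketched in a few lines: build a concept-by-concept similarity matrix per language and correlate the matrices' upper triangles. The embeddings below are random toy vectors, not the authors' data, and the function names are illustrative, not from the paper:

```python
import numpy as np

def similarity_matrix(vectors):
    """Pairwise cosine similarities between row vectors (one row per concept)."""
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    return v @ v.T

def representational_similarity(lang_a, lang_b):
    """Correlate the upper triangles of two languages' concept-similarity
    matrices; both inputs must use the same ordered concept list."""
    sa, sb = similarity_matrix(lang_a), similarity_matrix(lang_b)
    iu = np.triu_indices(sa.shape[0], k=1)
    return np.corrcoef(sa[iu], sb[iu])[0, 1]

# Toy embeddings for four shared concepts in three hypothetical "languages":
rng = np.random.default_rng(0)
en = rng.normal(size=(4, 8))
de = en + rng.normal(scale=0.05, size=(4, 8))  # small perturbation: a "related" language
fi = rng.normal(size=(4, 8))                   # independent draw: an "unrelated" language

print(representational_similarity(en, de))
print(representational_similarity(en, fi))
```

A language whose concept space is a mild perturbation of another's yields a correlation near 1; clustering such pairwise scores is what lets the authors recover a phylogenetic-style tree.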
APA, Harvard, Vancouver, ISO, and other styles
30

JAROSZ, GAJA. "Implicational markedness and frequency in constraint-based computational models of phonological learning." Journal of Child Language 37, no. 3 (March 22, 2010): 565–606. http://dx.doi.org/10.1017/s0305000910000103.

Full text
Abstract:
This study examines the interacting roles of implicational markedness and frequency from the joint perspectives of formal linguistic theory, phonological acquisition and computational modeling. The hypothesis that child grammars are rankings of universal constraints, as in Optimality Theory (Prince & Smolensky, 1993/2004), that learning involves a gradual transition from an unmarked initial state to the target grammar, and that order of acquisition is guided by frequency, along the lines of Levelt, Schiller & Levelt (2000), is investigated. The study reviews empirical findings on syllable structure acquisition in Dutch, German, French and English, and presents novel findings on Polish. These comparisons reveal that, to the extent allowed by implicational markedness universals, frequency covaries with acquisition order across languages. From the computational perspective, the paper shows that interacting roles of markedness and frequency in a class of constraint-based phonological learning models embody this hypothesis, and their predictions are illustrated via computational simulation.
APA, Harvard, Vancouver, ISO, and other styles
31

Bisazza, Arianna, and Marcello Federico. "A Survey of Word Reordering in Statistical Machine Translation: Computational Models and Language Phenomena." Computational Linguistics 42, no. 2 (June 2016): 163–205. http://dx.doi.org/10.1162/coli_a_00245.

Full text
Abstract:
Word reordering is one of the most difficult aspects of statistical machine translation (SMT), and an important factor of its quality and efficiency. Despite the vast amount of research published to date, the interest of the community in this problem has not decreased, and no single method appears to be strongly dominant across language pairs. Instead, the choice of the optimal approach for a new translation task still seems to be mostly driven by empirical trials. To orient the reader in this vast and complex research area, we present a comprehensive survey of word reordering viewed as a statistical modeling challenge and as a natural language phenomenon. The survey describes in detail how word reordering is modeled within different string-based and tree-based SMT frameworks and as a stand-alone task, including systematic overviews of the literature in advanced reordering modeling. We then question why some approaches are more successful than others in different language pairs. We argue that besides measuring the amount of reordering, it is important to understand which kinds of reordering occur in a given language pair. To this end, we conduct a qualitative analysis of word reordering phenomena in a diverse sample of language pairs, based on a large collection of linguistic knowledge. Empirical results in the SMT literature are shown to support the hypothesis that a few linguistic facts can be very useful to anticipate the reordering characteristics of a language pair and to select the SMT framework that best suits them.
APA, Harvard, Vancouver, ISO, and other styles
32

Dorado, Rubén. "Statistical models for languaje representation." Revista Ontare 1, no. 1 (September 16, 2015): 29. http://dx.doi.org/10.21158/23823399.v1.n1.2013.1208.

Full text
Abstract:
This paper discusses several models for the computational representation of language. First, some n-gram models based on Markov models are introduced. Second, a family of models known as the exponential models is considered; this family in particular allows the incorporation of several features into the model. Third, a recent line of research, the probabilistic Bayesian approach, is discussed. In this kind of model, language is modeled as a probability distribution, and several distributions and probabilistic processes, such as the Dirichlet distribution and the Pitman-Yor process, are used to approximate linguistic phenomena. Finally, the problem of the sparseness of language data and its common solution, known as smoothing, is discussed.
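As a concrete illustration of the sparseness problem this abstract mentions, here is a minimal bigram model with add-one (Laplace) smoothing; the tiny corpus and helper function are invented for the example, not taken from the paper:

```python
from collections import Counter

def bigram_prob(corpus, smoothing=1.0):
    """Return an add-k smoothed bigram probability function P(w2 | w1)."""
    tokens = corpus.split()
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    vocab_size = len(set(tokens))

    def prob(w1, w2):
        # Unseen bigrams get a small but non-zero probability.
        return (bigrams[(w1, w2)] + smoothing) / (unigrams[w1] + smoothing * vocab_size)

    return prob

p = bigram_prob("the cat sat on the mat")
print(p("cat", "the"))  # bigram never seen, yet P = 1/6, not 0
print(p("the", "cat"))  # seen once: (1+1)/(2+5) = 2/7
```

Without smoothing, any sentence containing one unseen bigram would receive probability zero; redistributing a little mass to unseen events is the fix the abstract refers to.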
APA, Harvard, Vancouver, ISO, and other styles
33

Brighton, Henry, and Simon Kirby. "Understanding Linguistic Evolution by Visualizing the Emergence of Topographic Mappings." Artificial Life 12, no. 2 (January 2006): 229–42. http://dx.doi.org/10.1162/artl.2006.12.2.229.

Full text
Abstract:
We show how cultural selection for learnability during the process of linguistic evolution can be visualized using a simple iterated learning model. Computational models of linguistic evolution typically focus on the nature of, and conditions for, stable states. We take a novel approach and focus on understanding the process of linguistic evolution itself. What kind of evolutionary system is this process? Using visualization techniques, we explore the nature of replicators in linguistic evolution, and argue that replicators correspond to local regions of regularity in the mapping between meaning and signals. Based on this argument, we draw parallels between phenomena observed in the model and linguistic phenomena observed across languages. We then go on to identify issues of replication and selection as key points of divergence in the parallels between the processes of linguistic evolution and biological evolution.
APA, Harvard, Vancouver, ISO, and other styles
34

Wei, Guiwu, Xiaofei Zhao, Hongjun Wang, and Rui Lin. "Models for Multiple Attribute Group Decision Making with 2-Tuple Linguistic Assessment Information." International Journal of Computational Intelligence Systems 3, no. 3 (2010): 315. http://dx.doi.org/10.2991/ijcis.2010.3.3.7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Wei, Guiwu, Rui Lin, Xiaofei Zhao, and Hongjun Wang. "Models for Multiple Attribute Group Decision Making with 2-Tuple Linguistic Assessment Information." International Journal of Computational Intelligence Systems 3, no. 3 (September 2010): 315–24. http://dx.doi.org/10.1080/18756891.2010.9727702.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Baroni, Marco. "Linguistic generalization and compositionality in modern artificial neural networks." Philosophical Transactions of the Royal Society B: Biological Sciences 375, no. 1791 (December 16, 2019): 20190307. http://dx.doi.org/10.1098/rstb.2019.0307.

Full text
Abstract:
In the last decade, deep artificial neural networks have achieved astounding performance in many natural language-processing tasks. Given the high productivity of language, these models must possess effective generalization abilities. It is widely assumed that humans handle linguistic productivity by means of algebraic compositional rules: are deep networks similarly compositional? After reviewing the main innovations characterizing current deep language-processing networks, I discuss a set of studies suggesting that deep networks are capable of subtle grammar-dependent generalizations, but also that they do not rely on systematic compositional rules. I argue that the intriguing behaviour of these devices (still awaiting a full understanding) should be of interest to linguists and cognitive scientists, as it offers a new perspective on possible computational strategies to deal with linguistic productivity beyond rule-based compositionality, and it might lead to new insights into the less systematic generalization patterns that also appear in natural language. This article is part of the theme issue ‘Towards mechanistic models of meaning composition’.
APA, Harvard, Vancouver, ISO, and other styles
37

Budzynska, Katarzyna, Michał Araszkiewicz, Agnieszka Budzyńska-Daca, Martin Hinton, John Lawrence, Sanjay Modgil, Matthias Thimm, et al. "Warsaw Argumentation Week (Waw 2018) Organised by the Polish School of Argumentation and Our Colleagues from Germany and the UK, 6th-16th September 2018." Studies in Logic, Grammar and Rhetoric 55, no. 1 (September 1, 2018): 231–39. http://dx.doi.org/10.2478/slgr-2018-0036.

Full text
Abstract:
In September 2018, the ArgDiaP association, along with colleagues from Germany and the UK, organised one of the longest and most interdisciplinary series of events ever dedicated to argumentation - Warsaw Argumentation Week, WAW 2018. The eleven-day ‘week’ featured a five day graduate school on computational and linguistic perspectives on argumentation (3rd SSA school); five workshops: on systems and algorithms for formal argumentation (2nd SAFA), argumentation in relation to society (1st ArgSoc), philosophical approaches to argumentation (1st ArgPhil), legal argumentation (2nd MET-ARG) and argumentation in rhetoric (1st MET-RhET); and two conferences: on computational models of argumentation (7th COMMA conference) and on argumentation and corpus linguistics (16th ArgDiaP conference). WAW hosted twelve tutorials and eight invited talks as well as welcoming over 130 participants. All the conferences and workshops publish pre- or post-proceedings in the top journals and book series in the field.
APA, Harvard, Vancouver, ISO, and other styles
38

Montrul, Silvina. "Representational and Computational Changes in Heritage Language Grammars." Heritage Language Journal 18, no. 2 (November 10, 2021): 1–30. http://dx.doi.org/10.1163/15507076-12340011.

Full text
Abstract:
The notion of complexity has been applied to descriptions and comparisons of languages and to explanations related to ease and difficulty of various linguistic phenomena in first and second language acquisition. It has been noted that compared to baseline grammars, heritage language grammars are less complex, displaying morphological simplification and structural shrinking, especially among heritage speakers with lower proficiency in the language. On some recent proposals of gender agreement in Spanish and Norwegian (Fuchs et al., 2015; Lohndal & Putnam, 2020), these differences are representational, affecting the projection of functional categories and feature specifications in the syntax. An alternative possibility is that differences between baseline and heritage grammars arise from computational considerations related to bilingualism, affecting speed of lexical access and feature reassembly online in the minority language. We illustrate this proposal with empirical data from gender agreement and differential object marking. Although presented as alternatives, the representational and computational explanations are not incompatible, and may both be adequate to capture varying levels of variability modulated by linguistic proficiency. These proposals formalize bilingual acquisition models of grammar competition and directly relate the availability and type of input (the acquisition evidence) to the locus and nature of the grammatical differences between heritage and baseline grammars.
APA, Harvard, Vancouver, ISO, and other styles
39

Schilperoord, Joost. "Grammaticale Constructies En Micro-Planning Bij Tekstproductie." Toegepaste Taalwetenschap in Artikelen 50 (January 1, 1994): 45–56. http://dx.doi.org/10.1075/ttwia.50.05sch.

Full text
Abstract:
In this paper it is argued that, contrary to computational models of language production, in the production system grammatical knowledge takes the form of conventionalized declarative schemes. Such schemes can be identified as a particular function word and an obligatory element, for instance, a determiner and a noun. The argument is based on a particular pause pattern observed in written language production. A cognitive linguistic account of the notion 'grammatical scheme' is given through a discussion of Langacker's usage-based model of linguistic knowledge and the 'mental grammar'.
APA, Harvard, Vancouver, ISO, and other styles
40

Ma, Xiao, Trishala Neeraj, and Mor Naaman. "A Computational Approach to Perceived Trustworthiness of Airbnb Host Profiles." Proceedings of the International AAAI Conference on Web and Social Media 11, no. 1 (May 3, 2017): 604–7. http://dx.doi.org/10.1609/icwsm.v11i1.14937.

Full text
Abstract:
We developed a novel computational framework to predict the perceived trustworthiness of host profile texts in the context of online lodging marketplaces. To achieve this goal, we developed a dataset of 4,180 Airbnb host profiles annotated with perceived trustworthiness. To the best of our knowledge, the dataset along with our models allow for the first computational evaluation of perceived trustworthiness of textual profiles, which are ubiquitous in online peer-to-peer marketplaces. We provide insights into the linguistic factors that contribute to higher and lower perceived trustworthiness for profiles of different lengths.
APA, Harvard, Vancouver, ISO, and other styles
41

Poeppel, David, William J. Idsardi, and Virginie van Wassenhove. "Speech perception at the interface of neurobiology and linguistics." Philosophical Transactions of the Royal Society B: Biological Sciences 363, no. 1493 (September 21, 2007): 1071–86. http://dx.doi.org/10.1098/rstb.2007.2160.

Full text
Abstract:
Speech perception consists of a set of computations that take continuously varying acoustic waveforms as input and generate discrete representations that make contact with the lexical representations stored in long-term memory as output. Because the perceptual objects that are recognized by the speech perception system enter into subsequent linguistic computation, the format that is used for lexical representation and processing fundamentally constrains the speech perceptual processes. Consequently, theories of speech perception must, at some level, be tightly linked to theories of lexical representation. Minimally, speech perception must yield representations that smoothly and rapidly interface with stored lexical items. Adopting the perspective of Marr, we argue and provide neurobiological and psychophysical evidence for the following research programme. First, at the implementational level, speech perception is a multi-time resolution process, with perceptual analyses occurring concurrently on at least two time scales (approx. 20–80 ms, approx. 150–300 ms), commensurate with (sub)segmental and syllabic analyses, respectively. Second, at the algorithmic level, we suggest that perception proceeds on the basis of internal forward models, or uses an ‘analysis-by-synthesis’ approach. Third, at the computational level (in the sense of Marr), the theory of lexical representation that we adopt is principally informed by phonological research and assumes that words are represented in the mental lexicon in terms of sequences of discrete segments composed of distinctive features. One important goal of the research programme is to develop linking hypotheses between putative neurobiological primitives (e.g. temporal primitives) and those primitives derived from linguistic inquiry, to arrive ultimately at a biologically sensible and theoretically satisfying model of representation and computation in speech.
APA, Harvard, Vancouver, ISO, and other styles
42

HUENERFAUTH, MATT. "SPATIAL, TEMPORAL, AND SEMANTIC MODELS FOR AMERICAN SIGN LANGUAGE GENERATION: IMPLICATIONS FOR GESTURE GENERATION." International Journal of Semantic Computing 02, no. 01 (March 2008): 21–45. http://dx.doi.org/10.1142/s1793351x08000336.

Full text
Abstract:
Software to generate animations of American Sign Language (ASL) has important accessibility benefits for the significant number of deaf adults with low levels of written language literacy. We have implemented a prototype software system to generate an important subset of ASL phenomena called "classifier predicates," complex and spatially descriptive types of sentences. The output of this prototype system has been evaluated by native ASL signers. Our generator includes several novel models of 3D space, spatial semantics, and temporal coordination motivated by linguistic properties of ASL. These classifier predicates have several similarities to iconic gestures that often co-occur with spoken language; these two phenomena will be compared. This article explores implications of the design of our system for research in multimodal gesture generation systems. A conceptual model of multimodal communication signals is introduced to show how computational linguistic research on ASL relates to the field of multimodal natural language processing.
APA, Harvard, Vancouver, ISO, and other styles
43

Yang, Charles. "A formalist perspective on language acquisition." Epistemological issue with keynote article “A Formalist Perspective on Language Acquisition” by Charles Yang 8, no. 6 (November 26, 2018): 665–706. http://dx.doi.org/10.1075/lab.18014.yan.

Full text
Abstract:
Language acquisition is a computational process by which linguistic experience is integrated into the learner’s initial stage of knowledge. To understand language acquisition thus requires precise statements about these components and their interplay, stepping beyond the philosophical and methodological disputes such as the generative vs. usage-based approaches. I review several mathematical models that have guided the study of child language acquisition: how learners integrate experience with their prior knowledge of linguistic structures, how researchers assess the progress of language acquisition with rigor and clarity, and how children form the rules of language even in the face of exceptions. I also suggest that these models are applicable to second language acquisition (L2), yielding potentially important insights on the continuities and differences between child and adult language.
APA, Harvard, Vancouver, ISO, and other styles
44

Anderson, Andrew J., Douwe Kiela, Stephen Clark, and Massimo Poesio. "Visually Grounded and Textual Semantic Models Differentially Decode Brain Activity Associated with Concrete and Abstract Nouns." Transactions of the Association for Computational Linguistics 5 (December 2017): 17–30. http://dx.doi.org/10.1162/tacl_a_00043.

Full text
Abstract:
Important advances have recently been made using computational semantic models to decode brain activity patterns associated with concepts; however, this work has almost exclusively focused on concrete nouns. How well these models extend to decoding abstract nouns is largely unknown. We address this question by applying state-of-the-art computational models to decode functional Magnetic Resonance Imaging (fMRI) activity patterns, elicited by participants reading and imagining a diverse set of both concrete and abstract nouns. One of the models we use is linguistic, exploiting the recent word2vec skipgram approach trained on Wikipedia. The second is visually grounded, using deep convolutional neural networks trained on Google Images. Dual coding theory considers concrete concepts to be encoded in the brain both linguistically and visually, and abstract concepts only linguistically. Splitting the fMRI data according to human concreteness ratings, we indeed observe that both models significantly decode the most concrete nouns; however, accuracy is significantly greater using the text-based models for the most abstract nouns. More generally this confirms that current computational models are sufficiently advanced to assist in investigating the representational structure of abstract concepts in the brain.
APA, Harvard, Vancouver, ISO, and other styles
45

Labella, Álvaro, Rosa M. Rodríguez, Ahmad A. Alzahrani, and Luis Martínez. "A Consensus Model for Extended Comparative Linguistic Expressions with Symbolic Translation." Mathematics 8, no. 12 (December 10, 2020): 2198. http://dx.doi.org/10.3390/math8122198.

Full text
Abstract:
Consensus Reaching Process (CRP) is a necessary process to achieve agreed solutions in group decision making (GDM) problems. Usually, these problems are defined in uncertain contexts, in which experts do not have a full and precise knowledge about all aspects of the problem. In real-world GDM problems under uncertainty, it is usual that experts express their preferences by using linguistic expressions. Consequently, different methodologies have modelled linguistic information, among which computing with words stands out, whose basis is the fuzzy linguistic approach and its extensions. Even though multiple consensus approaches under fuzzy linguistic environments have been proposed in the specialized literature, there are still areas where their performance must be improved because of several persistent drawbacks. The drawbacks include the use of single linguistic terms that are not always enough to model the uncertainty in experts’ knowledge, and the oversimplification of fuzzy information during the computational processes by defuzzification into crisp values, which usually implies a loss of information and precision in the results as well as a lack of interpretability. Therefore, to mitigate these drawbacks, this paper presents a novel CRP for GDM problems dealing with Extended Comparative Linguistic Expressions with Symbolic Translation (ELICIT) for modelling experts’ linguistic preferences. Such a CRP overcomes previous limitations because ELICIT information allows fuzzy modelling of the experts’ uncertainty, including hesitancy, and performs comprehensive fuzzy computations to, ultimately, obtain precise and understandable linguistic results. Additionally, the proposed CRP model is implemented and integrated into the CRP support system AFRYCA 3.0 (A FRamework for the analYsis of Consensus Approaches), which facilitates the application of the proposed CRP and its comparison with previous models.
APA, Harvard, Vancouver, ISO, and other styles
46

Schnelle, Helmut. "Grammar and brain." Behavioral and Brain Sciences 26, no. 6 (December 2003): 689–90. http://dx.doi.org/10.1017/s0140525x03450152.

Full text
Abstract:
Jackendoff's account of relating linguistic structure and brain structure is too restricted in concentrating on formal features of computational requirements, neglecting the achievements of various types of neuroscientific modelling. My own approaches to neuronal models of syntactic organization show how these requirements could be met. The book's lack of discussion of a sound philosophy of the relation is briefly mentioned.
APA, Harvard, Vancouver, ISO, and other styles
47

Burda, Michal, Pavel Rusnok, and Martin Štěpnička. "Mining Linguistic Associations for Emergent Flood Prediction Adjustment." Advances in Fuzzy Systems 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/131875.

Full text
Abstract:
Floods belong to the most hazardous natural disasters, and their disaster management relies heavily on precise forecasts. These forecasts are provided by physical models based on differential equations. However, these models depend on unreliable inputs such as measurements or parameter estimations, which causes undesirable inaccuracies. Thus, an appropriate data-mining analysis of the physical model and its precision, based on features that determine distinct situations, seems helpful in adjusting the physical model. An application of the fuzzy GUHA method in flood peak prediction is presented. Measured water flow rate data from a system for flood predictions were used in order to mine fuzzy association rules expressed in natural language. The provided data was first extended by generating artificial variables (features). The resulting variables were then translated into fuzzy GUHA tables with the help of Evaluative Linguistic Expressions in order to mine associations. The found associations were interpreted as fuzzy IF-THEN rules and used jointly with the Perception-based Logical Deduction inference method to predict the expected time shift of flow rate peaks forecasted by the given physical model. Results obtained from this adjusted model were statistically evaluated and the improvement in forecasting accuracy was confirmed.
APA, Harvard, Vancouver, ISO, and other styles
48

WANG, YINGXU. "COGNITIVE LINGUISTIC PERSPECTIVES ON THE CHINESE LANGUAGE." New Mathematics and Natural Computation 09, no. 02 (July 2013): 237–60. http://dx.doi.org/10.1142/s179300571340005x.

Full text
Abstract:
Chinese is one of the oldest and most widely used languages with an ideographic writing system. It is of interest to analyze the advantages and disadvantages of Chinese in cognitive linguistics, cognitive informatics, and knowledge science. This paper presents a comparative study on the fundamental theories and formal models of Chinese and other languages. A number of interesting findings on the cognitive and social impacts of the widely used Chinese language are revealed. It is found that, although ideographic languages are more efficient in language manipulation, alphabetic languages contributed more to the development of the knowledge processing power of the brain. A set of fundamental properties of knowledge is elicited, which reveals that the knowledge space of an individual is proportional to both the number of concepts and the number of their relations developed in long-term memory of the brain. Toward a more powerful and efficient scientific language for rigorous inference, the expression means of the Chinese language may yet need to be extended in its abstraction mechanisms, adopting a convergent approach to integrate and synergize observations and truths in order to form rigorous theories and a formal knowledge framework. The findings of this work provide a foundation for comparative studies on Chinese and other languages in particular, and for cognitive linguistics and knowledge science in general.
APA, Harvard, Vancouver, ISO, and other styles
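The abstract's claim that an individual's knowledge space grows with both the number of concepts and the number of their relations can be illustrated with a simple combinatorial count. This is a hypothetical sketch of the growth pattern, not the paper's actual formalization:

```python
# Illustrative sketch: counting maximum pairwise relations among n concepts
# shows how relation capacity (and hence a knowledge space proportional to it)
# grows much faster than the number of concepts alone.

def max_pairwise_relations(n_concepts):
    """Maximum number of undirected pairwise relations among n concepts."""
    return n_concepts * (n_concepts - 1) // 2

n_small = max_pairwise_relations(10)   # 45
n_double = max_pairwise_relations(20)  # 190
```

Doubling the number of concepts roughly quadruples the pairwise relation capacity, which is the kind of super-linear growth the abstract alludes to.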
49

Mairesse, François, and Marilyn A. Walker. "Controlling User Perceptions of Linguistic Style: Trainable Generation of Personality Traits." Computational Linguistics 37, no. 3 (September 2011): 455–88. http://dx.doi.org/10.1162/coli_a_00063.

Full text
Abstract:
Recent work in natural language generation has begun to take linguistic variation into account, developing algorithms that are capable of modifying the system's linguistic style based either on the user's linguistic style or on other factors, such as personality or politeness. While stylistic control has traditionally relied on handcrafted rules, statistical methods are likely to be needed for generation systems to scale to the production of the large range of variation observed in human dialogues. Previous work on statistical natural language generation (SNLG) has shown that the grammaticality and naturalness of generated utterances can be optimized from data; however, these data-driven methods have not been shown to produce stylistic variation that is perceived by humans in the way that the system intended. This paper describes Personage, a highly parameterizable language generator whose parameters are based on psychological findings about the linguistic reflexes of personality. We present a novel SNLG method that uses parameter estimation models trained on personality-annotated data to predict the generation decisions required to convey any combination of scalar values along the five main dimensions of personality. A human evaluation shows that parameter estimation models produce recognizable stylistic variation along multiple dimensions, on a continuous scale, and without the computational cost incurred by overgeneration techniques.
APA, Harvard, Vancouver, ISO, and other styles
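The core idea in the abstract above is a parameter estimation model that maps a target personality score to continuous generation decisions. A toy sketch of that mapping (the function name, coefficients, and the linear form are invented for illustration and are not Personage's actual models):

```python
# Illustrative sketch (not Personage itself): a parameter-estimation model
# mapping a scalar personality target to a continuous generation parameter.
# The linear coefficients below are made up for illustration.

def estimate_verbosity(extraversion):
    """Map a target extraversion score (1-7 scale) to a verbosity parameter.

    Higher target extraversion yields a more verbose generation setting;
    the output is clipped to [0, 1].
    """
    raw = 0.15 * extraversion - 0.1
    return max(0.0, min(1.0, raw))

# An introverted target (2/7) yields a low verbosity setting,
# an extraverted target (6/7) a high one.
low = estimate_verbosity(2.0)
high = estimate_verbosity(6.0)
```

In the actual system, many such learned models jointly drive decisions across the generation pipeline (content planning, aggregation, lexical choice), one reason a continuous scale of perceived personality is achievable.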
50

Wu, H., and J. M. Mendel. "On Choosing Models for Linguistic Connector Words for Mamdani Fuzzy Logic Systems." IEEE Transactions on Fuzzy Systems 12, no. 1 (February 2004): 29–44. http://dx.doi.org/10.1109/tfuzz.2003.822675.

Full text
APA, Harvard, Vancouver, ISO, and other styles
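The entry above concerns choosing models for linguistic connector words (e.g., "and") in Mamdani fuzzy logic systems. Two standard candidate models for conjunction, sketched with arbitrary membership degrees for illustration (the variable names and values are hypothetical, not from the paper):

```python
# Illustrative sketch: two common t-norm models for the linguistic "and"
# in a Mamdani fuzzy logic system.

def t_min(a, b):
    """Minimum t-norm (Zadeh conjunction)."""
    return min(a, b)

def t_prod(a, b):
    """Product t-norm (probabilistic conjunction)."""
    return a * b

mu_pressure_high = 0.7   # hypothetical membership degree
mu_temp_low = 0.4        # hypothetical membership degree

firing_min = t_min(mu_pressure_high, mu_temp_low)
firing_prod = t_prod(mu_pressure_high, mu_temp_low)
```

The choice of t-norm changes a rule's firing strength (here 0.4 vs. 0.28) and therefore the system's output surface, which is why the modeling choice the paper examines matters.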