A selection of scholarly literature on the topic "Computational linguistic models"

Format your citation in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of up-to-date articles, books, theses, conference papers, and other scholarly sources on the topic "Computational linguistic models".

Next to every work in the list you will find an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided the corresponding details are present in the work's metadata.

Journal articles on the topic "Computational linguistic models"

1

Phong, Phạm Hồng, and Bùi Công Cường. "Symbolic Computational Models for Intuitionistic Linguistic Information." Journal of Computer Science and Cybernetics 32, no. 1 (June 7, 2016): 31–45. http://dx.doi.org/10.15625/1813-9663/32/1/5984.

Full text of the source
Abstract:
In [Cuong14, Phong14], we first introduced the notion of intuitionistic linguistic labels. In this paper, we develop two symbolic computational models for intuitionistic linguistic labels (intuitionistic linguistic information). Various operators are proposed, and their properties are examined. An application to group decision making using intuitionistic linguistic preference relations is then discussed.
APA, Harvard, Vancouver, ISO, and other styles
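The symbolic approach described in the abstract above operates directly on the indices of a linguistic term set rather than on underlying fuzzy numbers. Below is a minimal sketch of that general idea, assuming a seven-term scale and round-to-nearest-label aggregation; the paper's intuitionistic operators, which pair membership with non-membership labels, are not reproduced here.

```python
# A minimal sketch of symbolic computation over a linguistic term set,
# assuming index-based aggregation with rounding (a common symbolic model).
# The term set and weights are invented for illustration.

TERMS = ["none", "very_low", "low", "medium", "high", "very_high", "perfect"]

def aggregate(labels, weights):
    """Weighted symbolic aggregation: combine label indices, round back to a term."""
    idx = [TERMS.index(lab) for lab in labels]
    avg = sum(i * w for i, w in zip(idx, weights)) / sum(weights)
    return TERMS[round(avg)]

# Three hypothetical experts rate an alternative; the second opinion counts double.
print(aggregate(["low", "high", "medium"], weights=[1, 2, 1]))  # -> "medium"
```

The key design point of symbolic models is that results stay inside the original term set, so aggregated opinions remain directly interpretable as words.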
2

Hale, John T., Luca Campanelli, Jixing Li, Shohini Bhattasali, Christophe Pallier, and Jonathan R. Brennan. "Neurocomputational Models of Language Processing." Annual Review of Linguistics 8, no. 1 (January 14, 2022): 427–46. http://dx.doi.org/10.1146/annurev-linguistics-051421-020803.

Full text of the source
Abstract:
Efforts to understand the brain bases of language face the Mapping Problem: At what level do linguistic computations and representations connect to human neurobiology? We review one approach to this problem that relies on rigorously defined computational models to specify the links between linguistic features and neural signals. Such tools can be used to estimate linguistic predictions, model linguistic features, and specify a sequence of processing steps that may be quantitatively fit to neural signals collected while participants use language. Progress has been helped by advances in machine learning, attention to linguistically interpretable models, and openly shared data sets that allow researchers to compare and contrast a variety of models. We describe one such data set in detail in the Supplemental Appendix.
APA, Harvard, Vancouver, ISO, and other styles
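The linking step the review describes, fitting word-level model predictors to recorded brain signals, can be sketched as a regression problem. A toy illustration with synthetic data follows, assuming scikit-learn is available; real studies align such predictors to fMRI, MEG, or ECoG time series rather than the simulated response used here.

```python
# A schematic of the model-to-signal linking step: regress a word-level
# linguistic predictor (here, surprisal) against a neural response.
# All data below are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_words = 500
surprisal = rng.gamma(shape=2.0, scale=1.5, size=n_words)  # per-word predictor
noise = rng.normal(scale=1.0, size=n_words)
neural = 0.8 * surprisal + noise                           # toy "neural" response

X = surprisal.reshape(-1, 1)
scores = cross_val_score(Ridge(alpha=1.0), X, neural, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")         # fit of predictor to signal
```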
3

Bosque-Gil, J., J. Gracia, E. Montiel-Ponsoda, and A. Gómez-Pérez. "Models to represent linguistic linked data." Natural Language Engineering 24, no. 6 (October 4, 2018): 811–59. http://dx.doi.org/10.1017/s1351324918000347.

Full text of the source
Abstract:
As the interest of the Semantic Web and computational linguistics communities in linguistic linked data (LLD) keeps increasing and the number of contributions that dwell on LLD rapidly grows, scholars (and linguists in particular) interested in the development of LLD resources sometimes find it difficult to determine which mechanism is suitable for their needs and which challenges have already been addressed. This review seeks to present the state of the art on the models, ontologies and their extensions to represent language resources as LLD by focusing on the nature of the linguistic content they aim to encode. Four basic groups of models are distinguished in this work: models to represent the main elements of lexical resources (group 1), vocabularies developed as extensions to models in group 1 and ontologies that provide more granularity on specific levels of linguistic analysis (group 2), catalogues of linguistic data categories (group 3) and other models such as corpora models or service-oriented ones (group 4). Contributions encompassed in these four groups are described, highlighting their reuse by the community and the modelling challenges that are still to be faced.
APA, Harvard, Vancouver, ISO, and other styles
4

Srihari, Rohini K. "Computational models for integrating linguistic and visual information: A survey." Artificial Intelligence Review 8, no. 5-6 (1995): 349–69. http://dx.doi.org/10.1007/bf00849725.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Martin, Andrea E. "A Compositional Neural Architecture for Language." Journal of Cognitive Neuroscience 32, no. 8 (August 2020): 1407–27. http://dx.doi.org/10.1162/jocn_a_01552.

Full text of the source
Abstract:
Hierarchical structure and compositionality imbue human language with unparalleled expressive power and set it apart from other perception–action systems. However, neither formal nor neurobiological models account for how these defining computational properties might arise in a physiological system. I attempt to reconcile hierarchy and compositionality with principles from cell assembly computation in neuroscience; the result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de)compositional meaning. The model's architecture—a multidimensional coordinate system based on neurophysiological models of sensory processing—proposes that a manifold of neural trajectories encodes sensory, motor, and abstract linguistic states. Gain modulation, including inhibition, tunes the path in the manifold in accordance with behavior and is how latent structure is inferred. As a consequence, predictive information about upcoming sensory input during production and comprehension is available without a separate operation. The proposed processing mechanism is synthesized from current models of neural entrainment to speech, concepts from systems neuroscience and category theory, and a symbolic-connectionist computational model that uses time and rhythm to structure information. I build on evidence from cognitive neuroscience and computational modeling that suggests a formal and mechanistic alignment between structure building and neural oscillations, and moves toward unifying basic insights from linguistics and psycholinguistics with the currency of neural computation.
APA, Harvard, Vancouver, ISO, and other styles
6

Hsieh, Chih Hsun. "Linguistic Inventory Problems." New Mathematics and Natural Computation 7, no. 1 (March 2011): 1–49. http://dx.doi.org/10.1142/s179300571100186x.

Full text of the source
Abstract:
The work presented in this paper is motivated primarily by Zadeh's idea of linguistic variables, intended to provide rigorous mathematical modeling of natural language, and by CWW, Computing With Words. The paper reports models of linguistic inventory problems in which CWW has been implemented: linguistic production inventory, linguistic inventory models under linguistic demand and linguistic lead time, linguistic production inventory models based on the preference of a decision maker, and a linguistic inventory model with fuzzy reorder point and fuzzy safety stock. Four CWW-focused models are proposed: two linguistic inventory models and two linguistic backorder inventory models, each combined with a heuristic fuzzy total inventory cost based on the preference of a decision maker. The heuristic fuzzy total inventory cost of each model is expressed through linguistic values in natural language, fuzzy numbers, and crisp real numbers. It is computed and defuzzified using fuzzy arithmetical operations by the Function Principle and the Graded k-preference integration representation method, respectively. In addition, an extension of the Lagrangean method is used to solve the inequality-constrained problems that arise in the proposed linguistic inventory environments. Furthermore, the heuristic optimal solutions of the new linguistic inventory models reduce to those of the classical inventory models when all linguistic variables are crisp real numbers, as in the previously proposed linguistic inventory models.
APA, Harvard, Vancouver, ISO, and other styles
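To make the workflow concrete: a linguistic demand value becomes a triangular fuzzy number, a crisp inventory cost formula is applied across it, and the fuzzy cost is defuzzified. The sketch below uses centroid defuzzification of an EOQ-style cost purely for illustration; the paper's Function Principle and Graded k-preference method are more refined, and all numbers here are invented.

```python
# A toy version of the fuzzy-cost idea: represent linguistic demand as a
# triangular fuzzy number (TFN), push it through a crisp EOQ-style total
# cost, and defuzzify by centroid. This is only a sketch of the workflow,
# not the paper's own defuzzification method.

def triangular_cost(demand_tfn, order_cost=100.0, holding=2.0):
    """Apply EOQ total-cost reasoning at each vertex of a triangular fuzzy demand."""
    def eoq_cost(d):
        q = (2 * order_cost * d / holding) ** 0.5        # optimal order quantity
        return order_cost * d / q + holding * q / 2      # annual total cost
    return tuple(eoq_cost(d) for d in demand_tfn)        # cost is monotone in d

fuzzy_demand = (800, 1000, 1300)                         # "roughly one thousand"
a, b, c = triangular_cost(fuzzy_demand)
print(f"defuzzified total cost: {(a + b + c) / 3:.2f}")  # centroid of a TFN
```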
7

Paul, Michael, and Roxana Girju. "A Two-Dimensional Topic-Aspect Model for Discovering Multi-Faceted Topics." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 3, 2010): 545–50. http://dx.doi.org/10.1609/aaai.v24i1.7669.

Full text of the source
Abstract:
This paper presents the Topic-Aspect Model (TAM), a Bayesian mixture model which jointly discovers topics and aspects. We broadly define an aspect of a document as a characteristic that spans the document, such as an underlying theme or perspective. Unlike previous models which cluster words by topic or aspect, our model can generate token assignments in both of these dimensions, rather than assuming words come from only one of two orthogonal models. We present two applications of the model. First, we model a corpus of computational linguistics abstracts, and find that the scientific topics identified in the data tend to include both a computational aspect and a linguistic aspect. For example, the computational aspect of GRAMMAR emphasizes parsing, whereas the linguistic aspect focuses on formal languages. Secondly, we show that the model can capture different viewpoints on a variety of topics in a corpus of editorials about the Israeli-Palestinian conflict. We show both qualitative and quantitative improvements in TAM over two other state-of-the-art topic models.
APA, Harvard, Vancouver, ISO, and other styles
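TAM generalizes LDA-style mixture models by giving every token both a topic and an aspect assignment. As a hedged baseline sketch, here is plain LDA on a toy corpus using scikit-learn; TAM itself adds the second, aspect dimension on top of exactly this kind of model, so the code shows the starting point rather than TAM proper.

```python
# Plain LDA baseline on a toy corpus (TAM extends this with per-token
# aspect assignments). Documents are invented four-line stand-ins.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "parsing algorithms for context free grammar",
    "formal languages and grammar theory",
    "statistical parsing of treebank corpora",
    "semantics of formal languages",
]
X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X).round(2))  # per-document topic mixtures
```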
8

Gupta, Prashant K., Deepak Sharma, and Javier Andreu-Perez. "Enhanced linguistic computational models and their similarity with Yager’s computing with words." Information Sciences 574 (October 2021): 259–78. http://dx.doi.org/10.1016/j.ins.2021.05.038.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Goldstein, Ariel, Zaid Zada, Eliav Buchnik, Mariano Schain, Amy Price, Bobbi Aubrey, Samuel A. Nastase, et al. "Shared computational principles for language processing in humans and deep language models." Nature Neuroscience 25, no. 3 (March 2022): 369–80. http://dx.doi.org/10.1038/s41593-022-01026-4.

Full text of the source
Abstract:
Departing from traditional linguistic models, advances in deep learning have resulted in a new type of predictive (autoregressive) deep language models (DLMs). Using a self-supervised next-word prediction task, these models generate appropriate linguistic responses in a given context. In the current study, nine participants listened to a 30-min podcast while their brain responses were recorded using electrocorticography (ECoG). We provide empirical evidence that the human brain and autoregressive DLMs share three fundamental computational principles as they process the same natural narrative: (1) both are engaged in continuous next-word prediction before word onset; (2) both match their pre-onset predictions to the incoming word to calculate post-onset surprise; (3) both rely on contextual embeddings to represent words in natural contexts. Together, our findings suggest that autoregressive DLMs provide a new and biologically feasible computational framework for studying the neural basis of language.
APA, Harvard, Vancouver, ISO, and other styles
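Principle (2), post-onset surprise, is just the negative log-probability the model assigned to the word that actually arrived. Below is a sketch of that computation with an off-the-shelf GPT-2 via the Hugging Face transformers package (an assumption for illustration; the study's own pipeline aligned such word-by-word scores to ECoG recordings).

```python
# Per-token surprise (surprisal) from an autoregressive language model:
# -log p(actual next token | context), computed for every position.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("The children went outside to play", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits                               # (1, n_tokens, vocab)
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)        # predictions for tokens 2..n
positions = torch.arange(ids.shape[1] - 1)
surprise = -log_probs[positions, ids[0, 1:]]                 # -log p(actual next token)
for t, s in zip(tok.convert_ids_to_tokens(ids[0, 1:].tolist()), surprise.tolist()):
    print(f"{t:>12}  {s:5.2f} nats")
```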
10

Segers, Nicole, and Pierre Leclercq. "Computational linguistics for design, maintenance, and manufacturing." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 21, no. 2 (March 19, 2007): 99–101. http://dx.doi.org/10.1017/s0890060407070163.

Full text of the source
Abstract:
Although graphic representations have proven to be of value in computer-aided support and have received much attention in both research and practice (Goldschmidt, 1991; Goel, 1995; Achten, 1997; Do, 2002), linguistic representations presently do not significantly contribute to improving the information handling related to the computer support of a design product. During its life cycle, engineers and designers make many representations of a product. The information and knowledge used to create the product are usually represented visually in sketches, models, (technical) drawings, and images. Linguistic information is complementary to graphic information and essential to create the corporate memory of products. Linguistic information (i.e., the use of words, abbreviations, vocal comments, annotations, notes, and reports) creates meaningful information for designers and engineers as well as for computers (Segers, 2004; Juchmes et al., 2005). Captions, plain text, and keyword indexing are now common to support the communication between design actors (Lawson & Loke, 1997; Wong & Kvan, 1999; Heylighen, 2001; Boujut, 2003). Nevertheless, it is currently scarcely used to its full potential in design, maintenance, and manufacturing.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations on the topic "Computational linguistic models"

1

Penton, Dave. "Linguistic data models: presentation and representation." Connect to thesis, 2006. http://eprints.unimelb.edu.au/archive/00002875.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Tonkes, Bradley. "On the origins of linguistic structure: computational models of the evolution of language." St. Lucia, Qld, 2001. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe16529.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

vanCort, Tracy. "Computational Evolutionary Linguistics." Scholarship @ Claremont, 2001. https://scholarship.claremont.edu/hmc_theses/137.

Full text of the source
Abstract:
Languages and species both evolve by a process of repeated divergences, which can be described with the branching of a phylogenetic tree or phylogeny. Taking advantage of this fact, it is possible to study language change using computational tree building techniques developed for evolutionary biology. Mathematical approaches to the construction of phylogenies fall into two major categories: character-based and distance-based methods. Character-based methods were used in prior work in the application of phylogenetic methods to the Indo-European family of languages by researchers at the University of Pennsylvania. Discussion of the limitations of character-based models leads to a similar presentation of distance-based models. We present an adaptation of these methods to linguistic data, and the phylogenies generated by applying these methods to several modern Germanic languages and Spanish. We conclude that distance-based phylogenies are useful for historical linguistic reconstruction, and that it would be useful to extend existing tree drawing methods to better model the evolutionary effects of language contact.
APA, Harvard, Vancouver, ISO, and other styles
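A minimal distance-based pipeline of the kind the thesis describes: compute pairwise lexical distances over word lists, then build a tree by average-linkage (UPGMA) clustering. The four-word lists below are hypothetical stand-ins for proper cognate data, and SciPy's linkage routine plays the role of the tree-building step.

```python
# Distance-based linguistic phylogeny sketch: normalized edit distance over
# tiny, invented word lists, followed by UPGMA clustering with SciPy.
from itertools import combinations
from scipy.cluster.hierarchy import average

def lev(a, b):
    """Plain Levenshtein edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

words = {  # hypothetical spellings for "water, stone, fish, hand"
    "English": ["water", "stone", "fish", "hand"],
    "German":  ["wasser", "stein", "fisch", "hand"],
    "Dutch":   ["water", "steen", "vis", "hand"],
    "Spanish": ["agua", "piedra", "pez", "mano"],
}
langs = list(words)
dists = [sum(lev(x, y) / max(len(x), len(y))
             for x, y in zip(words[a], words[b])) / 4
         for a, b in combinations(langs, 2)]   # condensed distance matrix
tree = average(dists)                          # UPGMA linkage matrix
print(tree)                                    # pass to scipy's dendrogram to draw
```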
4

Evans, Owain Rhys. "Bayesian computational models for inferring preferences." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101522.

Full text of the source
Abstract:
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 130-131).
This thesis is about learning the preferences of humans from observations of their choices. It builds on work in economics and decision theory (e.g. utility theory, revealed preference, utilities over bundles), Machine Learning (inverse reinforcement learning), and cognitive science (theory of mind and inverse planning). Chapter 1 lays the conceptual groundwork for the thesis and introduces key challenges for learning preferences that motivate chapters 2 and 3. I adopt a technical definition of 'preference' that is appropriate for inferring preferences from choices. I consider what class of objects preferences should be defined over. I discuss the distinction between actual preferences and informed preferences and the distinction between basic/intrinsic and derived/instrumental preferences. Chapter 2 focuses on the challenge of human 'suboptimality'. A person's choices are a function of their beliefs and plans, as well as their preferences. If they have inaccurate beliefs or make inefficient plans, then it will generally be more difficult to infer their preferences from choices. It is also more difficult if some of their beliefs might be inaccurate and some of their plans might be inefficient. I develop models for learning the preferences of agents subject to false beliefs and to time inconsistency. I use probabilistic programming to provide a concise, extendable implementation of preference inference for suboptimal agents. Agents performing suboptimal sequential planning are represented as functional programs. Chapter 3 considers how preferences vary under different combinations (or 'compositions') of outcomes. I use simple mathematical functional forms to model composition. These forms are standard in microeconomics, where the outcomes in question are quantities of goods or services. These goods may provide the same purpose (and be substitutes for one another). Alternatively, they may combine together to perform some useful function (as with complements). I implement Bayesian inference for learning the preferences of agents making choices between different combinations of goods. I compare this procedure to empirical data for two different applications.
by Owain Rhys Evans.
Ph. D. in Linguistics
APA, Harvard, Vancouver, ISO, and other styles
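The thesis's core move, in miniature: posit a softmax-rational agent and invert it with Bayes' rule to recover a posterior over utilities from observed choices. The two-item menu, the observed picks, and the flat grid prior below are all invented for illustration; the thesis itself handles sequential planning and suboptimal agents, which this sketch omits.

```python
# Bayesian preference inference for a softmax (Luce) choice agent:
# infer the utility difference between two items from observed choices.
import numpy as np

observed = ["donut", "donut", "apple", "donut"]   # choices we saw the agent make

# Grid over the utility difference u(donut) - u(apple), flat prior.
du = np.linspace(-5, 5, 201)
log_prior = np.zeros_like(du)

def log_choice_prob(choice, du, beta=1.0):
    """Softmax choice rule: P(donut) = sigmoid(beta * (u_donut - u_apple))."""
    p_donut = 1.0 / (1.0 + np.exp(-beta * du))
    return np.log(p_donut if choice == "donut" else 1.0 - p_donut)

log_post = log_prior + sum(log_choice_prob(c, du) for c in observed)
post = np.exp(log_post - log_post.max())
post /= post.sum()
print(f"posterior mean of u(donut) - u(apple): {np.sum(du * post):.2f}")
```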
5

Heiberg, Andrea Jeanine. "Features in optimality theory: A computational model." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/288983.

Full text of the source
Abstract:
This dissertation presents a computational model of Optimality Theory (OT) (Prince and Smolensky 1993). The model provides an efficient solution to the problem of candidate generation and evaluation, and is demonstrated for the realm of phonological features. Explicit object-oriented implementations are proposed for autosegmental representations (Goldsmith 1976 and many others) and violable OT constraints and Gen operations on autosegmental representations. Previous computational models of OT (Ellison 1995, Tesar 1995, Eisner 1997, Hammond 1997, Karttunen 1998) have not dealt in depth with autosegmental representations. The proposed model provides a full treatment of autosegmental representations and constraints on autosegmental representations (Akinlabi 1996, Archangeli and Pulleyblank 1994, Ito, Mester, and Padgett 1995, Kirchner 1993, Padgett 1995, Pulleyblank 1993, 1996, 1998). Implementing Gen, the candidate generation component of OT, is a seemingly intractable problem. Gen in principle performs unlimited insertion; therefore, it may produce an infinite candidate set. For autosegmental representations, however, it is not necessary to think of Gen as infinite. The Obligatory Contour Principle (Leben 1973, McCarthy 1979, 1986) restricts the number of tokens of any one feature type in a single representation; hence, Gen for autosegmental features is finite. However, a finite Gen may produce a candidate set of exponential size. Consider an input representation with four anchors for each of five features: there are (2⁴ + 1)⁵, more than one million, candidates for such an input. The proposed model implements a method for significantly reducing the exponential size of the candidate set. Instead of first creating all candidates (Gen) and then evaluating them against the constraint hierarchy (Eval), candidate creation and evaluation are interleaved (cf. Eisner 1997, Hammond 1997) in a Gen-Eval loop. At each pass through the Gen-Eval loop, Gen operations apply to create the minimal number of candidates needed for constraint evaluation; this candidate set is evaluated and culled, and the set of Gen operations is reduced. The loop continues until the hierarchy is exhausted; the remaining candidate(s) are optimal. In providing explicit implementations of autosegmental representations, constraints, and Gen operations, the model provides a coherent view of autosegmental theory, Optimality Theory, and the interaction between the two.
APA, Harvard, Vancouver, ISO, and other styles
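Note the arithmetic in the abstract: (2⁴ + 1)⁵ = 17⁵ = 1,419,857, hence "more than one million" candidates. The culling idea at the heart of the Gen-Eval loop can be sketched as iterative filtering against a strict constraint ranking; the toy constraints below are invented stand-ins, not the dissertation's autosegmental ones.

```python
# Eval in miniature: candidates are filtered constraint by constraint,
# keeping at each step only those with the fewest violations. This is the
# culling that the dissertation's Gen-Eval loop interleaves with candidate
# creation so the full candidate set never needs to be built.
def eval_ot(candidates, ranked_constraints):
    """Return the optimal candidate(s) under a strict constraint ranking."""
    survivors = list(candidates)
    for constraint in ranked_constraints:
        best = min(constraint(c) for c in survivors)
        survivors = [c for c in survivors if constraint(c) == best]
        if len(survivors) == 1:
            break   # the hierarchy need not be exhausted once a winner is unique
    return survivors

# Toy constraints: *LongSyll (penalize syllables of 4+ segments) >> NoCoda.
candidates = ["pa.tak", "pat.ak", "p.atak"]
long_syll = lambda c: sum(len(s) > 3 for s in c.split("."))
no_coda = lambda c: sum(s[-1] not in "aeiou" for s in c.split("."))
print(eval_ot(candidates, [long_syll, no_coda]))   # -> ['pa.tak']
```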
6

Gwei, G. M. "New models of natural language for consultative computing." Thesis, University of Nottingham, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.378986.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Clark, Stephen. "Class-based statistical models for lexical knowledge acquisition." Thesis, University of Sussex, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341541.

Full text of the source
Abstract:
This thesis is about the automatic acquisition of a particular kind of lexical knowledge, namely the knowledge of which noun senses can fill the argument slots of predicates. The knowledge is represented using probabilities, which agrees with the intuition that there are no absolute constraints on the arguments of predicates, but that the constraints are satisfied to a certain degree; thus the problem of knowledge acquisition becomes the problem of probability estimation from corpus data. The problem with defining a probability model in terms of senses is that this involves a huge number of parameters, which results in a sparse data problem. The proposal here is to define a probability model over senses in a semantic hierarchy, and exploit the fact that senses can be grouped into classes consisting of semantically similar senses. A novel class-based estimation technique is developed, together with a procedure that determines a suitable class for a sense (given a predicate and argument position). The problem of determining a suitable class can be thought of as finding a suitable level of generalisation in the hierarchy. The generalisation procedure uses a statistical test to locate areas consisting of semantically similar senses, and, as well as being used for probability estimation, is also employed as part of a re-estimation algorithm for estimating sense frequencies from incomplete data. The rest of the thesis considers how the lexical knowledge can be used to resolve structural ambiguities, and provides empirical evaluations. The estimation techniques are first integrated into a parse selection system, using a probabilistic dependency model to rank the alternative parses for a sentence. Then, a PP-attachment task is used to provide an evaluation which is more focussed on the class-based estimation technique, and, finally, a pseudo disambiguation task is used to compare the estimation technique with alternative approaches.
APA, Harvard, Vancouver, ISO, and other styles
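The generalisation step can be pictured as backing off from an unseen sense to an ancestor class in the semantic hierarchy. A toy sketch with an invented hypernym chain and invented counts follows; the thesis's statistical test for choosing a suitable generalisation level, and its re-estimation algorithm, are omitted.

```python
# Class-based back-off sketch: estimate how likely a noun sense fills a
# verb's slot by climbing a (hypothetical) hypernym hierarchy when the
# sense itself was never observed with that verb. Counts are invented.
from collections import Counter

hypernym = {"apple": "food", "bread": "food", "cake": "food", "food": "substance"}
counts = Counter({("eat", "apple"): 8, ("eat", "bread"): 4, ("eat", "food"): 1})

def class_prob(verb, sense):
    """P(sense or its class | verb slot), backing off until a counted node is found."""
    total = sum(n for (v, _), n in counts.items() if v == verb)
    node = sense
    while node is not None:
        if counts[(verb, node)]:
            return counts[(verb, node)] / total
        node = hypernym.get(node)   # generalise one level up the hierarchy
    return 0.0

print(class_prob("eat", "apple"))   # seen directly: 8/13
print(class_prob("eat", "cake"))    # unseen, backs off to "food": 1/13
```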
8

Belz, Anja. "Computational learning of finite-state models for natural language processing." Thesis, University of Sussex, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311331.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Tang, Haijiang. "Building phrase based language model from large corpus." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202002%20TANG.

Full text of the source
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 74-79). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
10

Mitchell, Jeffrey John. "Composition in distributional models of semantics." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/4927.

Full text of the source
Abstract:
Distributional models of semantics have proven themselves invaluable both in cognitive modelling of semantic phenomena and also in practical applications. For example, they have been used to model judgments of semantic similarity (McDonald, 2000) and association (Denhière and Lemaire, 2004; Griffiths et al., 2007) and have been shown to achieve human level performance on synonymy tests (Landauer and Dumais, 1997; Griffiths et al., 2007) such as those included in the Test of English as a Foreign Language (TOEFL). This ability has been put to practical use in automatic thesaurus extraction (Grefenstette, 1994). However, while there has been a considerable amount of research directed at the most effective ways of constructing representations for individual words, the representation of larger constructions, e.g., phrases and sentences, has received relatively little attention. In this thesis we examine this issue of how to compose meanings within distributional models of semantics to form representations of multi-word structures. Natural language data typically consists of such complex structures, rather than just individual isolated words. Thus, a model of composition, in which individual word meanings are combined into phrases and phrases combine to form sentences, is of central importance in modelling this data. Commonly, however, distributional representations are combined in terms of addition (Landauer and Dumais, 1997; Foltz et al., 1998), without any empirical evaluation of alternative choices. Constructing effective distributional representations of phrases and sentences requires that we have both a theoretical foundation to direct the development of models of composition and also a means of empirically evaluating those models. The approach we take is to first consider the general properties of semantic composition and from that basis define a comprehensive framework in which to consider the composition of distributional representations. The framework subsumes existing proposals, such as addition and tensor products, but also allows us to define novel composition functions. We then show that the effectiveness of these models can be evaluated on three empirical tasks. The first of these tasks involves modelling similarity judgements for short phrases gathered in human experiments. Distributional representations of individual words are commonly evaluated on tasks based on their ability to model semantic similarity relations, e.g., synonymy or priming. Thus, it seems appropriate to evaluate phrase representations in a similar manner. We then apply compositional models to language modelling, demonstrating that the issue of composition has practical consequences, and also providing an evaluation based on large amounts of natural data. In our third task, we use these language models in an analysis of reading times from an eye-movement study. This allows us to investigate the relationship between the composition of distributional representations and the processes involved in comprehending phrases and sentences. We find that these tasks do indeed allow us to evaluate and differentiate the proposed composition functions and that the results show a reasonable consistency across tasks. In particular, a simple multiplicative model is best for a semantic space based on word co-occurrence, whereas an additive model is better for the topic based model we consider. More generally, employing compositional models to construct representations of multi-word structures typically yields improvements in performance over non-compositional models, which only represent individual words.
APA, Harvard, Vancouver, ISO, and other styles
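The two composition functions the evaluation contrasts, side by side: vector addition and elementwise multiplication. The vectors below are made-up co-occurrence rows; in the thesis the additive and multiplicative models are applied to corpus-derived semantic spaces.

```python
# Additive vs. multiplicative composition of word vectors, compared by
# cosine similarity to a third word. All vectors are invented toy rows.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

vec = {  # hypothetical co-occurrence counts over four context dimensions
    "black":  np.array([4.0, 1.0, 3.0, 0.5]),
    "coffee": np.array([5.0, 0.5, 2.0, 0.2]),
    "night":  np.array([3.0, 0.8, 4.0, 0.1]),
}
additive = vec["black"] + vec["coffee"]
multiplicative = vec["black"] * vec["coffee"]   # emphasizes shared dimensions

for name, phrase in [("additive", additive), ("multiplicative", multiplicative)]:
    print(name, round(cosine(phrase, vec["night"]), 3))
```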

Books on the topic "Computational linguistic models"

1

Koverin, A. A. Ėksperimentalʹnai͡a︡ proverka lingvisticheskikh modeleĭ na ĖVM. Irkutsk: Izd-vo Irkutskogo universiteta, 1987.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Young, Steve, Gerrit Bloothooft, ELSNET, and European Summer School on Language and Speech Communication (2nd: 1994: Utrecht, Netherlands), eds. Corpus-based methods in language and speech processing. Dordrecht: Kluwer Academic, 1997.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Dijkstra, Ton, and Koenraad de Smedt, eds. Computational psycholinguistics: AI and connectionist models of human language processing. London: Taylor & Francis Ltd., 1996.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Nakashima, Tomoharu, and Manabu Nii, eds. Classification and modeling with linguistic information granules: Advanced approaches to linguistic data mining. New York: Springer, 2005.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

The evidential basis of linguistic argumentation. Amsterdam: John Benjamins Publishing Company, 2014.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Botinis, Antonis, ed. Intonation: Analysis, modelling and technology. Dordrecht [Netherlands]: Kluwer Academic Publishers, 2000.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Language modeling for machine translation: Effects of long term context dependency language models for statistical machine translation. Saarbrücken: VDM Verlag Dr. Müller, 2007.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Lawry, Jonathan, James G. Shanahan, and Anca L. Ralescu, eds. Modelling with words: Learning, fusion, and reasoning within a formal linguistic representation framework. Berlin: Springer, 2003.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Miezitis, Mara Anita. Generating lexical options by matching in a knowledge base. Toronto: Computer Systems Research Institute, University of Toronto, 1988.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

McRoy, Susan Weber. Abductive interpretation and reinterpretation of natural language utterances. Toronto: Computer Systems Research Institute, University of Toronto, 1993.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Computational linguistic models"

1

Vázquez-Larruscaín, Miguel. "Computational modelling of prototypicality in language change." In Competing Models of Linguistic Change, 183–210. Amsterdam: John Benjamins Publishing Company, 2006. http://dx.doi.org/10.1075/cilt.279.12vaz.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Lappin, Shalom. "Cognitively Viable Computational Models of Linguistic Knowledge." In Deep Learning and Linguistic Representation, 89–112. Boca Raton: Chapman and Hall/CRC, 2021. http://dx.doi.org/10.1201/9781003127086-5.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Karanth, Prathibha. "Neuropsychological Cognitive and Computational Models of Reading." In Cross-Linguistic Study of Acquired Reading Disorders, 7–21. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4419-8923-9_2.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Villaseñor-Pineda, Luis, Viet Bac Le, Manuel Montes-y-Gómez, and Manuel Pérez-Coutiño. "Toward Acoustic Models for Languages with Limited Linguistic Resources." In Computational Linguistics and Intelligent Text Processing, 433–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-30586-6_47.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Srihari, Rohini K. "Computational Models for Integrating Linguistic and Visual Information: A Survey." In Integration of Natural Language and Vision Processing, 185–205. Dordrecht: Springer Netherlands, 1995. http://dx.doi.org/10.1007/978-94-011-0273-5_11.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Hai, and Zeshui Xu. "Representational Models and Computational Foundations of Some Types of Uncertain Linguistic Expressions." In Uncertainty and Operations Research, 35–72. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-3735-2_2.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Mark, David M., David Comas, Max J. Egenhofer, Scott M. Freundschuh, Michael D. Gould, and Joan Nunes. "Evaluating and refining computational models of spatial relations through cross-linguistic human-subjects testing." In Lecture Notes in Computer Science, 553–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 1995. http://dx.doi.org/10.1007/3-540-60392-1_36.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Meduna, Alexander, and Ondřej Soukup. "Applications in Computational Linguistics." In Modern Language Models and Computation, 475–94. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63100-4_14.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Savitch, Walter J. "Computational complexity in language models." In Issues in Mathematical Linguistics, 183. Amsterdam: John Benjamins Publishing Company, 1999. http://dx.doi.org/10.1075/sfsl.47.11sav.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Wintner, Shuly. "Computational Models of Language Acquisition." In Computational Linguistics and Intelligent Text Processing, 86–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12116-6_8.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Computational linguistic models"

1

Basic, Bojana Dalbelo, Zdravko Dovedan, Ida Raffaelli, Sanja Seljan, and Marko Tadic. "Computational Linguistic Models and Language Technologies for Croatian." In 2007 29th International Conference on Information Technology Interfaces. IEEE, 2007. http://dx.doi.org/10.1109/iti.2007.4283826.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Ott, Myle. "Linguistic Models of Deceptive Opinion Spam." In Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. Stroudsburg, PA, USA: Association for Computational Linguistics, 2014. http://dx.doi.org/10.3115/v1/w14-2606.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Heilbron, Micha, Benedikt Ehinger, Peter Hagoort, and Floris de Lange. "Tracking Naturalistic Linguistic Predictions with Deep Neural Language Models." In 2019 Conference on Cognitive Computational Neuroscience. Brentwood, Tennessee, USA: Cognitive Computational Neuroscience, 2019. http://dx.doi.org/10.32470/ccn.2019.1096-0.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Rodríguez, R. M., and L. Martínez. "A Comparison among Symbolic Computational Models in Linguistic Decision Making." In Proceedings of the 9th International FLINS Conference. World Scientific, 2010. http://dx.doi.org/10.1142/9789814324700_0074.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Ilin, Roman. "Combined linguistic and sensor models for machine learning." In 2014 IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB). IEEE, 2014. http://dx.doi.org/10.1109/ccmb.2014.7020690.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Mueller, Aaron, Garrett Nicolai, Panayiota Petrou-Zeniou, Natalia Talmina, and Tal Linzen. "Cross-Linguistic Syntactic Evaluation of Word Prediction Models." In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.acl-main.490.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Otmakhova, Yulia, Karin Verspoor, and Jey Han Lau. "Cross-linguistic Comparison of Linguistic Feature Encoding in BERT Models for Typologically Different Languages." In Proceedings of the 4th Workshop on Research in Computational Linguistic Typology and Multilingual NLP. Stroudsburg, PA, USA: Association for Computational Linguistics, 2022. http://dx.doi.org/10.18653/v1/2022.sigtyp-1.4.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

Durrani, Nadir, Hassan Sajjad, and Fahim Dalvi. "How transfer learning impacts linguistic knowledge in deep NLP models?" In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.findings-acl.438.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Rouhizadeh, Masoud, Emily Prud'hommeaux, Jan van Santen, and Richard Sproat. "Detecting linguistic idiosyncratic interests in autism using distributional semantic models." In Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality. Stroudsburg, PA, USA: Association for Computational Linguistics, 2014. http://dx.doi.org/10.3115/v1/w14-3206.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Sarti, Gabriele, Dominique Brunato, and Felice Dell’Orletta. "That Looks Hard: Characterizing Linguistic Complexity in Humans and Language Models." In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.cmcl-1.5.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles

Reports of organizations on the topic "Computational linguistic models"

1

Jurafsky, Daniel. An On-Line Computational Model of Human Sentence Interpretation: A Theory of the Representation and Use of Linguistic Knowledge. Fort Belvoir, VA: Defense Technical Information Center, March 1992. http://dx.doi.org/10.21236/ada604298.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Moreno Pérez, Carlos, and Marco Minozzo. “Making Text Talk”: The Minutes of the Central Bank of Brazil and the Real Economy. Madrid: Banco de España, November 2022. http://dx.doi.org/10.53479/23646.

Full text of the source
Abstract:
This paper investigates the relationship between the views expressed in the minutes of the meetings of the Central Bank of Brazil’s Monetary Policy Committee (COPOM) and the real economy. It applies various computational linguistic machine learning algorithms to construct measures of the minutes of the COPOM. First, we create measures of the content of the paragraphs of the minutes using Latent Dirichlet Allocation (LDA). Second, we build an uncertainty index for the minutes using Word Embedding and K-Means. Then, we combine these indices to create two topic-uncertainty indices. The first one is constructed from paragraphs with a higher probability of topics related to “general economic conditions”. The second topic-uncertainty index is constructed from paragraphs that have a higher probability of topics related to “inflation” and the “monetary policy discussion”. Finally, we employ a structural VAR model to explore the lasting effects of these uncertainty indices on certain Brazilian macroeconomic variables. Our results show that greater uncertainty leads to a decline in inflation, the exchange rate, industrial production and retail trade in the period from January 2000 to July 2019.
APA, Harvard, Vancouver, ISO, and other styles
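The combination step of the paper above can be pictured as a topic-weighted average of paragraph-level uncertainty. All numbers below are invented; in the paper the topic probabilities come from LDA, the uncertainty scores from word embeddings with K-means, and the resulting monthly series feed a structural VAR.

```python
# Schematic topic-uncertainty index: each paragraph contributes its
# uncertainty score weighted by the probability it belongs to the
# relevant topic group. All values are invented stand-ins.
import numpy as np

# One set of minutes with 4 paragraphs: P(topic group | paragraph) and uncertainty.
p_general = np.array([0.7, 0.1, 0.6, 0.2])    # "general economic conditions"
p_policy  = np.array([0.2, 0.8, 0.3, 0.7])    # "inflation / monetary policy"
uncertainty = np.array([0.9, 0.4, 0.2, 0.6])  # per-paragraph uncertainty score

index_general = float(p_general @ uncertainty / p_general.sum())
index_policy = float(p_policy @ uncertainty / p_policy.sum())
print(f"general-conditions uncertainty index: {index_general:.2f}")
print(f"policy/inflation uncertainty index:   {index_policy:.2f}")
```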