Journal articles on the topic 'Grammatical formalisms'

Consult the top 50 journal articles for your research on the topic 'Grammatical formalisms.'

1

Carlson, Lauri, and Krister Linden. "Unification as a Grammatical Tool." Nordic Journal of Linguistics 10, no. 2 (December 1987): 111–36. http://dx.doi.org/10.1017/s033258650000161x.

Abstract:
The present paper is an introduction to unification as a formalism for writing grammars for natural languages. The paper is structured as follows. Section 1 briefly describes the history and the current scene of unification-based grammar formalisms. Sections 2–3 describe the basic design of current formalisms. Section 4 constitutes a tutorial introduction to a representative unification-based grammar formalism, the D-PATR system of Karttunen (1986). Sections 5–6 consider extensions of the unification formalism and its limitations. Section 7 examines implementation questions and addresses the question of the computational complexity of unification. The paper closes with some notes on terminology.
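The core operation the paper introduces can be sketched in a few lines. Below is a simplified Python sketch of feature-structure unification over plain nested dicts; real systems such as D-PATR operate on DAGs with structure sharing (reentrancy), which this toy version omits, and the determiner/noun example is invented for illustration.

```python
def unify(a, b):
    """Unify two feature structures (nested dicts; atomic values are strings).
    Returns the most specific structure compatible with both, or None on clash."""
    if a == b:
        return a
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for key, value in b.items():
            if key in result:
                sub = unify(result[key], value)
                if sub is None:
                    return None      # feature clash: unification fails
                result[key] = sub
            else:
                result[key] = value  # absent features unify freely
        return result
    return None                      # incompatible atoms (or atom vs. structure)

# A determiner and a noun must agree: unifying their 'agr' features succeeds
# and merges information contributed by each word.
det  = {"cat": "det",  "agr": {"num": "sg"}}
noun = {"cat": "noun", "agr": {"num": "sg", "gender": "fem"}}
print(unify(det["agr"], noun["agr"]))   # {'num': 'sg', 'gender': 'fem'}
```

Unifying `{"num": "sg"}` with `{"num": "pl"}` would return `None`, which is how such grammars rule out agreement violations.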
2

Keller, Bill. "Formalisms for grammatical knowledge representation." Artificial Intelligence Review 6, no. 4 (1992): 365–81. http://dx.doi.org/10.1007/bf00123690.

3

RANTA, AARNE. "Grammatical Framework." Journal of Functional Programming 14, no. 2 (January 22, 2004): 145–89. http://dx.doi.org/10.1017/s0956796803004738.

Abstract:
Grammatical Framework (GF) is a special-purpose functional language for defining grammars. It uses a Logical Framework (LF) for a description of abstract syntax, and adds to this a notation for defining concrete syntax. GF grammars themselves are purely declarative, but can be used both for linearizing syntax trees and parsing strings. GF can describe both formal and natural languages. The key notion of this description is a grammatical object, which is not just a string, but a record that contains all information on inflection and inherent grammatical features such as number and gender in natural languages, or precedence in formal languages. Grammatical objects have a type system, which helps to eliminate run-time errors in language processing. In the same way as a LF, GF uses dependent types in abstract syntax to express semantic conditions, such as well-typedness and proof obligations. Multilingual grammars, where one abstract syntax has many parallel concrete syntaxes, can be used for reliable and meaning-preserving translation. They can also be used in authoring systems, where syntax trees are constructed in an interactive editor similar to proof editors based on LF. While being edited, the trees can simultaneously be viewed in different languages. This paper starts with a gradual introduction to GF, going through a sequence of simpler formalisms till the full power is reached. The introduction is followed by a systematic presentation of the GF formalism and outlines of the main algorithms: partial evaluation and parser generation. The paper concludes by brief discussions of the Haskell implementation of GF, existing applications, and related work.
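The separation of one abstract syntax from several concrete syntaxes that GF is built on can be illustrated with a toy sketch (this is not GF notation; the constructor, lexicon, and languages are invented for illustration). One abstract tree is linearized by two language-specific functions, and the French linearizer consults an inherent gender feature, much as GF's grammatical objects carry inherent features in their records.

```python
# Abstract syntax: trees over constructors; here Mod(adjective, noun).
# Concrete syntax: each language maps a tree to a record of strings plus
# inherent features, in the spirit of GF's grammatical objects.

def lin_english(tree):
    kind, adj, noun = tree
    assert kind == "Mod"
    return {"s": f"{adj} {noun}"}          # English: no gender, adjective first

FRENCH_LEX = {
    # noun -> (form, inherent gender); adjective -> (masculine, feminine)
    "house": ("maison", "f"), "wine": ("vin", "m"),
    "red":   ("rouge", "rouge"), "white": ("blanc", "blanche"),
}

def lin_french(tree):
    kind, adj, noun = tree
    assert kind == "Mod"
    n, gender = FRENCH_LEX[noun]
    masc, fem = FRENCH_LEX[adj]
    a = masc if gender == "m" else fem      # adjective agrees with the noun
    return {"s": f"{n} {a}", "gender": gender}  # French: noun first

tree = ("Mod", "white", "wine")
print(lin_english(tree)["s"])   # white wine
print(lin_french(tree)["s"])    # vin blanc
```

Because both linearizers read the same abstract tree, translation amounts to parsing with one concrete syntax and linearizing with another, which is the basis of GF's meaning-preserving multilingual grammars.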
4

BARKALOVA, PETYA. "ГРАМАТИЧЕСКИ ФОРМАЛИЗМИ В ПОМОЩ НА ГРАМАТИКОГРАФИЯТА / GRAMMATICAL FORMALISMS IN AID OF GRAMMATICOGRAPHY." Journal of Bulgarian Language 68, PR (September 10, 2021): 224–49. http://dx.doi.org/10.47810/bl.68.21.pr.15.

Abstract:
This paper presents some of the results of a larger study dedicated to the path of grammatical knowledge from the ancient Greek-Byzantine grammatical treatises, through the Eastern Orthodox Slavic world, to the Bulgarian grammatical tradition of the National Revival period. The focus is on the syntactic element of the grammatical description. A formal notation of the sentence sections in the grammars of Avram Mrazović, Yuriy Venelin and Ivan Bogorov is included for the purpose of comparing the “art and craft of writing grammars”. Grammatical formalisms have proved to be a reliable tool in the analytical procedures of the phylogenetic study of the Bulgarian syntactic tradition, as well as in the configurational analysis of the sentence, which departs from the practice of asking “questions” and looks into the fundamental property of “syntactic function” through the prism of modern grammar.
Keywords: Bulgarian syntactic tradition, grammaticography, grammatical formalisms
5

SHEA, KRISTINA, and JONATHAN CAGAN. "Languages and semantics of grammatical discrete structures." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 13, no. 4 (September 1999): 241–51. http://dx.doi.org/10.1017/s0890060499134012.

Abstract:
Applying grammatical formalisms to engineering problems requires consideration of spatial, functional, and behavioral design attributes. This paper explores structural design languages and semantics for the generation of feasible and purposeful discrete structures. In an application of shape annealing, a combination of grammatical design generation and search, to the generation of discrete structures, rule syntax, and semantics are used to model desired relations between structural form and function as well as control design generation. Explicit domain knowledge is placed within the grammar through rule and syntax formulation, resulting in the generation of only forms that make functional sense and adhere to preferred visual styles. Design interpretation, or semantics, is then used to select forms that meet functional and visual goals. The distinction between syntax used in grammar rules to explicitly drive geometric design and semantics used in design interpretation to implicitly guide geometric form is shown. Overall, the designs presented show the validity of applying a grammatical formalism to an engineering design problem and illustrate a range of possibilities for modeling functional and visual design criteria.
6

Laporte, Éric. "Reduction of lexical ambiguity." Ambiguity 24, no. 1 (December 31, 2001): 67–103. http://dx.doi.org/10.1075/li.24.1.05lap.

Abstract:
We examine various issues faced during the elaboration of lexical disambiguators, e.g. issues related to the linguistic analyses underlying disambiguators, and we exemplify these issues with grammatical constraints. We also examine computational problems and show how they are connected with linguistic problems: the influence of the granularity of tagsets, the definition of realistic and useful objectives, and the construction of the data required for the reduction of ambiguity. We show why a formalism is required for automatic ambiguity reduction, we analyse its function, and we present a typology of such formalisms.
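Ambiguity reduction of this kind works by elimination rather than selection: every token starts with all its candidate tags, and grammatical constraints remove contextually impossible readings. A minimal sketch in that spirit follows (the rule, tagset, and function names are invented for illustration and are not taken from the paper's formalism).

```python
def apply_constraints(tokens, candidates, rules):
    """Iterate constraint rules to a fixed point; each rule may discard
    candidate tags but never deletes the last remaining reading."""
    tags = [set(c) for c in candidates]
    changed = True
    while changed:
        changed = False
        for i in range(len(tokens)):
            for rule in rules:
                if rule(i, tokens, tags):
                    changed = True
    return tags

def no_verb_after_det(i, tokens, tags):
    """Hypothetical constraint: discard VERB readings right after an
    unambiguous determiner."""
    if i > 0 and tags[i - 1] == {"DET"} and "VERB" in tags[i] and len(tags[i]) > 1:
        tags[i].discard("VERB")
        return True
    return False

tokens = ["the", "walks"]
candidates = [{"DET"}, {"NOUN", "VERB"}]
print(apply_constraints(tokens, candidates, [no_verb_after_det]))
# [{'DET'}, {'NOUN'}]
```

Note how the outcome depends on tagset granularity: with a coarser tagset the ambiguity might not exist at all, one of the trade-offs the paper discusses.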
7

Frank, Robert, and Tim Hunter. "Variation in mild context-sensitivity." Formal Language Theory and its Relevance for Linguistic Analysis 3, no. 2 (November 5, 2021): 181–214. http://dx.doi.org/10.1075/elt.00033.fra.

Abstract:
Aravind Joshi famously hypothesized that natural language syntax was characterized (in part) by mildly context-sensitive generative power. Subsequent work in mathematical linguistics over the past three decades has revealed surprising convergences among a wide variety of grammatical formalisms, all of which can be said to be mildly context-sensitive. But this convergence is not absolute. Not all mildly context-sensitive formalisms can generate exactly the same stringsets (i.e. they are not all weakly equivalent), and even when two formalisms can both generate a certain stringset, there might be differences in the structural descriptions they use to do so. It has generally been difficult to find cases where such differences in structural descriptions can be pinpointed in a way that allows linguistic considerations to be brought to bear on choices between formalisms, but in this paper we present one such case. The empirical pattern of interest involves wh-movement dependencies in languages that do not enforce the wh-island constraint. This pattern draws attention to two related dimensions of variation among formalisms: whether structures grow monotonically from one end to another, and whether structure-building operations are conditioned by only a finite amount of derivational state. From this perspective, we show that one class of formalisms generates the crucial empirical pattern using structures that align with mainstream syntactic analysis, and another class can only generate that same string pattern in a linguistically unnatural way.
This is particularly interesting given that (i) the structurally inadequate formalisms are strictly more powerful than the structurally adequate ones from the perspective of weak generative capacity, and (ii) the formalisms based on derivational operations that appear on the surface to align most closely with the mechanisms adopted in contemporary work in syntactic theory (merge and move) are the formalisms that fail to align with the analyses proposed in that work when the phenomenon is considered in full generality.
8

GARGOURI, BILEL, MOHAMED JMAIEL, and ABDELMAJID BEN HAMADOU. "An approach to the formal specification of lingware." Natural Language Engineering 9, no. 3 (August 12, 2003): 211–30. http://dx.doi.org/10.1017/s1351324902003030.

Abstract:
This paper has two purposes. First, it suggests a formal approach for specifying and verifying lingware. This approach is based on a unified notation of the main existing formalisms for describing linguistic knowledge (i.e. Formal Grammars, Unification Grammars, HPSG, etc.) on the one hand, and the integration of data and processing on the other. Accordingly, a lingware specification includes all related aspects in a unified framework. This facilitates the development of a lingware system, since one has to follow a single development process instead of two separate ones. Secondly, it presents an environment for the formal specification of lingware, based on the suggested approach, which is neither restricted to a particular kind of application nor to a particular class of linguistic formalisms. This environment provides interfaces enabling the specification of both linguistic knowledge and functional aspects of a lingware system. Linguistic knowledge is specified with the usual grammatical formalisms, whereas functional aspects are specified with a suitable formal notation. Both descriptions will be integrated into the same framework to obtain a complete requirement specification that can be refined towards an executable program.
9

Wintner, Shuly, and Uzzi Ornan. "Syntactic Analysis of Hebrew Sentences." Natural Language Engineering 1, no. 3 (September 1995): 261–88. http://dx.doi.org/10.1017/s1351324900000206.

Abstract:
Due to recent developments in the area of computational formalisms for linguistic representation, the task of designing a parser for a specified natural language is now shifted to the problem of designing its grammar in certain formal ways. This paper describes the results of a project whose aim was to design a formal grammar for modern Hebrew. Such a formal grammar has never been developed before. Since most of the work on grammatical formalisms was done without regarding Hebrew (and other Semitic languages as well), we had to choose a formalism that would best fit the specific needs of the language. This part of the project has been described elsewhere. In this paper we describe the details of the grammar we developed. The grammar deals with simple, subordinate and coordinate sentences as well as interrogative sentences. Some structures were thoroughly dealt with, among which are noun phrases, verb phrases, adjectival phrases, relative clauses, object and adjunct clauses; many types of adjuncts; subcategorization of verbs; coordination; numerals, etc. For each phrase the parser produces a description of the structure tree of the phrase as well as a representation of the syntactic relations in it. Many examples of Hebrew phrases are demonstrated, together with the structure the parser assigns them. In cases where more than one parse is produced, the reasons of the ambiguity are discussed.
10

Wedekind, Jürgen, and Ronald M. Kaplan. "Tractable Lexical-Functional Grammar." Computational Linguistics 46, no. 3 (November 2020): 515–69. http://dx.doi.org/10.1162/coli_a_00384.

Abstract:
The formalism for Lexical-Functional Grammar (LFG) was introduced in the 1980s as one of the first constraint-based grammatical formalisms for natural language. It has led to substantial contributions to the linguistic literature and to the construction of large-scale descriptions of particular languages. Investigations of its mathematical properties have shown that, without further restrictions, the recognition, emptiness, and generation problems are undecidable, and that they are intractable in the worst case even with commonly applied restrictions. However, grammars of real languages appear not to invoke the full expressive power of the formalism, as indicated by the fact that algorithms and implementations for recognition and generation have been developed that run—even for broad-coverage grammars—in typically polynomial time. This article formalizes some restrictions on the notation and its interpretation that are compatible with conventions and principles that have been implicit or informally stated in linguistic theory. We show that LFG grammars that respect these restrictions, while still suitable for the description of natural languages, are equivalent to linear context-free rewriting systems and allow for tractable computation.
11

Koseska-Toszewa, Violetta, and Antoni Mazurkiewicz. "Constructing catalogue of temporal situations." Cognitive Studies | Études cognitives, no. 10 (November 24, 2015): 71–109. http://dx.doi.org/10.11649/cs.2010.004.

Abstract:
Constructing a catalogue of temporal situations
The paper aims to create a common basis for the description, comparison, and analysis of natural languages. As the subject of comparison we have chosen the temporal structures of several languages. For this choice there exists a well-suited tool for describing basic temporal phenomena, namely the ordering of states and events in time, certainty and uncertainty, the independence of the histories of separate objects, and necessity and possibility. This tool is the Petri net formalism, which seems well suited to expressing the phenomena mentioned above. Petri nets are built from three primitive notions: states; events that begin or end the states; and a so-called flow relation indicating the succession of states and events. These simple constituents give rise to many possibilities for representing temporal phenomena, and such representations turn out to be sufficient for many (though clearly not all) temporal situations appearing in natural languages.
The description formalisms used so far offer no way of expressing such phenomena as temporal dependencies in compound statements or the combination of temporality and modality. Moreover, with these formalisms one cannot distinguish between two different sources of the speaker's uncertainty in describing reality: one due to the speaker's lack of knowledge of what is going on in the outside world, the other due to the objective impossibility of foreseeing how some conflict situations will be (or already have been) resolved. The Petri net formalism seems perfectly suited to such differentiations.
Three main description principles govern this paper. The first is that assigning meaning to the names of grammatical structures in different languages may lead to misunderstanding: two grammatical structures with apparently similar names may describe different realities, and some grammatical terms used in one language may be absent from, and not understandable in, another. This leads us to assign meanings to situations rather than to the linguistic forms used to express them. The second principle is to limit the discussed issues to a fragment of reality that admits precise description. The third is to avoid attributing to the described reality any information that is not explicitly conveyed by linguistic means. The authors try to follow these principles in the present paper.
The paper is organized as follows. First, some sample situations related to the present tense are given, together with examples of their expression in four languages: English (as a reference language) and three Slavic languages, representing the South Slavic (Bulgarian), West Slavic (Polish), and East Slavic (Russian) groups. The subsequent parts of the paper are constructed within the same framework, supplying samples of the use of past tenses and, finally, future tenses and modalities.
The formal tools for description are introduced stepwise, according to the needs of the described reality. They are mainly Petri nets, additionally equipped with inscriptions or labellings in order to maintain the proper assignment of description units to described objects.
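The three Petri-net primitives the authors rely on can be sketched in a few lines. The class below is an illustrative toy (plain place/transition nets without the inscriptions the paper adds), and the "writing a letter" example is invented: an event fires when all the states it ends are current, ending them and beginning new ones.

```python
class PetriNet:
    """A marked place/transition net: places hold tokens (current states),
    transitions are events connected to places by the flow relation."""

    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:                  # the event ends its input states
            self.marking[p] -= 1
        for p in outputs:                 # ... and begins its output states
            self.marking[p] = self.marking.get(p, 0) + 1

# The ongoing state "writing" ends when the event "finish" fires,
# which begins the resulting state "letter_written".
net = PetriNet({"writing": 1})
net.add_transition("finish", inputs=["writing"], outputs=["letter_written"])
net.fire("finish")
print(net.marking)   # {'writing': 0, 'letter_written': 1}
```

Two transitions competing for the same input place model the "conflict situations" mentioned above: only one of them can fire, and nothing in the net determines which, which is how objective uncertainty is represented.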
12

Wedekind, Jürgen, and Ronald M. Kaplan. "LFG Generation by Grammar Specialization." Computational Linguistics 38, no. 4 (December 2012): 867–915. http://dx.doi.org/10.1162/coli_a_00113.

Abstract:
This article describes an approach to Lexical-Functional Grammar (LFG) generation that is based on the fact that the set of strings that an LFG grammar relates to a particular acyclic f-structure is a context-free language. We present an algorithm that produces for an arbitrary LFG grammar and an arbitrary acyclic input f-structure a context-free grammar describing exactly the set of strings that the given LFG grammar associates with that f-structure. The individual sentences are then available through a standard context-free generator operating on that grammar. The context-free grammar is constructed by specializing the context-free backbone of the LFG grammar for the given f-structure and serves as a compact representation of all generation results that the LFG grammar assigns to the input. This approach extends to other grammatical formalisms with explicit context-free backbones, such as PATR, and also to formalisms that permit a context-free skeleton to be extracted from richer specifications. It provides a general mathematical framework for understanding and improving the operation of a family of chart-based generation algorithms.
13

Sygal, Yael, and Shuly Wintner. "Towards Modular Development of Typed Unification Grammars." Computational Linguistics 37, no. 1 (March 2011): 29–74. http://dx.doi.org/10.1162/coli_a_00035.

Abstract:
Development of large-scale grammars for natural languages is a complicated endeavor: Grammars are developed collaboratively by teams of linguists, computational linguists, and computer scientists, in a process very similar to the development of large-scale software. Grammars are written in grammatical formalisms that resemble very-high-level programming languages, and are thus very similar to computer programs. Yet grammar engineering is still in its infancy: Few grammar development environments support sophisticated modularized grammar development, in the form of distribution of the grammar development effort, combination of sub-grammars, separate compilation and automatic linkage, information encapsulation, and so forth. This work provides preliminary foundations for modular construction of (typed) unification grammars for natural languages. Much of the information in such formalisms is encoded by the type signature, and we subsequently address the problem through the distribution of the signature among the different modules. We define signature modules and provide operators of module combination. Modules may specify only partial information about the components of the signature and may communicate through parameters, similarly to function calls in programming languages. Our definitions are inspired by methods and techniques of programming language theory and software engineering and are motivated by the actual needs of grammar developers, obtained through a careful examination of existing grammars. We show that our definitions meet these needs by conforming to a detailed set of desiderata. We demonstrate the utility of our definitions by providing a modular design of the HPSG grammar of Pollard and Sag.
14

Pastra, Katerina, and Yiannis Aloimonos. "The minimalist grammar of action." Philosophical Transactions of the Royal Society B: Biological Sciences 367, no. 1585 (January 12, 2012): 103–17. http://dx.doi.org/10.1098/rstb.2011.0123.

Abstract:
Language and action have been found to share a common neural basis and in particular a common ‘syntax’, an analogous hierarchical and compositional organization. While language structure analysis has led to the formulation of different grammatical formalisms and associated discriminative or generative computational models, the structure of action is still elusive and so are the related computational models. However, structuring action has important implications for action learning and generalization, in both human cognition research and computation. In this study, we present a biologically inspired generative grammar of action, which employs the structure-building operations and principles of Chomsky's Minimalist Programme as a reference model. In this grammar, action terminals combine hierarchically into temporal sequences of actions of increasing complexity; the actions are bound with the involved tools and affected objects and are governed by certain goals. We show how the tool role and the affected-object role of an entity within an action drive the derivation of the action syntax in this grammar and control recursion, merge and move, the latter being mechanisms that manifest themselves not only in human language, but in human action too.
15

Mastora, Anna, Manolis Peponakis, and Sarantos Kapidakis. "SKOS concepts and natural language concepts: An analysis of latent relationships in KOSs." Journal of Information Science 43, no. 4 (May 1, 2016): 492–508. http://dx.doi.org/10.1177/0165551516648108.

Abstract:
The vehicle for representing Knowledge Organisation Systems (KOSs) in the environment of the Semantic Web and linked data is the Simple Knowledge Organisation System (SKOS). SKOS provides a way to assign a Uniform Resource Identifier (URI) to each concept, and this URI functions as a surrogate for the concept. This makes it a central concern to clarify the URIs’ ontological meaning. The aim of this study is to investigate the relationship between the ontological substance of KOS concepts and concepts revealed through the grammatical and syntactic formalisms of natural language. For this purpose, we examined the dividableness of concepts in specific KOSs (i.e. a thesaurus, a subject headings system and a classification scheme) by applying Natural Language Processing (NLP) techniques (i.e. morphosyntactic analysis) to the lexical representations (i.e. RDF literals) of SKOS concepts. The results of the comparative analysis reveal that, despite the use of multi-word units, thesauri tend to represent concepts in a way that can hardly be further divided conceptually, while subject headings and classification schemes – to a certain extent – comprise terms that can be decomposed into more conceptual constituents. Consequently, SKOS concepts deriving from thesauri are more likely to represent atomic conceptual units and thus be more appropriate tools for inference and reasoning. Since identifiers represent the meaning of a concept, complex concepts are neither the most appropriate nor the most efficient way of modelling a KOS for the Semantic Web.
16

Zhang, Yue, and Stephen Clark. "Discriminative Syntax-Based Word Ordering for Text Generation." Computational Linguistics 41, no. 3 (September 2015): 503–38. http://dx.doi.org/10.1162/coli_a_00229.

Abstract:
Word ordering is a fundamental problem in text generation. In this article, we study word ordering using a syntax-based approach and a discriminative model. Two grammar formalisms are considered: Combinatory Categorial Grammar (CCG) and dependency grammar. Given the search for a likely string and syntactic analysis, the search space is massive, making discriminative training challenging. We develop a learning-guided search framework, based on best-first search, and investigate several alternative training algorithms. The framework we present is flexible in that it allows constraints to be imposed on output word orders. To demonstrate this flexibility, a variety of input conditions are considered. First, we investigate a “pure” word-ordering task in which the input is a multi-set of words, and the task is to order them into a grammatical and fluent sentence. This task has been tackled previously, and we report improved performance over existing systems on a standard Wall Street Journal test set. Second, we tackle the same reordering problem, but with a variety of input conditions, from the bare case with no dependencies or POS tags specified, to the extreme case where all POS tags and unordered, unlabeled dependencies are provided as input (and various conditions in between). When applied to the NLG 2011 shared task, our system gives competitive results compared with the best-performing systems, which provide a further demonstration of the practical utility of our system.
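The search setting can be caricatured with uniform-cost (best-first) search over partial orderings of a word multiset, scored here by a toy bigram table. This is only a sketch of the search framing: the bigram scores, words, and function names are invented, and the article's system uses a trained discriminative model with syntactic structure rather than fixed bigrams.

```python
import heapq

# Toy fluency scores for word pairs; unseen pairs get a low default.
SCORE = {("the", "dog"): 2.0, ("dog", "barks"): 2.0}

def cost(prev, word):
    return 2.0 - SCORE.get((prev, word), 0.1)    # nonnegative step cost

def order(words):
    """Uniform-cost search: states are (cost so far, ordered prefix,
    remaining words); the first complete ordering popped is optimal."""
    heap = [(0.0, (), tuple(sorted(words)))]
    while heap:
        c, prefix, remaining = heapq.heappop(heap)
        if not remaining:
            return list(prefix)
        for i, w in enumerate(remaining):
            step = cost(prefix[-1], w) if prefix else 0.0
            rest = remaining[:i] + remaining[i + 1:]
            heapq.heappush(heap, (c + step, prefix + (w,), rest))
    return []

print(order(["dog", "barks", "the"]))   # ['the', 'dog', 'barks']
```

Even this toy makes the article's point about scale visible: the state space over permutations is factorial in the number of words, which is why learning-guided rather than exhaustive search is needed.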
17

Zhang, Xun, Yantao Du, Weiwei Sun, and Xiaojun Wan. "Transition-Based Parsing for Deep Dependency Structures." Computational Linguistics 42, no. 3 (September 2016): 353–89. http://dx.doi.org/10.1162/coli_a_00252.

Abstract:
Derivations under different grammar formalisms allow extraction of various dependency structures. Particularly, bilexical deep dependency structures beyond surface tree representation can be derived from linguistic analysis grounded by CCG, LFG, and HPSG. Traditionally, these dependency structures are obtained as a by-product of grammar-guided parsers. In this article, we study the alternative data-driven, transition-based approach, which has achieved great success for tree parsing, to build general dependency graphs. We integrate existing tree parsing techniques and present two new transition systems that can generate arbitrary directed graphs in an incremental manner. Statistical parsers that are competitive in both accuracy and efficiency can be built upon these transition systems. Furthermore, the heterogeneous design of transition systems yields diversity of the corresponding parsing models and thus greatly benefits parser ensemble. Concerning the disambiguation problem, we introduce two new techniques, namely, transition combination and tree approximation, to improve parsing quality. Transition combination makes every action performed by a parser significantly change configurations. Therefore, more distinct features can be extracted for statistical disambiguation. With the same goal of extracting informative features, tree approximation induces tree backbones from dependency graphs and re-uses tree parsing techniques to produce tree-related features. We conduct experiments on CCG-grounded functor–argument analysis, LFG-grounded grammatical relation analysis, and HPSG-grounded semantic dependency analysis for English and Chinese. Experiments demonstrate that data-driven models with appropriate transition systems can produce high-quality deep dependency analysis, comparable to more complex grammar-driven models. 
Experiments also indicate the effectiveness of the heterogeneous design of transition systems for parser ensemble, transition combination, as well as tree approximation for statistical disambiguation.
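The transition-based idea can be illustrated with a minimal arc-standard system for trees (a textbook sketch, not the article's graph-generating systems, which allow words to take several heads). A configuration is a stack, a buffer, and a set of arcs; in a statistical parser a trained classifier chooses the actions that are scripted here.

```python
def parse(n, actions):
    """Run an arc-standard action sequence over words 0..n-1.
    Returns dependency arcs as (head, dependent) pairs."""
    stack, buffer, arcs = [], list(range(n)), []
    for action in actions:
        if action == "SHIFT":
            stack.append(buffer.pop(0))
        elif action == "LEFT":            # second-top becomes dependent of top
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))
        elif action == "RIGHT":           # top becomes dependent of second-top
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

# "economic news sits": 'economic' (0) modifies 'news' (1),
# and 'news' depends on 'sits' (2).
arcs = parse(3, ["SHIFT", "SHIFT", "LEFT", "SHIFT", "LEFT"])
print(arcs)   # [(1, 0), (2, 1)]
```

The article's transition combination technique addresses a weakness visible even here: a single SHIFT barely changes the configuration, so combining actions yields more distinctive feature contexts for the classifier.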
18

NEDERHOF, MARK-JAN. "Efficient generation of random sentences." Natural Language Engineering 2, no. 1 (March 1996): 1–13. http://dx.doi.org/10.1017/s1351324996001234.

Abstract:
We discuss the random generation of strings using the grammatical formalism AGFL. This formalism consists of context-free grammars extended with a parameter mechanism, where the parameters range over a finite domain. Our approach consists in static analysis of the combinations of parameter values with which derivations can be constructed. After this analysis, generation of sentences can be performed without backtracking.
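Stripped of AGFL's parameter mechanism, random generation from a grammar reduces to repeatedly choosing productions at random. The sketch below is a plain context-free toy with an invented grammar; the paper's actual contribution, the static analysis of parameter-value combinations that makes backtracking unnecessary, is exactly what this version omits.

```python
import random

# A tiny CFG: nonterminals map to lists of alternative productions.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V"], ["V", "NP"]],
    "N":  [["cat"], ["dog"]],
    "V":  [["sees"], ["sleeps"]],
}

def generate(symbol="S", rng=random):
    """Expand a symbol by choosing a random production at each step."""
    if symbol not in GRAMMAR:           # terminal symbol
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    words = []
    for sym in production:
        words.extend(generate(sym, rng))
    return words

random.seed(0)
print(" ".join(generate()))
```

In a parameterized formalism some random choices can lead into dead ends where no parameter-consistent expansion exists; precomputing the viable combinations, as the paper does, removes those dead ends before generation starts.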
19

Gamallo Otero, Pablo, and Isaac González López. "A grammatical formalism based on patterns of Part of Speech tags." International Journal of Corpus Linguistics 16, no. 1 (March 11, 2011): 45–71. http://dx.doi.org/10.1075/ijcl.16.1.03gam.

Abstract:
In this paper, we describe a grammatical formalism, called DepPattern, to write dependency grammars using patterns of Part of Speech (PoS) tags augmented with lexical and morphological information. The formalism inherits ideas from Sinclair’s work and Pattern Grammar. To properly analyze semi-fixed idiomatic expressions, DepPattern distinguishes between open-choice and idiomatic rules. A grammar is defined as a set of lexical-syntactic rules at different levels of abstraction. In addition, a compiler was implemented so as to generate deterministic and robust parsers from DepPattern grammars. These parsers identify dependencies which can be used to improve corpus-based applications such as information extraction. At the end of this article, we describe an experiment which evaluates the efficiency of a dependency parser generated from a simple DepPattern grammar. In particular, we evaluated the precision of a semantic extraction method making use of a DepPattern-based parser.
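The flavor of rules stated as patterns of PoS tags can be sketched as follows. The rules, relation names, and matching function are invented for illustration and are not DepPattern syntax; DepPattern rules additionally carry lexical and morphological constraints and distinguish open-choice from idiomatic rules.

```python
# A rule: (tag pattern, head position, dependent position, relation name).
RULES = [
    (("ADJ", "NOUN"), 1, 0, "modifier"),
    (("NOUN", "VERB"), 1, 0, "subject"),
]

def extract_dependencies(tagged):
    """Slide each tag pattern over the sentence; every match yields a
    (head word, dependent word, relation) triple."""
    tags = [t for _, t in tagged]
    deps = []
    for pattern, head, dep, rel in RULES:
        n = len(pattern)
        for i in range(len(tags) - n + 1):
            if tuple(tags[i:i + n]) == pattern:
                deps.append((tagged[i + head][0], tagged[i + dep][0], rel))
    return deps

sentence = [("old", "ADJ"), ("dogs", "NOUN"), ("bark", "VERB")]
print(extract_dependencies(sentence))
# [('dogs', 'old', 'modifier'), ('bark', 'dogs', 'subject')]
```

Because matching is deterministic and local, parsers generated this way are fast and robust, which fits the corpus-oriented applications (such as relation extraction) the paper evaluates.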
20

Mahmood, Aymen Adil. "Formalism: Noam Chomsky and his Generative Grammar." Journal of Tikrit University for Humanities 30, no. 1, 1 (January 15, 2023): 1–17. http://dx.doi.org/10.25130/jtuh.30.1.1.2023.22.

Abstract:
For six decades the generative concept has dominated syntactic theory. Work on generative rules cannot reduce them to normative principles; rather, the subject matter of the rules is itself considered normative. Grammar is a way of expressing phrases in their correct form, and grammar rules are precise by virtue of being formulated in a specific manner that generative grammar does not share. The term "generative" is directly related to Noam Chomsky's tradition of grammatical research, and the term, together with its formalisms and terminology, has been studied extensively within the Chomskyan tradition.
APA, Harvard, Vancouver, ISO, and other styles
21

Rambow, Owen, K. Vijay-Shanker, and David Weir. "D-Tree Substitution Grammars." Computational Linguistics 27, no. 1 (March 2001): 87–121. http://dx.doi.org/10.1162/089120101300346813.

Full text
Abstract:
There is considerable interest among computational linguists in lexicalized grammatical frameworks; lexicalized tree adjoining grammar (LTAG) is one widely studied example. In this paper, we investigate how derivations in LTAG can be viewed not as manipulations of trees but as manipulations of tree descriptions. Changing the way the lexicalized formalism is viewed raises questions as to the desirability of certain aspects of the formalism. We present a new formalism, d-tree substitution grammar (DSG). Derivations in DSG involve the composition of d-trees, special kinds of tree descriptions. Trees are read off from derived d-trees. We show how the DSG formalism, which is designed to inherit many of the characteristics of LTAG, can be used to express a variety of linguistic analyses not available in LTAG.
APA, Harvard, Vancouver, ISO, and other styles
22

Yao, Jian, Qiwang Huang, and Weiping Wang. "Adaptive CGFs Based on Grammatical Evolution." Mathematical Problems in Engineering 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/197306.

Full text
Abstract:
Computer generated forces (CGFs) play blue or red units in military simulations for personnel training and weapon systems evaluation. Traditionally, CGFs are controlled through rule-based scripts, so their doctrine-driven behavior is rigid and predictable. Furthermore, CGFs are often tricked by trainees or fail to adapt to new situations (e.g., changes in the battlefield or updates in weapon systems), and, in most cases, subject matter experts (SMEs) must review and redesign a large number of CGF scripts for new scenarios or training tasks, which is both challenging and time-consuming. In an effort to overcome these limitations and move toward more true-to-life scenarios, a study using grammatical evolution (GE) to generate adaptive CGFs for air combat simulations has been conducted. Expert knowledge is encoded with modular behavior trees (BTs) for compatibility with the operators of the genetic algorithm (GA). GE maps CGFs, represented with BTs, to binary strings, and uses the GA to evolve CGFs with performance feedback from the simulation. Beyond-visual-range air combat experiments between adaptive CGFs and nonadaptive baseline CGFs have been conducted to observe and study this evolutionary process. The experimental results show that GE is an efficient framework for generating CGFs in the BT formalism and evolving them via the GA.
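The genotype-to-phenotype mapping at the heart of grammatical evolution can be sketched as follows. The toy expression grammar below stands in for the paper's behavior-tree grammar (which is not reproduced here); codons index grammar productions modulo the number of choices, the standard GE decoding step.

```python
# Toy BNF grammar: a stand-in for the paper's behavior-tree grammar.
GRAMMAR = {
    "<expr>": [["<expr>", "+", "<expr>"], ["x"], ["1"]],
}

def derive(codons, symbol="<expr>", max_steps=50):
    """Decode a codon list into a phenotype string via leftmost derivation.
    Each codon picks a production modulo the number of available choices;
    max_steps guards against runaway recursion (derivation may truncate)."""
    out, stack, i, step = [], [symbol], 0, 0
    while stack and step < max_steps:
        sym = stack.pop(0)
        if sym in GRAMMAR:
            choices = GRAMMAR[sym]
            rule = choices[codons[i % len(codons)] % len(choices)]
            i += 1  # codon list wraps around, as in standard GE
            stack = list(rule) + stack
        else:
            out.append(sym)
        step += 1
    return " ".join(out)

print(derive([0, 1, 2]))  # → "x + 1"
```

In the study itself, the decoded phenotype is a behavior tree controlling a CGF, and the GA evolves the underlying binary strings using fitness feedback from the air combat simulation.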
APA, Harvard, Vancouver, ISO, and other styles
23

Barlow, Michael. "Corpora for Theory and Practice." International Journal of Corpus Linguistics 1, no. 1 (January 1, 1996): 1–37. http://dx.doi.org/10.1075/ijcl.1.1.03bar.

Full text
Abstract:
In this paper intuition-based studies of reflexive forms such as myself are contrasted with a corpus-based investigation of actual usage of reflexives. The examination of reflexives in English in several corpora reveals a variety of patterns, which are analysed within a schema-based approach to grammar (Barlow and Kemmer 1994). This approach follows the cognitive/functional tradition of grammatical analysis in viewing all grammatical units as composed of form-meaning pairings. The paper demonstrates that a schema-based approach is well-suited to the task of describing the major and minor patterns of use revealed by corpus analysis. The importance of text analysis in language teaching is highlighted and connections between the schema-based grammatical formalism and data-driven approaches to second language learning (Johns 1991b) are briefly explored.
APA, Harvard, Vancouver, ISO, and other styles
24

Desclés, Jean-Pierre. "La linguistique peut-elle sortir de son état pré-galiléen ?" Neophilologica 2019 33 (November 10, 2021): 1–23. http://dx.doi.org/10.31261/neo.2021.33.17.

Full text
Abstract:
The future of linguistics requires a better definition of its concepts, especially in semantic analysis. The notion of operator plays an important role in several areas of linguistics, for instance categorial grammars and the representation of the meanings of grammatical categories. General topology makes it possible to mathematize grammatical concepts (time, aspect, modality, enunciative operations) by means of operators. Curry's Combinatory Logic is an adequate formalism for composing and transforming operators at the different levels of analysis that connect the semiotic expressions of languages (the observables) with their semantico-cognitive interpretations. The article refers to many studies that develop the points discussed.
APA, Harvard, Vancouver, ISO, and other styles
25

SORACE, ANTONELLA. "Language and cognition in bilingual production: the real work still lies ahead." Bilingualism: Language and Cognition 19, no. 5 (March 7, 2016): 895–96. http://dx.doi.org/10.1017/s1366728916000110.

Full text
Abstract:
Goldrick, Putnam and Schwarz (Goldrick, Putnam & Schwarz) argue that code-mixing in bilingual production involves not only combining forms from both languages but also – crucially – integrating grammatical principles with gradient mental representations. They further propose an analysis of a particular case of intrasentential code mixing – doubling constructions – framed within the formalism of Gradient Symbolic Computation. This formalism, in their view, is better suited to accounting for code mixing than other generative language models because it allows the weighting of constraints both in the choice of particular structures within a single language and in blends of structures in code-mixed productions.
APA, Harvard, Vancouver, ISO, and other styles
26

Kilborn, Kerry, and Ann Cooreman. "Sentence interpretation strategies in adult Dutch–English bilinguals." Applied Psycholinguistics 8, no. 4 (December 1987): 415–31. http://dx.doi.org/10.1017/s0142716400000394.

Full text
Abstract:
This study is concerned with the probabilistic nature of processing strategies in bilingual speakers of Dutch and English. We used a sentence interpretation task designed to set up various “coalitions” and “competitions” among a restricted set of grammatical entities (i.e., word order, animacy, agreement). Performance in English paralleled that in Dutch in large measure, but where it diverged it approached performance on similar tasks by English monolinguals (Bates et al., 1982). These findings are interpreted on the basis of the “competition model,” a probabilistic theory of grammatical processing which provides a formalism for explaining what it means for a second language user to be “between” languages.
APA, Harvard, Vancouver, ISO, and other styles
27

GOLDRICK, MATTHEW, MICHAEL PUTNAM, and LARA SCHWARZ. "Coactivation in bilingual grammars: A computational account of code mixing." Bilingualism: Language and Cognition 19, no. 5 (January 13, 2016): 857–76. http://dx.doi.org/10.1017/s1366728915000802.

Full text
Abstract:
A large body of research into bilingualism has revealed that language processing is fundamentally non-selective; there is simultaneous, graded co-activation of mental representations from both of the speakers’ languages. An equally deep tradition of research into code switching/mixing has revealed the important role that grammatical principles play in determining the nature of bilingual speech. We propose to integrate these two traditions within the formalism of Gradient Symbolic Computation. This allows us to formalize the integration of grammatical principles with gradient mental representations. We apply this framework to code mixing constructions where an element of an intended utterance appears in both languages within a single utterance and discuss the directions it suggests for future research.
APA, Harvard, Vancouver, ISO, and other styles
28

CHRISTIANSEN, HENNING. "CHR grammars." Theory and Practice of Logic Programming 5, no. 4-5 (July 2005): 467–501. http://dx.doi.org/10.1017/s1471068405002395.

Full text
Abstract:
A grammar formalism based upon CHR is proposed analogously to the way Definite Clause Grammars are defined and implemented on top of Prolog. These grammars execute as robust bottom-up parsers with an inherent treatment of ambiguity and a high flexibility to model various linguistic phenomena. The formalism extends previous logic programming based grammars with a form of context-sensitive rules and the possibility to include extra-grammatical hypotheses in both head and body of grammar rules. Among the applications are straightforward implementations of Assumption Grammars and abduction under integrity constraints for language analysis. CHR grammars appear as a powerful tool for specification and implementation of language processors and may be proposed as a new standard for bottom-up grammars in logic programming.
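CHR grammars themselves are hosted in Prolog, which is not reproduced here; the following Python sketch only illustrates the robust bottom-up style of parsing the abstract describes, using a CYK-style chart that keeps all ambiguous analyses. The grammar, lexicon, and function names are invented for illustration.

```python
from collections import defaultdict

def parse(tokens, lexicon, rules):
    """Bottom-up chart recognition. lexicon: word -> {categories};
    rules: (B, C) -> {A} for binary rules A -> B C. Ambiguity is kept
    implicitly: every derivable category for every span stays in the chart."""
    n = len(tokens)
    chart = defaultdict(set)
    for i, w in enumerate(tokens):
        chart[(i, i + 1)] |= lexicon.get(w, set())  # robust: unknown words just yield no category
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            k = i + span
            for j in range(i + 1, k):
                for b in chart[(i, j)]:
                    for c in chart[(j, k)]:
                        chart[(i, k)] |= rules.get((b, c), set())
    return chart[(0, n)]

lexicon = {"she": {"NP"}, "sees": {"V"}, "stars": {"NP"}}
rules = {("V", "NP"): {"VP"}, ("NP", "VP"): {"S"}}
print(parse(["she", "sees", "stars"], lexicon, rules))  # → {'S'}
```

What CHR grammars add beyond this sketch is precisely what a plain chart cannot express: context-sensitive rules and extra-grammatical hypotheses in rule heads and bodies, executed as constraint propagation.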
APA, Harvard, Vancouver, ISO, and other styles
29

FERRUCCI, FILOMENA, and GIULIANA VITIELLO. "GRAMMATICAL INFERENCE FOR THE AUTOMATIC GENERATION OF VISUAL LANGUAGES." International Journal of Software Engineering and Knowledge Engineering 09, no. 04 (August 1999): 467–93. http://dx.doi.org/10.1142/s0218194099000267.

Full text
Abstract:
In this paper we address the problem of the automatic generation of visual languages from a sample set of visual sentences. We present an improvement of the inference module of the VLG system, which was originally conceived for the generation of iconic languages [11]. With this extension any kind of visual language, like diagrams and forms, can be considered. To this aim, we present an inference algorithm for the class of Boundary SR grammars. These grammars are a subclass of the SR grammars with the interesting property of confluence, which extends the concept of context-freeness to the case of nonlinear grammars. Moreover, in spite of the simplicity and naturalness of the formalism, the generative power of this class is sufficient to specify interesting visual languages. The inference algorithm exploits an elegant characterization of Boundary SR languages in terms of tree and string languages. More precisely, we show that a visual language is a Boundary SR language if and only if it can be defined as a regular tree language and a set of properly associated string languages. Based on this result, the problem of identifying structural properties in a diagrammatic visual sentence is brought back to the detection of structural properties in tree and string languages. The main advantage coming from the use of a grammatical inference technique in visual language specification is that the designer only needs to specify a set of visual sentences that he/she feels sufficiently exemplifies the intended target language.
APA, Harvard, Vancouver, ISO, and other styles
30

Giachos, Ioannis, Eleni Batzaki, Evangelos C. Papakitsos, Stavros Kaminaris, and Nikolaos Laskaris. "A Natural Language Generation Algorithm for Greek by Using Hole Semantics and a Systemic Grammatical Formalism." Journal of Computer Science Research 5, no. 4 (December 6, 2023): 27–37. http://dx.doi.org/10.30564/jcsr.v5i4.6067.

Full text
Abstract:
This work continues previous related work based on an experiment to improve the intelligence of robotic systems, with the aim of achieving greater linguistic communication capability between humans and robots. In this paper, the authors attempt an algorithmic approach to natural language generation through hole semantics, applying the OMAS-III computational model as a grammatical formalism. In the original work, a technical language is used, while in later works this has been replaced by a limited Greek natural-language dictionary. This particular effort was made to give the evolving system the ability to ask questions, and the authors also developed an initial dialogue system using these techniques. The results show that the techniques the authors apply can yield a more sophisticated dialogue system in the future.
APA, Harvard, Vancouver, ISO, and other styles
31

GULLBERG, MARIANNE, and M. CARMEN PARAFITA COUTO. "An integrated perspective on code-mixing patterns beyond doubling?" Bilingualism: Language and Cognition 19, no. 5 (February 11, 2016): 885–86. http://dx.doi.org/10.1017/s1366728916000080.

Full text
Abstract:
Code-mixing (CM) is a striking example of how two languages are active simultaneously in bilingual production. Gradient Symbolic Computation (GSC) proposes a formalism to account for the systematicity of CM patterns by integrating psycholinguistic notions of bilingual co-activation with generativist accounts of grammar. We applaud the attempt to bridge research traditions and all efforts to capture the systematicity of variation, and the interaction between processing and grammatical constraints in bilingual production. However, the descriptive and predictive scope of the current proposal remains somewhat unclear, as does its connection to existing accounts.
APA, Harvard, Vancouver, ISO, and other styles
32

Fried, Mirjam. "Construction Grammar as a tool for diachronic analysis." Constructions and Frames 1, no. 2 (December 10, 2009): 261–90. http://dx.doi.org/10.1075/cf.1.2.04fri.

Full text
Abstract:
Through a discourse-grounded internal reconstruction that aims at capturing the emergence of grammatical structure, the study examines the development of the subjective epistemic particle jestli ‘[in-my-opinion-] maybe’ in conversational Czech. Through internal reconstruction, the change (syntactic complementizer > speaker-centered epistemic contextualizer > subjective epistemic particle) is presented as a metonymy-based conventionalization of a pragmatic meaning implied by certain tokens of indirect Y/N questions into a new modal meaning. Taking a Construction Grammar approach, so far largely untested on diachronic data, the point of the analysis is to show that we can engage in a systematic treatment of the gradualness of change, by (i) combining the ‘holistic’ (constructional) dimension with the internal, feature-based and discourse-motivated mechanisms of complex grammatical shifts, and (ii) appealing to the explanatory potential of general cognitive and communicative principles as they manifest themselves in natural discourse. I also propose a formalism for representing the transitional nature of intermediate patterns.
APA, Harvard, Vancouver, ISO, and other styles
33

Griffiths, Joshua M. "Competing repair strategies for word-final obstruent-liquid clusters in northern metropolitan French." Journal of French Language Studies 32, no. 1 (November 16, 2021): 1–24. http://dx.doi.org/10.1017/s0959269520000319.

Full text
Abstract:
French licenses word-final obstruent-liquid clusters (table /tabl/; souffre /sufʁ/). These clusters may be realised faithfully, resulting in an apparent violation of the sonority sequencing principle (Clements, 1990). Yet the clusters can also be repaired in one of two ways: (1) through reduction of the cluster (i.e. [tab]) or (2) through epenthesis of a schwa vowel, resyllabifying the cluster into onset position (i.e. [ta.blə]). In this article, I investigate which factors condition the realisation of word-final obstruent-liquid clusters. The results are formalised in Maximum Entropy Grammar (Goldwater and Johnson, 2003), but evidence for effects of style and speaker age requires the scaling of several constraints (Coetzee and Kawahara, 2013). This study sheds light on these curious clusters while raising new questions about the interaction of grammatical and non-grammatical factors.
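The Maximum Entropy Grammar evaluation that the study builds on can be sketched as follows. The constraint names, weights, and violation counts below are invented for illustration and are not the study's actual grammar: each candidate's harmony is the negative weighted sum of its constraint violations, and candidate probabilities come from a softmax over harmonies.

```python
import math

def maxent_probs(candidates, weights):
    """candidates: name -> {constraint: violation count};
    weights: constraint -> nonnegative weight.
    Returns the MaxEnt probability of each candidate."""
    harmony = {
        name: -sum(weights[c] * v for c, v in viols.items())
        for name, viols in candidates.items()
    }
    z = sum(math.exp(h) for h in harmony.values())  # normalizing constant
    return {name: math.exp(h) / z for name, h in harmony.items()}

# Toy candidates for /tabl/: faithful, reduced, and epenthetic outputs.
cands = {
    "[tabl]": {"SonSeq": 1, "Max": 0, "Dep": 0},
    "[tab]": {"SonSeq": 0, "Max": 1, "Dep": 0},
    "[ta.blə]": {"SonSeq": 0, "Max": 0, "Dep": 1},
}
probs = maxent_probs(cands, {"SonSeq": 2.0, "Max": 1.0, "Dep": 1.0})
```

With these invented weights the two repairs come out equally probable and both beat the faithful candidate; constraint scaling of the kind the article discusses would amount to adjusting the weights by style or speaker age before computing the softmax.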
APA, Harvard, Vancouver, ISO, and other styles
34

Gabor, Mateusz, Wojciech Wieczorek, and Olgierd Unold. "Split-Based Algorithm for Weighted Context-Free Grammar Induction." Applied Sciences 11, no. 3 (January 24, 2021): 1030. http://dx.doi.org/10.3390/app11031030.

Full text
Abstract:
The split-based method for weighted context-free grammar (WCFG) induction was formalised and verified on a comprehensive set of context-free languages. The WCFG is learned using a novel grammatical inference method. The proposed method learns a WCFG from both positive and negative samples, and the weights of the rules are estimated using a novel Inside–Outside Contrastive Estimation algorithm. The results showed that our approach outperforms other state-of-the-art methods in terms of F1 score.
APA, Harvard, Vancouver, ISO, and other styles
35

Paxson, James J. "Personification's Gender." Rhetorica 16, no. 2 (1998): 149–79. http://dx.doi.org/10.1525/rh.1998.16.2.149.

Full text
Abstract:
The fact that classical and early medieval allegorical personifications were exclusively female has long perplexed literary scholars and rhetoricians. Although arguments have been made about this gendering using grammatical formalism for the most part, an examination of rhetoric's own deep structure—that is, the discursive metaphors it has always employed to talk about tropes and figures—promises to better articulate the gendered bases of the figure. Using analytical tactics drawn from Paul de Man's discussions of prosopopeia, this essay re-examines some of the rhetorical record along with programmatic imagery from patristic writings in order to demonstrate how women themselves could serve as the “figures of figuration.”
APA, Harvard, Vancouver, ISO, and other styles
36

Kostic, Aleksandar, Svetlana Ilic, and Petar Milin. "Probability estimate and the optimal text size." Psihologija 41, no. 1 (2008): 35–51. http://dx.doi.org/10.2298/psi0801035k.

Full text
Abstract:
A reliable language corpus implies a text sample of size n that provides stable probability distributions of linguistic phenomena. The question is what the minimal (i.e. the optimal) text size is at which the probabilities of linguistic phenomena become stable. Specifically, we were interested in the probabilities of grammatical forms. We started with an a priori assumption that a text size of 1,000,000 words is sufficient to provide stable probability distributions. A text of this size was treated as a “quasi-population”. The probability distribution derived from the “quasi-population” was then correlated with the probability distribution obtained from a minimal sample size (32 items) for a given linguistic category (e.g. nouns). The correlation coefficient was treated as a measure of similarity between the two probability distributions. The minimal sample was increased by geometric progression, up to the size at which the correlation between the distribution derived from the quasi-population and the one derived from the increased sample reached its maximum (r = 1). The optimal sample size was established for the grammatical forms of nouns, adjectives and verbs. A general formalism is proposed that allows the optimal sample size to be estimated from a minimal sample (i.e. 32 items).
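The sampling logic described above can be sketched in Python with toy data. The token inventory, frequencies, and threshold here are invented, not the study's Serbian corpus: the sample grows geometrically from 32 items until its frequency distribution correlates (near-)perfectly with the distribution over the full "quasi-population".

```python
from collections import Counter
import math
import random

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def optimal_size(tokens, start=32, threshold=0.999):
    """Smallest geometrically grown sample whose form distribution
    correlates with the full-text distribution at or above threshold."""
    forms = sorted(set(tokens))
    pop = Counter(tokens)
    pop_dist = [pop[f] / len(tokens) for f in forms]
    size = start
    while size < len(tokens):
        sub = Counter(tokens[:size])
        sub_dist = [sub[f] / size for f in forms]
        if pearson(pop_dist, sub_dist) >= threshold:
            return size
        size *= 2  # geometric progression, as in the study
    return len(tokens)

random.seed(0)
# Toy "corpus": case forms drawn with invented skewed frequencies.
tokens = random.choices(["nom", "gen", "dat", "acc"], weights=[5, 3, 1, 1], k=4096)
print(optimal_size(tokens))
```

The study's r = 1 criterion corresponds to raising the threshold to 1.0; with real corpus data the distributions stabilise at some intermediate sample size rather than only at the full text.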
APA, Harvard, Vancouver, ISO, and other styles
37

Butuc, Petru. "Defining Aspects of the Logical Subject into the Sentence." Philologia, no. 2(314) (August 2021): 76–83. http://dx.doi.org/10.52505/1857-4300.2021.2(314).08.

Full text
Abstract:
Although linguistics, the science of language, has reached a high level of development, syntax still presents many unsolved problems. A very important one is the identification of the parts of the sentence. Because of the inconsistent application of principles in the analysis of language acts at the syntactic level, an extreme structural-grammatical formalism is sometimes reached which superimposes parts of the sentence onto parts of speech, a criterion that leads to a morphological interpretation of syntax. Such a syntactic analysis is useless, because it makes the real identification of the ideas of a text impossible. Grammarians who take morphological positions in syntax put forward various methodological variants along these lines. One of them holds that the grammatical case form is decisive for identifying the parts of the sentence. Such a method is of limited applicability in Romanian, because the same case form can carry several syntactic functions, and the same function can be expressed by several forms. In this study the author analyzes these morphological and syntactic aspects by demonstration, referring in particular to the semantic-syntactic status of the logical subject in the sentence.
APA, Harvard, Vancouver, ISO, and other styles
38

Hyman, Larry M., and Armindo Ngunga. "On the non-universality of tonal association ‘conventions’: evidence from Ciyao." Phonology 11, no. 1 (May 1994): 25–68. http://dx.doi.org/10.1017/s0952675700001834.

Full text
Abstract:
One of the major aims of linguistic theory is to determine what is universal vs. language-specific within grammatical systems. In phonology, for example, a number of universals have been proposed and incorporated into the various subtheories that deal with segmental and prosodic aspects of sound systems. In his original autosegmental theory, for instance, Goldsmith (1976) provided a formalism and a set of principles embodying a number of universal claims about how different tiers may link to each other. Most of the support for this theory came from the study of tone: tones (Ts) were said to reside on separate ‘tiers’ joined by association lines to their respective tone-bearing units (TBUs).
APA, Harvard, Vancouver, ISO, and other styles
39

Underwood, Nancy L. "A Typed Feature-based Grammar of Danish." Nordic Journal of Linguistics 20, no. 1 (June 1997): 31–62. http://dx.doi.org/10.1017/s0332586500004005.

Full text
Abstract:
This paper presents an overview of the first broad coverage grammatical description of Danish in a Typed Feature Structure (TFS) based unification formalism inspired by HPSG. These linguistic specifications encompass phenomena within inflectional morphology, phrase structure and predicate argument structure, and have been developed with a view to implementation. The emphasis on implementability and re-usability of the specifications has led to the adoption of a rather leaner formal framework than that underlying HPSG. However, the paper shows that the adoption of such a framework does not lead to a loss of expressibility, but in fact enables certain phenomena, such as the interface between morphology and syntax and local discontinuities, to be treated in a simple and elegant fashion.
APA, Harvard, Vancouver, ISO, and other styles
40

ARNDT, TIMOTHY, SHI-KUO CHANG, and ANGELA GUERCIO. "FORMAL SPECIFICATION AND PROTOTYPING OF MULTIMEDIA APPLICATIONS." International Journal of Software Engineering and Knowledge Engineering 10, no. 04 (August 2000): 377–409. http://dx.doi.org/10.1142/s0218194000000250.

Full text
Abstract:
Multimedia systems incorporating hyperlinks and user interaction can be prototyped using TAOML, an extension of HTML. TAOML is used to define a Teleaction Object (TAO), which is a multimedia object with an associated hypergraph structure and knowledge structure. The hypergraph structure supports the effective presentation and efficient communication of multimedia information. In this paper, a formal specification methodology for TAOs using Symbol Relation (SR) grammars is described. An attributed SR grammar is then introduced in order to associate knowledge with the TAO. The restrictions needed to achieve an efficient parser are given. The grammatical formalism allows for validation and verification of the system specification. This methodology provides a principled approach to specifying, verifying, validating and prototyping multimedia applications.
APA, Harvard, Vancouver, ISO, and other styles
41

Connors, Robert J. "The Erasure of the Sentence." College Composition & Communication 52, no. 1 (September 1, 2000): 96–128. http://dx.doi.org/10.58680/ccc20001409.

Full text
Abstract:
This article examines the sentence-based pedagogies that arose in composition during the 1960s and 1970s—the generative rhetoric of Francis Christensen, imitation exercises, and sentence-combining—and attempts to discern why these three pedagogies have been so completely elided within contemporary composition studies. The usefulness of these sentence-based rhetorics was never disproved, but a growing wave of anti-formalism, anti-behaviorism, and anti-empiricism within English-based composition studies after 1980 doomed them to a marginality under which they still exist today. The result of this erasure of sentence pedagogies is a culture of writing instruction that has very little to do with or to say about the sentence outside of a purely grammatical discourse.
APA, Harvard, Vancouver, ISO, and other styles
42

Harris, R. Allen. "Linguistics, Technical Writing, and Generalized Phrase Structure Grammar." Journal of Technical Writing and Communication 18, no. 3 (July 1988): 227–40. http://dx.doi.org/10.2190/wtlt-qky6-lw4v-w2bd.

Full text
Abstract:
Linguistics has been largely misunderstood in writing pedagogy. After Chomsky's revolution, it was widely touted as a panacea; now it is widely flogged as a pariah. Both attitudes are extreme. It has a number of applications in the writing classroom, and it is particularly ripe for technical writing students, who have more sophistication with formalism than their humanities counterparts. Moreover, although few scholars outside of linguistics are aware of it, Transformational Grammar is virtually obsolete; most grammatical models are organized around principled aversions to the transformation, and even Chomsky has little use for his most famous innovation these days. Among the more recent developments is Generalized Phrase Structure Grammar, a model with distinct formal and pedagogical advantages over Chomsky's early transformational work.
APA, Harvard, Vancouver, ISO, and other styles
43

Purgina, Marina, Maxim Mozgovoy, and John Blake. "WordBricks: Mobile Technology and Visual Grammar Formalism for Gamification of Natural Language Grammar Acquisition." Journal of Educational Computing Research 58, no. 1 (February 26, 2019): 126–59. http://dx.doi.org/10.1177/0735633119833010.

Full text
Abstract:
Gamification of language learning is a clear trend of recent years. Widespread use of smartphones and the rise of mobile gaming as a popular leisure activity contribute to the popularity of gamification, as application developers can rely on an unprecedented reach of their products and expect acceptance of game-like elements by the users. In terms of content, however, most mobile apps implement traditional language learning activities, such as reading, listening, translating, and solving quizzes. This article discusses gamification of learning natural language grammar with a mobile app WordBricks, based on a concept of more user-centric lab-style experimental activities. WordBricks challenges users to create syntactically accurate sentences by arranging jigsaw-like colored blocks. Users receive instantaneous feedback on the syntactic compatibility each time any two blocks are placed together. This Scratch-inspired virtual language lab harnesses grammar models used in computational linguistics and allows users to discover underlying grammatical constructions through experimentation. The system was evaluated in a number of diverse settings and shows how the principles of gamification can be applied to second-language acquisition. We discuss general features that enable the users to engage in game-playing behavior and analyze open challenges, relevant for a variety of language learning systems.
APA, Harvard, Vancouver, ISO, and other styles
44

Desclés, Jean-Pierre. "Vers un Calcul des Significations dans l’Analyse des Langues." Revista de Filosofia Moderna e Contemporânea 8, no. 1 (September 20, 2020): 21–71. http://dx.doi.org/10.26512/rfmc.v8i1.31016.

Full text
Abstract:
The article presents epistemological reflections on a research programme whose object is the study of the language activity expressed by natural languages, an activity that cannot be reduced to mere communication or to the expression of thought. It presents several important concepts of enunciation theory, articulating them with semantico-cognitive schemes that represent the meanings of grammatical and lexical units and that are generated, in the formalism of Curry's Combinatory Logic, by compositions and transformations of primitive operators anchored in the cognitive activities of perception and action, though not reduced to those activities.
APA, Harvard, Vancouver, ISO, and other styles
45

Sigurd, Bengt. "Using Referent Grammar (RG) in Computer Analysis, Generation and Translation of Sentences." Nordic Journal of Linguistics 11, no. 1-2 (June 1988): 129–50. http://dx.doi.org/10.1017/s0332586500001785.

Full text
Abstract:
The paper presents Referent Grammar (RG), a version of generalized phrase structure grammar. RG uses descriptive labels for defective categories (categories lacking a constituent) instead of slash expressions and needs no null (empty, zero) categories. RG uses both functional and categorial representations, and the grammar rules, written in the Prolog DCG formalism, relate these two levels. The functional representations of RG include referent variables (numbers) with noun phrases, which makes it possible to keep track of the referents within sentences and in the text. Relative clauses can be defined in a new way using these referents, and the referent, enriched with grammatical features, can be used for controlling agreement. Substantial fragments of Swedish and English grammars have been implemented in RG, and the print-out of a demo session shows how RG works in computerized analysis, generation and translation of Swedish and English sentences.
APA, Harvard, Vancouver, ISO, and other styles
46

Putnam, Michael T., and Robert Klosinski. "The good, the bad, and the gradient." Linguistic Approaches to Bilingualism 10, no. 1 (June 21, 2017): 5–34. http://dx.doi.org/10.1075/lab.16008.put.

Full text
Abstract:
Although formal analyses of code-switching have enjoyed some success in determining which structures and interfaces are more fertile environments for switches than others, research exposing recalcitrant counter-examples to proposed constraints and axioms responsible for governing code-switching abounds. We advance the claim here that sub-optimal representations, i.e., losers, stand to reveal important information regarding the interaction of grammatical principles and processing strategies of bilingual speakers, and that any comprehensive analysis of code-switching phenomena should include them. These losers are the result of gradient activation in both input and output forms. We demonstrate how the formalism Gradient Symbolic Computation (GSC; Smolensky et al., 2014) can account for both of these observed facets of bilingual grammars in a unified manner. Building upon the work of Goldrick et al. (2016a,b), we provide an analysis of mixed determiner phrases (DPs) as an example of the fundamental components of a GSC analysis.
APA, Harvard, Vancouver, ISO, and other styles
47

Delmas, Claude. "“Commence + to - infinitives” in G. Stein’s discourse." Recherches anglaises et nord-américaines 44, no. 1 (2011): 67–79. http://dx.doi.org/10.3406/ranam.2011.1407.

Full text
Abstract:
This contribution studies utterances containing occurrences of the English verb commence in the preterite followed by to-infinitive complementation. Its aim is to show to what extent the common hypotheses proposed in the literature to explain certain facets of the behaviour of this verb and its complementation are borne out. The study attempts to show how the formalism of the verb commence implies a disjunctive co-enunciative relation. Some of the concepts used are borrowed from C. Smith's theory of "discourse modes", in particular that of shifting between modes. It is shown that the textual and discursive dimensions can shed light on certain properties of the grammatical subject and of the infinitival predicative relation taken in itself. Drawing on French enunciative theories, the study attempts to show that the status of the enunciator can itself be characterised and put into perspective within an argumentative mode. The single example commenced painting in the text suggests an identification-type analysis, despite a boundary more complex than the often-advanced aspectual explanation.
APA, Harvard, Vancouver, ISO, and other styles
48

Opaliński, Krzysztof, and Patrycja Potoniec. "KORPUS POLSZCZYZNY XVI WIEKU." Poradnik Językowy, no. 8/2020(777) (October 28, 2020): 17–31. http://dx.doi.org/10.33896/porj.2020.8.2.

Full text
Abstract:
The original purpose of creating the corpus of the 16th-century Polish language was to preserve the material basis of Słownik polszczyzny XVI wieku (Dictionary of the 16th-Century Polish Language) (SPXVI), comprising 272 texts transliterated in accordance with standardised principles, which is of great value. The project described here consists in creating an online base of the resources and using a part of it as a germ of a language corpus with texts designated with morphosyntactic markers. The works adopted XML encoding in the TEI (Text Encoding Initiative) formalism, version P5, adjusted to a 16th-century text. Typographical elements as well as grammatical categories and forms of words were designated in the texts. The germ of the corpus of the 16th-century Polish language comprises 135 thousand segments, and it will be expanded by another 100 thousand in the future to provide material for an automated form-designation tool. Ultimately, integration with the Diachronic Corpus of Polish is planned.
Keywords: lexicography – history of Polish – diachronic corpus of Polish
APA, Harvard, Vancouver, ISO, and other styles
49

Schleifer, Ronald. "The semiotics of sensation: A. J. Greimas and the experience of meaning." Semiotica 2017, no. 214 (January 1, 2017): 173–92. http://dx.doi.org/10.1515/sem-2016-0182.

Full text
Abstract:
It has been the life-long ambition of A. J. Greimas to analyze the nature of meaning, and in his work he has consistently described meaning as a felt experience, what he calls the "feeling of understanding." This essay examines the Greimassian investigation of meaning as experiential – which is to say sensational – as well as cognitive by analyzing, by means of Greimas's "semiotic square," P. M. S. Hacker's recent exploration of the relationship between sensation and cognition undertaken in terms of the semantics of ordinary-language philosophy. That is, the essay subjects what it calls "the illusion of immediacy" in ordinary-language philosophy to the systematic analysis of the "semantic formalism" of the semiotic square in order to demonstrate that the seeming "given" of ordinary-language theory – the "logico-grammatical terrain and . . . the conceptual landscape" that Hacker describes – can be profitably analyzed in terms of the interaction of semiotic constraints. It concludes by touching on the ways Greimassian semiotics is congruent with – and perhaps supported by – recent neurological understandings of sensate experience.
APA, Harvard, Vancouver, ISO, and other styles
50

Roszko, Danuta, and Roman Roszko. "Projekcja znaczeń w wielojęzycznych korpusach wraz z przykładami jej zastosowania w badaniach korpusowych nad językiem polskim." Slavica Wratislaviensia 175 (September 6, 2022): 25–36. http://dx.doi.org/10.19195/0137-1150.175.2.

Full text
Abstract:
The Institute of Slavic Studies of the Polish Academy of Sciences, as part of the CLARIN-PL consortium, is engaged in the creation of multilingual corpora of Slavic and Baltic languages. Some of the planned corpora have already been made available in the dSpace CLARIN-PL repository and in the KonText online browser. One of these corpora is the Polish–Lithuanian corpus, which in this article serves to illustrate the possibilities of applying CLARIN-PL corpora in research into the Polish language. The Lithuanian language has an archaic structure in comparison with Polish. Lexemes are characterised by a clearer structure. There are regular connections between grammatical form and meaning, with a small number of exceptions. These features of Lithuanian ensure that the projection of strongly formalised meanings onto Polish-language resources is a straightforward operation, leading to a significant narrowing of Polish-language resources, which can be analysed in the context of unequivocal interpretations based on Lithuanian meanings.
APA, Harvard, Vancouver, ISO, and other styles