Scientific literature on the topic "Grammatical formalisms"


Below are thematic lists of journal articles, books, theses, conference papers, and other academic sources on the topic "Grammatical formalisms".


Journal articles on the topic "Grammatical formalisms"

1

Carlson, Lauri, and Krister Linden. "Unification as a Grammatical Tool". Nordic Journal of Linguistics 10, no. 2 (December 1987): 111–36. http://dx.doi.org/10.1017/s033258650000161x.

Abstract:
The present paper is an introduction to unification as a formalism for writing grammars for natural languages. The paper is structured as follows. Section 1 briefly describes the history and the current scene of unification-based grammar formalisms. Sections 2–3 describe the basic design of current formalisms. Section 4 constitutes a tutorial introduction to a representative unification-based grammar formalism, the D-PATR system of Karttunen (1986). Sections 5–6 consider extensions of the unification formalism and its limitations. Section 7 examines implementation questions and addresses the question of the computational complexity of unification. The paper closes with some notes on terminology.
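Since unification of feature structures is the core operation behind the formalisms surveyed here, a small illustrative sketch may help readers outside the field. The Python function below is a toy written for this list, not code from D-PATR or any cited system; it unifies feature structures represented as nested dictionaries and fails on conflicting atomic values. Real unification formalisms additionally handle shared (reentrant) substructures, which this sketch omits.

```python
def unify(fs1, fs2):
    """Unify two feature structures given as nested dicts.

    Returns the most general structure compatible with both,
    or None if they contain conflicting atomic values.
    """
    if fs1 == fs2:
        return fs1
    if not (isinstance(fs1, dict) and isinstance(fs2, dict)):
        return None  # conflicting atoms, e.g. 'sg' vs 'pl'
    result = dict(fs1)
    for feat, val in fs2.items():
        if feat in result:
            sub = unify(result[feat], val)
            if sub is None:
                return None  # clash somewhere below this feature
            result[feat] = sub
        else:
            result[feat] = val
    return result

# An NP underspecified for number, unified with a plural subject slot:
np = {"cat": "NP", "agr": {"person": 3}}
subj = {"cat": "NP", "agr": {"person": 3, "number": "pl"}}
print(unify(np, subj))   # {'cat': 'NP', 'agr': {'person': 3, 'number': 'pl'}}
print(unify({"agr": {"number": "sg"}}, subj))  # None (number clash)
```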
2

Ranta, Aarne. "Grammatical Framework". Journal of Functional Programming 14, no. 2 (January 22, 2004): 145–89. http://dx.doi.org/10.1017/s0956796803004738.

Abstract:
Grammatical Framework (GF) is a special-purpose functional language for defining grammars. It uses a Logical Framework (LF) for a description of abstract syntax, and adds to this a notation for defining concrete syntax. GF grammars themselves are purely declarative, but can be used both for linearizing syntax trees and parsing strings. GF can describe both formal and natural languages. The key notion of this description is a grammatical object, which is not just a string, but a record that contains all information on inflection and inherent grammatical features such as number and gender in natural languages, or precedence in formal languages. Grammatical objects have a type system, which helps to eliminate run-time errors in language processing. In the same way as a LF, GF uses dependent types in abstract syntax to express semantic conditions, such as well-typedness and proof obligations. Multilingual grammars, where one abstract syntax has many parallel concrete syntaxes, can be used for reliable and meaning-preserving translation. They can also be used in authoring systems, where syntax trees are constructed in an interactive editor similar to proof editors based on LF. While being edited, the trees can simultaneously be viewed in different languages. This paper starts with a gradual introduction to GF, going through a sequence of simpler formalisms till the full power is reached. The introduction is followed by a systematic presentation of the GF formalism and outlines of the main algorithms: partial evaluation and parser generation. The paper concludes by brief discussions of the Haskell implementation of GF, existing applications, and related work.
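To make the idea of one abstract syntax with several parallel concrete syntaxes more concrete, here is a deliberately tiny Python sketch; the constructors, lexicon, and fixed word order are invented for illustration and are far simpler than actual GF concrete syntax, which also records inflection and agreement features.

```python
from dataclasses import dataclass

# Abstract syntax: trees built from typed constructors.
@dataclass
class Pred:          # Pred : Item -> Quality -> Phrase
    item: str        # name of an Item constructor
    quality: str     # name of a Quality constructor

# Concrete syntaxes: one linearization table per language (invented lexicon).
english = {"This": "this", "Wine": "wine", "Expensive": "expensive", "copula": "is"}
french  = {"This": "ce",   "Wine": "vin",  "Expensive": "cher",      "copula": "est"}

def linearize(tree: Pred, concrete: dict) -> str:
    # Word order is fixed here; a real concrete syntax could also permute it.
    return " ".join([concrete["This"], concrete[tree.item],
                     concrete["copula"], concrete[tree.quality]])

tree = Pred(item="Wine", quality="Expensive")
print(linearize(tree, english))  # this wine is expensive
print(linearize(tree, french))   # ce vin est cher
```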
3

Keller, Bill. "Formalisms for grammatical knowledge representation". Artificial Intelligence Review 6, no. 4 (1992): 365–81. http://dx.doi.org/10.1007/bf00123690.

4

Shea, Kristina, and Jonathan Cagan. "Languages and semantics of grammatical discrete structures". Artificial Intelligence for Engineering Design, Analysis and Manufacturing 13, no. 4 (September 1999): 241–51. http://dx.doi.org/10.1017/s0890060499134012.

Abstract:
Applying grammatical formalisms to engineering problems requires consideration of spatial, functional, and behavioral design attributes. This paper explores structural design languages and semantics for the generation of feasible and purposeful discrete structures. In an application of shape annealing, a combination of grammatical design generation and search, to the generation of discrete structures, rule syntax and semantics are used to model desired relations between structural form and function as well as to control design generation. Explicit domain knowledge is placed within the grammar through rule and syntax formulation, resulting in the generation of only those forms that make functional sense and adhere to preferred visual styles. Design interpretation, or semantics, is then used to select forms that meet functional and visual goals. The distinction between syntax, used in grammar rules to explicitly drive geometric design, and semantics, used in design interpretation to implicitly guide geometric form, is shown. Overall, the designs presented show the validity of applying a grammatical formalism to an engineering design problem and illustrate a range of possibilities for modeling functional and visual design criteria.
5

Laporte, Éric. "Reduction of lexical ambiguity". Ambiguity 24, no. 1 (December 31, 2001): 67–103. http://dx.doi.org/10.1075/li.24.1.05lap.

Abstract:
We examine various issues faced during the elaboration of lexical disambiguators, e.g. issues related to the linguistic analyses underlying disambiguators, and we exemplify these issues with grammatical constraints. We also examine computational problems and show how they are connected with linguistic problems: the influence of the granularity of tagsets, the definition of realistic and useful objectives, and the construction of the data required for the reduction of ambiguity. We show why a formalism is required for automatic ambiguity reduction, we analyse its function and we present a typology of such formalisms.
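As a rough illustration of what rule-based ambiguity reduction looks like in practice, the following toy Python sketch keeps a set of candidate tags per word and lets a grammatical constraint remove candidates without ever forcing a single choice; the words, tagset, and the single constraint are invented for the example and do not come from the paper.

```python
# Each word starts with every tag its dictionary allows.
candidates = {
    0: ("the",   {"DET"}),
    1: ("can",   {"NOUN", "VERB", "AUX"}),
    2: ("holds", {"VERB"}),
}

def apply_constraint(cands):
    """Invented constraint: a word directly preceded by an unambiguous
    determiner cannot be a finite verb or an auxiliary here."""
    out = dict(cands)
    for i, (word, tags) in cands.items():
        prev = cands.get(i - 1)
        if prev and prev[1] == {"DET"}:
            reduced = tags - {"VERB", "AUX"}
            if reduced:          # never remove the last remaining candidate
                out[i] = (word, reduced)
    return out

print(apply_constraint(candidates))
# 'can' keeps only NOUN; the other words are left untouched
```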
6

Wedekind, Jürgen, and Ronald M. Kaplan. "Tractable Lexical-Functional Grammar". Computational Linguistics 46, no. 3 (November 2020): 515–69. http://dx.doi.org/10.1162/coli_a_00384.

Abstract:
The formalism for Lexical-Functional Grammar (LFG) was introduced in the 1980s as one of the first constraint-based grammatical formalisms for natural language. It has led to substantial contributions to the linguistic literature and to the construction of large-scale descriptions of particular languages. Investigations of its mathematical properties have shown that, without further restrictions, the recognition, emptiness, and generation problems are undecidable, and that they are intractable in the worst case even with commonly applied restrictions. However, grammars of real languages appear not to invoke the full expressive power of the formalism, as indicated by the fact that algorithms and implementations for recognition and generation have been developed that run—even for broad-coverage grammars—in typically polynomial time. This article formalizes some restrictions on the notation and its interpretation that are compatible with conventions and principles that have been implicit or informally stated in linguistic theory. We show that LFG grammars that respect these restrictions, while still suitable for the description of natural languages, are equivalent to linear context-free rewriting systems and allow for tractable computation.
7

Frank, Robert, and Tim Hunter. "Variation in mild context-sensitivity". Formal Language Theory and its Relevance for Linguistic Analysis 3, no. 2 (November 5, 2021): 181–214. http://dx.doi.org/10.1075/elt.00033.fra.

Abstract:
Aravind Joshi famously hypothesized that natural language syntax was characterized (in part) by mildly context-sensitive generative power. Subsequent work in mathematical linguistics over the past three decades has revealed surprising convergences among a wide variety of grammatical formalisms, all of which can be said to be mildly context-sensitive. But this convergence is not absolute. Not all mildly context-sensitive formalisms can generate exactly the same stringsets (i.e. they are not all weakly equivalent), and even when two formalisms can both generate a certain stringset, there might be differences in the structural descriptions they use to do so. It has generally been difficult to find cases where such differences in structural descriptions can be pinpointed in a way that allows linguistic considerations to be brought to bear on choices between formalisms, but in this paper we present one such case. The empirical pattern of interest involves wh-movement dependencies in languages that do not enforce the wh-island constraint. This pattern draws attention to two related dimensions of variation among formalisms: whether structures grow monotonically from one end to another, and whether structure-building operations are conditioned by only a finite amount of derivational state. From this perspective, we show that one class of formalisms generates the crucial empirical pattern using structures that align with mainstream syntactic analysis, and another class can only generate that same string pattern in a linguistically unnatural way. This is particularly interesting given that (i) the structurally inadequate formalisms are strictly more powerful than the structurally adequate ones from the perspective of weak generative capacity, and (ii) the formalisms based on derivational operations that appear on the surface to align most closely with the mechanisms adopted in contemporary work in syntactic theory (merge and move) are the ones that fail to align with the analyses proposed in that work when the phenomenon is considered in full generality.
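For readers unfamiliar with the weak-generative-capacity comparisons invoked here, a standard benchmark stringset is a^n b^n c^n, which lies beyond context-free power but within reach of mildly context-sensitive formalisms such as TAG. The short membership test below is only meant to make the notion of a stringset concrete; it says nothing about the structural descriptions a formalism would assign.

```python
def in_anbncn(s: str) -> bool:
    """Membership test for {a^n b^n c^n : n >= 0}, a classic
    non-context-free stringset generable by TAG-like formalisms."""
    n = len(s) // 3
    return len(s) == 3 * n and s == "a" * n + "b" * n + "c" * n

assert in_anbncn("")             # n = 0
assert in_anbncn("aaabbbccc")    # n = 3
assert not in_anbncn("aabbbcc")  # counts do not match
assert not in_anbncn("abcabc")   # right counts, wrong order
```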
8

Barkalova, Petya. "ГРАМАТИЧЕСКИ ФОРМАЛИЗМИ В ПОМОЩ НА ГРАМАТИКОГРАФИЯТА / GRAMMATICAL FORMALISMS IN AID OF GRAMMATICOGRAPHY". Journal of Bulgarian Language 68, PR (September 10, 2021): 224–49. http://dx.doi.org/10.47810/bl.68.21.pr.15.

Abstract:
This paper presents some of the results of a larger study dedicated to the path of grammatical knowledge from the ancient Greek-Byzantine grammatical treatises to the Eastern Orthodox Slavic world and to the Bulgarian grammatical tradition of the National Revival period. The focus is on the syntactic element of the grammatical description. A formal notation of the sentence sections in the grammars of Avram Mrazović, Yuriy Venelin and Ivan Bogorov is included for the purpose of comparing the "art and craft of writing grammars". Grammatical formalisms have proved to be a reliable tool in the analytical procedures of the phylogenetic study of the Bulgarian syntactic tradition, as well as in the configurational analysis of the sentence, which departs from the practice of asking "questions" and examines the fundamental property of "syntactic function" through the prism of modern grammar. Keywords: Bulgarian syntactic tradition, grammaticography, grammatical formalisms.
9

Wintner, Shuly, and Uzzi Ornan. "Syntactic Analysis of Hebrew Sentences". Natural Language Engineering 1, no. 3 (September 1995): 261–88. http://dx.doi.org/10.1017/s1351324900000206.

Abstract:
Due to recent developments in the area of computational formalisms for linguistic representation, the task of designing a parser for a specified natural language has now shifted to the problem of designing its grammar in certain formal ways. This paper describes the results of a project whose aim was to design a formal grammar for modern Hebrew. Such a formal grammar has never been developed before. Since most of the work on grammatical formalisms was done without regard to Hebrew (or other Semitic languages), we had to choose a formalism that would best fit the specific needs of the language. This part of the project has been described elsewhere. In this paper we describe the details of the grammar we developed. The grammar deals with simple, subordinate and coordinate sentences as well as interrogative sentences. Some structures were dealt with thoroughly, among which are noun phrases, verb phrases, adjectival phrases, relative clauses, object and adjunct clauses; many types of adjuncts; subcategorization of verbs; coordination; numerals, etc. For each phrase the parser produces a description of the structure tree of the phrase as well as a representation of the syntactic relations in it. Many examples of Hebrew phrases are shown, together with the structure the parser assigns to them. In cases where more than one parse is produced, the reasons for the ambiguity are discussed.
10

Koseska-Toszewa, Violetta, and Antoni Mazurkiewicz. "Constructing catalogue of temporal situations". Cognitive Studies | Études cognitives, no. 10 (November 24, 2015): 71–109. http://dx.doi.org/10.11649/cs.2010.004.

Abstract:
The paper aims to create a common basis for the description, comparison, and analysis of natural languages. As the subject of comparison we have chosen the temporal structures of several languages. For such a choice there exists a well-suited tool for describing basic temporal phenomena, namely the ordering of states and events in time, certainty and uncertainty, the independence of the histories of separate objects, necessity and possibility. This tool is the Petri net formalism, which seems well suited to expressing the above-mentioned phenomena. Petri nets are built from three primitive notions: states, events that begin or end the states, and a so-called flow relation indicating the succession of states and events. These simple constituents give rise to many possibilities for representing temporal phenomena; it turns out that such representations are sufficient for many (clearly, not necessarily all) temporal situations appearing in natural languages. In the description formalisms used until now there is no possibility of expressing such phenomena as temporal dependencies in compound statements, or the combination of temporality and modality. Moreover, using these formalisms one cannot distinguish between two different sources of the speaker's uncertainty while describing reality: one due to the speaker's lack of knowledge of what is going on in the outside world, the other due to the objective impossibility of foreseeing the ways in which some conflict situations will be (or already have been) resolved. The Petri net formalism seems perfectly suited to such differentiations. Three main description principles govern this paper. The first is that assigning meaning to names of grammatical structures in different languages may lead to misunderstanding: two grammatical structures with apparently close names may describe different realities, and some grammatical terms used in one language may be absent and not understandable in another. This leads us to assign meanings to situations rather than to the linguistic forms used to express them. The second principle is to limit the discussed issues to such a piece of reality as can be described precisely. The third is to avoid introducing into the described reality information that is not explicitly conveyed by linguistic means. The authors try to follow these principles in the present paper. The paper is organized as follows. First, some samples of situations related to the present tense are given, together with examples of their expression in four languages: English (as a reference language) and three Slavic languages, representing the South Slavic (Bulgarian), West Slavic (Polish), and East Slavic (Russian) groups. The following parts of the paper are constructed within the same framework, supplying samples of the use of past tenses and, finally, future tenses and modalities. The formal tools used for description are introduced stepwise, according to the needs of the described reality. They are mainly Petri nets, additionally equipped with inscriptions or labellings in order to keep proper assignments of description units to described objects.
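To make the Petri-net vocabulary used in this abstract (states, events, and the flow relation) concrete, here is a minimal token-game sketch in Python; the places and events are invented for illustration, and the sketch omits the inscriptions and labellings the authors add on top of plain nets.

```python
# A marked Petri net: places hold tokens, transitions are events,
# and the flow relation says which places feed and receive each event.
marking = {"speaker_silent": 1, "speaker_talking": 0}

transitions = {
    "start_talking": {"in": ["speaker_silent"], "out": ["speaker_talking"]},
    "stop_talking":  {"in": ["speaker_talking"], "out": ["speaker_silent"]},
}

def enabled(event):
    # An event can occur only if every input place holds a token.
    return all(marking[p] > 0 for p in transitions[event]["in"])

def fire(event):
    assert enabled(event), f"{event} is not enabled in this marking"
    for p in transitions[event]["in"]:
        marking[p] -= 1
    for p in transitions[event]["out"]:
        marking[p] += 1

fire("start_talking")   # the event ends one state and begins another
print(marking)          # {'speaker_silent': 0, 'speaker_talking': 1}
```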

Theses on the topic "Grammatical formalisms"

1

Morey, Mathieu. "Étiquetage grammatical symbolique et interface syntaxe-sémantique des formalismes grammaticaux lexicalisés polarisés". PhD thesis, Université de Lorraine, 2011. http://tel.archives-ouvertes.fr/tel-00640561.

Abstract:
This thesis deals with the syntactic and semantic analysis of sentences, using a polarized lexicalized grammatical formalism for syntactic analysis and taking interaction grammars as its working example. In lexicalized grammatical formalisms, polarities make it possible to control the composition of syntactic structures explicitly. We first exploit the need for composition expressed by certain polarities to define a weak notion of grammar reduction applicable to any polarized lexicalized grammar. We then study the first phase of syntactic analysis in lexicalized formalisms: grammatical tagging. Here again we exploit the compositional need of certain polarities to design three symbolic methods for filtering grammatical taggings, which we implement on automata. Finally, we address the syntax-semantics interface of lexicalized formalisms. We show how using graph rewriting as a model of computation makes it possible, in practice, to use rich syntactic structures to compute underspecified semantic representations.
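A minimal sketch of the counting idea behind polarity-based filtering of grammatical taggings, under invented categories and lexical entries (this is a simplified global check, not the automaton-based methods developed in the thesis): each candidate tag provides (+1) or expects (-1) syntactic resources per category, and a tagging can only lead to a parse if, for every category, the polarities cancel out, with one surplus root.

```python
from itertools import product

# Hypothetical polarized lexicon: each word has one or more candidate tags,
# and each tag maps grammatical categories to polarity counts
# (+1 = provides that resource, -1 = expects it).
lexicon = {
    "John":  [{"NP": +1}],
    "loves": [{"S": +1, "NP": -2},   # transitive-verb reading
              {"NP": +1}],           # invented nominal reading, to show filtering
    "Mary":  [{"NP": +1}],
}

def neutralizable(tags, root="S"):
    """Necessary (not sufficient) global condition for a parse to exist:
    every category's polarities cancel out, except for one surplus root."""
    totals = {}
    for tag in tags:
        for cat, pol in tag.items():
            totals[cat] = totals.get(cat, 0) + pol
    expected = {root: 1}
    return all(totals.get(c, 0) == expected.get(c, 0)
               for c in set(totals) | set(expected))

sentence = ["John", "loves", "Mary"]
for tagging in product(*(lexicon[w] for w in sentence)):
    verdict = "kept" if neutralizable(tagging) else "filtered out"
    print(list(tagging), verdict)
```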
2

De Lagane de Malezieux, Guillaume. "Contributions à l'ingénierie multilingue et sémantique des exigences en système de systèmes". Electronic thesis or dissertation, Université Grenoble Alpes, 2023. http://www.theses.fr/2023GRALM061.

Abstract:
Our research concerns the processing of large specifications in the field of systems of systems. A specification is a structured set of requirements. Current tools cannot detect cases of inconsistency, incompleteness, or even incorrectness (due to ambiguities) or difficulty of comprehension (due to the complexity of the statements), which pose many problems and can even cause disasters during realization and deployment. After reviewing the state of the art on requirements processing systems (RPS), we propose an architecture implementing NLP techniques, interactive meaning elicitation, content extraction into one or more ontologies, and semantic computation. A cross-lingual representation (UNL graph) of each requirement is obtained through a multiple factorizing analysis followed by an interactive disambiguation (ID) step that improves on the technique prototyped in the LIDIA project [Blanchon & al. 1994]: identification of ambiguities, automatic computation of a question tree, then user-initiated resolution, with a configurable strategy and more ergonomic interfaces (word clouds for lexical ambiguities, direct manipulation for structural ambiguities). It is then possible to create and store annotations that can be visualized in a self-explanatory format, so that communication is meaning-guaranteed. Once correct UNL graphs have been obtained, content is extracted in the form of logical statements (in OWL) in a frame ontology and in a domain ontology, and logical (inferential) computations can detect cases of inconsistency or incompleteness. A recurring and difficult problem is the deployment of solutions of this type (NLP + AI) as an addition to the already heavy systems for managing sets of requirements (under DOORS, RQS, or SBOCS). For this purpose, we propose two environments. (1) UNSEL-INTER implements the resources and algorithms of UNSEL-DEVLING and UNSEL-DEVSEM, the disambiguation dialogue preliminary to extraction, and then any interaction launched by UNSEL-SEM when inconsistencies or incompleteness are detected. (2) UNSEL-OPER is a front end interacting with the content of an RPS, invoking the linguistic-semantic processing through calls to UNSEL-INTER, storing the results (UNL graphs, extracted logical content, translations, self-explanatory form) in a database referring to the RPS requirements, and notifying the RPS of the reformulations recommended by UNSEL-SEM. The complete UNSEL prototype was validated on part of the SSS specification managed by SBOCS. The prospects are, in addition to scaling up and operationalization in an industrial setting, adaptation to other applications such as high-quality interactive MT, the construction of meaning-guaranteed presentations of texts, and answering targeted questions on large documents such as annual company reports. This work has also led to the introduction of an innovative research direction, to be pursued in the future: the machine-aided discovery, in a corpus, of ambiguity types not yet described, and the automatic proposal of corresponding ID rules.
3

Narayan, Shashi. "Generating and simplifying sentences". Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0166/document.

Abstract:
Depending on the input representation, this dissertation investigates issues from two classes: meaning representation (MR) to text and text-to-text generation. In the first class (MR-to-text generation, "Generating Sentences"), we investigate how to make symbolic grammar-based surface realisation robust and efficient. We propose an efficient approach to surface realisation using an FB-LTAG and taking shallow dependency trees as input. Our algorithm combines techniques and ideas from the head-driven and lexicalist approaches. In addition, the input structure is used to filter the initial search space using a concept called local polarity filtering, and to parallelise processing. To further improve robustness, we propose two error mining algorithms: first, an algorithm for mining dependency trees rather than sequential data, and second, an algorithm that structures the output of error mining into a tree so as to represent errors in a more meaningful way. We show that our realisers, together with these error mining algorithms, improve both efficiency and coverage by a wide margin. In the second class (text-to-text generation, "Simplifying Sentences"), we argue for using deep semantic representations (rather than syntax- or SMT-based approaches) to improve the sentence simplification task. We use Discourse Representation Structures as the deep semantic representation of the input. We propose two methods: a supervised hybrid approach to simplification using deep semantics and SMT, with state-of-the-art results, and an unsupervised approach to simplification based on the comparable Wikipedia corpus, with results competitive with state-of-the-art systems.

Book chapters on the topic "Grammatical formalisms"

1

Tayeb-bey, S., and A. S. Saidi. "Grammatical formalism for document understanding system: From document towards HTML text". In Advances in Document Image Analysis, 165–75. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63791-5_12.

2

Seraku, Tohru, and Akira Ohtani. "The Word Order Flexibility in Japanese Novels". In Computational and Cognitive Approaches to Narratology, 213–44. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-5225-0432-0.ch008.

Abstract:
In Japanese, whose basic word order is S(ubject)-O(bject)-V(erb), non-verbal elements may be permuted, with the restriction that such elements cannot occur post-verbally. This restriction, however, does not apply to narrative discourse, especially conversations in novels. This discourse phenomenon with post-verbal elements is called "postposing." This chapter reveals several grammatical properties of postposing based on Japanese novels and presents an explicit account within an integrated theory of grammar. More precisely, the narrative data indicate that the syntactic type of postposed elements is quite diverse and that, contrary to the prevalent opposing view, Japanese postposing is not restricted to matrix clauses. These issues are addressed within Dynamic Syntax, a cognitively realistic grammar formalism which specifies a set of constraints on building up a structure online. This architectural design formalises the incremental process by which the reader gradually updates an interpretation while parsing a sentence with postposing in narrative discourse.
3

Farrell, Patrick. "Relational Grammar". In Grammatical Relations, 112–34. Oxford: Oxford University Press, 2005. http://dx.doi.org/10.1093/oso/9780199264018.003.0003.

Abstract:
Relational Grammar (RG) (Perlmutter 1983, Perlmutter and Rosen 1984, Blake 1990, Postal and Joseph 1990) is a theory of syntax built on the idea that grammatical relations such as subject, direct object, and indirect object are primitive (i.e. basic and indefinable) concepts in terms of which clause structure in all languages is organized. Its use of multiple levels of clause structure, whereby an NP with the patient role, for example, can be the direct object at an initial or "logical" level and the subject at the final level, makes it well suited to describing the voice and relational-alternation constructions that are so pivotal to an understanding of the syntactic phenomena of most languages, as discussed in Section 2.2, as well as such ubiquitous phenomena as raising to subject, possessor raising, causativization, noun incorporation, quantifier float, and control of adverbial phrases and complement clauses, which are generally constrained in terms of relational categories such as subject and direct object. Coupled with its general user-friendliness, that is, its relative lack of obscure and complex formalism, its cross-linguistic adaptability has made it particularly popular among fieldworkers, who have produced accessible and theoretically informed descriptive work of enduring interest on a wide array of typologically and genetically diverse languages.
4

Culicover, Peter W. "Constructions". In Language Change, Variation, and Universals, 16–40. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198865391.003.0002.

Abstract:
This chapter sets out an approach to grammatical description in which the notions of Chapter 1 can be formally implemented. The account is a constructional one. The formalism is outlined, it is shown how it is used to account for grammatical phenomena, and its utility in describing variation and change is highlighted.
5

Michelman, Frank I. "Legal Formalism and the Rule of Law". In Constitutional Essentials, 173–C12.N19. New York: Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780197655832.003.0013.

Abstract:
Used in a political (as opposed to what we may call a "metaphysical") sense, the expression "the rule of law" names not a cosmic force to which we are beholden like it or not, but a type of desired or ideal governmental practice comprising elements of authority, obligation, restraint, regularity, fairness, evenhandedness, transparency, and accountability. In a parallel use, "legal formalism" names a preference respecting modes of expression of legal norms. Formalist-minded lawyers and judges seek rendition of applicable legal norms wherever possible in the grammatical mode of "rule" rather than "principle" or "standard." Prompted by a recent suggestion that attractions to legal formalism and to rule-of-law metaphysics, both deemed problematic for democracy, are "quintessentially liberal," Chapter 12 asks to what extent those tendencies appear in the political-liberal constitutional thought of John Rawls. The answer, briefly, is that Rawls is not a constitutional-legal formalist, and that his commendations of a due regard for the rule of law as a liberal constitutional essential all refer to a political-not-metaphysical notion with which democracy-minded critics should have no quarrel. A trace of rule-of-law metaphysics may nevertheless be said to remain in Rawls's constitution-centered liberal principle of legitimacy. Democracy-minded critics might take issue with that, but the difference to their own models may not lie exactly where they sometimes seem to suppose.
6

Schmalz, Veronica Juliana, and Frederik Cornillie. "Towards truly intelligent and personalized ICALL systems using Fluid Construction Grammar". In Proceedings of the XXIst International CALL Research Conference, 169–79. Castledown Publishers, 2022. http://dx.doi.org/10.29140/9781914291050-24.

Abstract:
Intelligent Computer-Assisted Language Learning (ICALL) aims to design effective systems for the analysis of learners’ production in a target language ensuring both successful learning and motivated learners. Most of the existing systems, however, focus extensively on the form rather than on the meaning of language. To create effective systems facilitating personalized language learning both form and meaning should be considered. The reason behind this is that language is a continuous flow of information passing from one user or agent to another, both during comprehension and production. This becomes even more relevant in the case of second or foreign languages (L2), where certain linguistic choices may be dictated by inexact form-meaning links construed by the learner. In this research project, we focus on the analysis of the spoken production of adult learners of German, taking argument and information structure as a use case. We use Fluid Construction Grammar as a formalism, which captures relevant linguistic aspects at both the syntactic level (form) and the semantic level (meaning). Its particularity lies in the possibility of closely monitoring bidirectional form-meaning interactions starting from constructions of different nature modeled in an extensively customizable way. Our work is in progress, and we focus on ways to provide helpful feedback on meaning. German displays a rather articulated grammar and obtaining insights not only on its formal but also on its semantic correctness could offer important steps forward for Intelligent CALL. The design of computational systems for Intelligent CALL that can effectively support L2 learners in personalized learning requires a grammatical framework that is computationally effective and offers linguistic and acquisitional perspicuity (Schulze & Penner, 2008).

Conference papers on the topic "Grammatical formalisms"

1

Manaster-Ramer, Alexis, and Wlodek Zadrozny. "Expressive power of grammatical formalisms". In the 13th conference. Morristown, NJ, USA: Association for Computational Linguistics, 1990. http://dx.doi.org/10.3115/991146.991181.

2

Joshi, Aravind K. "Unification and some new grammatical formalisms". In the 1987 workshop. Morristown, NJ, USA: Association for Computational Linguistics, 1987. http://dx.doi.org/10.3115/980304.980314.

3

Ranta, Aarne. "Grammatical Framework: an Interlingual Grammar Formalism". In Proceedings of the 14th International Conference on Finite-State Methods and Natural Language Processing. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/w19-3101.

4

Vijay-Shanker, K., David J. Weir, and Aravind K. Joshi. "Characterizing structural descriptions produced by various grammatical formalisms". In the 25th annual meeting. Morristown, NJ, USA: Association for Computational Linguistics, 1987. http://dx.doi.org/10.3115/981175.981190.

5

McDonald, David D., and James D. Pustejovsky. "TAG's as a grammatical formalism for generation". In the workshop. Morristown, NJ, USA: Association for Computational Linguistics, 1986. http://dx.doi.org/10.3115/1077146.1077166.

6

McDonald, David D., and James D. Pustejovsky. "TAGs as a grammatical formalism for generation". In the 23rd annual meeting. Morristown, NJ, USA: Association for Computational Linguistics, 1985. http://dx.doi.org/10.3115/981210.981222.

7

Bonfante, Guillaume, Bruno Guillaume, and Guy Perrier. "Polarization and abstraction of grammatical formalisms as methods for lexical disambiguation". In the 20th international conference. Morristown, NJ, USA: Association for Computational Linguistics, 2004. http://dx.doi.org/10.3115/1220355.1220399.

8

Kurariya, Pavan, Prashant Chaudhary, Jahnavi Bodhankar, Lenali Singh, and Ajai Kumar. "Unveiling the Power of TAG Using Statistical Parsing for Natural Languages". In 4th International Conference on NLP Trends & Technologies. Academy & Industry Research Collaboration, 2023. http://dx.doi.org/10.5121/csit.2023.131407.

Abstract:
The revolution in Artificial Intelligence (AI) began when machines could decipher enigmatic symbols concealed within messages. Subsequently, with the progress of Natural Language Processing (NLP), machines gained the capacity to understand human language. Tree Adjoining Grammar (TAG) has become a powerful grammatical formalism for processing large-scale grammars. However, TAG mostly relies on grammars created by language experts, and because of structural ambiguity in natural languages the computational complexity of TAG parsing is very high, O(n^6). We observed that the rule-based approach has serious flaws: first, language evolves over time, and it is impossible to create a grammar extensive enough to represent every structure of a language in the real world; second, it takes too much time and too many language resources to develop a practical solution. These difficulties motivated us to explore an alternative approach instead of relying completely on the rule-based method. In this paper, we propose a statistical parsing algorithm for natural languages using the TAG formalism, in which the parser makes crucial use of a data-driven model for identifying the syntactic dependencies of complex structures. We observed that using a probabilistic model along with limited training data can significantly improve both the quality and the performance of a TAG parser. We also demonstrate that the new parser outperforms the previous rule-based parser on a given sample corpus. Our experiments on several Indian languages further support the claim that this approach may be a solution for problems that require rich structural analysis of a corpus and the construction of syntactic dependencies for any natural language, without depending heavily on the manual process of creating a grammar. Finally, we present results of our ongoing research, in which a probability model is applied to the selection of appropriate adjunctions at any given node of elementary trees, and chart states are shared across derivations.
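The abstract contrasts purely rule-based TAG parsing, whose worst-case complexity is O(n^6), with a data-driven model that scores competing analyses. As a simplified stand-in for that idea (a plain Viterbi CKY over a toy PCFG in Chomsky normal form, not the TAG parser described in the paper), the sketch below computes the probability of the best analysis instead of enumerating every grammar-licensed one; the grammar and probabilities are invented.

```python
from collections import defaultdict

# Toy PCFG in Chomsky normal form; rule probabilities are invented.
binary = [            # (parent, left child, right child, probability)
    ("S",  "NP", "VP", 1.0),
    ("VP", "V",  "NP", 0.8),
    ("VP", "VP", "PP", 0.2),
    ("NP", "NP", "PP", 0.3),
    ("PP", "P",  "NP", 1.0),
]
lexical = {            # word -> list of (category, probability)
    "John": [("NP", 0.7)], "Mary": [("NP", 0.7)],
    "saw":  [("V", 1.0)],  "telescopes": [("NP", 0.3)],
    "with": [("P", 1.0)],
}

def viterbi_cky(words):
    """Return the probability of the best S analysis (CKY with max, not sum)."""
    n = len(words)
    best = defaultdict(float)                 # (i, j, category) -> best probability
    for i, w in enumerate(words):
        for cat, p in lexical[w]:
            best[i, i + 1, cat] = max(best[i, i + 1, cat], p)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for parent, left, right, p in binary:
                    score = p * best[i, k, left] * best[k, j, right]
                    best[i, j, parent] = max(best[i, j, parent], score)
    return best[0, n, "S"]

print(viterbi_cky("John saw Mary with telescopes".split()))  # ~0.0353
```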
9

Fu, Z., and A. Y. C. Nee. "Interpreting Feature Viewpoints for Concurrent Engineering". In ASME 1994 International Computers in Engineering Conference and Exhibition and the ASME 1994 8th Annual Database Symposium collocated with the ASME 1994 Design Technical Conferences. American Society of Mechanical Engineers, 1994. http://dx.doi.org/10.1115/cie1994-0425.

Abstract:
Concurrent (or simultaneous) engineering has recently been proposed as a potential means of improving product development practice. It requires product life-cycle aspects such as manufacturing requirements to be considered while a part is being designed, so that feedback on manufacturability, assemblability, and so on can be provided to the designers. Supporting this requires the integration of geometric models, analysis and synthesis tools, and domain knowledge while the design is in progress. In the past few years, the concept of features has received significant attention in the context of design and manufacturing automation. However, the application of features is currently limited, mainly because of their domain-dependent nature. A crucial problem has been the interpretation of multiple feature viewpoints, in particular the conversion among feature representations, that is, supporting reasoning about feature-based representations of a product design from a specific perspective and their interpretation. In this paper, important engineering perspectives and related feature-based representations supporting concurrent design and manufacturing are identified. A methodology for interpreting different feature representations is proposed, based on a coupling between a grammatical formalism and knowledge-based inference. A case study applying this methodology to the conversion from a design-feature-based representation into representations suitable for machining process planning is reported.