Scientific literature on the topic "Semantic Adequacy"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles

Choose a source:

Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Semantic Adequacy."

Next to each source in the reference list there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Journal articles on the topic "Semantic Adequacy"

1

Wybraniec-Skardowska, Urszula. "On Language Adequacy." Studies in Logic, Grammar and Rhetoric 40, no. 1 (March 1, 2015): 257–92. http://dx.doi.org/10.1515/slgr-2015-0013.

Abstract:
The paper concentrates on the problem of adequate reflection of fragments of reality via expressions of language and inter-subjective knowledge about these fragments, called here, in brief, language adequacy. This problem is formulated in several aspects, the most general one being: the compatibility of the language syntax with its bi-level semantics: intensional and extensional. In this paper, various aspects of language adequacy find their logical explication on the ground of the formal-logical theory of syntax T of any categorial language L generated by the so-called classical categorial grammar, and also on the ground of its extension to the bi-level, intensional and extensional semantic-pragmatic theory ST for L. In T, according to the token-type distinction of Ch. S. Peirce, L is characterized first as a language of well-formed expression-tokens (wfe-tokens), material, concrete objects, and then as a language of wfe-types, abstract objects, classes of wfe-tokens. In ST the semantic-pragmatic notions of meaning and interpretation for wfe-types of L of intensional semantics and the notion of denotation of extensional semantics for wfe-types and constituents of knowledge are formalized. These notions allow formulating a postulate (an axiom of categorial adequacy) from which follow all the most important conditions of the language adequacy, including the above, and a structural one connected with three principles of compositionality.
2

Suárez, Mauricio. "The Semantic View, Empirical Adequacy, and Application." Crítica (México D. F. En línea) 37, no. 109 (December 4, 2005): 29–63. http://dx.doi.org/10.22201/iifs.18704905e.2005.449.

Abstract:
It is widely accepted in contemporary philosophy of science that the domain of application of a theory is typically larger than its explanatory covering power: theories can be applied to phenomena that they do not explain. I argue for an analogous thesis regarding the notion of empirical adequacy. A theory’s domain of application is typically larger than its domain of empirical adequacy: theories are often applied to phenomena from which they receive no empirical confirmation.
3

Kovacs, Thomas. "A Survey of American Speech-Language Pathologists' Perspectives on Augmentative and Alternative Communication Assessment and Intervention Across Language Domains." American Journal of Speech-Language Pathology 30, no. 3 (May 18, 2021): 1038–48. http://dx.doi.org/10.1044/2020_ajslp-20-00224.

Abstract:
Purpose: The aim of the study was to collect information about American speech-language pathologists' preprofessional training, practice, self-perceived competence, adequacy of resources, and interest in continuing education related to augmentative and alternative communication (AAC) assessment and intervention strategies addressing each of the five language domains: semantics, pragmatics, phonology, morphology, and syntax. Method: An anonymous online survey of American speech-language pathologists was conducted. Results: A majority of participants rated their preprofessional training for assessing semantic and pragmatic skills positively. Otherwise, a majority of participants rated preprofessional training for assessment and intervention negatively across language domains. High interest in continuing education opportunities addressing assessment and intervention was found across language domains. A discrepancy between responses to questions addressing semantic and pragmatic skills and responses to questions addressing phonological, morphological, and syntactic skills was consistently found for ratings of preprofessional training, practice, perceived competence, and adequacy of resources. In all cases, higher frequencies of positive ratings were found for questions addressing semantic and pragmatic skills. Conclusions: Improved preprofessional training and continuing education opportunities are needed to support AAC assessment and intervention across language domains. Perspectives and practice patterns reflect a historical emphasis on semantic and pragmatic skills in the external evidence base, even though there are several recent journal articles addressing morphology and syntax in clients who use AAC.
4

Feil, Ruth M. E., and Bo Laursen. "Strukturel semantik og formel leksikalsk repræsentation." HERMES - Journal of Language and Communication in Business 1, no. 2 (July 17, 2015): 77. http://dx.doi.org/10.7146/hjlcb.v1i2.21355.

Abstract:
This paper deals with lexical semantics from a European structuralist point of view. Three fundamentals of the tradition are treated: 1) 'meaning' viewed as 'sense' and not 'reference', 2) the semantic interdependence between words, and 3) componential analysis of lexical meaning. The structuralist approach to lexical semantics is commented upon with respect to its adequacy as a theory for the description of word meaning and to its explicitness in view of a formal representation of word meaning.
5

Kiklewicz, Aleksander, and Dorota Szumska. "Moskiewska szkoła semantyczna a Polska szkoła składni semantycznej: rekonesans metalingwistyczny." Przegląd Wschodnioeuropejski 14, no. 1 (June 26, 2023): 285–318. http://dx.doi.org/10.31648/pw.9038.

Abstract:
The article, as reflected in the title, is an attempt to name the main similarities and differences between the theoretical ideas and the results of empirical research of the two most influential theoretical schools of semantics, i.e. the Moscow Semantic School (MSS) and the Polish School of Semantic Syntax (PSSS). The aim of the comparison is to make it clear that, in spite of a common epistemological ground, which lies in functionalism and an integrated approach to semantics and syntax, MSS and PSSS prioritize the problem of a complex description of the semantics of lexical units in a different way. MSS puts stress on the troubled relationship between semantics and lexicography, focusing on the lexicographical application of semantic analysis, while PSSS examines the sentence-forming formalization of the propositional structure within the scope of the explication of its components. The final conclusion is that, although it would be naive to assume that MSS and PSSS can work hand in hand, it would be reasonable to expect that they can learn from mutual experiences and inspire each other to improve the explanatory adequacy of research results, contributing to the theory of lexical semantics.
6

Safavi, Sarvenaz. "Semiotic Broadening of Sign: A Semiotic Analysis of the Black Color-Sign." Uluslararası Sosyal Bilimler ve Sanat Araştırmaları 2, no. 1 (January 27, 2023): 50–55. http://dx.doi.org/10.32955/neuissar202321675.

Abstract:
The purpose of writing this article is to show the symmetrical function of linguistic semantics and semiotics. This article explores how semantic broadening in natural languages can be generalized to semiotics, the study of signs and symbolic communication. This study demonstrates that signs, whether members of verbal or non-verbal systems, can be studied similarly to semantic broadening. After introducing the process of semantic broadening, the discussion of the function of this process in semiotics is narrowed down to a case study of the black color-sign. The analysis follows a qualitative methodology, collecting data under the simple assumption that semiotic broadening is a superordinate term that can have explanatory adequacy and can be extended to verbal and non-verbal signs.
7

Kyrychuk, Larysa. "Translation Strategies, Methods and Techniques: In Pursuit of Translation Adequacy." Research Trends in Modern Linguistics and Literature 1 (November 22, 2018): 64–80. http://dx.doi.org/10.29038/2617-6696.2018.1.64.80.

Abstract:
The present paper examines the ways of achieving adequacy in translation. The aim of the study is to define and describe the options that the different translators choose while rendering the message of the same source text (ST) and to establish the translation adequacy conditions. The translators’ options are considered in terms of the techniques employed to achieve equivalence between the textual micro units of the original and those of the target text (TT). It is argued that the choice of the translation techniques is determined by the global translation strategy which is seen as a translator’s action plan to reach the functional identification between the ST and TT. In the course of translating the corresponding techniques are used by the translators to fit the local strategies which necessitate the specific ways of dealing with translation challenges. In order to identify formal, semantic and communicative features of the translators’ options we set out the specific task of exploring the notions of translation strategy, translation method, translation technique and translation equivalence and define their significance in achieving accuracy and/or transparency of translation and, eventually, the adequacy of translation. The study is based on the descriptive and comparative analyses of four translation cases (TTs). The examination of the English-Ukrainian correlative units intends to indicate the types of equivalents and point out their contribution to the translation adequacy. One of the important findings to emerge from the study is that translation adequacy may be measured against the TT acceptability which is considered on four levels and involves correlations between structural, semantic and pragmatic equivalents. 
The TT that reaches the first level of acceptability is viewed as the case of low translation adequacy; the TT on the second level of acceptability is seen as the case of near adequate translation; the TT on the third level of acceptability is termed sufficiently adequate translation; the TT on the fourth level of acceptability is defined as the case of complete adequate translation.
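The four-level scheme above amounts to an ordinal mapping from TT acceptability to an adequacy label. A minimal sketch of that mapping; the function name and the numeric encoding of levels are illustrative, not taken from the paper:

```python
# Hypothetical encoding of the four acceptability levels described in
# Kyrychuk's abstract; labels are quoted from the abstract, the
# numeric keys and function name are illustrative only.
ADEQUACY_BY_LEVEL = {
    1: "low translation adequacy",
    2: "near adequate translation",
    3: "sufficiently adequate translation",
    4: "complete adequate translation",
}

def adequacy_label(acceptability_level: int) -> str:
    """Map a TT acceptability level (1-4) to its adequacy label."""
    try:
        return ADEQUACY_BY_LEVEL[acceptability_level]
    except KeyError:
        raise ValueError("acceptability level must be 1, 2, 3, or 4")

print(adequacy_label(3))  # → sufficiently adequate translation
```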
8

Badie, Farshad, and Luis M. Augusto. "The Form in Formal Thought Disorder: A Model of Dyssyntax in Semantic Networking." AI 3, no. 2 (April 20, 2022): 353–70. http://dx.doi.org/10.3390/ai3020022.

Abstract:
Formal thought disorder (FTD) is a clinical mental condition that is typically diagnosable by the speech productions of patients. However, this has been a vexing condition for the clinical community, as it is not at all easy to determine what “formal” means in the plethora of symptoms exhibited. We present a logic-based model for the syntax–semantics interface in semantic networking that can not only explain, but also diagnose, FTD. Our model is based on description logic (DL), which is well known for its adequacy to model terminological knowledge. More specifically, we show how faulty logical form as defined in DL-based Conception Language (CL) impacts the semantic content of linguistic productions that are characteristic of FTD. We accordingly call this the dyssyntax model.
9

Jamali, Narges, and Hossein Vahid Dastjerdi. "Assessment of Semantic Adequacy: English Translations of Persian Official Texts in Focus." Theory and Practice in Language Studies 5, no. 12 (December 13, 2015): 2571. http://dx.doi.org/10.17507/tpls.0512.19.

10

Demyanchuk, Y. I. "The Main Features of Term-Combination Highlighting in Corpus-Applied Translation Studies." Writings in Romance-Germanic Philology, no. 1(50) (October 13, 2023): 53–64. http://dx.doi.org/10.18524/2307-4604.2023.1(50).285550.

Abstract:
The article is dedicated to the analysis of the main features of term-combination highlighting in corpus-applied translation studies. The fundamental theoretical milestones of the emergence of quantitative indicators for identifying term combinations in a text corpus, corpus-based indicators of contextual adequacy, and corpus-based indicators of grammatical consistency have been defined. It is worth noting that the strengthening of logical determinism of the linguistic phenomenon reflects an integrated hierarchical approach to document classification based on word distance in corpus-applied translation studies. This is explained by clustering meaningful units that can be unambiguously identified and translated into quantitative indicators that correlate with term-combinations. The research justifies the use of a data metric that allows tracking the genre groups of relevant documents and proposes additional research potential in the field of terminology research in official and business texts of NATO, UN, WTO. The mechanism for evaluating vector representations of term-combinations established on official and business documents, designed to reflect semantic similarity between words, sentences, and texts, is revealed. The scientific inquiry substantiates the use of cosine distance to represent a word in input data and in language models, which serve as a means of computation during document comparison and incorporate the terminology of international organizations such as NATO, UN, WTO. It is proven that an important aspect of the quantitative indicator is ensuring consistency of the analyzed phenomenon, which contributes to the precision of reproducing epistemological terminology. Additionally, a valuable reference point in the process of reproducing term-combinations in corpus-based applied translation studies is the corpus-based indicator of contextual adequacy, which involves contextual analysis where the term-combination is used. 
The presented feature of contextual adequacy expands the linguistic formality of official and business texts. Thus, the scientific article emphasizes the axiomatic function of the aforementioned characteristics, which lies in reproducing term-combinations in different contexts to conceptually understand semantic nuances of language elements and consider lexical-semantic regularities of term combination reproduction. It is revealed that the corpus-based indicator of grammatical consistency draws attention to the specificity of adequate grammatical structures in corpus-applied translation studies, which contributes to preserving the overall structure and semantics of the text during the translation of term-combinations and demonstrates heuristic models, grammatical guidelines, and priorities of grammatical correspondence between the original and the translation.
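The cosine distance the abstract invokes for comparing vector representations of term-combinations is the standard one over embedding vectors. A minimal generic sketch, not the author's actual pipeline or data:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0 or norm_v == 0:
        return 0.0  # convention for zero vectors
    return dot / (norm_u * norm_v)

def cosine_distance(u, v):
    """Cosine distance, as used to compare term-combination embeddings."""
    return 1.0 - cosine_similarity(u, v)

# Parallel vectors -> distance ≈ 0; orthogonal vectors -> distance 1.
print(cosine_distance([1.0, 2.0], [2.0, 4.0]))  # ≈ 0.0
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # → 1.0
```

In practice the vectors would come from a trained language model rather than being written by hand; the point is only that document comparison reduces to these per-pair distances.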

Theses on the topic "Semantic Adequacy"

1

Hagström, Anne-Christine. "Un miroir aux alouettes ? : Stratégies pour la traduction des métaphores." Doctoral thesis, Uppsala University, Department of Romance Languages, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-2629.

Abstract:

This dissertation has three goals: to establish an inventory of translation strategies applicable to the translation of metaphor, to investigate how the application of these strategies affects the balance in metaphorical quality between source text and target text, and, finally, to determine whether this balance is a useful indicator of the direction of the translation as a whole, towards either adequacy or acceptability.

To carry out this research the author has established a corpus comprising 250 metaphors from the novel La goutte d'or by Michel Tournier and its Swedish translation, Gulddroppen by C.G. Bjurström. Based on the criteria of thematic and contextual connection, 158 metaphors from this corpus have been selected for analysis. The strategy used in the translation of each metaphor has been established. The degree of balance in metaphorical quality between the two texts has then been determined and its significance as an indicator of the direction of the translation as a whole has been discussed.

The underlying theory and methodology of the study are those of Gideon Toury as outlined in his book Descriptive Translation Studies and beyond. The study is thus essentially descriptive in nature.

The dissertation is divided into two parts. The first part gives a survey of well known research within the fields of metaphor theory and translation theory. Various theories concerning the requested equality between a text and its translation are presented, as well as inventories of translation strategies established by a number of researchers. The second part contains the analysis of the selected metaphors and establishes a set of strategies for this purpose.

2

Hynie, Michaela. "On the adequacy of feature lists as a measure of attribute relevance." Thesis, McGill University, 1990. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=60071.

Abstract:
It has generally been assumed that production frequency on a feature listing task measures the strength of some relationship between features and concepts. The nature of this relationship, however, has not yet been determined. This study examines the relationship between feature list production frequency and feature relevance, or informativeness. Also tested was the hypothesis, inherent in current concept theories, that different feature types bear different relationships to a given concept, and vary widely in their informativeness. An overall relationship between production frequency and relevance was found, but is attributable to significant correlations present for only a subset of the feature types under consideration. The findings contradict the predictions of two earlier studies; namely, that parts should be the most informative feature type, and that feature type informativeness should depend on the object category. These results are discussed with respect to both feature list studies, and general theories of concepts.
3

Faille, Juliette. "Data-Based Natural Language Generation: Evaluation and Explainability." Electronic thesis or dissertation, Université de Lorraine, 2023. http://www.theses.fr/2023LORR0305.

Abstract:
Recent Natural Language Generation (NLG) models achieve very high average performance. Their output texts are generally grammatically and syntactically correct which makes them sound natural. Though the semantics of the texts are right in most cases, even the state-of-the-art NLG models still produce texts with partially incorrect meanings. In this thesis, we propose evaluating and analyzing content-related issues of models used in the NLG tasks of Resource Description Framework (RDF) graphs verbalization and conversational question generation. First, we focus on the task of RDF verbalization and the omissions and hallucinations of RDF entities, i.e. when an automatically generated text does not mention all the input RDF entities or mentions other entities than those in the input. We evaluate 25 RDF verbalization models on the WebNLG dataset. We develop a method to automatically detect omissions and hallucinations of RDF entities in the outputs of these models. We propose a metric based on omissions or hallucination counts to quantify the semantic adequacy of the NLG models. We find that this metric correlates well with what human annotators consider to be semantically correct and show that even state-of-the-art models are subject to omissions and hallucinations. Following this observation about the tendency of RDF verbalization models to generate texts with content-related issues, we propose to analyze the encoder of two such state-of-the-art models, BART and T5. We use the probing explainability method and introduce two probing classifiers (one parametric and one non-parametric) to detect omissions and distortions of RDF input entities in the embeddings of the encoder-decoder models. We find that such probing classifiers are able to detect these mistakes in the encodings, suggesting that the encoder of the models is responsible for some loss of information about omitted and distorted entities. 
Finally, we propose a T5-based conversational question generation model that, in addition to generating a question based on an input RDF graph and a conversational context, generates both a question and its corresponding RDF triples. This setting allows us to introduce a fine-grained evaluation procedure automatically assessing coherence with the conversation context and semantic adequacy with the input RDF. Our contributions belong to the fields of NLG evaluation and explainability and use techniques and methodologies from these two research fields in order to work towards providing more reliable NLG models.
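The omission/hallucination counting that underlies the metric described above can be sketched generically. Matching entities by lowercase substring is a deliberate simplification (the thesis develops a more robust automatic detection method), and the function name and example data are illustrative only:

```python
def omissions_and_hallucinations(input_entities, generated_text, known_entities):
    """Count input entities missing from the generated text (omissions)
    and known entities mentioned in the text but absent from the input
    (hallucinations). Substring matching is a simplification."""
    text = generated_text.lower()
    omitted = [e for e in input_entities if e.lower() not in text]
    hallucinated = [e for e in known_entities
                    if e not in input_entities and e.lower() in text]
    return omitted, hallucinated

# Illustrative example: one input entity is dropped, another entity
# from the wider vocabulary is hallucinated by the verbalizer.
graph_entities = ["Alan Bean", "Apollo 12"]
vocabulary = graph_entities + ["Apollo 11"]
text = "Alan Bean was a crew member of Apollo 11."
omitted, hallucinated = omissions_and_hallucinations(graph_entities, text, vocabulary)
print(omitted)       # → ['Apollo 12']
print(hallucinated)  # → ['Apollo 11']
```

A semantic-adequacy score in the spirit of the thesis would then be some decreasing function of these two counts; the exact formula is not specified in the abstract, so none is assumed here.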
4

Martin, Alan J. "Reasoning Using Higher-Order Abstract Syntax in a Higher-Order Logic Proof Environment: Improvements to Hybrid and a Case Study." Thesis, Université d'Ottawa / University of Ottawa, 2010. http://hdl.handle.net/10393/19711.

Abstract:
We present a series of improvements to the Hybrid system, a formal theory implemented in Isabelle/HOL to support specifying and reasoning about formal systems using higher-order abstract syntax (HOAS). We modify Hybrid's type of terms, which is built definitionally in terms of de Bruijn indices, to exclude at the type level terms with 'dangling' indices. We strengthen the injectivity property for Hybrid's variable-binding operator, and develop rules for compositional proof of its side condition, avoiding conversion from HOAS to de Bruijn indices. We prove representational adequacy of Hybrid (with these improvements) for a lambda-calculus-like subset of Isabelle/HOL syntax, at the level of set-theoretic semantics and without unfolding Hybrid's definition in terms of de Bruijn indices. In further work, we prove an induction principle that maintains some of the benefits of HOAS even for open terms. We also present a case study of the formalization in Hybrid of a small programming language, Mini-ML with mutable references, including its operational semantics and a type-safety property. This is the largest case study in Hybrid to date, and the first to formalize a language with mutable references. We compare four variants of this formalization based on the two-level approach adopted by Felty and Momigliano in other recent work on Hybrid, with various specification logics (SLs), including substructural logics, formalized in Isabelle/HOL and used in turn to encode judgments of the object language. We also compare these with a variant that does not use an intermediate SL layer. In the course of the case study, we explore and develop new proof techniques, particularly in connection with context invariants and induction on SL statements.

Books on the topic "Semantic Adequacy"

1

Shieh, Sanford. Truth, Objectivity, and Realism. Edited by Michael Glanzberg. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780199557929.013.14.

Abstract:
This chapter is concerned with a semantic (as opposed to ontological) approach to metaphysics, developed by Michael Dummett and Crispin Wright, that takes truth as fundamental, and explicates debates about realisms in terms of truth. On this approach realism is fundamentally concerned with the objectivity of truth, where objectivity does not consist in the existence of entities. The chapter shows that Dummett worked with three separable criteria for the objectivity of truth, which support a subtle and flexible framework for characterizing various degrees of realism. It argues that Dummett’s so-called “manifestation” arguments against semantic realism can handle many objections that have been brought against them. It discusses Wright’s minimalism about truth, his four semantic criteria of realism, their inter-relations, and their connections to Dummett’s criteria. It concludes with reflections on the meta-philosophical status of the semantic approach: the reasons in favor of pursuing it and its adequacy to metaphysical reflection.
2

Tennant, Neil. The Road to Core Logic. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198777892.003.0002.

Abstract:
We situate Core Logic and Classical Core Logic within a wider logical landscape. Core Logic lies at the intersection of two orthogonal lines of reform of Classical Logic—constructivization and relevantization. We explain the genesis of Core Logic and describe its carefully formulated rules of inference. We reveal how Core Logic arises as a smooth generalization of the proto-logic involved in working out the truth values of sentences under particular interpretations; and the case for the complete methodological adequacy of Core Logic for constructive deductive reasoning, and of Classical Core Logic for non-constructive deductive reasoning. Core Logic deserves the label ‘Core’, because it is both fully employed, and sufficient, as the metalogic involved in any process of rational belief revision. No rule of Core Logic can be surrendered. We end by speculating on two possible explanations—semantic and methodological—of how Core Logic might have been bloated to Classical Logic.
3

Tennant, Neil. Epistemic Gain. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198777892.003.0007.

Abstract:
Core Logic avoids the Lewis First Paradox, even though it contains ∨-Introduction, and a form of ∨-Elimination that permits core proof of Disjunctive Syllogism. The reason for this is that the method of cut-elimination will unearth the fact that the newly combined premises form an inconsistent set. A new formal-semantical relation of logical consequence, according to which B is not a consequence of A,¬A, is available as an alternative to the conventionally defined relation of logical consequence. Nevertheless we can make do with the conventional definition, and still show that (Classical) Core Logic is adequate unto it. Although Core Logic eschews unrestricted Cut, nevertheless (i) Core Logic is adequate for all intuitionistic mathematical deduction; (ii) Classical Core Logic is adequate for all classical mathematical deduction; and (iii) Core Logic is adequate for all the deduction involved in the empirical testing of scientific theories.
4

Weinberg, Jonathan M. Intuitions. Edited by Herman Cappelen, Tamar Szabó Gendler, and John Hawthorne. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780199668779.013.25.

Abstract:
This article examines the philosophical methodology of intuitions beginning with an argument developed by Max Deutsch and Herman Cappelen over the descriptive adequacy of what Cappelen calls “methodological rationalism”, and their own preferred view, “intuition nihilism”. Based on inadequacies in both accounts, it offers a descriptive take on intuition-deploying philosophical practice today via what it calls “Protean Crypto-Rationalism”. It then describes the epistemic profile of the appeal to intuition, listing four key aspects of the basic shape of intuition-deploying philosophical practice: primacy of cases, flexibility of report format, freedom of stipulation, and interpretation-hungry. It also considers several sources of error for intuitions featured in at least the informal methodological lore of philosophy, namely: misconstruals, modal confusions, pragmatics/semantics confusion, and “tin ear”. Finally, it explores the problem of methodological ignorance and inferential demand, particularly the typical practices of philosophical inference that operate on the premises delivered by appeal to intuitions.
5

DeRose, Keith. Two Substantively Moorean Responses and the Project of Refuting Skepticism. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199564477.003.0003.

Full text
Abstract:
In this chapter, substantive Mooreanism, according to which one does know that one is not a brain in a vat, is explained, and two main varieties of it are distinguished. Contextualist Mooreanism, (a) on which it is only claimed that one knows that one is not a brain in a vat according to ordinary standards for knowledge, and (b) on which one seeks to defeat bold skepticism (according to which one doesn’t know simple, seemingly obvious truths about the external world, even by ordinary standards for knowledge), is contrasted with Putnam-style responses, on which one seeks to refute the skeptic, utilizing semantic externalism. Problems with the Putnam-style attempt to refute skepticism are identified, and then, more radically, it is argued that in important ways, such a refutation of skepticism would not have provided an adequate response to skepticism even if it could have been accomplished.
6

Moss, Sarah. The case for probabilistic assertion. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198792154.003.0002.

Full text
Abstract:
This chapter develops and defends the thesis that we can assert probabilistic contents. The chapter begins by recounting some familiar arguments against the standard view that we only ever assert propositions. A probabilistic theory of assertion is then defended with three novel arguments. These arguments are less empirical than familiar arguments against the standard view, and more foundational in character. It is argued that probabilistic contents of assertion provide a unified account of how we communicate probabilistic beliefs and full beliefs, a unified account of belief and assertion, and an adequate account of how probabilistic beliefs can figure in joint reasoning and guide our collective actions. The chapter concludes with some remarks about probabilistic models of communication, as well as remarks about the conclusions that we should draw from contemporary debates about the semantics of epistemic modals.

Book chapters on the topic "Semantic Adequacy"

1

Poggiolesi, Francesca. "Semantic Adequacy." In Gentzen Calculi for Modal Propositional Logic, 165–74. Dordrecht: Springer Netherlands, 2010. http://dx.doi.org/10.1007/978-90-481-9670-8_8.

Full text
2

Péchoux, Romain, Simon Perdrix, Mathys Rennela, and Vladimir Zamdzhiev. "Quantum Programming with Inductive Datatypes: Causality and Affine Type Theory." In Lecture Notes in Computer Science, 562–81. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45231-5_29.

Full text
Abstract:
Inductive datatypes in programming languages allow users to define useful data structures such as natural numbers, lists, trees, and others. In this paper we show how inductive datatypes may be added to the quantum programming language QPL. We construct a sound categorical model for the language and by doing so we provide the first detailed semantic treatment of user-defined inductive datatypes in quantum programming. We also show our denotational interpretation is invariant with respect to big-step reduction, thereby establishing another novel result for quantum programming. Compared to classical programming, this property is considerably more difficult to prove and we demonstrate its usefulness by showing how it immediately implies computational adequacy at all types. To further cement our results, our semantics is entirely based on a physically natural model of von Neumann algebras, which are mathematical structures used by physicists to study quantum mechanics.
3

Baroni, Pietro, and Massimiliano Giacomin. "Evaluating Argumentation Semantics with Respect to Skepticism Adequacy." In Lecture Notes in Computer Science, 329–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11518655_29.

Full text
4

Burns, Kathy J., and Anthony R. Davis. "Building and Maintaining a Semantically Adequate Lexicon Using Cyc." In Breadth and Depth of Semantic Lexicons, 121–43. Dordrecht: Springer Netherlands, 1999. http://dx.doi.org/10.1007/978-94-017-0952-1_7.

Full text
5

Bellosta von Colbe, Valeriano. "Is Role and Reference Grammar an adequate grammatical theory for punctuation?" In Investigations of the Syntax–Semantics–Pragmatics Interface, 245–61. Amsterdam: John Benjamins Publishing Company, 2008. http://dx.doi.org/10.1075/slcs.105.19bel.

Full text
6

Grüger, Joscha, Tobias Geyer, David Jilg, and Ralph Bergmann. "SAMPLE: A Semantic Approach for Multi-perspective Event Log Generation." In Lecture Notes in Business Information Processing, 328–40. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-27815-0_24.

Full text
Abstract:
Data and process mining techniques can be applied in many areas to gain valuable insights. For many reasons, accessibility to real-world business and medical data is severely limited. However, research, and especially the development of new methods, depends on a sufficient basis of realistic data. Due to the lack of data, this progress is hindered. This applies in particular to domains that use personal data, such as healthcare. With adequate quality, synthetic data can be a solution to this problem. In the procedural field, some approaches have already been presented that generate synthetic data based on a process model. However, only a few have included the data perspective so far. Data semantics, which is crucial for the quality of the generated data, has not yet been considered. Therefore, in this paper we present the multi-perspective event log generation approach SAMPLE that considers the data perspective and, in particular, its semantics. The evaluation of the approach is based on a process model for the treatment of malignant melanoma. As a result, we were able to integrate the semantics of data into the log generation process and identify new challenges.
7

Christiansen, Jan, Daniel Seidel, and Janis Voigtländer. "An Adequate, Denotational, Functional-Style Semantics for Typed FlatCurry." In Functional and Constraint Logic Programming, 119–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20775-4_7.

Full text
8

Purushothaman, S., and Jill Seaman. "An adequate operational semantics of sharing in lazy evaluation." In ESOP '92, 435–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/3-540-55253-7_26.

Full text
9

Fränzle, Martin. "The Quest for an Adequate Semantic Basis of Dense-Time Metric Temporal Logic." In Lecture Notes in Computer Science, 201–12. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-15629-8_12.

Full text
10

Dvir, Yotam, Ohad Kammar, and Ori Lahav. "A Denotational Approach to Release/Acquire Concurrency." In Programming Languages and Systems, 121–49. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57267-8_5.

Full text
Abstract:
We present a compositional denotational semantics for a functional language with first-class parallel composition and shared-memory operations whose operational semantics follows the Release/Acquire weak memory model (RA). The semantics is formulated in Moggi’s monadic approach, and is based on Brookes-style traces. To do so we adapt Brookes’s traces to Kang et al.’s view-based machine for RA, and supplement Brookes’s mumble and stutter closure operations with additional operations, specific to RA. The latter provides a more nuanced understanding of traces that uncouples them from operational interrupted executions. We show that our denotational semantics is adequate and use it to validate various program transformations of interest. This is the first work to put weak memory models on the same footing as many other programming effects in Moggi’s standard monadic approach.

Conference papers on the topic "Semantic Adequacy"

1

Faille, Juliette, Albert Gatt, and Claire Gardent. "Entity-Based Semantic Adequacy for Data-to-Text Generation." In Findings of the Association for Computational Linguistics: EMNLP 2021. Stroudsburg, PA, USA: Association for Computational Linguistics, 2021. http://dx.doi.org/10.18653/v1/2021.findings-emnlp.132.

Full text
2

Dias, Jéssica Azevedo, Matheus de Serpa Vale, Raquel Penido Oliveira, and Thaís Helen Rezende Pio. "Alzheimer’s: what is the difference between or frontotemporal dementia." In XIV Congresso Paulista de Neurologia. Zeppelini Editorial e Comunicação, 2023. http://dx.doi.org/10.5327/1516-3180.141s1.544.

Full text
Abstract:
Introduction: In the study of neurodegenerative diseases, Alzheimer’s disease (AD) has classically been characterized by a typical presentation of cognitive symptoms and neuroanatomical changes. However, there are clinical phenotypes of AD whose neurobiological bases are similar to frontotemporal dementia (FTD). In this sense, the heterogeneity of these pictures leads to inaccurate evaluation and diagnosis processes, due to the scant knowledge about their neurocognitive symptoms. The early stages of Alzheimer’s-type dementia are classically characterized by memory impairment, whereas behavioral and personality changes appear in the early stages of FTD. However, in clinical practice, the differential diagnosis is difficult. Objectives: The objective of this systematic review is to establish neuropsychological characteristics and similarities in patients with AD and FTD, identifying key elements for their differential diagnosis through clinical and imaging exams, with the aim of enhancing the clinical management of the patient. Methods: A bibliographic survey was carried out in the SciELO and PubMed databases and indexers, using the terms “frontotemporal dementia”, “alzheimer’s” and “neuropsychology”, in Portuguese and English. Six up-to-date articles were then selected, considering their adequacy to the objective of the work. Results: Evidence suggests that there are important differences in cognitive domains such as language (e.g., verbal fluency), memory, social cognition, executive functioning, and behavior; these aspects should be considered fundamental in any process of neuropsychological evaluation and diagnosis. Conclusion: There are linguistic aspects that promise to be powerful biomarkers for the differential diagnosis between AD and FTD, namely semantic and phonemic verbal fluency and semantic-grammatical alterations, which may be fundamental in differentiating the diseases and leading to adequate management for each patient.
3

Suzen, Neslihan, Alexander N. Gorban, Jeremy Levesley, and Evgeny M. Mirkes. "An Informational Space based Semantic Analysis for Scientific Texts." In 10th International Conference on Foundations of Computer Science & Technology (FCST 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120807.

Full text
Abstract:
One major problem in Natural Language Processing is the automatic analysis and representation of human language. Human language is ambiguous, and a deeper understanding of semantics and the creation of human-to-machine interaction have required effort in devising schemes for the act of communication and in building common-sense knowledge bases for the ‘meaning’ in texts. This paper introduces computational methods for semantic analysis and for quantifying the meaning of short scientific texts. Computational methods extracting semantic features are used to analyse the relations between texts of messages and ‘representations of situations’ for a newly created large collection of scientific texts, the Leicester Scientific Corpus. The representation of scientific-specific meaning is standardised by replacing the situation representations, rather than psychological properties, with vectors of attributes: a list of scientific subject categories that the text belongs to. First, this paper introduces the ‘Meaning Space’, in which the informational representation of meaning is extracted from the occurrence of a word in texts across the scientific categories, i.e., the meaning of a word is represented by a vector of Relative Information Gain about the subject categories. Then, the meaning space is statistically analysed for the Leicester Scientific Dictionary-Core, and ‘Principal Components of the Meaning’ are investigated to describe the adequate dimensions of meaning. The research in this paper lays the foundation for the geometric representation of the meaning of texts.
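The key construction here, a word's meaning as a vector of Relative Information Gain (RIG) about the subject categories, can be sketched directly from the abstract. This is an illustrative reading, not the authors' code; the corpus format and function names below are hypothetical.

```python
import math

def entropy(p):
    """Binary entropy (in bits) of a probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rig_vector(word, texts, labels, categories):
    """Meaning vector of `word`: for each subject category, the Relative
    Information Gain that observing the word in a text gives about the
    text's membership in that category, normalised to [0, 1]."""
    n = len(texts)
    has_word = [word in text for text in texts]
    vector = []
    for cat in categories:
        in_cat = [cat in text_labels for text_labels in labels]
        prior = entropy(sum(in_cat) / n)   # uncertainty about the category
        if prior == 0.0:
            vector.append(0.0)             # category is trivial, no gain possible
            continue
        conditional = 0.0                  # H(category | word present/absent)
        for present in (True, False):
            group = [i for i in range(n) if has_word[i] == present]
            if group:
                p_cat = sum(in_cat[i] for i in group) / len(group)
                conditional += len(group) / n * entropy(p_cat)
        vector.append((prior - conditional) / prior)
    return vector
```

A word whose presence fully determines category membership yields the maximal gain of 1, while a word occurring in every text is uninformative and yields 0.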
4

Dawod, Zainb, and David Bell. "Enhancing the Learning of Special Educational Needs children with Dynamic Content Annotations." In 8th International Conference on Human Interaction and Emerging Technologies. AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1002756.

Full text
Abstract:
Communication is difficult for students who have little or no clear speech. Consequently, a range of communication systems, including symbols, pictures, or gestures, is used as an alternative to speech. Semantic web technology has had an impact in the educational field and offers the potential for greater engagement with a rich web of content. Students’ behaviour and learning engagement are among the significant problems in managing any group with special needs. Pupils with learning difficulties tend to be more off-task in class, require more teacher attention, ask fewer educational questions with shorter response times, and give less feedback than other pupils. Communication systems have been used since the 1970s to support face-to-face communication with children who have little or no speech ability. From the literature, teaching using communication symbols requires an adequate number of trained staff and an understanding of the complexity of young people’s disabilities and behaviour. Teachers often feel overwhelmed in preparing class resources, where more than one resource may be needed to explain each thought (O’Brien, 2019). A new evolution of the web is called the “Semantic Web”, an extension of the current traditional World Wide Web that adds semantic descriptions and ontologies. One benefit is that such characterization and modelling help provide additional meaning to web content, making content machine-understandable (Berners-Lee et al., 2001). Although the Semantic Web is applied in different fields, including education, there is limited research in the field of mainstream education, particularly for those with special needs. This research was conducted to show the impact of applying semantic annotation techniques on improving the engagement, concentration, and behaviour of children with special needs.
This study follows a Design Science Research Methodology (DSRM), a research process to discover practical solutions by evaluating the results in a set of iterations to design a SENTP model. The findings present a novel approach to teaching children with various needs by introducing educational prototypes using different semantic annotation content in an educational website. We investigated the impact of the annotation content using the symbol communication systems (Makaton, Widgit, and PECS), pictures, or audio recordings, which are part of the current methods for teaching in UK schools. After exploring different techniques, we selected an appropriate annotation editor to test the SENTP prototype in the study. We collected the data from seven schools in the UK: two nursery schools; two special needs high schools; one primary state school; and one preschool for children with language and communication difficulties. A total of 23 educators agreed to participate in this study. The data were recorded, transcribed, and thematically analysed using NVivo 11. The findings from the in-school experiment indicated that annotated content using semantic annotations could have a significant impact on making the learning process more effective, with better class management for students with special needs, including pupils with autistic spectrum disorders.
5

Simão, Adenilso da Silva, Auri Marcelo Rizzo Vincenzi, and José Carlos Maldonado. "mudelgen: A Tool for Processing Mutant Operator Descriptions." In Simpósio Brasileiro de Engenharia de Software. Sociedade Brasileira de Computação, 2002. http://dx.doi.org/10.5753/sbes.2002.23970.

Full text
Abstract:
Mutation Testing is a testing approach for assessing the adequacy of a set of test cases by analyzing their ability to distinguish the product under test from a set of alternative products, the so-called mutants. The mutants are generated from the product under test by applying a set of mutant operators, which systematically yield products with slight syntactical differences. Aiming at automating the generation of mutants, we have designed a language — named MuDeL — for describing mutant operators. In this paper, we describe the mudelgen system, which was developed to support the language MuDeL. mudelgen was developed using concepts that come from transformational and logical programming paradigms, as well as from context-free grammar and denotational semantics theories.
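As a minimal illustration of the adequacy measure behind mutation testing (the measure the generated mutants feed into, not the MuDeL/mudelgen tooling itself; all names below are hypothetical), a test set's mutation score is the fraction of mutants it can tell apart from the original program:

```python
def run(program, test_input):
    """Run a program on one test input; map crashes to a sentinel so an
    aborting mutant still counts as distinguishable from the original."""
    try:
        return program(*test_input)
    except Exception:
        return "<crash>"

def mutation_score(original, mutants, test_cases):
    """Adequacy of `test_cases`: the fraction of mutants 'killed', i.e.
    distinguished from `original` by at least one test case."""
    if not mutants:
        return 1.0
    killed = sum(
        1 for mutant in mutants
        if any(run(mutant, t) != run(original, t) for t in test_cases)
    )
    return killed / len(mutants)
```

For example, against the mutants `a - b` and `a * b` of `a + b`, the single test case `(2, 3)` kills both (score 1.0), whereas `(2, 2)` misses the multiplication mutant (score 0.5).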
6

Dumitran, Angela. "TRANSLATION ERROR IN GOOGLE TRANSLATE FROM ENGLISH INTO ROMANIAN IN TEXTS RELATED TO CORONAVIRUS." In eLSE 2021. ADL Romania, 2021. http://dx.doi.org/10.12753/2066-026x-21-078.

Full text
Abstract:
Both the emergence of the pandemic and the lack of knowledge and/or time needed to translate texts related to this topic brought about an increased interest in analyzing Neural Machine Translation (NMT) performance. This study aims to identify and analyze lexical and semantic errors that appear in medical texts translated by Google Translate from English into Romanian. The data used for investigation comprise the official package leaflets of 5 vaccines approved for use against the current coronavirus. The focus is on lexical and semantic errors, as researchers state that these errors made by machine translation have the highest frequency compared to morphological or syntactic errors. Moreover, lexical errors may affect the meaning and the message, and may easily lead to mistranslation, misunderstanding and, therefore, misinformation. The texts to be analyzed are collected from official websites and translated using Google Translate and Google Language Tools. From the data analyzed, there are 22 lexical and semantic errors that are approached through descriptive methodology. By examining types of errors in translation from English into Romanian and analyzing their potential causes, the results illustrate the quality and accuracy of Google Translate when translating public health information from English into Romanian, and show how much the message is affected by each error, in order to sharpen linguistic awareness. The results of the study can ultimately help improve the quality of NMT in terms of better lexical selection and provide input toward a more adequate translation into Romanian by Google machine translation.
7

Zhao, Yang, Jiajun Zhang, Yu Zhou, and Chengqing Zong. "Knowledge Graphs Enhanced Neural Machine Translation." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/559.

Full text
Abstract:
Knowledge graphs (KGs) store much structured information on various entities, many of which are not covered by the parallel sentence pairs of neural machine translation (NMT). To improve the translation quality of these entities, in this paper we propose a novel KGs enhanced NMT method. Specifically, we first induce the new translation results of these entities by transforming the source and target KGs into a unified semantic space. We then generate adequate pseudo parallel sentence pairs that contain these induced entity pairs. Finally, NMT model is jointly trained by the original and pseudo sentence pairs. The extensive experiments on Chinese-to-English and English-to-Japanese translation tasks demonstrate that our method significantly outperforms the strong baseline models in translation quality, especially in handling the induced entities.
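The pseudo-parallel-pair step can be sketched as follows; the template mechanism, placeholder convention, and function name are illustrative assumptions, not the paper's actual generation procedure.

```python
def make_pseudo_pairs(entity_pairs, templates):
    """Build pseudo parallel sentence pairs by slotting entity translations
    induced from the source/target knowledge graphs into bilingual sentence
    templates, each containing the placeholder '<ENT>'."""
    pairs = []
    for src_entity, tgt_entity in entity_pairs:
        for src_template, tgt_template in templates:
            pairs.append((src_template.replace("<ENT>", src_entity),
                          tgt_template.replace("<ENT>", tgt_entity)))
    return pairs
```

The resulting pairs would then be mixed with the original training data, so the NMT model sees each induced entity in a sentential context.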
8

Adamovich, Alexei Igorevich, and Andrei Valentinovich Klimov. "On Theory of Names to be Used in Semantics of References in Functional and Object-Oriented Languages." In 23rd Scientific Conference "Scientific Services & Internet – 2021". Keldysh Institute of Applied Mathematics, 2021. http://dx.doi.org/10.20948/abrau-2021-1-ceur.

Full text
Abstract:
The long-standing problem of adequately formalizing local names in mathematical formulas, and the semantics of references in object-oriented languages taken on their own without objects, is discussed. Reasons why the existing approaches cannot be considered suitable solutions are explained. An introduction is given to the relatively recent works on the theories of names and references by the group led by Andrew Pitts. The concept of referential transparency, in which contextual equivalence is used instead of the usual equality of values, is analyzed. It is the main property these theories are based upon: such modified referential transparency is preserved when a purely functional language is extended with names and references as data. An outline of a constructive denotational semantics of the extended functional language is given. It is argued that the modified referential transparency, along with many other valuable properties, can also be preserved for mutable objects that change to a limited extent. This leads to a model of computation between functional and object-oriented ones, allowing for a deterministic parallel implementation.
9

Souza, Erick Nilsen Pereira, and Daniela Barreiro Claro. "Detecção Multilíngue de Serviços Web Duplicados Baseada na Similaridade Textual." In X Simpósio Brasileiro de Sistemas de Informação. Sociedade Brasileira de Computação - SBC, 2014. http://dx.doi.org/10.5753/sbsi.2014.6140.

Full text
Abstract:
Similarity-based clustering is a relevant step in web service discovery and composition strategies. Many clustering methods process natural-language service descriptions to estimate the degree of correlation between services. However, the use of knowledge bases in specific languages limits the applicability of these methods. This paper proposes a multilingual model for clustering similar web services based on their natural-language descriptions. In particular, Latent Semantic Indexing (LSI), a language- and domain-independent Information Retrieval (IR) method, was applied. In addition, an experimental analysis with three similarity measures was carried out in order to determine which of them is best suited to detecting duplicate web services from service descriptions in two languages.
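The LSI step described above can be sketched as follows (an illustrative sketch, not the paper's implementation: whitespace tokenization and the latent dimension `k` are simplifying assumptions): build a term-document matrix over the service descriptions, truncate its SVD, and compare documents by cosine similarity in the latent space. Because nothing in the pipeline is language-specific, the same code applies to descriptions in any language.

```python
import numpy as np

def lsi_similarity(docs, k=2):
    """Pairwise cosine similarities between documents after projecting a
    term-document count matrix onto its top-k latent semantic dimensions."""
    vocab = sorted({word for doc in docs for word in doc.split()})
    counts = np.array([[doc.split().count(word) for doc in docs]
                       for word in vocab], dtype=float)
    U, s, Vt = np.linalg.svd(counts, full_matrices=False)
    latent = (np.diag(s[:k]) @ Vt[:k]).T       # one row per document
    norms = np.linalg.norm(latent, axis=1, keepdims=True)
    latent = latent / np.where(norms == 0, 1.0, norms)
    return latent @ latent.T
```

Duplicate descriptions collapse onto the same latent vector (similarity near 1), while descriptions sharing no vocabulary stay near 0, which is the signal a duplicate detector would threshold on.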
10

Anisimova, Alexandra, and Olga Vishnyakova. "Corpus in Translation Classroom: A Case Study of Translating Economic Terms." In 14th International Scientific Conference "Rural Environment. Education. Personality. (REEP)". Latvia University of Life Sciences and Technologies, Faculty of Engineering, Institute of Education and Home Economics, 2021. http://dx.doi.org/10.22616/reep.2021.14.029.

Full text
Abstract:
The article deals with the role of corpora in translation and translation studies. The paper focuses on different aspects which should be taken into consideration when compiling a representative corpus. The researchers focus on the role a corpus of professional texts plays when choosing translation equivalents for terms, including terms just created and not yet registered in terminological dictionaries. The aim of the research is to elaborate an approach to the use of corpus material in the course of translation in specialized and professional fields, with particular attention to some aspects of translation competence development. The analysis, based on comparative, definitional and contextual methods, proved that parallel text corpora provide professional experts, as well as students of translation, with reliable knowledge of how linguistic units function and how semantic meaning is actualized within certain contexts in the Language for Specific Purposes (LSP) domain. The studies have shown that a comparative statistical analysis of a corpus of professional texts might be recommended when looking for an adequate equivalent for a term. The scope of application of the methodology suggested is not confined to certain terminological systems or fields of knowledge. Translation competence development includes students compiling text corpora and making adequate choices, guided by appropriate instructions from the teacher, since the task demands a high level of knowledge acquisition in both linguistic and translation expertise.