Academic literature on the topic 'Probabilistic grammar'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Probabilistic grammar.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Probabilistic grammar"

1

Nitay, Dolav, Dana Fisman, and Michal Ziv-Ukelson. "Learning of Structurally Unambiguous Probabilistic Grammars." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (May 18, 2021): 9170–78. http://dx.doi.org/10.1609/aaai.v35i10.17107.

Abstract:
The problem of identifying a probabilistic context-free grammar has two aspects: the first is determining the grammar's topology (the rules of the grammar) and the second is estimating probabilistic weights for each rule. Given the hardness results for learning context-free grammars in general, and probabilistic grammars in particular, most of the literature has concentrated on the second problem. In this work we address the first problem. We restrict attention to structurally unambiguous weighted context-free grammars (SUWCFG) and provide a query learning algorithm for structurally unambiguous probabilistic context-free grammars (SUPCFG). We show that SUWCFG can be represented using co-linear multiplicity tree automata (CMTA), and provide a polynomial learning algorithm that learns CMTAs. We show that the learned CMTA can be converted into a probabilistic grammar, thus providing a complete algorithm for learning a structurally unambiguous probabilistic context-free grammar (both the grammar topology and the probabilistic weights) using structured membership queries and structured equivalence queries. We demonstrate the usefulness of our algorithm in learning PCFGs over genomic data.
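As a quick illustration of the two aspects distinguished in this abstract, the sketch below (illustrative only, not taken from the paper; the grammar and names are hypothetical) shows a PCFG whose 'topology' is the rule set and whose 'weights' are the rule probabilities, together with the properness check that each nonterminal's probabilities sum to one.

```python
# Illustrative only: a tiny probabilistic context-free grammar. The
# "topology" is the set of rules; the "weights" are the probabilities,
# which must sum to one for each left-hand-side nonterminal.
pcfg = {
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("DT", "NN"), 0.7), (("NN",), 0.3)],
    "VP": [(("VB", "NP"), 0.6), (("VB",), 0.4)],
}

def is_proper(grammar, tol=1e-9):
    """True if every nonterminal's rule probabilities sum to one."""
    return all(abs(sum(p for _, p in rules) - 1.0) < tol
               for rules in grammar.values())

assert is_proper(pcfg)
```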
2

Krotov, Alexander, Mark Hepple, Robert Gaizauskas, and Yorick Wilks. "Evaluating two methods for Treebank grammar compaction." Natural Language Engineering 5, no. 4 (December 1999): 377–94. http://dx.doi.org/10.1017/s1351324900002308.

Abstract:
Treebanks, such as the Penn Treebank, provide a basis for the automatic creation of broad coverage grammars. In the simplest case, rules can simply be ‘read off’ the parse-annotations of the corpus, producing either a simple or probabilistic context-free grammar. Such grammars, however, can be very large, presenting problems for the subsequent computational costs of parsing under the grammar. In this paper, we explore ways by which a treebank grammar can be reduced in size or ‘compacted’, which involve the use of two kinds of technique: (i) thresholding of rules by their number of occurrences; and (ii) a method of rule-parsing, which has both probabilistic and non-probabilistic variants. Our results show that by a combined use of these two techniques, a probabilistic context-free grammar can be reduced in size by 62% without any loss in parsing performance, and by 71% to give a gain in recall, but some loss in precision.
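To make technique (i) concrete, here is a minimal sketch of occurrence-count thresholding followed by renormalisation, written against an assumed data format of (lhs, rhs) productions read off treebank parses; it is an illustration, not the authors' code.

```python
from collections import Counter

def compact_by_threshold(productions, min_count=2):
    """Keep only rules seen at least min_count times in the treebank,
    then renormalise the survivors into rule probabilities.

    productions: iterable of (lhs, rhs) pairs read off parse trees,
    e.g. ("NP", ("DT", "NN")). The data format is assumed for illustration.
    """
    counts = Counter(productions)
    kept = {rule: c for rule, c in counts.items() if c >= min_count}
    lhs_totals = Counter()
    for (lhs, _), c in kept.items():
        lhs_totals[lhs] += c
    return {rule: c / lhs_totals[rule[0]] for rule, c in kept.items()}

# Example: a rule seen once is dropped; the rest are renormalised.
rules = [("NP", ("DT", "NN"))] * 3 + [("NP", ("NN",))] * 2 + [("NP", ("JJ", "NN"))]
print(compact_by_threshold(rules))  # {('NP', ('DT', 'NN')): 0.6, ('NP', ('NN',)): 0.4}
```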
3

Szmrecsanyi, Benedikt. "Diachronic Probabilistic Grammar." English Language and Linguistics 19, no. 3 (December 2013): 41–68. http://dx.doi.org/10.17960/ell.2013.19.3.002.

4

Daland, Robert. "Long words in maximum entropy phonotactic grammars." Phonology 32, no. 3 (December 2015): 353–83. http://dx.doi.org/10.1017/s0952675715000251.

Abstract:
A phonotactic grammar assigns a well-formedness score to all possible surface forms. This paper considers whether phonotactic grammars should be probabilistic, and gives several arguments that they need to be. Hayes & Wilson (2008) demonstrate the promise of a maximum entropy Harmonic Grammar as a probabilistic phonotactic grammar. This paper points out a theoretical issue with maxent phonotactic grammars: they are not guaranteed to assign a well-defined probability distribution, because sequences that contain arbitrary repetitions of unmarked sequences may be underpenalised. The paper motivates a solution to this issue: include a *Struct constraint. A mathematical proof of necessary and sufficient conditions to avoid the underpenalisation problem is given in online supplementary materials.
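For readers who want the underpenalisation issue spelled out, a textbook-style statement of the maxent model (general knowledge, not quoted from the paper): with constraints C_i and non-negative weights w_i, each string x receives the score and probability

```latex
h(x) = \sum_i w_i\, C_i(x), \qquad
P(x) = \frac{e^{-h(x)}}{Z}, \qquad
Z = \sum_{x \in \Sigma^{*}} e^{-h(x)} .
```

Because Z sums over the infinite set of possible strings, it can diverge when arbitrarily long strings built from unmarked material go unpenalised; a *Struct-style constraint that charges a fixed positive cost per segment makes e^{-h(x)} decay geometrically with string length, which is the sort of condition under which Z remains finite and P is well defined.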
5

Shih, Stephanie S. "Constraint conjunction in weighted probabilistic grammar." Phonology 34, no. 2 (August 2017): 243–68. http://dx.doi.org/10.1017/s0952675717000136.

Abstract:
This paper examines a key difference between constraint conjunction and constraint weight additivity, arguing that the two do not have the same empirical coverage. In particular, constraint conjunction in weighted probabilistic grammar allows for superadditive constraint interaction, where the effect of violating two constraints goes beyond the additive combination of the two constraints’ weights alone. A case study from parasitic tone harmony in Dioula d'Odienné demonstrates superadditive local and long-distance segmental feature similarities that increase the likelihood of tone harmony. Superadditivity in Dioula d'Odienné is formally captured in Maximum Entropy Harmonic Grammar by weighted constraint conjunction. Counter to previous approaches that supplant constraint conjunction with weight additivity in Harmonic Grammar, information-theoretic model comparison reveals that weighted constraint conjunction improves the grammar's explanatory power when modelling quantitative natural language patterns.
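Schematically, and using a generic MaxEnt HG setup rather than anything quoted from the paper, the difference at stake is the penalty assigned to a candidate that violates two constraints C_1 and C_2 at once:

```latex
H_{\text{additive}} = w_1 + w_2,
\qquad
H_{\text{conjoined}} = w_1 + w_2 + w_{1\&2},
\qquad
P(\text{candidate}) \propto e^{-H}.
```

A positive weight w_{1&2} on the conjoined constraint therefore produces exactly the superadditive effect described above: the joint violation is penalised beyond the sum of the individual weights.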
6

Casacuberta, Francisco. "Growth Transformations for Probabilistic Functions of Stochastic Grammars." International Journal of Pattern Recognition and Artificial Intelligence 10, no. 03 (May 1996): 183–201. http://dx.doi.org/10.1142/s0218001496000153.

Abstract:
Stochastic Grammars are the most usual models in Syntactic Pattern Recognition. Both components of a Stochastic Grammar, the characteristic grammar and the probabilities attached to the rules, can be learnt automatically from training samples. In this paper, first a review of some algorithms is presented for inferring the probabilistic component of Stochastic Regular and Context-Free Grammars under the framework of Growth Transformations. On the other hand, with Stochastic Grammars, the patterns must be represented as strings over a finite set of symbols. However, the most natural representation in many Syntactic Pattern Recognition applications (e.g. speech) is as sequences of vectors from a feature vector space, that is, a continuous representation. Therefore, obtaining a discrete representation of the patterns introduces some quantization errors into the representation process. To avoid this drawback, a formal presentation of a semi-continuous extension of the Stochastic Regular and Context-Free Grammars is studied and probabilistic estimation algorithms are developed in this paper. In this extension, sequences of vectors, instead of strings of symbols, can be processed with Stochastic Grammars.
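For orientation, the familiar expectation-based re-estimation of rule probabilities is one standard instance of a growth transformation (stated here from general knowledge, not taken from this paper):

```latex
\hat{p}(A \to \alpha) \;=\;
\frac{\sum_{s} \mathbb{E}\big[\#(A \to \alpha) \mid s\big]}
     {\sum_{\beta} \sum_{s} \mathbb{E}\big[\#(A \to \beta) \mid s\big]},
```

where s ranges over the training samples and the expected rule counts are computed over derivations of s under the current parameter values.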
7

Han, Young S., and Key-Sun Choi. "Best parse parsing with Earley's and Inside algorithms on probabilistic RTN." Natural Language Engineering 1, no. 2 (June 1995): 147–61. http://dx.doi.org/10.1017/s1351324900000127.

Abstract:
Inside parsing is a best-parse parsing method based on the Inside algorithm that is often used in estimating the probabilistic parameters of stochastic context-free grammars. It gives a best parse in O(N³G³) time, where N is the input size and G is the grammar size. The Earley algorithm can be made to return best parses with the same complexity in N. By way of experiments, we show that Inside parsing can be more efficient than Earley parsing with a sufficiently large grammar and sufficiently short input sentences. For instance, Inside parsing is better for sentences of 16 or fewer words with a grammar containing 429 states. In practice, parsing can be made efficient by employing the two methods selectively. The redundancy of the Inside algorithm can be reduced by top-down filtering using the chart produced by the Earley algorithm, which is useful in training the probabilistic parameters of a grammar. Extensive experiments on the Penn Tree corpus show that the efficiency of the Inside computation can be improved by up to 55%.
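As a point of reference for the cubic-time chart both methods fill, here is a minimal Viterbi-style CYK sketch for a PCFG in Chomsky normal form; it is an illustration under an assumed grammar encoding, not the authors' implementation. Replacing the max with a (log-)sum over analyses turns the same chart into the Inside computation used for parameter estimation.

```python
import math
from collections import defaultdict

def viterbi_cyk(words, lexical, binary, start="S"):
    """Log-probability of the best parse of `words` under a CNF PCFG.

    lexical: {(A, word): prob} for rules A -> word
    binary:  {(A, B, C): prob} for rules A -> B C
    (The grammar encoding is assumed for the sake of illustration.)
    """
    n = len(words)
    best = defaultdict(lambda: float("-inf"))  # (i, j, A) -> best log-prob for A over words[i:j]
    for i, w in enumerate(words):
        for (A, word), p in lexical.items():
            if word == w:
                best[i, i + 1, A] = max(best[i, i + 1, A], math.log(p))
    for span in range(2, n + 1):               # O(n^3) loop over spans and split points
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (A, B, C), p in binary.items():
                    score = math.log(p) + best[i, k, B] + best[k, j, C]
                    if score > best[i, j, A]:
                        best[i, j, A] = score
    return best[0, n, start]
```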
8

Kita, Kenji. "Mixture Probabilistic Context-Free Grammar." Journal of Natural Language Processing 3, no. 4 (1996): 103–13. http://dx.doi.org/10.5715/jnlp.3.4_103.

9

Dai, Yin-Tang, Cheng-Rong Wu, Sheng-Xiang Ma, and Yi-Ping Zhong. "Hierarchically Classified Probabilistic Grammar Parsing." Journal of Software 22, no. 2 (March 25, 2011): 245–57. http://dx.doi.org/10.3724/sp.j.1001.2011.03809.

10

Arthi, K., and Kamala Krithivasan. "Probabilistic Parallel Communicating Grammar Systems." International Journal of Computer Mathematics 79, no. 1 (January 2002): 1–26. http://dx.doi.org/10.1080/00207160211914.


Dissertations / Theses on the topic "Probabilistic grammar"

1

Kwiatkowski, Thomas Mieczyslaw. "Probabilistic grammar induction from sentences and structured meanings." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/6190.

Abstract:
The meanings of natural language sentences may be represented as compositional logical-forms. Each word or lexicalised multiword-element has an associated logical-form representing its meaning. Full sentential logical-forms are then composed from these word logical-forms via a syntactic parse of the sentence. This thesis develops two computational systems that learn both the word-meanings and parsing model required to map sentences onto logical-forms from an example corpus of (sentence, logical-form) pairs. One of these systems is designed to provide a general-purpose method of inducing semantic parsers for multiple languages and logical meaning representations. Semantic parsers map sentences onto logical representations of their meanings and may form an important part of any computational task that needs to interpret the meanings of sentences. The other system is designed to model the way in which a child learns the semantics and syntax of their first language. Here, logical-forms are used to represent the potentially ambiguous context in which child-directed utterances are spoken and a psycholinguistically plausible training algorithm learns a probabilistic grammar that describes the target language. This computational modelling task is important as it can provide evidence for or against competing theories of how children learn their first language. Both of the systems presented here are based upon two working hypotheses. First, that the correct parse of any sentence in any language is contained in a set of possible parses defined in terms of the sentence itself, the sentence's logical-form and a small set of combinatory rule schemata. The second working hypothesis is that, given a corpus of (sentence, logical-form) pairs that each support a large number of possible parses according to the schemata mentioned above, it is possible to learn a probabilistic parsing model that accurately describes the target language. The algorithm for semantic parser induction learns Combinatory Categorial Grammar (CCG) lexicons and discriminative probabilistic parsing models from corpora of (sentence, logical-form) pairs. This system is shown to achieve at or near state-of-the-art performance across multiple languages, logical meaning representations and domains. As the approach is not tied to any single natural or logical language, this system represents an important step towards widely applicable black-box methods for semantic parser induction. This thesis also develops an efficient representation of the CCG lexicon that separately stores language-specific syntactic regularities and domain-specific semantic knowledge. This factorised lexical representation improves the performance of CCG-based semantic parsers in sparse domains and also provides a potential basis for lexical expansion and domain adaptation for semantic parsers. The algorithm for modelling child language acquisition learns a generative probabilistic model of CCG parses from sentences paired with a context set of potential logical-forms containing one correct entry and a number of distractors. The online learning algorithm used is intended to be psycholinguistically plausible and to assume as little information specific to the task of language learning as is possible. It is shown that this algorithm learns an accurate parsing model despite making very few initial assumptions. It is also shown that the manner in which both word-meanings and syntactic rules are learnt is in accordance with observations of both of these learning tasks in children, supporting a theory of language acquisition that builds upon the two working hypotheses stated above.
2

Stüber, Torsten. "Consistency of Probabilistic Context-Free Grammars." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-86943.

Abstract:
We present an algorithm for deciding whether an arbitrary proper probabilistic context-free grammar is consistent, i.e., whether the probability that a derivation terminates is one. Our procedure has time complexity O(n³) in the unit-cost model of computation. Moreover, we develop a novel characterization of consistent probabilistic context-free grammars. A simple corollary of our result is that training methods for probabilistic context-free grammars that are based on maximum-likelihood estimation always yield consistent grammars.
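A textbook example of the property being decided (general knowledge, not an example from the thesis): in the proper PCFG with rules S → S S (probability p) and S → a (probability 1 − p), the termination probability q is the least solution of

```latex
q = p\,q^{2} + (1 - p),
\qquad\text{hence}\qquad
q = \min\!\left(1,\ \frac{1-p}{p}\right),
```

so the grammar is consistent exactly when p ≤ 1/2. The corollary above says that maximum-likelihood training never lands in the inconsistent case.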
3

Afrin, Taniza. "Extraction of Basic Noun Phrases from Natural Language Using Statistical Context-Free Grammar." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/33353.

Abstract:
The objective of this research was to extract simple noun phrases from natural language texts using two different grammars: a stochastic context-free grammar (SCFG) and a non-statistical context-free grammar (CFG). Precision and recall were calculated to determine how many precise and correct noun phrases were extracted using these two grammars. Several text files containing sentences from English natural language specifications were analyzed manually to obtain the test set of simple noun phrases. To obtain precision and recall, this test set of manually extracted noun phrases was compared with the extracted sets of noun phrases obtained using both grammars, SCFG and CFG. A probabilistic chart parser was developed by modifying a deterministic parallel chart parser. Extraction of simple noun phrases with the SCFG was accomplished using this probabilistic chart parser, a dictionary containing word probabilities along with the meanings, context-free grammar rules associated with rule probabilities, and finally an algorithm to extract the most likely parses of a sentence. The probabilistic parsing algorithm and the algorithm to determine figures of merit were implemented using the C++ programming language.
Master of Science
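The evaluation measures referred to above are the standard ones; for concreteness (general definitions, not formulas specific to this thesis):

```latex
\text{precision} = \frac{|\text{extracted NPs} \cap \text{reference NPs}|}{|\text{extracted NPs}|},
\qquad
\text{recall} = \frac{|\text{extracted NPs} \cap \text{reference NPs}|}{|\text{reference NPs}|}.
```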
4

Hsu, Hsin-jen. "A neurophysiological study on probabilistic grammatical learning and sentence processing." Diss., University of Iowa, 2009. https://ir.uiowa.edu/etd/243.

Abstract:
Syntactic anomalies reliably elicit P600 effects in natural language processing. A survey of previous work converged on the conclusion that the mean amplitude of the P600 seems to be associated with the goodness of fit of a target word with the expectation generated from already unfolded material. Based on this characteristic of P600 effects, the current study aimed to look for evidence indicating the influence of input statistics in shaping grammatical knowledge/representations, leading in turn to probabilistically based competition/expectation-generation processes in online sentence processing. An artificial grammar learning (AGL) task with four conditions varying in their probabilities was used to test this hypothesis. Results from this task indicated graded mean amplitudes of the P600 effects across conditions, and the pattern of gradience is consistent with the variation in the input statistics. The use of the artificial language to simulate the natural language learning process was further justified by statistically indistinguishable P600 effects elicited in a natural language sentence processing (NLSP) task. Together, the results indicate that the same neural mechanisms are recruited for syntactic processing of both natural language stimuli and sentence strings in an artificial language.
5

Brookes, James William Rowe. "Probabilistic and multivariate modelling in Latin grammar : the participle-auxiliary alternation as a case study." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/probabilistic-and-multivariate-modelling-in-latin-grammar-the-participleauxiliary-alternation-as-a-case-study(4ff5b912-c410-41f2-94f2-859eb1ce5b21).html.

Abstract:
Recent research has shown that language is sensitive to probabilities and a whole host of multivariate conditioning factors. However, most of the research in this arena centres on the grammar of English, and, as yet, there is no statistical modelling of the grammar of Latin, studies of which have to date been largely philological. The rise of advanced statistical methodologies allows us to capture the underlying structure of the rich datasets which this corpus-only language can potentially offer. This thesis intends to remedy this deficit by applying probabilistic and multivariate models to a specific case study, namely the alternation of word order in Latin participle-auxiliary clusters (pacs), which alternate between participle-auxiliary order, as in mortuus est 'dead is', and auxiliary-participle order, as in est mortuus 'is dead'. The broad research questions to be explored in this thesis are the following: (i) To what extent are probabilistic models useful and reflective of Latin syntax variation phenomena? (ii) What are the most useful statistical models to use? (iii) What types of linguistic variables influence variation? (iv) What theoretical implications and explanations do the statistical models suggest? Against this backdrop, a dataset of 2,409 pac observations is extracted from Late Republican texts of the first century BC. The dataset is annotated for an "information space" of thirty-three predictor variables from various levels of linguistics: text- and lemma-based variability, prosody and phonology, grammar, semantics and pragmatics, and usage-based features such as frequency. The study exploits such statistical tools as generalized linear models and multilevel generalized linear models for the regression modelling of the binary categorical outcome. However, because of the potential collinearity and the many predictor terms, amongst other issues, the use of these models to assess the joint effect of all predictors is particularly problematic. As such, the newer statistical toolkit of random forests is utilized for evaluating the relative contribution of each predictor. Overall, it is found that Latin is indeed probabilistic in its grammar, and the conditioning factors that govern it are spread widely throughout the language space. It is also noted that probabilistic models, such as the ones used in this study, have practical applications in traditional areas of philology, including textual criticism and literary stylistics.
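Because the modelled outcome is binary (participle-auxiliary versus auxiliary-participle order), the generalized linear models referred to here take the usual logistic form (a generic formula, not one quoted from the thesis):

```latex
\log \frac{P(\text{participle before auxiliary})}{1 - P(\text{participle before auxiliary})}
  = \beta_0 + \sum_{k} \beta_k x_k ,
```

with one predictor x_k per variable in the thirty-three-variable information space and group-level (random) effects added in the multilevel variants.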
6

Buys, Jan Moolman. "Probabilistic tree transducers for grammatical error correction." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85592.

Abstract:
Thesis (MSc)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: We investigate the application of weighted tree transducers to correcting grammatical errors in natural language. Weighted finite-state transducers (FST) have been used successfully in a wide range of natural language processing (NLP) tasks, even though the expressiveness of the linguistic transformations they perform is limited. Recently, there has been an increase in the use of weighted tree transducers and related formalisms that can express syntax-based natural language transformations in a probabilistic setting. The NLP task that we investigate is the automatic correction of grammar errors made by English language learners. In contrast to spelling correction, which can be performed with a very high accuracy, the performance of grammar correction systems is still low for most error types. Commercial grammar correction systems mostly use rule-based methods. The most common approach in recent grammatical error correction research is to use statistical classifiers that make local decisions about the occurrence of specific error types. The approach that we investigate is related to a number of other approaches inspired by statistical machine translation (SMT) or based on language modelling. Corpora of language learner writing annotated with error corrections are used as training data. Our baseline model is a noisy-channel FST model consisting of an n-gram language model and a FST error model, which performs word insertion, deletion and replacement operations. The tree transducer model we use to perform error correction is a weighted top-down tree-to-string transducer, formulated to perform transformations between parse trees of correct sentences and incorrect sentences. Using an algorithm developed for syntax-based SMT, transducer rules are extracted from training data of which the correct version of sentences have been parsed. Rule weights are also estimated from the training data. Hypothesis sentences generated by the tree transducer are reranked using an n-gram language model. We perform experiments to evaluate the performance of different configurations of the proposed models. In our implementation an existing tree transducer toolkit is used. To make decoding time feasible sentences are split into clauses and heuristic pruning is performed during decoding. We consider different modelling choices in the construction of transducer rules. The evaluation of our models is based on precision and recall. Experiments are performed to correct various error types on two learner corpora. The results show that our system is competitive with existing approaches on several error types.
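The baseline noisy-channel model described in the abstract corresponds to the usual decision rule (a standard formulation, not copied from the thesis):

```latex
\hat{c} = \operatorname*{arg\,max}_{c} \; P(c)\, P(s \mid c),
```

where s is the learner's sentence, P(c) is the n-gram language model over corrected sentences, and P(s | c) is the error model: word-level FST operations in the baseline, and a weighted tree-to-string transducer over parse trees in the proposed system.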
7

Shan, Yin. "Program distribution estimation with grammar models." Awarded by: University of New South Wales - Australian Defence Force Academy, School of Information Technology and Electrical Engineering, 2005. http://handle.unsw.edu.au/1959.4/38737.

Abstract:
This thesis studies grammar-based approaches in the application of Estimation of Distribution Algorithms (EDA) to the tree representation widely used in Genetic Programming (GP). Although EDA is becoming one of the most active fields in Evolutionary computation (EC), the solution representation in most EDA is a Genetic Algorithms (GA) style linear representation. The more complex tree representations, resembling GP, have received only limited exploration. This is unfortunate, because tree representations provide a natural and expressive way of representing solutions for many problems. This thesis aims to help fill this gap, exploring grammar-based approaches to extending EDA to GP-style tree representations. This thesis firstly provides a comprehensive survey of current research on EDA with emphasis on EDA with GP-style tree representation. The thesis attempts to clarify the relationship between EDA with conventional linear representations and those with a GP-style tree representation, and to reveal the unique difficulties which face this research. Secondly, the thesis identifies desirable properties of probabilistic models for EDA with GP-style tree representation, and derives the PRODIGY framework as a consequence. Thirdly, following the PRODIGY framework, three methods are proposed. The first method is Program Evolution with Explicit Learning (PEEL). Its incremental general-to-specific grammar learning method balances the effectiveness and efficiency of the grammar learning. The second method is Grammar Model-based Program Evolution (GMPE). GMPE realises the PRODIGY framework by introducing elegant inference methods from the formal grammar field. GMPE provides good performance on some problems, but also provides a means to better understand some aspects of conventional GP, especially the building block hypothesis. The third method is Swift GMPE (sGMPE), which is an extension of GMPE, aiming at reducing the computational cost. Fourthly, a more accurate Minimum Message Length metric for grammar learning in PRODIGY is derived in this thesis. This metric leads to improved performance in the GMPE system, but may also be useful in grammar learning in general. It is also relevant to the learning of other probabilistic graphical models.
8

Pinnow, Eleni. "The role of probabilistic phonotactics in the recognition of reduced pseudowords." Diss., online access via UMI, 2009.

9

Mora, Randall P., and Jerry L. Hill. "Service-Based Approach for Intelligent Agent Frameworks." International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595661.

Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
This paper describes a service-based Intelligent Agent (IA) approach for machine learning and data mining of distributed heterogeneous data streams. We focus on an open architecture framework that enables the programmer/analyst to build an IA suite for mining, examining and evaluating heterogeneous data for semantic representations, while iteratively building the probabilistic model in real-time to improve predictability. The Framework facilitates model development and evaluation while delivering the capability to tune machine learning algorithms and models to deliver increasingly favorable scores prior to production deployment. The IA Framework focuses on open standard interoperability, simplifying integration into existing environments.
10

Torres, Parra Jimena Cecilia. "A Perception Based Question-Answering Architecture Derived from Computing with Words." Available to subscribers only, 2009. http://proquest.umi.com/pqdweb?did=1967797581&sid=1&Fmt=2&clientId=1509&RQT=309&VName=PQD.


Books on the topic "Probabilistic grammar"

1

Bunt, Harry, and Anton Nijholt, eds. Advances in probabilistic and other parsing technologies. Dordrecht: Kluwer Academic Publishers, 2000.

2

Bunt, Harry. Advances in Probabilistic and Other Parsing Technologies. Dordrecht: Springer Netherlands, 2000.

3

Liang, Percy, Michael Jordan, and Dan Klein. Probabilistic grammars and hierarchical Dirichlet processes. Edited by Anthony O'Hagan and Mike West. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198703174.013.27.

Abstract:
This article focuses on the use of probabilistic context-free grammars (PCFGs) in natural language processing involving a large-scale natural language parsing task. It describes detailed, highly-structured Bayesian modelling in which model dimension and complexity responds naturally to observed data. The framework, termed hierarchical Dirichlet process probabilistic context-free grammar (HDP-PCFG), involves structured hierarchical Dirichlet process modelling and customized model fitting via variational methods to address the problem of syntactic parsing and the underlying problems of grammar induction and grammar refinement. The central object of study is the parse tree, which can be used to describe a substantial amount of the syntactic structure and relational semantics of natural language sentences. The article first provides an overview of the formal probabilistic specification of the HDP-PCFG, algorithms for posterior inference under the HDP-PCFG, and experiments on grammar learning run on the Wall Street Journal portion of the Penn Treebank.
4

Bunt, H., and Anton Nijholt, eds. Advances in Probabilistic and Other Parsing Technologies (Text, Speech and Language Technology, Volume 16). Springer, 2000.

5

Dresher, B. Elan, and Harry van der Hulst, eds. The Oxford History of Phonology. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198796800.001.0001.

Abstract:
This volume is an up-to-date history of phonology from the earliest known examples of phonological thinking through the rise of phonology as a field in the 20th century and up to the present time. The volume is divided into five parts. Part I, Early insights in phonology, begins with writing systems and has chapters devoted to the great ancient and medieval intellectual traditions of phonological thought that form the foundation of later thinking and continue to enrich phonological theory. Part II, The founders of phonology, describes the important schools and individuals of the late nineteenth and early twentieth centuries who shaped phonology as an organized scientific field. Part III takes up Mid-twentieth-century developments in phonology in the Soviet Union, Northern and Western Europe, and North America; it continues with precursors to generative grammar, and culminates in a chapter on Chomsky & Halle’s The Sound Pattern of English (SPE). Part IV, Phonology after SPE, shows how phonological theorists responded to SPE with respect to derivations, representations, and phonology-morphology interaction. Theories discussed include Dependency Phonology, Government Phonology, Constraint-and-Repair theories, and Optimality Theory. This part ends with a chapter on the study of variation. Part V, New methods and approaches, has chapters on phonetic explanation, corpora and phonological analysis, probabilistic phonology, computational modelling, models of phonological learning, and the evolution of phonology. This exploration of the history of phonology from various viewpoints provides new perspectives on where phonology has been and throws light on where it is going.

Book chapters on the topic "Probabilistic grammar"

1

Kanchan Devi, K., and S. Arumugam. "Probabilistic Conjunctive Grammar." In Theoretical Computer Science and Discrete Mathematics, 119–27. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-64419-6_16.

2

Wong, Pak-Kan, Man-Leung Wong, and Kwong-Sak Leung. "Learning Grammar Rules in Probabilistic Grammar-Based Genetic Programming." In Theory and Practice of Natural Computing, 208–20. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-49001-4_17.

3

Eshghi, Arash, Matthew Purver, Julian Hough, and Yo Sato. "Probabilistic Grammar Induction in an Incremental Semantic Framework." In Constraint Solving and Language Processing, 92–107. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41578-4_6.

4

Araujo, L. "Evolutionary Parsing for a Probabilistic Context Free Grammar." In Rough Sets and Current Trends in Computing, 590–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45554-x_74.

5

Kim, Hyun-Tae, and Chang Wook Ahn. "A New Grammatical Evolution Based on Probabilistic Context-free Grammar." In Proceedings in Adaptation, Learning and Optimization, 1–12. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-13356-0_1.

6

Houshmand, Shiva, and Sudhir Aggarwal. "Using Personal Information in Targeted Grammar-Based Probabilistic Password Attacks." In Advances in Digital Forensics XIII, 285–303. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-67208-3_16.

7

Csuhaj-Varjú, Erzsébet, and Jürgen Dassow. "On the Size of Components of Probabilistic Cooperating Distributed Grammar Systems." In Theory Is Forever, 49–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-27812-2_5.

8

Goodman, Joshua. "Probabilistic Feature Grammars." In Text, Speech and Language Technology, 63–84. Dordrecht: Springer Netherlands, 2000. http://dx.doi.org/10.1007/978-94-015-9470-7_4.

9

Mosbah, Mohamed. "Probabilistic graph grammars." In Graph-Theoretic Concepts in Computer Science, 236–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/3-540-56402-0_51.

10

Saranyadevi, S., R. Murugeswari, S. Bathrinath, and M. S. Sabitha. "Hybrid Association Rule Miner Using Probabilistic Context-Free Grammar and Ant Colony Optimization for Rainfall Prediction." In Advances in Intelligent Systems and Computing, 683–95. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-16657-1_64.


Conference papers on the topic "Probabilistic grammar"

1

Kim, Yoon, Chris Dyer, and Alexander Rush. "Compound Probabilistic Context-Free Grammars for Grammar Induction." In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA, USA: Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/p19-1228.

2

Naganuma, Hiroaki, Diptarama Hendrian, Ryo Yoshinaka, Ayumi Shinohara, and Naoki Kobayashi. "Grammar Compression with Probabilistic Context-Free Grammar." In 2020 Data Compression Conference (DCC). IEEE, 2020. http://dx.doi.org/10.1109/dcc47342.2020.00093.

3

Pu, Xiaoying, and Matthew Kay. "A Probabilistic Grammar of Graphics." In CHI '20: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3313831.3376466.

4

Wong, Pak-Kan, Man-Leung Wong, and Kwong-Sak Leung. "Probabilistic grammar-based deep neuroevolution." In GECCO '19: Genetic and Evolutionary Computation Conference. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3319619.3326778.

5

Xiong, Hanwei, Jun Xu, Chenxi Xu, and Ming Pan. "Automating 3D reconstruction using a probabilistic grammar." In Applied Optics and Photonics China (AOPC2015), edited by Chunhua Shen, Weiping Yang, and Honghai Liu. SPIE, 2015. http://dx.doi.org/10.1117/12.2202966.

6

Saparov, Abulhair, Vijay Saraswat, and Tom Mitchell. "A Probabilistic Generative Grammar for Semantic Parsing." In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017). Stroudsburg, PA, USA: Association for Computational Linguistics, 2017. http://dx.doi.org/10.18653/v1/k17-1026.

7

Cekan, Ondrej, Jakub Podivinsky, and Zdenek Kotasek. "Program Generation Through a Probabilistic Constrained Grammar." In 2018 21st Euromicro Conference on Digital System Design (DSD). IEEE, 2018. http://dx.doi.org/10.1109/dsd.2018.00049.

8

Kawabata, Takeshi. "Dynamic probabilistic grammar for spoken language disambiguation." In 3rd International Conference on Spoken Language Processing (ICSLP 1994). ISCA: ISCA, 1994. http://dx.doi.org/10.21437/icslp.1994-211.

9

Devi, K. Kanchan, and S. Arumugam. "Password Cracking Algorithm using Probabilistic Conjunctive Grammar." In 2019 IEEE International Conference on Intelligent Techniques in Control, Optimization and Signal Processing (INCOS). IEEE, 2019. http://dx.doi.org/10.1109/incos45849.2019.8951390.

10

"Probabilistic Regular Grammar Inference Algorithm Using Incremental Technique." In 2018 the 8th International Workshop on Computer Science and Engineering. WCSE, 2018. http://dx.doi.org/10.18178/wcse.2018.06.129.


Reports on the topic "Probabilistic grammar"

1

Lafferty, John, Daniel Sleator, and Davy Temperley. Grammatical Trigrams: A Probabilistic Model of Link Grammar. Fort Belvoir, VA: Defense Technical Information Center, September 1992. http://dx.doi.org/10.21236/ada256365.

