Selection of scientific literature on the topic "Regular Grammar Induction"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose the type of source:

Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Regular Grammar Induction".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and a bibliographic reference for the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read its online annotation, provided the relevant parameters are present in the metadata.

Journal articles on the topic "Regular Grammar Induction":

1

Liu, Linlin. "English Pedagogical Grammar: Teaching Present Perfect and Present Perfect Continuous by Deductive and Inductive Approaches". Studies in English Language Teaching 8, No. 3 (August 22, 2020): p138. http://dx.doi.org/10.22158/selt.v8n3p138.

Annotation:
This study presents English pedagogical grammar teaching, discussing the use and form of the present perfect and present perfect continuous tenses, as well as the past participles of regular and irregular verbs. It rests on two main assumptions about what causes difficulty for learners of English: the forms of verbs, and the difficulty of distinguishing the present perfect from the past simple tense. The study discusses the use of deductive and inductive approaches in English pedagogical grammar teaching and evaluates them in terms of A-factor and E-factor descriptions. Overall, the analysis shows that both deductive and inductive approaches are helpful in language teaching and learning, and that verb forms and the differences between the present perfect and the past simple tense make English learning difficult. With appropriate teaching methods, English grammar can be taught and learned efficiently.
2

Hussain, S. Ansar, and R. V. Jayanth Kasyap. "Tools of Language Learning - A Pedagogical Perspective". International Journal For Multidisciplinary Research 6, No. 1 (February 29, 2024). http://dx.doi.org/10.36948/ijfmr.2024.v06i01.11595.

Annotation:
Recent technological breakthroughs have made English language learning more receptive, accessible, exciting, and, in terms of implementation, challenging at the same time. This advancement has made the task more interesting, interactive, and thrilling. Though electronic devices cannot substitute for a resourceful teacher, they can certainly help improve receptive skills and at the same time serve as remedial teaching measures for average learners. One such active resource is YouTube videos teaching English grammar to higher secondary students. Lessons on core grammar topics such as articles and prepositions are presented in an active and engaging manner. These short videos have received considerable attention due to their effective presentation with the help of contextual animated pictures. Mostly inductive methods are applied in these grammar tutorial classes, and the videos are released at regular intervals. Furthermore, some YouTube channels organize live sessions, connecting with their subscribers through SMS (Short Message Service) alerts. Overall, the videos are designed in a way that also accommodates slow learners, who can replay the recordings as many times as they need. Nowadays, watching YouTube videos is quite a common phenomenon, and this resource precisely meets the specific needs of students. Thus, digital devices play an important role in the process of teaching and learning. The paper analyses a few English grammar learning videos for higher secondary school students in terms of their usefulness and effective pedagogical aspects.

Dissertations on the topic "Regular Grammar Induction":

1

Grand, Maxence. "Apprentissage de Modèle d'Actions basé sur l'Induction Grammaticale Régulière pour la Planification en Intelligence Artificielle". Electronic Thesis or Diss., Université Grenoble Alpes, 2022. http://www.theses.fr/2022GRALM044.

Annotation:
The field of artificial intelligence aims to design and build autonomous agents able to perceive, learn, and act without any human intervention in order to perform complex tasks. To perform such tasks, an autonomous agent must plan the best possible actions and execute them, and to do so it needs an action model: a semantic representation of the actions it can execute. In an action model, an action is represented using (1) a precondition, the set of conditions that must be satisfied for the action to be executed, and (2) effects, the set of properties of the world that will be altered by the execution of the action. STRIPS planning is a classical method for designing these action models. However, STRIPS action models are generally too restrictive to be used in real-world applications. There are other forms of action models: temporal action models, which represent actions that can be executed concurrently; HTN action models, which represent actions as tasks and subtasks; and so on. These models are less restrictive, but the less restrictive a model is, the more difficult it is to design. This thesis is concerned with approaches, based on machine learning techniques, that facilitate the acquisition of these action models. It presents AMLSI (Action Model Learning with State machine Interaction), an approach to action model learning based on regular grammar induction. First, it shows that AMLSI can learn STRIPS action models and demonstrates the properties that make the approach effective: robustness, convergence, low training-data requirements, and the quality of the learned models. Second, it proposes two extensions, for learning temporal action models and HTN action models.
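The annotation above defines an action model by its preconditions and effects. That representation can be sketched in a few lines of Python; this is an illustrative sketch only (the action name and predicates are a hypothetical blocks-world example, not part of the AMLSI learner itself):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A STRIPS-style action: a precondition set, add effects, and delete effects."""
    name: str
    preconditions: frozenset
    add_effects: frozenset
    del_effects: frozenset

    def applicable(self, state: frozenset) -> bool:
        # The action can fire only if every precondition holds in the state.
        return self.preconditions <= state

    def apply(self, state: frozenset) -> frozenset:
        # Effects: remove deleted facts, then add the new ones.
        return (state - self.del_effects) | self.add_effects

# Hypothetical example: picking up a block from the table.
pickup = Action(
    name="pickup(a)",
    preconditions=frozenset({"clear(a)", "ontable(a)", "handempty"}),
    add_effects=frozenset({"holding(a)"}),
    del_effects=frozenset({"clear(a)", "ontable(a)", "handempty"}),
)

state = frozenset({"clear(a)", "ontable(a)", "handempty"})
new_state = pickup.apply(state)  # state after executing the action
```

Learning such a model, as the thesis does, amounts to recovering the precondition and effect sets from observed action sequences.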
2

Packer, Thomas L. „Scalable Detection and Extraction of Data in Lists in OCRed Text for Ontology Population Using Semi-Supervised and Unsupervised Active Wrapper Induction“. BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4258.

Annotation:
Lists of records in machine-printed documents contain much useful information. As one example, the thousands of family history books scanned, OCRed, and placed online by FamilySearch.org probably contain hundreds of millions of fact assertions about people, places, family relationships, and life events. Data like this cannot be fully utilized until a person or process locates the data in the document text, extracts it, and structures it with respect to an ontology or database schema. Yet, in the family history industry and other industries, data in lists goes largely unused because no known approach adequately addresses all of the costs, challenges, and requirements of a complete end-to-end solution to this task. The diverse information is costly to extract because many kinds of lists appear even within a single document, differing from each other in both structure and content. The lists' records and component data fields are usually not set apart explicitly from the rest of the text, especially in a corpus of OCRed historical documents. OCR errors and the lack of document structure (e.g. HTML tags) make list content hard to recognize by a software tool developed without a substantial amount of highly specialized, hand-coded knowledge or machine learning supervision. Making an approach that is not only accurate but also sufficiently scalable in terms of time and space complexity to process a large corpus efficiently is especially challenging. In this dissertation, we introduce a novel family of scalable approaches to list discovery and ontology population. Its contributions include the following. We introduce the first general-purpose methods of which we are aware for both list detection and wrapper induction for lists in OCRed or other plain text.
We formally outline a mapping between in-line labeled text and populated ontologies, effectively reducing the ontology population problem to a sequence labeling problem, opening the door to applying sequence labelers and other common text tools to the goal of populating a richly structured ontology from text. We provide a novel admissible heuristic for inducing regular expression wrappers using an A* search. We introduce two ways of modeling list-structured text with a hidden Markov model. We present two query strategies for active learning in a list-wrapper induction setting. Our primary contributions are two complete and scalable wrapper-induction-based solutions to the end-to-end challenge of finding lists, extracting data, and populating an ontology. The first has linear time and space complexity and extracts highly accurate information at a low cost in terms of user involvement. The second has time and space complexity that are linear in the size of the input text and quadratic in the length of an output record and achieves higher F1-measures for extracted information as a function of supervision cost. We measure the performance of each of these approaches and show that they perform better than strong baselines, including variations of our own approaches and a conditional random field-based approach.
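The mapping the abstract describes, from in-line labeled text to a populated record, can be sketched roughly as follows; the tag names and record format here are hypothetical illustrations, not Packer's actual scheme:

```python
import re

# Hypothetical in-line labeled text: each extracted field is wrapped in
# <Label>...</Label> tags, mirroring the idea of reducing ontology
# population to a labeling problem over the raw text.
labeled = "<Name>John Smith</Name> b. <BirthYear>1842</BirthYear>, <Place>Vermont</Place>"

def to_record(text: str) -> dict:
    """Collect every labeled span into a field -> value record."""
    # The backreference \1 forces the closing tag to match the opening one.
    return {m.group(1): m.group(2) for m in re.finditer(r"<(\w+)>(.*?)</\1>", text)}

record = to_record(labeled)
```

Once text is labeled this way, any sequence labeler can drive extraction, which is the door the dissertation says this reduction opens.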
3

Gebhardt, Kilian. "Induction, Training, and Parsing Strategies beyond Context-free Grammars". 2019. https://tud.qucosa.de/id/qucosa%3A71398.

Annotation:
This thesis considers the problem of assigning a sentence its syntactic structure, which may be discontinuous. It proposes a class of models based on probabilistic grammars that are obtained by the automatic refinement of a given grammar. Different strategies for parsing with a refined grammar are developed. The induction, refinement, and application of two types of grammars (linear context-free rewriting systems and hybrid grammars) are evaluated empirically on two German and one Dutch corpus.

Book chapters on the topic "Regular Grammar Induction":

1

Unold, Olgierd. "Regular Language Induction with Grammar-based Classifier System". In Engineering the Computer Science and IT. InTech, 2009. http://dx.doi.org/10.5772/7768.


Conference papers on the topic "Regular Grammar Induction":

1

Belcak, Peter, David Hofer, and Roger Wattenhofer. "A Neural Model for Regular Grammar Induction". In 2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2022. http://dx.doi.org/10.1109/icmla55696.2022.00064.

