Table of contents
Selection of scholarly literature on the topic "Reasonning"
Cite a source in APA, MLA, Chicago, Harvard, or another citation style
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Reasonning".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in its metadata.
Journal articles on the topic "Reasonning"
Bénatouïl, Thomas. „Épictète et la doctrine des indifférents et du telos d’Ariston à Panétius“. Elenchos 40, Nr. 1 (06.08.2019): 99–121. http://dx.doi.org/10.1515/elen-2019-0004.
Nihad, El Ghouch, Kouissi Mohamed und En-Naimi El Mokhtar. „Designing and modeling of a multi-agent adaptive learning system (MAALS) using incremental hybrid case-based reasoning (IHCBR)“. International Journal of Electrical and Computer Engineering (IJECE) 10, Nr. 2 (01.04.2020): 1980. http://dx.doi.org/10.11591/ijece.v10i2.pp1980-1992.
Molitor, Christian. „Zur Frage der realwirtschaftlichen Konvergenz in der Europäischen Union“. Zeitschrift für Wirtschaftspolitik 46, Nr. 3 (01.01.1997). http://dx.doi.org/10.1515/zfwp-1997-0306.
Dissertations and theses on the topic "Reasonning"
Hashem, Hadi. „Modélisation intégratrice du traitement BigData“. Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLL005/document.
Nowadays, multiple actors of Internet technology are producing very large amounts of data. Sensors, social media and e-commerce all generate information that extends in real time along the 3 Vs of Gartner: Volume, Velocity and Variety. In order to exploit this data efficiently, it is important to keep track of the dynamic aspect of its chronological evolution by means of two main approaches: first, polymorphism, a dynamic model able to support type changes every second while still processing successfully; and second, support for data volatility through an intelligent model that takes into consideration key data, salient and valuable at a specific moment, without processing the whole volume of historical and up-to-date data. The primary goal of this study is to establish, based on these approaches, an integrative vision of the data life cycle organised in three steps: (1) data synthesis, by selecting key values from micro-data acquired by different data-source operators; (2) data fusion, by sorting and duplicating the selected key values in a de-normalised form in order to obtain faster processing; and (3) data transformation into a specific "map of maps of maps" format, via Hadoop and the standard MapReduce process, in order to define the related graph in the application layer. In addition, this study is supported by a software prototype that uses the modelling tools described above as a toolbox, comparable to automatic programming software, and allows a customised Big Data processing chain to be created.
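For illustration only, and not the prototype described in the thesis, the following Python sketch shows one way a "map of maps of maps" structure could be produced by a MapReduce-style pass; the field names (source, entity, timestamp) and the values are assumptions made up for the example.

```python
from collections import defaultdict

# Hypothetical micro-data records selected during the "data synthesis" step.
records = [
    {"source": "sensor", "entity": "device-42", "timestamp": "10:00", "value": 17.3},
    {"source": "sensor", "entity": "device-42", "timestamp": "10:05", "value": 18.1},
    {"source": "social", "entity": "user-7",    "timestamp": "10:00", "value": "post"},
]

def map_phase(record):
    # Emit a composite key mirroring the three nesting levels.
    yield (record["source"], record["entity"], record["timestamp"]), record["value"]

def reduce_phase(pairs):
    # Fold the pairs into a map of maps of maps: source -> entity -> timestamp -> value.
    nested = defaultdict(lambda: defaultdict(dict))
    for (source, entity, timestamp), value in pairs:
        nested[source][entity][timestamp] = value
    return nested

pairs = [kv for record in records for kv in map_phase(record)]
print(dict(reduce_phase(pairs)["sensor"]["device-42"]))
# {'10:00': 17.3, '10:05': 18.1}
```

The de-normalised nesting duplicates keys but keeps each lookup local, which is the trade-off the "data fusion" step in the annotation alludes to.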
Yang, Hui. „Knowledge extraction from large ontologies“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG033.
Because widely used real-world ontologies are often complex and large, one crucial challenge has emerged: designing tools that let users focus on the sub-ontologies corresponding to their specific interests. To this end, this work investigates three different approaches to extracting knowledge from large ontologies: (1) justification, a minimal sub-ontology of the original ontology from which a specific conclusion follows; (2) deductive module, a sub-ontology that preserves all entailments with respect to a given vocabulary capturing the user's interest; and (3) general module, a new ontology, not necessarily a sub-ontology, that is guaranteed to produce the same set of entailments as the original one over a given vocabulary. For computing justifications and deductive modules, we propose SAT-based methods that proceed in two steps: (i) encoding the derivation of justifications (resp. deductive modules) as Horn clauses; (ii) computing justifications (resp. deductive modules) by resolution over these Horn clauses. For encoding the derivation of justifications, we construct a graph representation of ontologies and propose a new set of inference rules that are more compact than existing ones. For encoding the derivation of deductive modules, we introduce a new notion, the forest, which relies on a graph representation capturing all the logical entailments over a given vocabulary. For computing general modules, we propose a new resolution-based method inspired by the existing approach for computing uniform interpolants; this method is, in general, more efficient and generates modules of better quality. Finally, each proposed method has been evaluated with a prototype implementation tested on large real-world ontologies, and the experimental results have been compared to those obtained with state-of-the-art methods, showing the advantages of our methods in terms of efficiency and quality.
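As a hedged illustration of the general idea, and not the SAT encoding actually used in the thesis, the sketch below forward-chains over a handful of made-up Horn clauses (each standing for an atomic subsumption) and records which axioms were used to derive a conclusion, i.e. a crude, not necessarily minimal, justification.

```python
# Each Horn clause is (body, head): the conjunction of the body atoms implies the head.
# The atoms are invented atomic subsumptions; the axiom names are hypothetical labels.
axioms = {
    "ax1": (set(),              "A subClassOf B"),
    "ax2": ({"A subClassOf B"}, "A subClassOf C"),
    "ax3": ({"A subClassOf C"}, "A subClassOf D"),
    "ax4": (set(),              "E subClassOf F"),   # irrelevant to the goal below
}

def justify(goal):
    """Forward-chain (unit propagation) and return the axioms supporting `goal`, or None."""
    support = {}          # derived atom -> set of axiom names used to derive it
    changed = True
    while changed:
        changed = False
        for name, (body, head) in axioms.items():
            if head not in support and body <= support.keys():
                support[head] = {name}.union(*(support[b] for b in body)) if body else {name}
                changed = True
    return support.get(goal)

print(justify("A subClassOf D"))   # e.g. {'ax1', 'ax2', 'ax3'} (set order may vary)
```

Forward chaining stands in here for the resolution step over Horn clauses described above; a real justification algorithm would additionally minimise the returned set.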
Baro, Johanna. „Modélisation multi-échelles de la morphologie urbaine à partir de données carroyées de population et de bâti“. Thesis, Paris Est, 2015. http://www.theses.fr/2015PEST1004/document.
For the past couple of decades, the relationships between urban form and travel patterns have been central to reflection on sustainable urban planning and transport policy. In this context, the increasing availability of regular grid data offers a new perspective for modelling urban structures from density measurements freed from the constraints of administrative divisions. Population density data are now available on 200-metre grids covering France. We complete these data with built-area densities in order to propose two types of classified images suited to the study of travel patterns and urban development: classifications of urban fabrics and classifications of morphotypes of urban development. The construction of such classified images rests on theoretical and experimental choices which raise methodological issues regarding the classification of statistically diverse urban spaces. To process those spaces exhaustively, we propose a per-pixel classification method for urban fabrics based on supervised transfer learning, in which hidden Markov random fields are used to take into account the dependencies in the spatial data. The classifications of morphotypes are then obtained by broadening the knowledge of urban fabrics; these classifications are formalised from chorematic theoretical models and implemented by qualitative spatial reasoning. Analysing these classifications with quantitative spatial and factor-analysis methods allowed us to reveal the morphological diversity of 50 metropolitan areas, and it highlights the relevance of these classifications for characterising urban areas with respect to various development issues related to density or multipolar development.
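The sketch below is only meant to make the per-pixel classification step concrete; it ignores the hidden Markov random field and transfer-learning components, assumes scikit-learn is available, and uses invented density values and class labels.

```python
from sklearn.neighbors import KNeighborsClassifier

# Each grid cell (pixel) is described by (population density, built-area density); toy values.
X_train = [[120, 0.35], [8500, 0.80], [30, 0.05], [4200, 0.60], [15, 0.02], [300, 0.40]]
y_train = ["suburban", "dense urban", "rural", "dense urban", "rural", "suburban"]

# A plain k-NN classifier stands in for the supervised step; the thesis additionally models
# spatial dependencies between neighbouring cells with hidden Markov random fields.
classifier = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

print(classifier.predict([[5000, 0.70], [50, 0.10]]))   # ['dense urban' 'rural']
```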
El, Hader Carla. „L'effet du guidage dans l'environnement GeoGebra et au niveau du raisonnement déductif : une propédeutique à la résolution des problèmes de démonstration de géométrie plane en 6e dans les écoles libanaises francophones homologuées“. Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM3053.
Our research addresses the problem of learning geometry in grade 6, and in particular the difficulties raised by solving proof (demonstration) problems. Our goal is to study the cognitive functioning of students in terms of the knowledge they mobilise and the cognitive load generated by problem solving, in order to put in place a strategy that remedies students' difficulties and optimises intellectual performance in geometry problem-solving situations. Drawing on theories from cognitive psychology (instrumentation theory, cognitive load theory, etc.) and from didactics (the theory of situations and the theory of conceptual fields), we made the assumption that a cognitive analysis of the student's activity in the paper-and-pencil environment allows us to collect the relevant indicators for identifying the types of knowledge whose mobilisation proves problematic for students, as well as the elements of the task that generate a high cognitive load. From the elements identified, we designed and tested a specific form of guidance in the dynamic geometry environment GeoGebra for solving proof problems, covering both the construction of figures and the development of deductive reasoning.
Louarn, Amaury. „A topological approach to virtual cinematography“. Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S063.
Research in the domain of virtual cinematography has mostly focused on specific aspects of cinematography and fails to take into account the interdependencies between all the entities in the scene. Indeed, while most approaches account for the fact that the cameras and lights are constrained by the characters, most fail to acknowledge that the characters are also constrained by the cameras and lights. In this thesis we tackle these interdependencies by modelling the relations between the entities and the topology of the environment. To this end, we propose a language for formally describing a scene through high-level constraints that represent entity relations, with associated formal operators that can be used to enforce these constraints through geometry. Our second contribution is a cinematographic staging system that generates staging configurations in a virtual environment from a description written in our formal language. Our third contribution is a real-time camera placement system that builds on a subset of our formal language and generates camera tracks in a virtual environment that can be used to guide the camera in real time.
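To make "enforcing relation constraints through geometry" concrete, here is a small hedged sketch; the constraint names, thresholds and scene coordinates are invented and do not reflect the language proposed in the thesis.

```python
import math

def distance(a, b):
    return math.dist(a, b)

def visible(camera, target, occluders, radius=1.0):
    """Crude visibility test: no occluder centre lies within `radius` of the camera-target segment."""
    cx, cy = camera
    tx, ty = target
    sx, sy = tx - cx, ty - cy
    seg_len2 = sx * sx + sy * sy or 1e-9
    for ox, oy in occluders:
        # Project the occluder onto the segment, clamp to it, and measure the gap.
        t = max(0.0, min(1.0, ((ox - cx) * sx + (oy - cy) * sy) / seg_len2))
        closest = (cx + t * sx, cy + t * sy)
        if math.dist((ox, oy), closest) < radius:
            return False
    return True

# Hypothetical 2D scene: one character, one wall element, two candidate camera positions.
character, wall = (0.0, 0.0), (2.0, 0.1)
candidates = [(4.0, 0.0), (0.0, 4.0)]

# Constraints: a medium-shot distance (between 3 and 6 units) and an unoccluded line of sight.
for cam in candidates:
    ok = 3.0 <= distance(cam, character) <= 6.0 and visible(cam, character, [wall])
    print(cam, "satisfies the constraints" if ok else "is rejected")
```

A solver such as the one described in the annotation would search over many candidate placements instead of testing two hand-picked ones.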
Vandecasteele, Arnaud. „Modélisation ontologique des connaissances expertes pour l'analyse de comportements à risque : application à la surveillance maritime“. Phd thesis, Ecole Nationale Supérieure des Mines de Paris, 2012. http://pastel.archives-ouvertes.fr/pastel-00819259.
Gouia, Mouna. „Proposition d’une approche d’apprentissage de la foule au sein des plateformes Crowdsourcing (Cas d’une plateforme de Backlinks)“. Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4349.
This thesis belongs to an innovative line of research in engineering and the management of information systems: it articulates aspects of four research disciplines, including Computer Science, Information Systems and the Human Sciences, together with practical aspects related to Web 2.0 companies. "Crowdsourcing", as its name suggests, refers to sourcing by the crowd. Studies and research on this topic are infrequent, but those that exist confirm the managerial interest of crowdsourcing platforms, thanks to their undeniable role in value creation. Nevertheless, the crowd is a heterogeneous group of amateurs, which is why it is also a source of incompetence. Our working hypothesis posits that crowd learning stimulates value creation in crowdsourcing platforms. Our work is therefore organised mainly around the design and development of a crowd-learning tool for crowdsourcing platforms. This work is complex and involves both research and practical engineering, which is why we chose an exploratory, qualitative, constructivist approach and an engineering-based ("ingénierique") research method to define a learning approach suited to crowdsourcing platforms and then implement it within our test crowdsourcing platform specialising in backlinking. Experiments based on semi-structured interviews will confirm or refute our hypotheses.
Ben, Rabah Nourhène. „APPROCHE INTELLIGENTE À BASE DE RAISONNEMENT À PARTIR DE CAS POUR LE DIAGNOSTIC EN LIGNE DES SYSTÈMES AUTOMATISÉS DE PRODUCTION“. Thesis, Reims, 2018. http://www.theses.fr/2018REIMS036/document.
Automated production systems (APS) represent an important class of industrial systems that are increasingly complex, given the large number of interactions and interconnections between their different components. As a result, they are more susceptible to malfunctions, whose consequences can be significant in terms of productivity, safety and production quality. A major challenge is to develop an intelligent approach that can be used to diagnose these systems in order to ensure their operational safety. In this thesis, we are only interested in the diagnosis of APS with discrete dynamics. The first chapter presents these systems, their possible malfunctions and the terminology used for diagnosis. We then present a state of the art of the existing methods for diagnosing this class of systems, together with a synthesis of these methods. This synthesis motivated the choice of a data-based approach relying on a machine-learning technique, Case-Based Reasoning (CBR). For this reason, the second chapter presents a state of the art of machine learning and its different methods, focusing mainly on CBR and its uses for the diagnosis of industrial systems. This study allowed us to propose, in Chapter 3, a case-based decision-support system for the diagnosis of APS. The system is organised into an online block and an offline block. The offline block is used to define a case representation format and to build a Normal Case Base (NCB) and a Faulty Case Base (FCB) from a historical database. The online block helps human monitoring operators make the most appropriate diagnosis decision. The experimental results obtained on a sorting system exposed the weak points of this approach, which reside in the proposed case representation format and in the case base used. To solve these problems and improve the results, a new case representation format is proposed in Chapter 4. Following this format, the cases of the initial case base are built from the data acquired from the simulated system after its emulation in normal and faulty modes. A reasoning and incremental-learning phase is then presented; it supports system diagnosis and enriches the case base whenever new, unknown behaviours appear. The experiments presented in Chapter 5, performed on the turntable, a subsystem of the sorting system, show the improvement of the results and also allow the performance of the proposed approach to be evaluated and compared with several machine-learning approaches and with a model-based approach to turntable diagnosis.
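A minimal, hedged illustration of the case-retrieval step at the heart of CBR is sketched below; the sensor names, the binary symptom encoding and the 1-nearest-neighbour rule are assumptions made for the example and do not correspond to the representation format proposed in the thesis.

```python
# Each case pairs an observed symptom vector (toy discrete sensor/actuator readings)
# with the diagnosis recorded for it.
case_base = [
    ({"conveyor_on": 1, "pusher_out": 0, "part_detected": 1, "cycle_time_high": 0}, "normal"),
    ({"conveyor_on": 1, "pusher_out": 1, "part_detected": 0, "cycle_time_high": 0}, "presence sensor failure"),
    ({"conveyor_on": 1, "pusher_out": 0, "part_detected": 1, "cycle_time_high": 1}, "pusher actuator blocked"),
]

def similarity(obs_a, obs_b):
    """Fraction of matching attribute values (a simple Hamming-style similarity)."""
    keys = obs_a.keys() & obs_b.keys()
    return sum(obs_a[k] == obs_b[k] for k in keys) / len(keys)

def diagnose(observation):
    # Retrieve the most similar stored case and reuse its diagnosis (the retrieve/reuse steps of CBR).
    symptoms, diagnosis = max(case_base, key=lambda case: similarity(observation, case[0]))
    return diagnosis, similarity(observation, symptoms)

print(diagnose({"conveyor_on": 1, "pusher_out": 0, "part_detected": 0, "cycle_time_high": 1}))
# ('pusher actuator blocked', 0.75)
```

A full CBR cycle would also revise the reused diagnosis and retain the new case, which is what the incremental-learning phase described above adds.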
Parès, Yves Jean Vincent. „Méthodes structurelles et sémantiques pour la mise en correspondance de cas textuels de dysmorphies fœtales“. Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066568/document.
This thesis is set within the context of Accordys, a knowledge-engineering project aiming to provide a case-based reasoning system for fetopathology, i.e. the medical domain studying rare diseases and dysmorphia of fetuses. The project is based on a corpus of French fetal examination reports. This material consists of raw text reports displaying a very specific vocabulary (only partially formalised in French medical terminologies), a "note-taking" style that makes it difficult to use tools that analyse the grammar of the text, and a layout and formatting that reveal a latent common structure (organisation in sections, sub-sections and observations). The thesis tests the hypothesis that a uniform representation of cases exploiting this tree-like structure, by mapping it onto a tree-shaped case model, can support the constitution of a case base that preserves the information contained in the original reports as well as the measurement of similarity between two cases. Mapping a case onto the model (instantiating the case model) is performed through a Monte Carlo tree-matching method. We compare this with similarity measurements obtained by representing the reports (both without further processing and after semantic enrichment through a semantic annotator) in a vector model.
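As a hedged sketch of the vector-model baseline mentioned at the end of the annotation, and nothing more, cosine similarity between two bag-of-words report vectors can be computed as follows (the report fragments are invented).

```python
import math
from collections import Counter

def cosine(report_a: str, report_b: str) -> float:
    """Cosine similarity between bag-of-words vectors built from two raw text reports."""
    va, vb = Counter(report_a.lower().split()), Counter(report_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Two invented note-style fragments; real reports are longer and use a richer vocabulary.
report_1 = "hypoplasie pulmonaire bilatérale retard de croissance"
report_2 = "retard de croissance intra-utérin hypoplasie pulmonaire"
print(round(cosine(report_1, report_2), 3))   # 0.833
```

The thesis compares this kind of flat representation, with and without semantic enrichment, against the structured Monte Carlo tree matching described above.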
Book chapters on the topic "Reasonning"
Lbath, R., N. Giambiasi und C. Delorme. „FRaHM: Framework for Reasonning about Hierarchical/multi-vues Models“. In System Fault Diagnostics, Reliability and Related Knowledge-Based Approaches, 29–41. Dordrecht: Springer Netherlands, 1987. http://dx.doi.org/10.1007/978-94-009-3931-8_4.
Conference papers on the topic "Reasonning"
Mu-kun, Cao. „E-Commerce Automated Negotiation Based on Agent Reasonning“. In 2010 Fourth International Conference on Mangement of E-Commerce and E-Government (ICMeCG). IEEE, 2010. http://dx.doi.org/10.1109/icmecg.2010.15.