Dissertations on the topic "Document Intelligence"
Format your source in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations for your research on the topic "Document Intelligence".
Next to every entry in the bibliography you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when these details are available in the metadata.
Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.
Šprta, Vlastimil. "Inteligentní dokument." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219662.
Chen, Hsinchun, K. J. Lynch, K. Basu, and Tobun Dorbin Ng. "Generating, Integrating, and Activating Thesauri for Concept-based Document Retrieval." IEEE, 1993. http://hdl.handle.net/10150/105378.
This Blackboard-based design uses a neural-net spreading-activation algorithm to traverse multiple thesauri. Guided by heuristics, the algorithm activates related terms in the thesauri and converges on the most pertinent concepts.
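The abstract names a spreading-activation traversal of thesauri but gives no detail; a minimal one-pass sketch over a toy weighted concept graph (the concepts, weights, decay and threshold below are all invented for illustration) might look like:

```python
def spread(graph, activation, decay=0.5, threshold=0.1):
    """One pass of spreading activation over a weighted concept graph.

    graph: {concept: {neighbor: link_weight}}; activation: {concept: level}.
    Each active node pushes decayed activation to its neighbors; nodes
    falling below the threshold are pruned, so repeated passes converge
    on the most strongly connected concepts.
    """
    new = dict(activation)
    for node, level in activation.items():
        for nbr, weight in graph.get(node, {}).items():
            new[nbr] = new.get(nbr, 0.0) + level * weight * decay
    return {c: a for c, a in new.items() if a >= threshold}

# Toy thesaurus fragment (invented concepts and link weights):
graph = {
    "information retrieval": {"indexing": 0.9, "databases": 0.4},
    "indexing": {"thesaurus": 0.8},
}
act = spread(graph, {"information retrieval": 1.0})
# "indexing" (0.45) ends up more strongly activated than "databases" (0.2).
```

Heuristics in the actual system would additionally guide which links are followed; this sketch only shows the activation/pruning core.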
Chen, Hsinchun, and K. J. Lynch. "Automatic Construction of Networks of Concepts Characterizing Document Databases." IEEE, 1992. http://hdl.handle.net/10150/105175.
The results of a study that involved the creation of knowledge bases of concepts from large, operational textual databases are reported. Two East-bloc computing knowledge bases, both based on a semantic network structure, were created automatically using two statistical algorithms. With the help of four East-bloc computing experts, we evaluated the two knowledge bases in detail in a concept-association experiment based on recall and recognition tests. In the experiment, one of the knowledge bases, which exhibited the asymmetric link property, outperformed all four experts in recalling relevant concepts in East-bloc computing. The knowledge base, which contained about 20,000 concepts (nodes) and 280,000 weighted relationships (links), was incorporated as a thesaurus-like component into an intelligent retrieval system. The system allowed users to perform semantics-based information management and information retrieval via interactive, conceptual relevance feedback.
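The two statistical algorithms are not specified in the abstract; one simple way to obtain the asymmetric link property it mentions is conditional co-occurrence weighting, weight(a → b) = df(a, b) / df(a), sketched here with invented toy "documents" (term sets):

```python
from collections import Counter
from itertools import combinations

def asymmetric_links(docs, min_weight=0.3):
    """Asymmetric co-occurrence links between terms across documents.

    weight(a -> b) = df(a, b) / df(a).  Because df(a) and df(b) differ,
    weight(a -> b) != weight(b -> a) in general -- the asymmetric link
    property.  Weak links below min_weight are dropped.
    """
    df = Counter()   # in how many documents each term appears
    co = Counter()   # directed co-occurrence counts
    for doc in docs:
        terms = set(doc)
        df.update(terms)
        co.update((a, b) for a, b in combinations(sorted(terms), 2))
        co.update((b, a) for a, b in combinations(sorted(terms), 2))
    return {(a, b): co[(a, b)] / df[a]
            for (a, b) in co
            if co[(a, b)] / df[a] >= min_weight}

# Invented toy documents:
docs = [
    {"supercomputer", "soviet"},
    {"supercomputer", "soviet", "export"},
    {"supercomputer", "hardware"},
]
links = asymmetric_links(docs)
# "soviet" -> "supercomputer" has weight 1.0 (soviet always co-occurs
# with supercomputer), while "supercomputer" -> "soviet" is only 2/3.
```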
Sangupamba, Mwilu Odette. "De la business intelligence interne vers la business intelligence dans le cloud : modèles et apports méthodologiques." Thesis, Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1168/document.
BI and cloud computing are two major areas of computer science research, particularly in information systems. Research combining these two concepts has a twofold interest: on the one hand, BI is becoming an increasingly important part of the corporate information system, which requires investment in terms of computing performance and data volumes; on the other hand, cloud computing offers new opportunities for managing data for analysis. Given the possibilities of the cloud, the question of migrating the information system, including BI, is of great interest. In particular, researchers must provide models and methods to help professionals migrate BI to the cloud. The research question is: how to migrate BI to the cloud? In this thesis, we address this issue using a design science research approach. We implement a taxonomy-based decision-making aid for BI migration to the cloud. We provide an operational guidance model that is instantiated by a taxonomy of BI in the cloud, from which rules for BI migration to the cloud are derived.
Donolo, Rosa Marina. "Contributions to geovisualization for territorial intelligence." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0075/document.
This PhD research work is placed in the domain of geovisualization used to implement territorial intelligence and decision support systems. It was born through an agreement between Tor Vergata University, Rome, and INSA (Institut National des Sciences Appliquées), Lyon. The co-supervision of this thesis arose from the need for a multidisciplinary approach to the research topic, drawing on the skills in urban planning, environment and territory modelling at the Geoinformation doctoral school of Tor Vergata University, and on the skills in spatial information systems and geovisualization at the LIRIS Laboratory of INSA. The motivation that led us to this research topic was the perceived lack of systematic methods and universally approved empirical experiments in the data visualization domain. Such experiments should consider different typologies of data, different environmental contexts, different indicators and methods of representation, etc., in order to support expert users in decision making, in urban and territorial planning, and in the implementation of environmental policies. In modern societies we have to deal with a great amount of data every day, and geovisualization permits the management, exploration and display of big, heterogeneous data in an interactive way that facilitates decision-making processes. Geovisualization gives users the opportunity to change the visual appearance of maps, to explore different layers of data, and to highlight problems reported by citizens in particular areas. Despite these advantages, one of the most common problems in information visualization is representing data in a clear and comprehensible way.
Spatial data have a complex structure that includes a spatial component, thematic attributes, and often a temporal component. Currently there are limited scientific foundations to guide researchers in the visual design of spatial data, and limited systematic, standard methods to evaluate the effectiveness of the solutions proposed. In this PhD research work, contributions are made towards a systematic assessment method for evaluating and developing effective geovisualization displays. An empirical evaluation test is proposed to assess the effectiveness of some map displays, analyzing the use of three elements of visual design: (1) the spatial indicators to be represented and their context of visualization, (2) the physical dimensions of map displays, and (3) the visual variables used to represent different layers of information.
Sasa, Yuko. "Intelligence Socio-Affective pour un Robot : primitives langagières pour une interaction évolutive d'un robot de l’habitat intelligent." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM041/document.
Natural Language Processing (NLP) has improved technically in terms of human speech vocabulary coverage, morphosyntactic scope, style and aesthetics. Affective computing also tends to integrate an "emotional" dimension, with a goal shared with NLP: to disambiguate natural language and increase the naturalness of human-machine interaction. Within social robotics, the interaction is modelled in dialogue systems that try to reach an attachment dimension whose effects require ethical and collective control. However, the dynamics of situated natural language undermine the efficiency of automated systems trying to respond with useful and suitable feedback. The first hypothesis of this thesis supposes the existence of a "socio-affective glue" in every interaction, set up between two individuals, each with a social role depending on the communication context. This glue is thus the consequence of dynamics generated by a process whose mechanisms rely on an altruistic dimension, independent of the dominance dimension seen in emotion studies. It would allow the exchange of language events between interlocutors by regularly modifying their relation and their roles, which in turn modify the glue itself, ensuring the continuity of communication. The second hypothesis proposes that the glue is built by "pure socio-affective prosody" forms that enable this relational construction, cues supposed to be carried by audible and visible micro-expressions. The effect of interaction events would also be gradual, following the degree of intentionality control in communication. The gradation is continuous through language primitives: 1) mouth noises (neither phonetic nor phonological sounds), 2) pre-lexicalised sounds, 3) interjections and onomatopoeias, and 4) controlled command-based imitations, all with the same socio-affective prosody supposed to create and modify the glue. Within the Domus platform, we developed an almost living-lab methodology.
It operates in agile, iterative loops co-constructed with industrial and societal partners. A Wizard of Oz approach, EmOz, is used to control the vocal primitives proposed as the only language tools of a smart-home butler robot interacting with relationally isolated elderly people. Relational isolation makes it possible to observe the dimensions of the socio-affective glue in a contrastive situation where it is damaged. We could thus observe the effects of the primitives through multimodal language cues. One social motivation of gerontechnology is that isolation amplifies frailty, which argues for the emergence of assistive robotics. A vicious circle driven by the communicational characteristics of the elderly makes it difficult for them to maintain their relational fabric, even though these bonds are beneficial for their health and well-being. If the proposed primitives have a real effect on the glue, the automated system could train people to regain impaired mechanisms underlying their relational construction, and so possibly increase their desire to communicate with their human social surroundings. The results from the collected EEE corpus show how the relation changes through various, temporally organised interactional cues. These parameters feed the design of an incremental dialogue system, SASI. The first steps towards this system rest on a speech recognition prototype whose robustness is based not on the accuracy of the recognised language content but on the ability to identify the degree of glue (i.e., the relational state) between the interlocutors. Thus, recognition errors do not cause the system to be rejected by the user, since they tend to be balanced by the system's adaptive socio-affective intelligence.
Bernardes, Vitor Giovani. "Urban environment perception and navigation using robotic vision : conception and implementation applied to autonomous vehicle." Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP2155/document.
The development of autonomous vehicles capable of getting around on urban roads can provide important benefits in reducing accidents, increasing life comfort, and providing cost savings. Intelligent vehicles often base their decisions on observations obtained from various sensors such as LIDAR, GPS and cameras. Camera sensors in particular have received much attention because they are cheap, easy to deploy, and provide rich data. Inner-city environments represent an interesting but very challenging scenario in this context, where the road layout may be complex and the presence of objects such as trees, bicycles and cars may generate partial observations; these observations are often noisy or even missing due to heavy occlusions. Thus, the perception process by nature needs to be able to deal with uncertainty in the knowledge of the world around the car. While highway navigation and autonomous driving using prior knowledge of the environment have been demonstrated successfully, understanding and navigating general inner-city scenarios with little prior knowledge remains an unsolved problem. In this thesis, this perception problem is analyzed for driving in inner-city environments, together with the capacity to perform a safe displacement based on a decision-making process for autonomous navigation. We design a perception system that allows robotic cars to drive autonomously on roads, without the need to adapt the infrastructure, without requiring previous knowledge of the environment, and considering the presence of dynamic objects such as cars. We propose a novel method based on machine learning to extract the semantic context from a pair of stereo images, which is merged into an evidential grid to model the uncertainties of an unknown urban environment, applying Dempster-Shafer theory.
For decision making in path planning, we apply the virtual tentacle approach to generate possible paths starting from the ego-referenced car, and on this basis propose two new strategies: first, a strategy to select the correct path so as to better avoid obstacles and follow the local task in the context of hybrid navigation; and second, a closed-loop control based on visual odometry and virtual tentacles for path-following execution. Finally, a complete automotive system integrating the perception, path-planning and control modules is implemented and experimentally validated in real situations using an experimental autonomous car, where the results show that the developed approach successfully performs safe local navigation based on camera sensors.
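The abstract mentions merging semantic observations into an evidential grid via Dempster-Shafer theory. A minimal sketch of Dempster's rule of combination for a single grid cell (the two-element frame {free, occupied} and the mass values below are illustrative assumptions, not the thesis's actual sensor model) could read:

```python
from itertools import product

# Frame of discernment for one grid cell: F = free, O = occupied.
F, O = frozenset("F"), frozenset("O")
FO = F | O  # ignorance: {F, O}

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions on {F, O}."""
    raw = {F: 0.0, O: 0.0, FO: 0.0}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] += wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    # Normalize by 1 - K, where K is the total conflict.
    return {s: w / (1.0 - conflict) for s, w in raw.items()}

# Two noisy observations of the same cell (illustrative numbers):
obs1 = {O: 0.6, F: 0.1, FO: 0.3}   # stereo sees an obstacle
obs2 = {O: 0.5, F: 0.2, FO: 0.3}   # a later frame mostly agrees
fused = dempster_combine(obs1, obs2)
# Agreement reinforces "occupied" while the ignorance mass shrinks.
```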
Karim, Jahanvash. "Emotional Intelligence : a Cross-Cultural Psychometric Analysis." Thesis, Aix-Marseille 3, 2011. http://www.theses.fr/2011AIX32028/document.
Despite the rather large literature concerning emotional intelligence, the vast majority of studies concerning the development and validation of emotional intelligence scales have been done in Western countries; a major limitation of this literature is thus its decidedly Western focus. The aim of this research was to assess the psychometric properties of the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT), the Trait Emotional Intelligence Questionnaire (TEIQue), and the Self-Report Emotional Intelligence Test (SREIT) in a cross-cultural comparative context involving collectivist Pakistani (Eastern culture) and individualist French (Western culture) students. Results of this study showed that participants from the French sample scored higher than participants from the Pakistani sample on the MSCEIT but not on the TEIQue and the SREIT. Multi-sample analyses revealed that the MSCEIT, TEIQue, and SREIT factor structures remained invariant across both cultures. Regarding discriminant validity, in both cultures, self-ratings of emotional intelligence, as assessed by the SREIT and the TEIQue, and the performance measure of emotional intelligence, as assessed by the MSCEIT, were not strongly correlated. Furthermore, in both cultures, scores on the MSCEIT, the TEIQue, and the SREIT proved to be unrelated to cognitive intelligence and communication styles. Finally, low to moderate correlations were observed between the EI measures and the Big Five personality dimensions. Regarding the convergent validity of the self-report EI measures, in both cultures the scores on the TEIQue strongly correlated with the scores on the SREIT. With regard to incremental validity, in both cultures, after statistically controlling for the Big Five personality dimensions and cognitive ability, the MSCEIT and the SREIT proved to be unrelated to satisfaction with life, positive affect, negative affect, and psychological distress.
In contrast, the TEIQue factors accounted for a significant amount of variance in outcome variables after controlling for the Big Five personality dimensions and cognitive intelligence. However, further analyses revealed that these associations were mainly due to the TEIQue's well-being factor. Finally, in both cultures, females scored higher than males on the MSCEIT but not on the TEIQue and the SREIT. In sum, the results of this study provide evidence for the factorial, discriminant, and convergent validity of these emotional intelligence measures in both cultures. However, results regarding the incremental validity of these measures are less promising than anticipated.
Cripwell, Liam. "Controllable and Document-Level Text Simplification." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0186.
Text simplification is a task that involves rewriting a text to make it easier to read and understand for a wider audience, while still expressing the same core meaning. This has potential benefits for disadvantaged end-users (e.g. non-native speakers, children, the reading impaired), while also showing promise as a preprocessing step for downstream NLP tasks. Recent advancement in neural generative models have led to the development of systems that are capable of producing highly fluent outputs. However, these end-to-end systems often rely on training corpora to implicitly learn how to perform the necessary rewrite operations. In the case of simplification, these datasets are lacking in both quantity and quality, with most corpora either being very small, automatically constructed, or subject to strict licensing agreements. As a result, many systems tend to be overly conservative, often making no changes to the original text or being limited to the paraphrasing of short word sequences without substantial structural modifications. Furthermore, most existing work on text simplification is limited to sentence-level inputs, with attempts to iteratively apply these approaches to document-level simplification failing to coherently preserve the discourse structure of the document. This is problematic, as most real-world applications of text simplification concern document-level texts. In this thesis, we investigate strategies for mitigating the conservativity of simplification systems while promoting a more diverse range of transformation types. This involves the creation of new datasets containing instances of under-represented operations and the implementation of controllable systems capable of being tailored towards specific transformations and simplicity levels. 
We later extend these strategies to document-level simplification, proposing systems that are able to consider surrounding document context and use similar controllability techniques to plan ahead of time which sentence-level operations to perform, allowing for both high performance and scalability. Finally, we analyze current evaluation processes and propose new strategies that can be used to better evaluate both controllable and document-level simplification systems.
El Mernissi, Karim. "Une étude de la génération d'explication dans un système à base de règles." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066332/document.
The concept of a Business Rule Management System (BRMS) was introduced to facilitate the design, management and execution of company-specific business policies. Based on a symbolic approach, the main idea behind these tools is to enable business users to manage business rule changes in the system without requiring programming skills; it is therefore a question of providing them with tools that let them formulate their business policies in near-natural-language form and automate their processing. Nowadays, with the expansion of intelligent systems, we have to cope with increasingly complex decision logic and large volumes of data, and it is not straightforward to identify the causes leading to a decision. There is a growing need to justify and optimize automated decisions in a short time frame, which motivates the integration of advanced explanatory components into these systems. Thus, the main challenge of this research is to provide an industrializable approach for explaining the decision-making processes of business rule applications and, more broadly, rule-based systems. This approach should provide the information needed for a general understanding of the decision, serve as a justification for internal and external entities, and enable the improvement of existing rule engines. To this end, the focus is on the generation of the explanations themselves as well as on the manner and form in which they are delivered.
Ferry, Aurélien. "L’accompagnement entrepreneurial : la métamorphose des accompagnateurs en facilitateurs." Thesis, Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1217/document.
Today, humanity is subject to profound changes that were until now unknown, changes that question the relationship between humans and the world. Each change is the foundation of new areas of activity that did not exist before. To invent these new activities, the new generations of entrepreneurs and startuppers integrate into networks (collective intelligence), collaborate with applications (artificial intelligence), use their emotions (emotional intelligence), and look for adult education courses (andragogical intelligence). Coaches are gradually turning into facilitators. Since 2014, we have trained, equipped, and sensitized a hundred of them, in five French regions and five Moroccan regions. In total, 12,900 people wishing to become entrepreneurs have benefited from this new form of enabling support. From these new facilitators we were able to create a skills map for transforming current coaches.
Nguyen, Kim-Anh Laura. "Document Understanding with Deep Learning Techniques." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS077.
The field of Document Understanding, which addresses the problem of solving an array of Natural Language Processing tasks for visually-rich documents, faces challenges due to the complex structures and diverse formats of documents. Real-world documents rarely follow a strictly sequential structure. The visual presentation of a document, especially its layout, conveys rich semantic information, highlighting the crucial need for document understanding systems to include multimodal information. Despite notable advancements attributed to the emergence of Deep Learning, the field still grapples with various challenges in real-world applications. This thesis addresses two key challenges: 1) developing efficient and effective methods to encode the multimodal nature of documents, and 2) formulating strategies for efficient and effective processing of long and complex documents, considering their visual appearance. Our strategy to address the first research question involves designing approaches that rely only on layout to build meaningful representations. Multimodal pre-trained models for Document Understanding often neglect efficiency and fail to fully capitalize on the strong correlation between text and layout. We address these issues by introducing an attention mechanism based exclusively on layout information, enabling performance improvement and attention sparsification. Furthermore, we introduce a strategy based solely on layout to address reading order issues. While layout inherently captures the correct reading order of documents, existing pre-training methods for Document Understanding rely solely on OCR or PDF parsing to establish the reading order of documents, potentially introducing inaccuracies that can impact the entire text processing pipeline. Therefore, we discard sequential position information and propose a model that strategically leverages layout information as an alternative means to determine the reading order of documents. 
In addressing the second research axis, we explore the potential of leveraging layout to enhance the performance of models on tasks involving long and complex documents. The importance of document structure in information processing, particularly in the context of long documents, underscores the need for efficient modeling of layout information. To fill a notable void in resources and approaches for multimodal long-document modeling, we introduce a dataset collection for summarization of long documents with consideration for their visual appearance, and present novel baselines that can handle long documents with awareness of their layout.
Kuchmann-Beauger, Nicolas. "Question Answering System in a Business Intelligence Context." Thesis, Châtenay-Malabry, Ecole centrale de Paris, 2013. http://www.theses.fr/2013ECAP0021/document.
The amount and complexity of data generated by information systems keep increasing in data warehouses. The domain of Business Intelligence (BI) aims to provide methods and tools to better help users retrieve those data. Data sources are distributed over distinct locations and are usually accessible through various applications. Looking for new information can be a tedious task, because business users try to reduce their work overload. To tackle this problem, Enterprise Search has emerged in recent years as a field that takes into consideration the different corporate data sources as well as sources available to the public (e.g. World Wide Web pages). However, corporate retrieval systems still suffer from information overload. We believe that such systems would benefit from Natural Language (NL) approaches combined with Q&A techniques. Indeed, NL interfaces allow users to search for new information in their own terms and thus obtain precise answers instead of turning to a plethora of documents; users do not have to employ exact keywords or particular syntax, and can access new information faster. The major challenges in designing such a system are, on the one hand, to interface different applications and their underlying query languages and, on the other, to support users' vocabulary and be easily configurable for new application domains. This thesis outlines an end-to-end Q&A framework for corporate use cases that can be configured for different settings. Traditional BI systems usually take into account neither user preferences nor their specific contextual situations; state-of-the-art systems in this field, Soda and Safe, do not compute search results on the basis of the user's situation. This thesis introduces a more personalized approach, which better reflects end-users' situations.
Our main experimental setup works as a search interface that displays search results on a dashboard, usually in the form of charts, fact tables, and thumbnails of unstructured documents. Depending on the user's initial query, recommendations for alternatives are also displayed, so as to reduce the response time of the overall system; this process is often seen as a kind of prediction model. Our work contributes the following: first, an architecture, implemented with parallel algorithms, that leverages different data sources, namely structured and unstructured document repositories, through an extensible Q&A framework that can easily be configured for distinct corporate settings; secondly, a constraint-matching-based translation approach, which replaces a pivot language with a conceptual model and leads to more personalized multidimensional queries; thirdly, a set of NL patterns for translating BI questions into structured queries that can be easily configured in specific settings. In addition, we have implemented an iPhone/iPad™ application and an HTML front-end that demonstrate the feasibility of the various approaches developed, through a series of evaluation metrics for the core component and scenario of the Q&A framework. To this end, we elaborate a range of gold-standard queries that can serve as a basis for evaluating retrieval systems in this area, and show that our system behaves similarly to the well-known WolframAlpha™ system, depending on the evaluation settings.
Lehmann, Alberto Joseph. "Causation in artificial intelligence and law: a modelling approach." [S.l. : Amsterdam : s.n.] ; Universiteit van Amsterdam [Host], 2003. http://dare.uva.nl/document/67544.
Mabrouki, Olfa. "Semantic Framework for Managing Privacy Policies in Ambient Intelligence." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112319/document.
This thesis proposes a semantic framework that integrates a meta-model and reasoning tools allowing any ubiquitous system designer to easily implement mechanisms for managing privacy policies. The proposed framework includes a generic middleware architecture that provides components to define, manage and monitor the implementation of privacy policies. Our approach is a hybrid one, based on model-driven engineering and on reasoning with ontologies and inference rules operating under the closed-world assumption. The proposed meta-model is characterized by a high level of abstraction and expressiveness, defines privacy policy management regardless of the application domain, and can be adapted to different contexts. It also defines a conceptual framework of generic, decidable modelling rules for making consistent control decisions about user privacy. These model rules are implemented using the SmartRules language, which supports adaptive control based on non-monotonic reasoning and on the representation of concept instances under the unique name assumption. We validated the proposed semantic framework through a typical scenario that implements privacy-aware ambient-intelligence services to support the elderly.
Raddaoui, Badran. "Contributions aux approches logiques de l'argumentation en intelligence artificielle." Thesis, Artois, 2013. http://www.theses.fr/2013ARTO0412/document.
This thesis focuses on argumentation models in artificial intelligence. These models are very popular tools for studying reasoning under inconsistency in knowledge bases, negotiation between agents, and decision making. An argumentation model is an interactional process based mainly on constructing arguments and counter-arguments, then studying the relations between these arguments, and finally introducing criteria to identify the status of each argument in order to select the (most) acceptable ones. In this context, this work studied a particular system: the deductive argumentation framework. An argument is then understood as a pair (premises, conclusion) such that the conclusion is a logical formula entailed by the premises, an unordered collection of logical formulas. We addressed several issues. First, on the basis that reductio ad absurdum is valid in classical propositional logic, we propose a method to compute arguments for a given statement. This approach is extended to generate canonical undercuts, the arguments identified as representatives of all counter-arguments. Contrary to other approaches proposed in the literature, our technique is complete in the sense that all arguments relative to the statement at hand are generated, and so are all relevant counter-arguments. Secondly, we propose logic-based argumentation in conditional logic. Conditional logic is often regarded as an appealing setting for formalizing hypothetical reasoning; its conditional connective is well suited to encoding many real-life implicative reasoning patterns and avoids some pitfalls of the material implication of propositional logic.
This allows us to highlight and encompass a concept of conditional contrariety that covers both the usual inconsistency-based conflict and a specific form of conflict that often occurs in real-life argumentation: when an agent asserts an if-then rule, it can be argued that additional conditions must be satisfied for the conclusion of the rule to hold. In that case, we study the main foundational concepts of an argumentation theory in conditional logic. Finally, the last point investigated in this work concerns reasoning about bounded resources, within a framework in which logical formulas are themselves consumed in the deductive process. First, a simple variant of Boolean logic is introduced, allowing us to reason about consumable resources; then the main concepts of logic-based argumentation are revisited in this framework.
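The abstract defines an argument as a pair (premises, conclusion) where the conclusion is entailed by the premises. A minimal brute-force sketch of that entailment check over propositional atoms (the formulas, encoded here as Python predicates over a valuation, are invented toy examples; the thesis itself works with reductio ad absurdum and conditional logic) might look like:

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Brute-force truth-table check that premises |= conclusion."""
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        # A counter-model makes every premise true and the conclusion false.
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

# Toy formulas as predicates over a valuation (invented example):
p_holds = lambda e: e["p"]                      # p
p_implies_q = lambda e: (not e["p"]) or e["q"]  # p -> q
q_holds = lambda e: e["q"]                      # q

# ({p, p -> q}, q) qualifies as an argument in the abstract's sense:
# the premises entail the conclusion (modus ponens).
is_argument = entails([p_holds, p_implies_q], q_holds, ["p", "q"])
```

A practical system would of course use a SAT solver rather than truth tables, and would additionally check premise consistency and minimality.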
Paudel, Subodh. "Methodology to estimate building energy consumption using artificial intelligence." Thesis, Nantes, Ecole des Mines, 2016. http://www.theses.fr/2016EMNA0237/document.
High energy-efficiency building standards (such as the Low Energy Building, LEB) designed to improve building consumption have drawn significant attention. These standards basically focus on improving the thermal performance of the envelope and on a high heat capacity, thus creating a higher thermal inertia. The LEB concept, however, introduces a large time constant as well as a large heat capacity, resulting in a slower rate of heat transfer between the interior of the building and the outdoor environment. It is therefore challenging to estimate and predict the thermal energy demand of such LEBs. This work focuses on artificial intelligence (AI) models to predict the energy consumption of LEBs. We consider two kinds of AI modeling approaches: “all data” and “relevant data”. The “all data” approach uses all available data, while the “relevant data” approach uses a small representative-day dataset and addresses the complexity of the building's non-linear dynamics by introducing the behavior of past-day climatic impacts. This extraction is based either on simple physical understanding, Heating Degree Day (HDD) and modified HDD, or on pattern-recognition methods, Fréchet distance and Dynamic Time Warping (DTW). Four AI techniques have been considered: Artificial Neural Network (ANN), Support Vector Machine (SVM), Boosted Ensemble Decision Tree (BEDT) and Random Forest (RF). In a first part, numerical simulations for six buildings (heat demand in the range [25 – 85 kWh/m².yr]) have been performed. The “relevant data” approach with (DTW, SVM) shows the best results. Real data from the building “Ecole des Mines de Nantes” proves the approach is still relevant.
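As a rough illustration of how the pattern-recognition route might select "relevant" days, here is a minimal Dynamic Time Warping sketch. The function names and the idea of ranking past days by DTW distance to the current climatic profile are our assumptions, not the thesis's actual pipeline.

```python
def dtw_distance(a, b):
    # Classic dynamic-programming DTW between two daily profiles a and b
    # (e.g. hourly outdoor temperatures), with absolute-difference cost.
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def most_similar_days(target_day, history):
    # Rank past daily profiles by DTW distance to the target day,
    # so the closest ones can serve as the "relevant data" subset.
    return sorted(history, key=lambda day: dtw_distance(target_day, day))
```

Unlike a pointwise Euclidean distance, DTW tolerates time shifts between profiles, which is why it is a natural candidate for matching climatic days whose dynamics are similar but offset in time.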
Jiao, Yang. "Applications of artificial intelligence in e-commerce and finance." Thesis, Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0002/document.
Artificial Intelligence has penetrated every aspect of our lives in this era of Big Data. It has brought revolutionary changes to various sectors, including e-commerce and finance. In this thesis, we present four applications of AI which improve existing goods and services, enable automation, and greatly increase the efficiency of many tasks in both domains. Firstly, we improve the product search service offered by most e-commerce sites by using a novel term weighting scheme to better assess term importance within a search query. Then we build a predictive model of daily sales using a time series forecasting approach and leverage the predicted results to rank product search results in order to maximize the revenue of a company. Next, we present the product categorization challenge we held online and analyze the winning solutions, based on state-of-the-art classification algorithms, on our real dataset. Finally, we combine the skills acquired from time-series-based sales prediction and classification to predict one of the most difficult but also most attractive time series: stocks. We perform an extensive study on every single stock of the S&P 500 index using four state-of-the-art classification algorithms and report very promising results.
Boutet, Charles-Victor. "Le cycle de l’information en intelligence économique, à la lumière du web 2.0." Thesis, Toulon, 2011. http://www.theses.fr/2011TOUL0010/document.
The information cycle, from collection to dissemination, is a cornerstone of competitive intelligence. In recent years, web 2.0, the writable web, has changed the face of the Internet. Our work studies the impact that web 2.0 has on this cycle and proposes methods and tools to take advantage of this new paradigm at each stage of the cycle.
Kerinska, Nikoleta. "Art et intelligence artificielle : dans le contexte d'une expérimentation artistique." Thesis, Paris 1, 2014. http://www.theses.fr/2014PA010541/document.
This thesis finds its origins in the way we make art, and it is marked by our unflagging interest in the relationship between art and digital technology. Its primary idea consists in examining the possibilities offered by artificial intelligence in the context of art. The hypothesis of this study suggests that artworks endowed with artificial intelligence present a type of problematic that is common and identifiable in the general landscape of computer works of art. Our aim is to understand how the notion of intelligence is evoked by the behavior of certain artworks, and in what way current art productions are enriched, from a conceptual and formal point of view, by techniques of artificial intelligence. We first propose a study of the main definitions and current trends of computer art. Next, we develop a definition of artworks endowed with artificial intelligence. Then, we present our artistic projects and their respective problematics. As part of this dissertation we have developed two projects that engage in an artistic reflection on natural language as an interface for communication between human and machine, as well as on the notions of automaton and intelligent agent in the context of computer art.
Ben, Slymen Syrine. "Sentiment d'appartenance et intelligence territoriale : une application au contexte tunisien." Thesis, Nice, 2014. http://www.theses.fr/2014NICE2038/document.
Our research was established as part of the interdisciplinary research program on languages, objects, territories and hospitality. Our goal is to understand the informational, communicational and managerial processes behind the development of the areas of Nabeul and Medenine under the CGDR and the ODS, and to estimate and evaluate the nature of the relationship between the sense of belonging and territorial enhancement through collective intelligence devices. Anchoring our research in the interdisciplinary fields of management, information and communication sciences has enabled us to understand our research object in a logic of exploiting planning practices and management devices, and in a spirit of communication, transmission of information, funding and dissemination of knowledge. All these practices are filtered through the feeling towards the area in terms of identity, attachment and solidarity. Ultimately this thesis reveals the causal links between the sense of belonging and the dimensions of the TIE, as well as some dimensions of the TKM and the PTI. Our proposals invite the diagnosis, evaluation and understanding of the reasons for the abandonment of certain practices of territorial intelligence, and the adoption of new forms of territorial public communication, in order to strengthen the sense of belonging and to ensure better sharing and dissemination of information.
Maillard, Adrien. "Flexible Scheduling for Agile Earth Observing Satellites." Thesis, Toulouse, ISAE, 2015. http://www.theses.fr/2015ESAE0024/document.
Earth-observation satellites are space sensors which acquire data, compress and record it on board, and then download it to the ground. Various uncertainties make planning and scheduling satellite activities offline on the ground more and more questionable, as worst-case assumptions are made about uncertain parameters and plans are suboptimal. This dissertation details our efforts at designing a flexible decision-making scheme that allows the system to profit from the realization of uncertain parameters on board while keeping a fair level of predictability on the ground. Our first contribution concerns the data download problem. A flexible decision-making mechanism has been designed in which only high-priority acquisition downloads are scheduled with worst-case assumptions. Other acquisition downloads are scheduled with expected parameters and conditioned on resource availability. The plan is then adapted on board. Our second contribution concerns the acquisition planning problem. Many acquisitions that could have been performed are eliminated during planning because of worst-case assumptions. In a new decision-making scheme, these high-level constraints are removed for low-priority acquisitions. Observation plans produced on the ground are conditional plans involving conditions for triggering low-priority acquisitions. Compared with pure ground and pure onboard methods, these two approaches avoid wastage of resources and allow more acquisitions to be executed and downloaded to the ground while keeping a fair level of predictability on the ground.
Majd, Thomas. "Contribution à l’analyse des facteurs explicatifs de la performance des commerciaux en matière de veille marketing : esquisse d'un cadre conceptuel." Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCD075/document.
In a context of ever stiffer competition, ever better informed customers, and products becoming ever more indistinguishable, it is increasingly difficult for companies to sell their products and services and maintain a sustainable competitive advantage. Hence the need for them to have access to a proper perception of the evolutions, movements and practices of the main actors in their environment. Tools such as economic intelligence and market intelligence make it possible to respond to these challenges. Among the actors likely to take on an important role in this field is the sales force, as an interface between the market and the company. Although salespeople are increasingly considered veritable vectors of information from the field, the study of the factors that determine their performance in that respect has received little attention until now. In this context, the objective of this thesis is to propose a model analyzing the main factors likely to influence the performance of salespeople in terms of market intelligence. The basic hypothesis of this model relies on the existence of two main categories of factors likely to favor the performance of salespeople in passing on field information on a regular basis (market intelligence): first, factors which are specific to the salespeople themselves, and second, factors linked to the management of the sales force.
Belkhir, Nacim. "Per Instance Algorithm Configuration for Continuous Black Box Optimization." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS455/document.
This PhD thesis focuses on automated algorithm configuration, which aims at finding the best parameter setting for a given problem or class of problems. The algorithm configuration problem thus amounts to a meta-optimization problem in the space of parameters, whose meta-objective is the performance measure of the algorithm at hand with a given parameter configuration. In the continuous domain, however, this meta-objective can only be assessed empirically, at the cost of running the algorithm on some problem instances. More recent approaches rely on a description of problems in some feature space, and try to learn a mapping from this feature space onto the space of parameter configurations of the algorithm at hand. Along these lines, this PhD thesis focuses on Per Instance Algorithm Configuration (PIAC) for solving continuous black-box optimization problems, where only a limited budget of computations is available. We first survey Evolutionary Algorithms for continuous optimization, with a focus on the two algorithms that we have used as target algorithms for PIAC: DE and CMA-ES. Next, we review the state of the art of algorithm configuration approaches, and the different features that have been proposed in the literature to describe continuous black-box optimization problems. We then introduce a general methodology to empirically study PIAC for the continuous domain, so that all the components of PIAC can be explored in real-world conditions. To this end, we also introduce a new continuous black-box test bench, distinct from the famous BBOB benchmark, composed of several multi-dimensional test functions with different problem properties, gathered from the literature. The methodology is finally applied to two EAs. First we use Differential Evolution as target algorithm and explore all the components of PIAC so as to empirically assess the best ones.
Second, based on the results on DE, we empirically investigate PIAC with the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as target algorithm. Both use cases empirically validate the proposed methodology on the new black-box test bench for dimensions up to 100.
Matta, Natalie. "Vers une gestion décentralisée des données des réseaux de capteurs dans le contexte des smart grids." Thesis, Troyes, 2014. http://www.theses.fr/2014TROY0010/document.
This thesis focuses on the decentralized management of data collected by wireless sensor networks deployed in a smart grid, i.e., the evolved, new-generation electricity network. It proposes a decentralized architecture based on multi-agent systems for both data and energy management in the smart grid. In particular, our work deals with the data management of sensor networks deployed in the distribution subsystem of a smart grid. It aims at answering two key challenges: (1) the detection and identification of failures and disturbances requiring swift reporting and appropriate reactions; (2) the efficient management of the growing volume of data caused by the proliferation of sensors and other sensing entities such as smart meters. The management of this data can call upon several methods, including the aggregation of data packets, on which we focus in this thesis. To this end, we propose to aggregate (PriBaCC) and/or to correlate (CoDA) the contents of these data packets in a decentralized manner. Data processing is thus done faster, leading to rapid and efficient decision-making concerning energy management. The validation of our contributions by means of simulation has shown that they meet the identified challenges. It has also put forward their improvements over other existing approaches, particularly in terms of reducing data volume as well as the transmission delay of high-priority data.
Klaimi, Joelle. "Gestion multi-agents des smart grids intégrant un système de stockage : cas résidentiel." Thesis, Troyes, 2017. http://www.theses.fr/2017TROY0006/document.
This thesis focuses on the decentralized management of energy, including renewable energy sources, using multi-agent systems in the smart grid context. Our research aims to minimize consumers' energy bills by answering two key challenges: (1) handling the intermittency of renewable energy sources; (2) reducing energy losses. To overcome the intermittency of renewable resources and to minimize energy costs even during peak hours, we integrate an intelligent storage system. To this end, we propose several algorithms that use intelligent storage systems together with a multi-agent negotiation algorithm to reduce the energy cost while maintaining a minimal discharge rate of the battery and minimal energy loss. The validation of our contributions has shown that our proposals respond to the identified challenges, including reducing the cost of energy for consumers, in comparison with the state of the art.
Ramoly, Nathan. "Contextual integration of heterogeneous data in an open and opportunistic smart environment : application to humanoid robots." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLL003/document.
Personal robots associated with ambient intelligence are an upcoming solution for domestic care. Indeed, helped by devices dispatched in the environment, robots could provide better care to users. However, such robots encounter challenges of perception, cognition and action. In fact, such an association brings issues of variety, data quality and conflicts, leading to heterogeneous and uncertain data. These are challenges for both perception, i.e. context acquisition, and cognition, i.e. reasoning and decision making. With knowledge of the context, the robot can intervene through actions. However, it may encounter task failures due to a lack of knowledge or to context changes, causing the robot to cancel or delay its agenda. While the literature addresses these topics, it fails to provide complete solutions. In this thesis, we propose contributions, exploring both reasoning and learning approaches, to cover the whole spectrum of problems. First, we designed a novel context acquisition tool that supports and models the uncertainty of data. Secondly, we proposed a cognition technique that detects anomalous situations over uncertain data and makes decisions accordingly. Then, we proposed a dynamic planner that takes into consideration the latest context changes. Finally, we designed an experience-based reinforcement learning approach to proactively avoid failures. All our contributions were implemented and validated through simulations and/or with a small robot in a smart home platform.
Mazac, Sébastien. "Approche décentralisée de l'apprentissage constructiviste et modélisation multi-agent du problème d'amorçage de l'apprentissage sensorimoteur en environnement continu : application à l'intelligence ambiante." Thesis, Lyon 1, 2015. http://www.theses.fr/2015LYO10147/document.
The theory of cognitive development of Jean Piaget (1923) is a constructivist perspective on learning that has substantially influenced cognitive science. Within AI, many works have tried to take inspiration from this paradigm since the beginning of the discipline. Indeed, it seems that constructivism is a possible path to overcome the limitations of classical techniques stemming from cognitivism or connectionism, and to create autonomous agents, fitted with strong adaptation abilities within their environment, modelled on biological organisms. Potential applications concern intelligent agents in interaction with a complex environment, with objectives that cannot be predefined. Like robotics, Ambient Intelligence (AmI) is a rich and ambitious paradigm that represents a high-complexity challenge for AI. In particular, as part of constructivist theory, the agent has to build a representation of the world that relies on the learning of sensorimotor patterns starting from its own experience only. This step is difficult to set up for systems in continuous environments that use raw data from sensors without a priori modelling. Using multi-agent systems, we investigate the development of new techniques in order to adapt the constructivist approach to learning to actual cases. We therefore use ambient intelligence as a reference domain for the application of our approach.
Tohalino, Jorge Andoni Valverde. "Extractive document summarization using complex networks." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-24102018-155954/.
Due to the large amount of textual information available on the Internet, the task of automatic document summarization has gained significant importance. Document summarization has become important because it focuses on developing techniques aimed at finding relevant and concise content in large volumes of information without altering its original meaning. The goal of this Master's work is to use concepts from graph theory for extractive document summarization, for both single-document summarization (SDS) and multi-document summarization (MDS). In this work, documents are modelled as networks, in which sentences are represented as nodes, with the aim of extracting the most relevant sentences through ranking algorithms. The edges between nodes are established in different ways. The first approach to computing edges is based on the number of common nouns between two sentences (network nodes). Another approach to creating an edge is through the similarity between two sentences. To compute this similarity, we used the vector space model based on Tf-Idf weighting, and word embeddings for the vector representation of the sentences. In addition, we distinguish between edges that link sentences from different documents (inter-layer) and those that connect sentences from the same document (intra-layer), using multilayer network models for the multi-document summarization task. In this approach, each layer of the network represents one document of the set of documents to be summarized. Besides the measurements typically used in complex networks, such as node degree, clustering coefficient and shortest paths, the network characterization is also guided by dynamical measurements of complex networks, including symmetry, accessibility and absorption time. The generated summaries were evaluated using different corpora for Portuguese and English.
The ROUGE-1 metric was used to validate the generated summaries. The results suggest that simpler models, such as networks based on nouns and Tf-Idf, achieved better performance than models based on word embeddings. Moreover, excellent results were obtained using the multilayer network representation of documents for MDS. Finally, we conclude that several measurements can be used to improve the characterization of networks for the summarization task.
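The sentences-as-nodes scheme described in this abstract can be sketched minimally as follows: link sentences whose Tf-Idf cosine similarity exceeds a threshold, then rank nodes by degree and keep the top ones. The threshold, the whitespace tokenization and the use of plain degree centrality are simplifying assumptions rather than the thesis's exact setup.

```python
import math
from collections import Counter

def tfidf_vectors(sentences):
    # Bag-of-words Tf-Idf vectors for whitespace-tokenized sentences.
    docs = [s.lower().split() for s in sentences]
    df = Counter(w for d in docs for w in set(d))
    n = len(docs)
    return [
        {w: tf * math.log(n / df[w]) for w, tf in Counter(d).items()}
        for d in docs
    ]

def cosine(u, v):
    dot = sum(weight * v.get(w, 0.0) for w, weight in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def summarize(sentences, k=2, threshold=0.1):
    # Build the sentence network: an edge joins two sentences whose
    # similarity exceeds the threshold; rank nodes by degree; keep top-k
    # sentences in their original order.
    vecs = tfidf_vectors(sentences)
    degree = [0] * len(sentences)
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if cosine(vecs[i], vecs[j]) > threshold:
                degree[i] += 1
                degree[j] += 1
    top = sorted(range(len(sentences)), key=lambda i: -degree[i])[:k]
    return [sentences[i] for i in sorted(top)]
```

In the multi-document variant, the same construction is applied per layer, with inter-layer and intra-layer edges treated separately; here a single-layer network suffices to show the mechanism.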
Toussaint, Ben-Manson. "Apprentissage automatique à partir de traces multi-sources hétérogènes pour la modélisation de connaissances perceptivo-gestuelles." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM063/document.
Perceptual-gestural knowledge is multimodal: it combines theoretical, perceptual and gestural knowledge. It is difficult to capture in Intelligent Tutoring Systems. In fact, capturing it in such systems involves the use of multiple devices or sensors covering all the modalities of the underlying interactions. The "traces" of these interactions, also referred to as "activity traces", are the raw material for the production of key tutoring services that take their multimodal nature into account. Methods for learning analytics and for the production of tutoring services that favor one facet over the others are incomplete. Moreover, the use of diverse devices generates heterogeneous activity traces, which are hard to model and treat. My doctoral project addresses the challenge of producing tutoring services that are congruent with this type of knowledge. I am specifically interested in this type of knowledge in the context of "ill-defined domains". My research case study is the Intelligent Tutoring System TELEOS, a simulation platform dedicated to percutaneous orthopedic surgery. The contributions of this thesis are threefold: (1) the formalization of perceptual-gestural interaction sequences; (2) the implementation of tools capable of reifying the proposed conceptual model; (3) the conception and implementation of algorithmic tools fostering the analysis of these sequences from a didactic point of view.
Kinauer, Stefan. "Représentations à base de parties pour la vision 3D de haut niveau." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC059/document.
In this work we use Deformable Part Models (DPMs) to learn and detect object parts in three dimensions. Given a single RGB image of an object, the objective is to determine the location of the object's parts. The resulting optimization problem is non-convex and challenging due to its large solution space. Our first contribution consists in extending DPMs into the third dimension through an efficient Branch-and-Bound algorithm. We devise a customized algorithm that is two orders of magnitude faster than a naive approach and guarantees global optimality. We derive the model's 3-dimensional geometry from one 3-dimensional structure, but train viewpoint-specific part appearance terms based on deep learning features. We demonstrate our approach on the task of 3D object pose estimation, determining the object pose within a fraction of a second. Our second contribution allows us to perform efficient inference with part-based models in which the part connections form a graph with loops, thereby allowing for richer models. For this, we use the Alternating Direction Method of Multipliers (ADMM) to decouple the problem and iteratively solve a set of easier sub-problems. We compute 3-dimensional model parameters in a Convolutional Neural Network for 3D human pose estimation, then append the developed inference algorithm as the final layer of this neural network. This yields state-of-the-art performance on the 3D human pose estimation task.
Harriet, Loïc. "L'intelligence économique à la lumière des concepts managériaux : l'étude de cas d'une entreprise du secteur énergétique." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0193/document.
"L'intelligence économique" is presented as a French conceptual exception after at a time of translations various English terms but also an aggregation of organizational functions related to information. These heteroclite bases are combined to an effervescent practice, “l’intelligence économique” never ceasing to develop in various forms in organizations. This thesis aims to propose a new theoretical basis for these experiments based on the managerial concepts through a case study of Gaz de Bordeaux, an energy firm. As part of an exploratory will based on a qualitative, the objective is to propose a definition based on the Management Science theories of asymmetric information, system and value
Jayles, Bertrand. "Effects of information quantity and quality on collective decisions in human groups." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30367/document.
In this thesis, we were interested in the impact of the quantity and quality of information exchanged between individuals in a group on their collective performance in two very specific types of tasks. In a first series of experiments, subjects had to estimate quantities sequentially, and could revise their estimates after receiving the average estimate of other subjects as social information. We controlled this social information through virtual participants (whose number we controlled) giving information (whose value we controlled), unbeknownst to the subjects. We showed that when subjects have little prior knowledge about a quantity to estimate, (the logarithms of) their estimates follow a Laplace distribution. Since the median is a good estimator of the center of a Laplace distribution, we defined collective performance as the proximity of the median (log) estimate to the true value. We found that after social influence, and when the information provided by the virtual agents is correct, the collective performance increases with the amount of information provided (the fraction of virtual agents). We also analysed the subjects' sensitivity to social influence, and found that it increases with the distance between the personal estimate and the social information. These analyses made it possible to define five behavioral traits: keeping one's opinion, adopting that of others, compromising, amplifying the social information, or contradicting it. Our results showed that the subjects who adopt the opinion of others are the ones who best improve their performance, because they are able to benefit from the information provided by the virtual agents. We then used these analyses to construct and calibrate a model of collective estimation, which quantitatively reproduced the experimental results and predicted that a limited amount of incorrect information can counterbalance a cognitive bias that makes subjects underestimate quantities, and thus improve collective performance.
Further experiments validated this prediction. In a second series of experiments, groups of 22 pedestrians had to segregate into clusters of the same "color", without visual cues (the colors were unknown), after a short period of random walk. To help them accomplish their task, we used an information filtering system (analogous to a sensory device such as the retina), taking all the positions and colors of individuals as input, and returning an acoustic signal to the subjects (emitted by tags attached to their shoulders) when the majority of their k nearest neighbors was of a different color from theirs.
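The median-of-log-estimates statistic used above to define collective performance can be sketched as follows. The helper that appends virtual agents is a hypothetical illustration of the experimental manipulation, not the authors' code.

```python
import math
import statistics

def collective_estimate(estimates):
    # Median of the log-estimates, mapped back to the original scale:
    # the median is a robust estimator of the center of a Laplace
    # distribution, hence a natural collective estimate here.
    logs = [math.log(x) for x in estimates]
    return math.exp(statistics.median(logs))

def with_virtual_agents(estimates, true_value, n_virtual):
    # Hypothetical helper: mix n_virtual "virtual agents" that all
    # report the correct value in with the human estimates.
    return estimates + [true_value] * n_virtual
```

Working in log space reflects the finding that estimates of poorly known quantities are distributed over orders of magnitude; the median is then far less sensitive to extreme answers than the mean.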
Pruvost, Gaëtan. "Modélisation et conception d’une plateforme pour l’interaction multimodale distribuée en intelligence ambiante." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112017/document.
This thesis deals with ambient intelligence and the design of Human-Computer Interaction (HCI). It studies the automatic generation of user interfaces that are adapted to the interaction context in ambient environments. This problem raises design issues that are specific to ambient HCI, particularly regarding the reuse of multimodal and multi-device interaction techniques. The present work falls into three parts. The first part is an analysis of state-of-the-art software architectures designed to solve these issues. This analysis outlines the limits of current approaches and enables us to propose, in the second part, a new approach for the design of ambient HCI called DAME. This approach relies on the automatic and dynamic association of software components that build a user interface. We propose and define two complementary models that allow the description of the ergonomic and architectural properties of the software components. The design of such components is organized in a layered architecture that identifies reusable levels of abstraction of an interaction language. A third model, called the behavioural model, allows the specification of recommendations about the runtime instantiation of components. We propose an algorithm that generates context-adapted user interfaces and evaluates their quality according to the recommendations issued from the behavioural model. In the third part, we detail our implementation of a platform that realizes the DAME approach. This implementation is used in a qualitative experiment involving end users. Encouraging preliminary results have been obtained and open new perspectives on multi-device and multimodal HCI in ambient computing.
Yacoubi, Alya. "Vers des agents conversationnels capables de réguler leurs émotions : un modèle informatique des tendances à l’action." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS378/document.
Conversational virtual agents with social behavior are often based on at least two different disciplines: computer science and psychology. In most cases, psychological findings are converted into computational mechanisms in order to make agents look and behave in a believable manner. In this work, we aim at increasing conversational agents' believability and making human-agent interaction more natural by modelling emotions. More precisely, we are interested in task-oriented conversational agents, which are used as a customer-relationship channel to respond to users' requests. We propose an affective model for the generation and control of emotional responses during a task-oriented interaction. Our model is based, on the one hand, on the psychological theory of Action Tendencies (AT), to generate emotional responses during the interaction. On the other hand, the emotional control mechanism is inspired by social emotion regulation in empirical psychology. Both mechanisms use the agent's goals, beliefs and ideals. This model has been implemented in an agent architecture endowed with a natural language processing engine developed by the company DAVI. In order to confirm the relevance of our approach, we carried out several experimental studies. The first validated verbal expressions of action tendencies in a human-agent dialogue. In the second, we studied the impact of different emotion regulation strategies on the perception of the agent by the user. This study allowed us to design a social regulation algorithm based on theoretical and empirical findings. Finally, the third study focused on the evaluation of emotional agents in real-time interactions. Our results show that the regulation process contributes to increasing the credibility and perceived competence of agents, as well as to improving the interaction. They highlight the need to take into consideration two complementary emotional mechanisms, the generation and the regulation of emotional responses, and they open perspectives on different ways of managing emotions and their impact on the perception of the agent.
Baaziz, Abdelkader. "Synergie du triptyque : Knowledge Management, Intelligence Economique et Business Intelligence. Contribution à la réduction des risques liés aux décisions stratégiques dans les nouveaux environnements concurrentiels incertains : Cas des Entreprises Publiques Algériennes." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM5900/document.
Since 1988, Algeria has undertaken deep economic reforms supported by significant legislation and international agreements. The Algerian state abandoned its role of protector without providing the required regulatory role. The transition from a planned economy based on a state monopoly to a market economy, characterized by the emergence of a local and foreign private sector, implies radical changes both politically and institutionally for the Algerian state, and on the organizational, strategic and technological levels for State-Owned Firms. In this uncertain environment, Algerian State-Owned Firms cannot rely only on their internal capabilities. They should create partnerships with suppliers, subcontractors, universities and even competitors. These firms need to: transform their organization into a new form, better prepared for unexpected events and resilient enough to adapt to uncertain environments; build a strategic intelligence information system able to facilitate decision-making and reduce the risks inherent in strategic choices; and find ways to reverse choices when unexpected events occur. The aim of this thesis is to show the complexity of the political, legal, social and economic environments in which Algerian State-Owned Firms operate, and why it is necessary to handle the following risks: inertia against the process of organizational transformation, misinterpretation of the signals received from the environment, and poor reaction of the decision-maker to signals and events in the environment. Here, the concepts of KM, CI and BI operate at different levels of management, from strategic to operational.
Abid, Zied. "Gestion de la qualité de contexte pour l'intelligence ambiante." Thesis, Evry, Institut national des télécommunications, 2012. http://www.theses.fr/2012TELE0041/document.
Context-aware computing aims to reduce the amount of explicit information required from a user for a system to perform a task. This is particularly true in the recent domain of ambient intelligence, where everyday objects are able to trigger an action or a spontaneous information exchange without any interaction with the user. Technical advances in wireless communication, personal mobile devices, sensors and embedded software make context-aware services possible, but concrete applications are still very limited. The solutions proposed in the literature decompose context management into four functions: acquisition, interpretation, situation detection and application adaptation. The differentiating element in these proposals is the quality of the high-level context information obtained by inference and characterising the situation of the user. The limits of these solutions are the difficulty of composing context information, scalability in terms of the quantity of context information and of the number of client applications, the absence of guarantees on the consistency of context information, and the lack of middleware solutions able to free the designer of context-aware applications from the management of context data. In this thesis, we are interested in the management of the quality of context information (QoC) in an ambient environment. QoC management raises several key issues: choosing the adequate method for context management, extracting the quality associated with the context, and analysing and interpreting the quality of the context with regard to the requirements of context-aware applications.
We propose to answer these questions by integrating QoC management into the COSMOS context management framework (http://picoforge.int-evry.fr/projects/svn/cosmos) developed by the MARGE team (http://www-inf.itsudparis.eu/MARGE) of Télécom SudParis. For this purpose, we have designed the components dedicated to QoC management and implemented the mechanisms allowing a fine-grained manipulation of the QoC together with a limitation of the associated overhead. We also propose a design process based on model-driven engineering in order to automatically generate the elements responsible for QoC management. We validate our contributions through the development of two prototype applications running on mobile phones: a flash-sale offer application to be used in malls and a location detection application offered to the students of a campus. The performance tests we have conducted allow us to compare the results obtained with and without taking the QoC into account, and show the low overhead associated with QoC management with regard to the benefits brought to context-aware applications and services.
Hirel, Julien. "Codage hippocampique par transitions spatio-temporelles pour l’apprentissage autonome de comportements dans des tâches de navigation sensori-motrice et de planification en robotique." Thesis, Cergy-Pontoise, 2011. http://www.theses.fr/2011CERG0552/document.
This thesis takes an interest in the mechanisms facilitating the autonomous acquisition of behaviors in animals and proposes to use these mechanisms in the frame of robotic tasks. Artificial neural networks are used to model cerebral structures, both to understand how these structures work and to design robust and adaptive algorithms for robot control. The work presented here is based on a model of the hippocampus capable of learning the temporal relationship between perceptive events. The neurons performing this learning, called transition cells, can predict which future events the robot could encounter. These transitions support the building of a cognitive map, located in the prefrontal and/or parietal cortex. The map can be learned by a mobile robot exploring an unknown environment and then be used to plan paths in order to reach one or several goals. Apart from their use in building a cognitive map, transition cells are also the basis for the design of a model of reinforcement learning. A biologically plausible neural implementation of the Q-learning algorithm, using transitions, is made by taking inspiration from the basal ganglia. This architecture provides an alternative to the cognitive-map planning strategy. The reinforcement learning strategy requires a longer learning period but corresponds more to an automatic low-level behavior. Experiments are carried out with both strategies used in cooperation, and lesions of the prefrontal cortex and basal ganglia allow us to reproduce experimental results obtained with rats. Transition cells can learn temporally precise relations predicting the exact timing when an event should be perceived. In a model of interactions between the hippocampus and prefrontal cortex, we show how these predictions can explain in-vivo recordings in these cerebral structures, in particular when a rat carries out a task during which it must remain stationary for 2 seconds on a goal location to obtain a reward.
The learning of temporal information about the environment and the behavior of the robot allows the system to detect regularities. Conversely, the absence of a predicted event can signal a failure in the behavior of the robot, which can be detected and acted upon in order to modulate the failing behavior. Consequently, a failure detection system is developed, taking advantage of the temporal predictions provided by the hippocampus and the interaction between behavior modulation functions in the prefrontal cortex and reinforcement learning in the basal ganglia. Several robotic experiments are conducted, in which the failure signal is used to modulate, immediately at first, the behavior of the robot in order to stop selecting actions which lead to failures and to explore other strategies. The signal is then used in a more lasting way by modulating the learning of the associations leading to the selection of an action, so that the repeated failures of an action in a particular context lead to the suppression of this association. Finally, after having used the model in the frame of navigation, we demonstrate its generalization capabilities by using it to control a robotic arm in a trajectory planning task. This work constitutes an important step towards a generic and unified model allowing the control of various robotic setups and the learning of tasks of different natures.
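The transition-based reinforcement-learning strategy in this abstract can be pictured with a minimal tabular Q-learning sketch. The corridor world, reward placement and learning rates below are illustrative assumptions; the thesis implements the update as a biologically plausible neural network over transition cells, inspired by the basal ganglia:

```python
import random

# Illustrative 1-D corridor: states 0..4, reward 1 on reaching goal state 4.
# In the thesis the "states" are hippocampal transition cells; plain
# integers and a Q-table keep this sketch self-contained.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                 # step left / step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

def greedy(state):
    best = max(Q[(state, a)] for a in ACTIONS)
    return rng.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(500):               # training episodes
    s = 0
    while s != GOAL:
        a = rng.choice(ACTIONS) if rng.random() < EPS else greedy(s)
        s2, r = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right toward the goal everywhere.
policy = [greedy(s) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

As the abstract notes, such learning is slower than map-based planning but yields an automatic, low-level behavior once trained.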
Cully, Antoine. "Creative Adaptation through Learning." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066664/document.
Robots have transformed many industries, most notably manufacturing, and have the power to deliver tremendous benefits to society, for example in search and rescue, disaster response, health care, and transportation. They are also invaluable tools for scientific exploration of distant planets or deep oceans. A major obstacle to their widespread adoption in more complex environments and outside of factories is their fragility. While animals can quickly adapt to injuries, current robots cannot “think outside the box” to find a compensatory behavior when they are damaged: they are limited to their pre-specified self-sensing abilities, which can diagnose only anticipated failure modes and strongly increase the overall complexity of the robot. In this thesis, we propose a different approach that considers having robots learn appropriate behaviors in response to damage. However, current learning techniques are slow even with small, constrained search spaces. To allow fast and creative adaptation, we combine the creativity of evolutionary algorithms with the learning speed of policy search algorithms through three contributions: the behavioral repertoires, the damage recovery using these repertoires and the transfer of knowledge across tasks. Globally, this work aims to provide the algorithmic foundations that will allow physical robots to be more robust, effective and autonomous
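The "behavioral repertoires" contribution can be pictured with a small archive-of-elites sketch in the spirit of illumination algorithms such as MAP-Elites: candidate behaviours are binned by a behaviour descriptor, and each bin keeps only its best performer. The descriptor, fitness function and mutation scheme below are toy assumptions, not the thesis's robot tasks:

```python
import random

rng = random.Random(1)

# Toy search space: a behaviour is a pair (x, y) in [0, 1]^2.
# Behaviour descriptor: which cell of a 5x5 grid (x, y) falls into.
# Fitness: closeness to the centre (purely illustrative).
GRID = 5

def descriptor(b):
    x, y = b
    return (min(int(x * GRID), GRID - 1), min(int(y * GRID), GRID - 1))

def fitness(b):
    x, y = b
    return -((x - 0.5) ** 2 + (y - 0.5) ** 2)

archive = {}  # descriptor cell -> (fitness, behaviour)

def maybe_add(b):
    cell, f = descriptor(b), fitness(b)
    if cell not in archive or f > archive[cell][0]:
        archive[cell] = (f, b)

# Initialise with random behaviours, then repeatedly mutate stored elites.
for _ in range(100):
    maybe_add((rng.random(), rng.random()))
for _ in range(2000):
    _, parent = archive[rng.choice(list(archive))]
    child = tuple(min(max(v + rng.gauss(0, 0.1), 0.0), 1.0) for v in parent)
    maybe_add(child)

# The archive now holds one elite per reachable niche: a repertoire of
# diverse behaviours that can be searched when adaptation is needed.
print(len(archive))  # up to 25 niches filled
```

A damaged robot would then search such a repertoire for a compensatory behaviour instead of learning from scratch.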
Beaugency, Aurélie. "Capacités dynamiques et compréhension des enjeux sectoriels : apports de l’intelligence technologique au cas de l’avionique." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0290/document.
The understanding of the scientific dynamics of an environment, whether technological or competitive, occupies a predominant place in discussions of the adaptation and survival of firms. In the case of avionics, the upheaval of the 2000s is the consequence of profound changes in its two main sectors, aeronautics and electronics. This drove the Computer Department, part of the Avionics Division of Thales Group, to question its ability to handle these evolutions. In this thesis, we examine one of these mechanisms, the sensing dynamic capability (defined as the aptitudes deployed by firms in order to adapt routines and organizational capabilities), and we put it into practice through a technological intelligence capability. By studying the deployment of this capability inside the Department, we show how technological intelligence contributes to the learning process of the firm, as it is used by managers in order to influence the selection process of product policy. To achieve this, we adopted a research-intervention methodology (with the support of an industrial CIFRE agreement) based on two steps. First, we show that through the operationalization of the technological intelligence capability in the department, managers put the latter to use in the selection of product policies. Second, the results of the technical studies conducted for this deployment add to the understanding of the scientific and technological dynamics of the avionics sector.
Mkhida, Abdelhak. "Contribution à l'évaluation de la sûreté de fonctionnement des Systèmes Instrumentés de Sécurité à Intelligence Distribuée." Thesis, Vandoeuvre-les-Nancy, INPL, 2008. http://www.theses.fr/2008INPL083N/document.
The incorporation of intelligent instruments in safety loops leads towards the concept of intelligent safety, and the systems become "Intelligent Distributed Safety Instrumented Systems" (IDSIS). The justification for using these instruments in safety applications is not fully proven, and the dependability evaluation of such systems is a difficult task. The work achieved in this thesis deals with modelling and performance evaluation relating to the dependability of structures that embed intelligence in the instruments constituting Safety Instrumented Systems (SIS). In the modelling of the system, the functional and dysfunctional aspects coexist, and a dynamic approach using Stochastic Activity Networks (SAN) is proposed to overcome the difficulties mentioned above. The introduction of performance indicators highlights the effect of integrating levels of intelligence in safety applications. The Monte Carlo method is used to assess the dependability parameters in compliance with the safety standards related to SIS (IEC 61508 and IEC 61511). We propose a method and associated tools to approach this evaluation by simulation, and thus provide assistance in designing Safety Instrumented Systems integrating features of intelligent instruments.
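The Monte Carlo evaluation mentioned here can be illustrated, in a drastically simplified form, by estimating the average probability of failure on demand (PFDavg) of a toy 1oo2 architecture. The failure rate, proof-test interval and absence of repair are assumptions of this sketch, not values from the thesis, which models the system with Stochastic Activity Networks:

```python
import math
import random

rng = random.Random(42)

# Toy 1oo2 (one-out-of-two) architecture: the safety function is lost on a
# demand only if BOTH redundant channels have already failed dangerously.
LAMBDA = 1e-4      # dangerous failure rate per hour (assumed)
T_PROOF = 8760.0   # proof-test interval in hours (assumed: one year, no repair)
TRIALS = 100_000

def time_to_failure():
    # Sample an exponential failure time; 1 - random() avoids log(0).
    return -math.log(1.0 - rng.random()) / LAMBDA

failures_on_demand = 0
for _ in range(TRIALS):
    demand = rng.random() * T_PROOF            # demand at a random instant
    ch1, ch2 = time_to_failure(), time_to_failure()
    if ch1 < demand and ch2 < demand:          # both channels already down
        failures_on_demand += 1

pfd_avg = failures_on_demand / TRIALS
print(round(pfd_avg, 3))   # analytically about 0.139 for these toy values
```

The same sampling idea scales to the dynamic, state-dependent models the thesis targets, where closed-form PFD formulas no longer apply.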
Ciupak, Liége Franken. "Business intelligence na gestão universitária : um estudo de aplicabilidade na UNIOESTE." Universidade Estadual de Londrina. CECA. Prog. Pós-Graduação em Gestão da Inf. Escola de Governo do Paraná, 2011. http://www.bibliotecadigital.uel.br/document/?code=vtls000166566.
The concept of Business Intelligence (BI) covers several technologies that facilitate the acquisition and visualization of information, and, as in a competitive enterprise, university management must keep itself up to date by adopting processes that assist decision-making and meet society's demands with skill and quality. Information Systems (IS) are essential elements and must go beyond routine processing activities, since they contribute to the strategic point of view. The Western Paraná State University (UNIOESTE) has several IS that store a large amount of data; on the other hand, their users still have difficulty obtaining information in their intended format. Thus, this study aimed at researching BI technologies, mainly On-Line Analytical Processing (OLAP), and at implementing a prototype application to facilitate the retrieval of information at the Pro-Rectory of Planning (PROPLAN), whose sources are several IS from UNIOESTE. In order to reach that goal, a qualitative, exploratory/descriptive approach was developed through literature and public documents. As a result, a prototype was built with the Business Intelligence Development Studio (BIDS) tool, a component of Microsoft SQL Server 2008, combined with the Excel 2010 spreadsheet editor as the end-user interface. As part of the evaluation process, the SUS questionnaire was used to measure user satisfaction with the prototype; the average score given by the participants was 91.5 on a scale from 0 to 100, a very positive evaluation. As the result was a prototype, it is expected that UNIOESTE supports this initiative, invests in training, and encourages more people to contribute to the implementation of BI projects at UNIOESTE in order to meet the users' needs, represented in this study by PROPLAN.
Bange, Carsten. "Business intelligence aus Kennzahlen und Dokumenten : Integration strukturierter und unstrukturierter Daten in entscheidungsunterstützenden Informationssystemen /." Hamburg : Kovac, 2004. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=012863212&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.
Kotevska, Olivera. "Learning based event model for knowledge extraction and prediction system in the context of Smart City." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM005/document.
Billions of "things" connected to the Internet constitute symbiotic networks of communication devices (e.g., phones, tablets and laptops), smart appliances (e.g., fridges, coffee makers and so forth) and networks of people (e.g., social networks). The concept of traditional networks (e.g., computer networks) is thus expanding and will in the future go beyond it, including more entities and information. These networks and devices are constantly sensing, monitoring and generating a vast amount of data on all aspects of human life. One of the main challenges in this area is that the network consists of "things" which are heterogeneous in many ways; another is that the state of the interconnected objects changes over time; and there are so many entities in the network that it is crucial to identify their interdependencies in order to better monitor and predict the network's behavior. In this research, we address these problems by combining the theory and algorithms of event processing with machine learning. Our goal is to propose a possible solution to make better use of the information generated by these networks. It will help to create systems that detect and respond promptly to situations occurring in urban life, so that smart decisions can be made for citizens, organizations, companies and city administrations. Social media is treated as a source of information about situations and facts related to the users and their social environment. At first, we tackle the problem of identifying public opinion over a given period (year, month) to get a better understanding of city dynamics. To solve this problem, we propose a new algorithm to analyze complex and noisy textual data such as Twitter messages (tweets). This algorithm permits automatic categorization and similarity identification between event topics by using clustering techniques.
The second challenge is combining network data with various properties and characteristics into a common format that facilitates data sharing among services. To solve it, we created a common event model that reduces the representation complexity while keeping the maximum amount of information. This model has two major additions: semantics and scalability. The semantic part means that our model is underpinned by an upper-level ontology that adds interoperability capabilities, while the scalability part means that the structure of the proposed model is flexible in adding new entries and features. We validated this model by using complex event patterns and predictive analytics techniques. To deal with the dynamic environment and unexpected changes, we created a dynamic, resilient network model: it always chooses the optimal model for analytics and automatically adapts to changes by selecting the next best model. We used a qualitative and quantitative approach for scalable event stream selection, which narrows down the solution for link analysis and for the optimal and alternative best models. It also identifies relationships between data streams, such as correlation, causality and similarity, to identify relevant data sources that can act as alternative data sources or complement the analytics process.
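The clustering step at the end of this abstract can be sketched with a toy single-pass similarity clustering over bag-of-words vectors. The messages, the threshold and the greedy strategy are illustrative assumptions; the thesis's algorithm is more elaborate:

```python
from collections import Counter
import math

# Toy messages standing in for tweets (illustrative data, not the corpus).
tweets = [
    "traffic jam on main street this morning",
    "huge traffic jam near main street again",
    "concert tonight in the city park",
    "great concert at the park tonight",
]

def vector(text):
    return Counter(text.split())          # raw term frequencies

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Greedy single-pass clustering: attach each message to the first cluster
# whose representative (first member) is similar enough, else open a new one.
THRESHOLD = 0.4
clusters = []          # list of lists of tweet indices
for i, t in enumerate(tweets):
    v = vector(t)
    for cl in clusters:
        if cosine(v, vector(tweets[cl[0]])) >= THRESHOLD:
            cl.append(i)
            break
    else:
        clusters.append([i])

print(clusters)  # [[0, 1], [2, 3]]
```

Each resulting cluster approximates one event topic; similarity between cluster representatives can then serve the topic-similarity identification the abstract mentions.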
Gruet, Marina. "Intelligence artificielle et prévision de l'impact de l'activité solaire sur l'environnement magnétique terrestre." Thesis, Toulouse, ISAE, 2018. http://www.theses.fr/2018ESAE0014/document.
In this thesis, we present models from the field of artificial intelligence to predict the geomagnetic index am from solar wind parameters, in order to provide operational models based on data recorded by the ACE satellite located at the Lagrangian point L1. Currently, there is no model providing predictions of the geomagnetic index am. To predict this index, we relied on nonlinear models called neural networks, which allow us to model the complex and nonlinear dynamics of the Earth's magnetosphere. First, we worked on the development and optimisation of basic neural networks such as the multilayer perceptron. Such models have proven effective in space weather for predicting geomagnetic indices specific to current systems, such as the Dst index, characteristic of the ring current, as well as the global geomagnetic index Kp. In particular, we studied a temporal network called the Time Delay Neural Network (TDNN) and assessed its ability to predict the geomagnetic index am one hour ahead, based only on solar wind parameters. We analysed the sensitivity of the network's performance when considering, on the one hand, data from the OMNI database at the bow shock and, on the other hand, data from the ACE satellite at the L1 point. After studying the ability of neural networks to predict the geomagnetic index am, we developed a network that had never been used before in space weather, the Long Short-Term Memory (LSTM) network. Like the TDNN, this network predicts am based only on solar wind parameters. We optimised this network to best model the behaviour of the magnetosphere and obtained better performance than with the TDNN. We continued the development and optimisation of the LSTM network by using coupling functions as input features, and by developing multi-output networks to predict the sectorial am, also called aσ, specific to each Magnetic Local Time sector.
Finally, we developed a new technique combining the LSTM network and Gaussian processes to provide probabilistic predictions of the geomagnetic indices Dst and am up to six hours ahead. This method was first developed to predict Dst, in order to compare the performance of our model with reference models, and then applied to the geomagnetic index am.
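The time-delay idea behind the TDNN, predicting the index one hour ahead from the last few hours of solar-wind measurements, amounts to building lagged input windows. A minimal sketch with synthetic values (the series, lag length and units are assumptions of this illustration):

```python
# Build (input window, target) pairs for a one-step-ahead predictor:
# each input is the last LAG hourly samples of a solar-wind feature,
# the target is the geomagnetic index one hour later (synthetic data).
LAG = 3

solar_wind = [400, 420, 450, 500, 480, 460]   # e.g. hourly speed, km/s
am_index   = [ 10,  12,  20,  35,  30,  25]   # toy target series

samples = [
    (solar_wind[t - LAG:t], am_index[t])
    for t in range(LAG, len(solar_wind))
]
for window, target in samples:
    print(window, "->", target)
# [400, 420, 450] -> 35
# [420, 450, 500] -> 30
# [450, 500, 480] -> 25
```

An LSTM consumes the same windows sequentially instead of as a flat vector, which is what lets it retain longer-range history than a fixed-lag TDNN.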
Papadopoulos, Georgios. "Towards a 3D building reconstruction using spatial multisource data and computational intelligence techniques." Thesis, Limoges, 2019. http://www.theses.fr/2019LIMO0084/document.
Building reconstruction from aerial photographs and other multi-source urban spatial data is a task addressed with a plethora of automated and semi-automated methods, ranging from point processes and classic image processing to laser scanning. In this thesis, an iterative relaxation system is developed based on the examination of the local context of each edge according to multiple spatial input sources (optical, elevation, shadow and foliage masks, as well as other pre-processed data, as elaborated in Chapter 6). All these multi-source and multi-resolution data are fused so that probable line segments or edges corresponding to prominent building boundaries are extracted. Two novel sub-systems have also been developed in this thesis. They were designed with the purpose of providing additional, more reliable information regarding building contours in a future version of the proposed relaxation system. The first is a deep convolutional neural network (CNN) method for the detection of building borders. In particular, the network is based on the state-of-the-art super-resolution model SRCNN (Dong C. L., 2015). It accepts aerial photographs depicting densely populated urban areas as well as their corresponding digital elevation maps (DEM). Training is performed using three variations of this urban data set and aims at detecting building contours through a novel super-resolved heteroassociative mapping. Another innovation of this approach is the design of a modified custom loss layer named Top-N. In this variation, the mean square error (MSE) between the reconstructed output image and the provided ground-truth (GT) image of building contours is computed on the 2N image pixels with the highest values. Assuming that most of the N contour pixels of the GT image are also among the top 2N pixels of the reconstruction, this modification balances the two pixel categories and improves the generalization behavior of the CNN model.
The experiments show that the Top-N cost function offers performance gains in comparison to the standard MSE; further improvement in the generalization ability of the network is achieved by using dropout. The second sub-system is a super-resolution deep convolutional network, which performs an enhanced-input associative mapping between input low-resolution and high-resolution images. This network has been trained with low-resolution elevation data and the corresponding high-resolution optical urban photographs. Such a resolution discrepancy between optical aerial/satellite images and elevation data is often the case in real-world applications. More specifically, low-resolution elevation data, augmented by high-resolution optical aerial photographs, are used with the aim of increasing the resolution of the elevation data. This is a unique super-resolution problem, for which it was found that many of the proposed general-image SR methods do not perform as well. The network, aptly named building super-resolution CNN (BSRCNN), is trained using patches extracted from the aforementioned data. Results show that, in comparison with a classic bicubic upscaling of the elevation data, the proposed implementation offers an important improvement, as attested by modified PSNR and SSIM metrics; other general-image SR methods performed worse than a standard bicubic up-scaler. Finally, the relaxation system fuses together all these data sources, comprising pre-processed optical data, elevation data, foliage masks, shadow masks and other pre-processed data, in an attempt to assign confidence values to each pixel belonging to a building contour. Confidence is augmented or decremented iteratively until the MSE falls below a specified threshold or a maximum number of iterations has been executed. The confidence matrix can then be used to extract the true building contours via thresholding.
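The Top-N loss is concrete enough to sketch: compute the MSE only over the 2N pixels with the highest values. We assume here, as an illustration, that "highest values" refers to the reconstructed image; the toy pixel data are not from the thesis:

```python
def top_n_mse(reconstruction, ground_truth, n_contour):
    """MSE restricted to the 2N pixels with the highest reconstructed values.

    reconstruction, ground_truth: flat lists of pixel intensities.
    n_contour: N, the (approximate) number of contour pixels in the GT.
    """
    k = 2 * n_contour
    # indices of the k largest reconstructed pixels
    top = sorted(range(len(reconstruction)),
                 key=lambda i: reconstruction[i], reverse=True)[:k]
    return sum((reconstruction[i] - ground_truth[i]) ** 2 for i in top) / k

# Toy 1-D "image": 2 contour pixels (value 1.0) among 8 background pixels.
gt   = [0, 0, 1.0, 1.0, 0, 0, 0, 0]
pred = [0.1, 0.0, 0.9, 0.6, 0.0, 0.2, 0.0, 0.0]
print(round(top_n_mse(pred, gt, n_contour=2), 4))  # 0.055
```

Restricting the error to these 2N pixels keeps the scarce contour pixels from being swamped by the abundant background pixels, which is the balancing effect the abstract describes.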
Fakeri, Tabrizi Ali. "Semi-supervised multi-view learning : an application to image annotation and multi-lingual document classification." Paris 6, 2013. http://www.theses.fr/2013PA066336.
In this thesis, we introduce two multiview learning approaches. In the first approach, we describe a self-training multiview strategy which trains different voting classifiers on different views. The margin distributions over the unlabeled training data obtained with each view-specific classifier are then used to estimate an upper bound on their transductive Bayes error. Minimizing this upper bound provides an automatic margin threshold, which is used to assign pseudo-labels to unlabeled examples. Final class labels are then assigned to these examples by taking a vote on the pool of the previous pseudo-labels. New view-specific classifiers are then trained using the original labeled data and the pseudo-labeled training data. We consider applications to image-text and to multilingual document classification. In the second approach, we propose a multiview semi-supervised bipartite ranking model which allows us to leverage the information contained in unlabeled sets of images to improve prediction performance, using multiple descriptions, or views, of images. For each topic class, our approach first learns as many view-specific rankers as there are available views, using the labeled data only. These rankers are then improved iteratively by adding pseudo-labeled pairs of examples on which all view-specific rankers agree over the ranking of examples within these pairs.
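The pseudo-labelling step of the first approach can be sketched: each view-specific classifier labels only the unlabeled examples whose margin exceeds its threshold, and the confident views must then agree. The scores, thresholds and unanimity rule below are illustrative assumptions; the thesis derives each threshold from a transductive Bayes-error bound and takes a vote over the pooled pseudo-labels:

```python
# Each view-specific classifier outputs a signed score per unlabeled example:
# sign = predicted class, magnitude = margin (confidence).
view_scores = {
    "image_view": [ 0.9, -0.2,  0.7, -0.8],
    "text_view":  [ 0.6, -0.7, -0.6, -0.9],
}
margin_threshold = {"image_view": 0.5, "text_view": 0.5}

n = 4
pseudo_labels = []
for i in range(n):
    votes = []
    for view, scores in view_scores.items():
        if abs(scores[i]) >= margin_threshold[view]:      # confident enough
            votes.append(1 if scores[i] > 0 else -1)
    # Keep the example only if at least one view is confident and all the
    # confident views agree unanimously; otherwise leave it unlabeled.
    if votes and abs(sum(votes)) == len(votes):
        pseudo_labels.append((i, votes[0]))

print(pseudo_labels)  # [(0, 1), (1, -1), (3, -1)]
```

Example 2 stays unlabeled because its two confident views disagree; the labeled pool plus these pseudo-labels would then retrain the view-specific classifiers.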
Sorici, Alexandru. "Un Intergiciel de Gestion du Contexte basé Multi-Agent pour les Applications d'Intelligence Ambiante." Thesis, Saint-Etienne, EMSE, 2015. http://www.theses.fr/2015EMSE0790/document.
The complexity and magnitude of Ambient Intelligence scenarios imply that attributes such as modeling expressiveness, flexibility of representation and deployment, and ease of configuration and development become central features for context management systems. However, existing works in the literature explore these development-oriented attributes only to a low degree. Our goal is to create a flexible and easily configurable context management middleware, able to respond to different scenarios. To this end, our solution is built on the principles and techniques of the Semantic Web and Multi-Agent Systems. We use the Semantic Web to provide a new context meta-model, allowing for an expressive and extensible modeling of content, meta-properties (e.g. temporal validity, quality parameters) and dependencies (e.g. integrity constraints). In addition, we develop a middleware architecture that relies on Multi-Agent Systems and a service-component-based design. Each agent of the system encapsulates a functional aspect of the context provisioning processes (acquisition, coordination, distribution, use). We introduce a new way to structure the deployment of agents depending on the multi-dimensionality aspects of the application's context model. Furthermore, we develop declarative policies governing the adaptation behavior of the agents managing the provisioning of context information. Simulations of an intelligent-university scenario show that appropriate tooling built around our middleware can provide significant advantages in the engineering of context-aware applications.
Rossit, Julien. "Fusion d'informations incertaines sans commensurabilité des échelles de référence." Thesis, Artois, 2009. http://www.theses.fr/2009ARTO0405/document.
The problem of merging multiple-source information is crucial for many applications, in particular when one has to take into account several potentially conflicting pieces of information, as in distributed database frameworks, multi-agent systems, or distributed information in general. The relevant pieces of information are provided by different sources, and all existing pieces of information have to be confronted to obtain a global and coherent point of view. This problem is well known as the data fusion problem. Most existing merging methods are based on the following assumption: the ranks associated with beliefs are commensurable from one source to another. This commensurability assumption can be too strong for several applications: comparing or combining ranks does not make sense if sources do not share the same meaning of scales. This thesis proposes different solutions to the problem of incommensurability in the merging of ranked beliefs. Our first main contribution is a natural way to restore commensurability, relying on the notion of compatible scales. The second directly defines a partial pre-order between interpretations, in a way similar to the Pareto criterion. Moreover, this thesis introduces several inference relations based on selection functions over compatible scales. We analyze the impact of these selection functions on the satisfaction of rational postulates and on the prudence of the merging operators. In particular, we introduce a stronger version of the fairness postulate, called the consensus postulate, and show that most of our merging operators constitute consensual approaches.
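The second contribution, a partial pre-order in the spirit of the Pareto criterion, can be sketched directly: an interpretation is at least as plausible as another iff every source ranks it no worse, so rank values are never compared across sources. The rank profiles below are toy assumptions:

```python
# Ranks assigned to three interpretations by two sources; lower = more
# plausible. The scales are NOT assumed commensurable across sources, so
# we only compare rank values within each source (Pareto-style).
ranks = {
    "w1": {"source_A": 0, "source_B": 1},
    "w2": {"source_A": 2, "source_B": 3},
    "w3": {"source_A": 1, "source_B": 0},
}

def at_least_as_plausible(w, w_prime):
    """Pareto dominance: w ranks no worse than w' for every single source."""
    return all(ranks[w][s] <= ranks[w_prime][s] for s in ranks[w])

# w1 dominates w2 (better for both sources), but w1 and w3 are
# incomparable: each source prefers a different one, so the pre-order
# stays partial rather than total.
print(at_least_as_plausible("w1", "w2"))   # True
print(at_least_as_plausible("w1", "w3"))   # False
print(at_least_as_plausible("w3", "w1"))   # False
```

The incomparability of w1 and w3 is exactly what makes the resulting merging operators prudent: without commensurable scales, no source-blind trade-off is forced.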
Anseur, Ouardia. "Usages et besoins en information des agriculteurs en Algérie." Thesis, Lyon 2, 2009. http://www.theses.fr/2009LYO20059/document.
The development of scientific research linked to ICT has given rise to the knowledge society that is now ours. Like raw materials, intellectual capital is a source of development and innovation, provided it is organized through mechanisms that avoid dispersal and favour the emergence of a collective intelligence. On the basis of survey results, the author of this study set out to measure the level of integration of knowledge in the development strategy of the agricultural sector in Algeria. The results presented highlight that the partitioning between the different actors producing knowledge and/or information does not serve mutualisation and synergy. In this context, the observatory of agricultural research in Algeria, currently under development, takes its full place, aiming to bring together on a single platform the main sources of information and knowledge of the sector.