Dissertations on the topic "Information extraction and fusion"
Browse the top 50 dissertations for research on the topic "Information extraction and fusion".
Browse dissertations from various disciplines and compile your bibliography correctly.
Ahmad, Muhammad Imran. "Feature extraction and information fusion in face and palmprint multimodal biometrics." Thesis, University of Newcastle upon Tyne, 2013. http://hdl.handle.net/10443/2128.
Jin, Xiaoying. "Automatic extraction of man-made objects from high-resolution satellite imagery by information fusion." Diss., Columbia, Mo. : University of Missouri-Columbia, 2005. http://hdl.handle.net/10355/5816.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file, viewed on November 15, 2006. Vita. Includes bibliographical references.
Arif-Uz-Zaman, Kazi. "Failure and maintenance information extraction methodology using multiple databases from industry: A new data fusion approach." Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/116354/1/Kazi_Arif-Uz-Zaman_Thesis.pdf.
Thuillier, Etienne. "Extraction of mobility information through heterogeneous data fusion : a multi-source, multi-scale, and multi-modal problem." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCA019.
Today we live in a world where ecological, economic and societal issues are increasingly pressing. At the crossroads of the various guidelines envisaged to address these problems, a more accurate vision of human mobility is a central and major axis, with repercussions on all related fields such as transport, social sciences, urban planning, management policies, ecology, etc. It is also in a context of strong budgetary constraints that the main actors of mobility on the territories seek to rationalize transport services and the movements of individuals. Human mobility is therefore a strategic challenge both for local communities and for users, and it must be observed, understood and anticipated. This study of mobility rests above all on a precise observation of the movements of users on the territories. Nowadays mobility operators focus mainly on the massive use of user data. The simultaneous use of multi-source, multi-modal, and multi-scale data opens many possibilities, but it also presents major technological and scientific challenges. The mobility models presented in the literature are too often focused on limited experimental areas, using calibrated data, etc., and their application in real contexts and on a larger scale is therefore questionable. We thus identify two major issues that must be addressed to meet this need for a better knowledge of human mobility, but also to better apply this knowledge. The first issue concerns the extraction of mobility information from heterogeneous data fusion. The second concerns the relevance of this fusion in a real context, and on a larger scale.
These issues are addressed in this dissertation: the first, through two data fusion models that allow the extraction of mobility information; the second, through the application of these fusion models within the ANR Norm-Atis project. In this thesis, we thus follow the development of a whole chain of processes. Starting with a study of human mobility and then of mobility models, we present two data fusion models, and we analyze their relevance in a concrete case. The first model we propose extracts 12 types of mobility behaviors. It is based on an unsupervised learning of mobile phone data. We validate our results against official data from INSEE, and we infer from our results dynamic behaviors that cannot be observed through traditional mobility data. This is a strong added value of our model. The second model decomposes mobility flows into six mobility purposes. It is based on a supervised learning of mobility survey data and static land-use data. This model is then applied to the aggregated data within the Norm-Atis project. The computing times are low enough to allow an application of this model in a real-time context.
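The first model's unsupervised step can be pictured, in miniature, as clustering daily activity profiles. Everything below (the three archetypes, the noise level, three clusters instead of twelve behaviors) is an illustrative assumption, not the thesis's actual pipeline:

```python
import numpy as np

def kmeans(X, k, init_idx, iters=50):
    """Plain k-means; deterministic initial centres keep the sketch reproducible."""
    centers = X[list(init_idx)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # distance of every profile to every centre, then nearest-centre assignment
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Three synthetic daily-activity archetypes (24 hourly intensities): a commuter
# with morning/evening peaks, a midday profile, and a night profile.
hours = np.arange(24)
commuter = np.exp(-0.5 * ((hours - 8) / 1.5) ** 2) + np.exp(-0.5 * ((hours - 18) / 1.5) ** 2)
midday = np.exp(-0.5 * ((hours - 13) / 4.0) ** 2)
night = np.exp(-0.5 * ((hours - 23) / 2.0) ** 2)

rng = np.random.default_rng(1)
profiles = [p + 0.05 * rng.standard_normal(24)
            for p in [commuter] * 30 + [midday] * 30 + [night] * 30]
X = np.vstack(profiles)
labels = kmeans(X, k=3, init_idx=(0, 30, 60))
```

With well-separated archetypes and small noise, the three behavior groups come back as three distinct clusters.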
Foucard, Rémi. "Fusion multi-niveaux par boosting pour le tagging automatique." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0093/document.
Tags constitute a very useful tool for multimedia document indexing. This PhD thesis deals with automatic tagging, which consists in associating a set of tags to each song automatically, using an algorithm. We use boosting techniques to design a learning scheme that better accounts for the complexity of the information expressed by music. A boosting algorithm is proposed which can jointly use song descriptions associated with excerpts of different durations. This algorithm is used to fuse new descriptions belonging to different abstraction levels. Finally, a new learning framework is proposed for automatic tagging, which better leverages the subtlety of the information expressed by music.
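A minimal sketch of boosting in this spirit, with hand-rolled decision stumps over toy "multi-scale" features; the data, the feature layout, and all parameters are invented for illustration only:

```python
import numpy as np

def stump_predict(X, feat, thr, polarity):
    """Decision stump: +polarity above the threshold, -polarity below."""
    return polarity * np.where(X[:, feat] > thr, 1, -1)

def adaboost_fit(X, y, rounds=50):
    """Minimal AdaBoost over threshold stumps (y must be in {-1, +1})."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps = []
    for _ in range(rounds):
        best = None
        for feat in range(X.shape[1]):
            for thr in np.unique(X[:, feat]):
                for pol in (1, -1):
                    err = w[stump_predict(X, feat, thr, pol) != y].sum()
                    if best is None or err < best[0]:
                        best = (err, feat, thr, pol)
        err, feat, thr, pol = best
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        # reweight: misclassified examples gain weight for the next round
        w = w * np.exp(-alpha * y * stump_predict(X, feat, thr, pol))
        w = w / w.sum()
        stumps.append((alpha, feat, thr, pol))
    return stumps

def adaboost_predict(X, stumps):
    score = sum(a * stump_predict(X, f, t, p) for a, f, t, p in stumps)
    return np.sign(score)

# Toy "multi-scale" descriptors: columns 0-1 mimic short-excerpt features,
# columns 2-3 long-excerpt features; the tag depends on one of each, so the
# booster must combine both time scales.
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 4))
y = np.where(X[:, 0] + X[:, 2] > 0, 1, -1)
stumps = adaboost_fit(X, y)
train_acc = float((adaboost_predict(X, stumps) == y).mean())
```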
Gulen, Elvan. "Fusing Semantic Information Extracted From Visual, Auditory And Textual Data Of Videos." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614582/index.pdf.
… analyzing and uniting the semantic information that is extracted from multimodal data by utilizing concept interactions, and consequently generating a semantic dataset which is ready to be stored in a database. Besides, experiments are conducted to compare results obtained from the proposed multimodal fusion operation with results obtained from semantic information extraction from just one modality and from other fusion methods. The results indicate that fusing all available information along with concept relations yields better overall results than any unimodal approach and other traditional fusion methods.
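The fusion idea, late fusion of per-modality concept scores followed by an adjustment from concept interactions, can be sketched as follows; the scores, weights, and the single co-occurrence relation are invented for illustration:

```python
# Per-modality confidence scores for each concept (illustrative numbers) and a
# late-fusion weighting, followed by a boost from one hypothetical positive
# concept interaction ("crowd" and "speech" tend to co-occur).
visual = {"crowd": 0.7, "speech": 0.1, "music": 0.2}
audio = {"crowd": 0.5, "speech": 0.8, "music": 0.6}
text = {"crowd": 0.4, "speech": 0.6, "music": 0.1}
weights = {"visual": 0.4, "audio": 0.4, "text": 0.2}
cooccur = {("crowd", "speech"): 0.3}

def late_fuse(concept):
    """Weighted sum of the three modality scores for one concept."""
    return (weights["visual"] * visual[concept]
            + weights["audio"] * audio[concept]
            + weights["text"] * text[concept])

def fuse_with_relations(concepts):
    """Late fusion first, then mutual reinforcement between related concepts."""
    fused = {c: late_fuse(c) for c in concepts}
    adjusted = dict(fused)
    for (a, b), strength in cooccur.items():
        adjusted[a] = min(1.0, adjusted[a] + strength * fused[b])
        adjusted[b] = min(1.0, adjusted[b] + strength * fused[a])
    return adjusted

scores = fuse_with_relations(["crowd", "speech", "music"])
```

Concepts with a positive interaction ("crowd", "speech") reinforce each other, while the unrelated concept ("music") keeps its plain late-fusion score.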
Muhammad, Hanif Shehzad. "Feature selection and classifier combination: Application to the extraction of textual information in scene images." Paris 6, 2009. http://www.theses.fr/2009PA066521.
Skibinski, Sebastian [Verfasser], Heinrich [Akademischer Betreuer] Müller, and Uwe [Gutachter] Schwiegelshohn. "Extraction, localization, and fusion of collective vehicle data / Sebastian Skibinski ; Gutachter: Uwe Schwiegelshohn ; Betreuer: Heinrich Müller." Dortmund : Universitätsbibliothek Dortmund, 2019. http://d-nb.info/1191990192/34.
Apatean, Anca Ioana. "Contributions à la fusion des informations : application à la reconnaissance des obstacles dans les images visible et infrarouge." Phd thesis, INSA de Rouen, 2010. http://tel.archives-ouvertes.fr/tel-00621202.
Viardot, Geoffroy. "Reconnaissance des formes en présence d'incertitude sur l'expertise : application à l'étude des phases d'activation transitoire du sommeil chez l'homme." Troyes, 2002. http://www.theses.fr/2002TROY0004.
Decision rule design from training data generally assumes that the true labels of the data are available; the design then consists of a regression of the variables to explain onto measurements in the observation space. Unfortunately, this situation is not always realistic, because human expertise is frequently corrupted by uncertainty. This phenomenon clearly appears when data are labelled by a pool of experts who have differing opinions, as often happens in the field of biomedical signal processing. Many works dealing with the use of differing opinions when designing decision rules have been considered in the literature. The main approach consists in synthesizing a consensual opinion from the differing opinions. Our work proposes a new solution to this problem which allows the relevance of each expert to be characterized. It consists in optimizing the mutual information between the observations and a function of the labels provided by the pool of experts. The analysis of the results characterizes each expert's behavior, and the resulting labels can be used to design decision rules. Our method has been successfully applied to phasic arousals, which are transient events visible in the polysomnogram during human sleep.
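The core quantity, mutual information between observations and expert labels, can be estimated from counts. This toy sketch (synthetic labels, hand-picked noise rates) only illustrates how a reliable expert scores higher than a random one; it is not the thesis's optimization procedure:

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    # MI = sum over joint cells of p(a,b) * log( p(a,b) / (p(a) p(b)) )
    return sum((c / n) * np.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

rng = np.random.default_rng(0)
obs = rng.integers(0, 2, size=500)                 # discretized observations
flips = rng.random(500) < 0.1
expert_reliable = np.where(flips, 1 - obs, obs)    # 90% agreement with the signal
expert_noisy = rng.integers(0, 2, size=500)        # labels unrelated to the signal

mi_reliable = float(mutual_information(obs.tolist(), expert_reliable.tolist()))
mi_noisy = float(mutual_information(obs.tolist(), expert_noisy.tolist()))
```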
König, Rikard. "Predictive Techniques and Methods for Decision Support in Situations with Poor Data Quality." Licentiate thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-3517.
Sponsorship: This work was supported by the Information Fusion Research Program (www.infofusion.se) at the University of Skövde, Sweden, in partnership with the Swedish Knowledge Foundation under grant 2003/0104.
Döhling, Lars. "Extracting and Aggregating Temporal Events from Texts." Doctoral thesis, Humboldt-Universität zu Berlin, 2017. http://dx.doi.org/10.18452/18454.
Finding reliable information about given events from large and dynamic text collections, such as the web, is a topic of great interest. For instance, rescue teams and insurance companies are interested in concise facts about damages after disasters, which can be found today in web blogs, online newspaper articles, social media, etc. Knowing these facts helps to determine the required scale of relief operations and supports their coordination. However, finding, extracting, and condensing specific facts is a highly complex undertaking: It requires identifying appropriate textual sources and their temporal alignment, recognizing relevant facts within these texts, and aggregating extracted facts into a condensed answer despite inconsistencies, uncertainty, and changes over time. In this thesis, we present and evaluate techniques and solutions for each of these problems, embedded in a four-step framework. Applied methods are pattern matching, natural language processing, and machine learning. We also report the results for two case studies applying our entire framework: gathering data on earthquakes and floods from web documents. Our results show that it is, under certain circumstances, possible to automatically obtain reliable and timely data from the web.
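One step of such a pipeline, pattern-based extraction of a single fact type followed by a robust aggregation over inconsistent reports, can be sketched as follows; the regex and the reports are invented stand-ins for the thesis's full framework:

```python
import re
from statistics import median

# Hypothetical pattern for one fact type (death tolls). The thesis combines
# patterns with NLP and machine learning; this regex is a bare-bones stand-in.
TOLL = re.compile(r"(\d+)\s+(?:people\s+)?(?:were\s+)?(?:killed|dead|died)",
                  re.IGNORECASE)

def extract_tolls(texts):
    """Pull every matching casualty figure out of a list of report texts."""
    return [int(m.group(1)) for t in texts for m in TOLL.finditer(t)]

reports = [
    "At least 120 people were killed by the earthquake.",
    "Officials say 132 died as rescue efforts continue.",
    "Early reports claimed 90 dead, a figure later revised.",
]
tolls = extract_tolls(reports)
consensus = median(tolls)  # the median is robust to outlier reports
```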
Kaytoue, Mehdi. "Traitement de données numériques par analyse formelle de concepts et structures de patrons." Phd thesis, Université Henri Poincaré - Nancy I, 2011. http://tel.archives-ouvertes.fr/tel-00599168.
Bujor, Florentin. "Extraction - fusion d'informations en imagerie radar multi-temporelle." Chambéry, 2004. http://www.theses.fr/2004CHAMS023.
The work presented in this thesis is articulated around three axes: two methodological axes, information extraction and fusion in multi-temporal synthetic aperture radar (SAR) imagery, and a more thematic axis, the application of the proposed methods to change detection and to the detection of geographically stable objects. The "extraction-fusion" strategy allows the use of multi-temporal SAR data in the operational context of deforestation monitoring or geographical map updating, in regions where satellite optical imagery faces bad weather conditions. The attributes developed in the information extraction axis bring an original contribution to multi-temporal SAR image analysis. Based on the existing parameters for single-date images, the attributes are extended to the multi-temporal case to exploit either the information redundancy, to improve the detection of stable structures, or the radiometry modifications, to detect the changes that occurred between acquisitions. The second methodological axis consists in fusing the extracted information, in the form of attributes, with an original method designed for the special context of space-borne remote sensing. The collaboration with geophysicists led us to develop an interactive fusion method which integrates expert knowledge of the searched zones and of the behavior of the attributes. A symbolic fuzzy fusion system reaches this objective through a graphical user interface (GUI) built in IDL, the programming language of ENVI, a widespread software package in the remote sensing community. In addition to the interactive adjustment of the membership functions, the developed GUI incorporates a set of functionalities which make it user-friendly for operators in the application fields.
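The symbolic fuzzy fusion step can be sketched with simple membership functions and a min-rule for conjunction; the attribute names and breakpoints below are illustrative, not the calibrated membership functions adjusted through the GUI described above:

```python
def ramp_up(x, a, b):
    """Fuzzy membership rising linearly from 0 at a to 1 at b."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def change_score(temporal_variability, backscatter_shift):
    """Fuzzy rule 'change if variability is high AND shift is large'
    (min = AND). Attribute names and breakpoints are hypothetical."""
    high_var = ramp_up(temporal_variability, 0.2, 0.8)
    large_shift = ramp_up(backscatter_shift, 0.1, 0.5)
    return min(high_var, large_shift)
```

A pixel with high temporal variability and a large radiometry shift gets a change score of 1; if either attribute is low, the conjunction pulls the score down.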
Labský, Martin. "Information Extraction from Websites using Extraction Ontologies." Doctoral thesis, Vysoká škola ekonomická v Praze, 2002. http://www.nusl.cz/ntk/nusl-77102.
Arpteg, Anders. "Intelligent semi-structured information extraction : a user-driven approach to information extraction /." Linköping : Dept. of Computer and Information Science, Univ, 2005. http://www.bibl.liu.se/liupubl/disp/disp2005/tek946s.pdf.
Swampillai, Kumutha. "Information extraction across sentences." Thesis, University of Sheffield, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.575468.
Tablan, Mihai Valentin. "Toward portable information extraction." Thesis, University of Sheffield, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.522379.
Leen, Gayle. "Context assisted information extraction." Thesis, University of the West of Scotland, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.446043.
Sottovia, Paolo. "Information Extraction from data." Doctoral thesis, Università degli studi di Trento, 2019. http://hdl.handle.net/11572/242992.
Kaupp, Tobias. "Probabilistic Human-Robot Information Fusion." University of Sydney, 2008. http://hdl.handle.net/2123/2554.
This thesis is concerned with combining the perceptual abilities of mobile robots and human operators to execute tasks cooperatively. It is generally agreed that a synergy of human and robotic skills offers an opportunity to enhance the capabilities of today’s robotic systems, while also increasing their robustness and reliability. Systems which incorporate both human and robotic information sources have the potential to build complex world models, essential for both automated and human decision making. In this work, humans and robots are regarded as equal team members who interact and communicate on a peer-to-peer basis. Human-robot communication is addressed using probabilistic representations common in robotics. While communication can in general be bidirectional, this work focuses primarily on human-to-robot information flow. More specifically, the approach advocated in this thesis is to let robots fuse their sensor observations with observations obtained from human operators. While robotic perception is well-suited for lower level world descriptions such as geometric properties, humans are able to contribute perceptual information on higher abstraction levels. Human input is translated into the machine representation via Human Sensor Models. A common mathematical framework for humans and robots reinforces the notion of true peer-to-peer interaction. Human-robot information fusion is demonstrated in two application domains: (1) scalable information gathering, and (2) cooperative decision making. Scalable information gathering is experimentally demonstrated on a system comprised of a ground vehicle, an unmanned air vehicle, and two human operators in a natural environment. Information from humans and robots was fused in a fully decentralised manner to build a shared environment representation on multiple abstraction levels.
Results are presented in the form of information exchange patterns, qualitatively demonstrating the benefits of human-robot information fusion. The second application domain adds decision making to the human-robot task. Rational decisions are made based on the robots’ current beliefs which are generated by fusing human and robotic observations. Since humans are considered a valuable resource in this context, operators are only queried for input when the expected benefit of an observation exceeds the cost of obtaining it. The system can be seen as adjusting its autonomy at run-time based on the uncertainty in the robots’ beliefs. A navigation task is used to demonstrate the adjustable autonomy system experimentally. Results from two experiments are reported: a quantitative evaluation of human-robot team effectiveness, and a user study to compare the system to classical teleoperation. Results show the superiority of the system with respect to performance, operator workload, and usability.
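The fusion of a robot estimate with a human observation, and the query-only-when-worth-it rule, can be sketched for the scalar Gaussian case. This is a drastic simplification of the thesis's probabilistic machinery, and all numbers are illustrative:

```python
def fuse_gaussians(mu1, var1, mu2, var2):
    """Precision-weighted fusion of two independent Gaussian estimates."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

def should_query_human(robot_var, human_var, query_cost, value_per_var):
    """Query the operator only if the expected variance reduction, priced at
    value_per_var, outweighs the cost of asking (a crude value-of-information
    rule; the thesis's decision-theoretic treatment is richer than this)."""
    fused_var = 1.0 / (1.0 / robot_var + 1.0 / human_var)
    return value_per_var * (robot_var - fused_var) > query_cost

# Robot estimates a target position; the operator's input acts as one more
# (here, more precise) observation of the same quantity.
mu, var = fuse_gaussians(2.0, 4.0, 3.0, 1.0)
```

The fused estimate is pulled towards the more confident source, and the query rule fires only when the robot's own belief is uncertain enough to make human input worthwhile.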
Xu, Philippe. "Information fusion for scene understanding." Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP2153/document.
Image understanding is a key issue in modern robotics, computer vision and machine learning. In particular, driving scene understanding is very important in the context of advanced driver assistance systems for intelligent vehicles. In order to recognize the large number of objects that may be found on the road, several sensors and decision algorithms are necessary. To make the most of existing state-of-the-art methods, we address the issue of scene understanding from an information fusion point of view. The combination of many diverse detection modules, which may deal with distinct classes of objects and different data representations, is handled by reasoning in the image space. We consider image understanding at two levels: object detection and semantic segmentation. The theory of belief functions is used to model and combine the outputs of these detection modules. We emphasize the need for a fusion framework flexible enough to easily include new classes, new sensors and new object detection algorithms. In this thesis, we propose a general method to model the outputs of classical machine learning techniques as belief functions. Next, we apply our framework to the combination of pedestrian detectors using the Caltech Pedestrian Detection Benchmark. The KITTI Vision Benchmark Suite is then used to validate our approach in a semantic segmentation context using multi-modal information.
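The combination rule at the heart of belief-function fusion, Dempster's rule, can be sketched for a two-class frame; the detector masses below are invented for illustration and do not come from the thesis:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal sets are
    encoded as frozensets; returns the normalized masses and the conflict."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to incompatible sets
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}, conflict

PED = frozenset({"pedestrian"})
BG = frozenset({"background"})
THETA = PED | BG  # the whole frame: "don't know"

# Hypothetical detector outputs: a camera module that partly abstains, and a
# second module with an opinion on both classes.
m_camera = {PED: 0.6, THETA: 0.4}
m_other = {PED: 0.5, BG: 0.2, THETA: 0.3}
fused, conflict = dempster_combine(m_camera, m_other)
```

Mass on the whole frame THETA lets a module express ignorance instead of forcing a class decision, which is exactly what makes the framework easy to extend with new, partially informative detectors.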
Johansson, Ronnie. "Large-Scale Information Acquisition for Data and Information Fusion." Doctoral thesis, Stockholm, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3890.
Houzelle, Stéphane. "Extraction automatique d'objets cartographiques par fusion d'informations extraites d'images satellites /." Paris : École nationale supérieure des télécommunications, 1993. http://catalogue.bnf.fr/ark:/12148/cb355798989.
Arpteg, Anders. "Adaptive Semi-structured Information Extraction." Licentiate thesis, Linköping University, KPLAB - Knowledge Processing Lab, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5688.
The number of domains and tasks where information extraction tools can be used needs to be increased. One way to reach this goal is to construct user-driven information extraction systems where novice users are able to adapt them to new domains and tasks. To accomplish this goal, the systems need to become more intelligent and able to learn to extract information without need of expert skills or time-consuming work from the user.
The type of information extraction system that is in focus for this thesis is semi-structural information extraction. The term semi-structural refers to documents that not only contain natural language text but also additional structural information. The typical application is information extraction from World Wide Web hypertext documents. By making effective use of not only the link structure but also the structural information within each such document, user-driven extraction systems with high performance can be built.
The extraction process contains several steps where different types of techniques are used. Examples of such types of techniques are those that take advantage of structural, pure syntactic, linguistic, and semantic information. The first step that is in focus for this thesis is the navigation step that takes advantage of the structural information. It is only one part of a complete extraction system, but it is an important part. The use of reinforcement learning algorithms for the navigation step can make the adaptation of the system to new tasks and domains more user-driven. The advantage of using reinforcement learning techniques is that the extraction agent can efficiently learn from its own experience without need for intensive user interactions.
An agent-oriented system was designed to evaluate the approach suggested in this thesis. Initial experiments showed that the training of the navigation step and the approach of the system were promising. However, additional components need to be included in the system before it becomes a fully-fledged user-driven system.
Report code: LiU-Tek-Lic-2002:73.
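The reinforcement-learning navigation step described above can be pictured as tabular Q-learning on a toy link graph; the page names, rewards, and hyperparameters below are invented, not the system's actual configuration:

```python
import random

# Hypothetical toy site graph for the navigation step: from each page, the
# links the agent may follow; "detail" holds the record to extract.
LINKS = {
    "home": ["listing", "about"],
    "about": ["home"],
    "listing": ["home", "detail"],
    "detail": ["listing"],
}
GOAL = "detail"

def q_learning(episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s, acts in LINKS.items() for a in acts}
    for _ in range(episodes):
        s = "home"
        for _ in range(10):
            acts = LINKS[s]
            a = rng.choice(acts) if rng.random() < eps else max(acts, key=lambda x: q[(s, x)])
            r = 1.0 if a == GOAL else -0.01   # small step cost, goal reward
            best_next = 0.0 if a == GOAL else max(q[(a, x)] for x in LINKS[a])
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = a
            if s == GOAL:
                break
    return q

q = q_learning()
best_from_home = max(LINKS["home"], key=lambda a: q[("home", a)])
```

The agent learns from its own navigation experience, without user annotation of the path, which is the property the thesis argues makes adaptation user-driven.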
Schierle, Martin. "Language Engineering for Information Extraction." Doctoral thesis, Universitätsbibliothek Leipzig, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-81757.
Lam, Man I. "Business information extraction from web." Thesis, University of Macau, 2008. http://umaclib3.umac.mo/record=b1937939.
Jessop, David M. "Information extraction from chemical patents." Thesis, University of Cambridge, 2011. https://www.repository.cam.ac.uk/handle/1810/238302.
Nguyen, Thien Huu. "Deep Learning for Information Extraction." Thesis, New York University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10260911.
The explosion of data has made it crucial to analyze the data and distill important information effectively and efficiently. A significant part of such data is presented in unstructured and free-text documents. This has prompted the development of the techniques for information extraction that allow computers to automatically extract structured information from the natural free-text data. Information extraction is a branch of natural language processing in artificial intelligence that has a wide range of applications, including question answering, knowledge base population, information retrieval etc. The traditional approach for information extraction has mainly involved hand-designing large feature sets (feature engineering) for different information extraction problems, i.e., entity mention detection, relation extraction, coreference resolution, event extraction, and entity linking. This approach is limited by the laborious and expensive effort required for feature engineering for different domains, and suffers from the unseen word/feature problem of natural languages.
This dissertation explores a different approach for information extraction that uses deep learning to automate the representation learning process and generate more effective features. Deep learning is a subfield of machine learning that uses multiple layers of connections to reveal the underlying representations of data. I develop the fundamental deep learning models for information extraction problems and demonstrate their benefits through systematic experiments.
First, I examine word embeddings, a general word representation that is produced by training a deep learning model on a large unlabelled dataset. I introduce methods to use word embeddings to obtain new features that generalize well across domains for relation extraction. This is done for both the feature-based method and the kernel-based method of relation extraction.
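The embedding-feature idea can be sketched as averaging the vectors of the words between two entity mentions; the vocabulary and random vectors below are placeholders for pretrained embeddings such as word2vec:

```python
import numpy as np

# Hypothetical tiny embedding table; real systems would load pretrained
# vectors. The words and values here are illustrative only.
rng = np.random.default_rng(0)
VOCAB = ["acme", "acquired", "the", "startup", "announced", "today"]
EMB = {w: rng.standard_normal(4) for w in VOCAB}

def relation_features(tokens, head_idx, tail_idx):
    """Average embeddings of the words between two entity mentions: a simple
    embedding-based feature vector for relation extraction."""
    lo, hi = sorted((head_idx, tail_idx))
    vecs = [EMB[w] for w in tokens[lo + 1:hi] if w in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(4)

tokens = ["acme", "acquired", "the", "startup", "today"]
feat = relation_features(tokens, 0, 3)  # mentions: "acme" and "startup"
```

Because the feature is built from dense vectors rather than word identities, unseen but similar words map to nearby feature vectors, which is the cross-domain generalization argument made above.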
Second, I investigate deep learning models for different problems, including entity mention detection, relation extraction and event detection. I develop new mechanisms and network architectures that allow deep learning to model the structures of information extraction problems more effectively. Some extensive experiments are conducted on the domain adaptation and transfer learning settings to highlight the generalization advantage of the deep learning models for information extraction.
Finally, I investigate the joint frameworks to simultaneously solve several information extraction problems and benefit from the inter-dependencies among these problems. I design a novel memory augmented network for deep learning to properly exploit such inter-dependencies. I demonstrate the effectiveness of this network on two important problems of information extraction, i.e, event extraction and entity linking.
Lee, Ji Young Ph D. Massachusetts Institute of Technology. "Information extraction with neural networks." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111905.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 85-97).
Electronic health records (EHRs) have been widely adopted, and are a gold mine for clinical research. However, EHRs, especially their text components, remain largely unexplored due to the fact that they must be de-identified prior to any medical investigation. Existing systems for de-identification rely on manual rules or features, which are time-consuming to develop and fine-tune for new datasets. In this thesis, we propose the first de-identification system based on artificial neural networks (ANNs), which achieves state-of-the-art results without any human-engineered features. The ANN architecture is extended to incorporate features, further improving the de-identification performance. Under practical considerations, we explore transfer learning to take advantage of a large annotated dataset to improve the performance on datasets with a limited number of annotations. The ANN-based system is publicly released as an easy-to-use software package for general-purpose named-entity recognition as well as de-identification. Finally, we present an ANN architecture for relation extraction, which ranked first in the SemEval-2017 task 10 (ScienceIE) for relation extraction in scientific articles (subtask C).
by Ji Young Lee.
Ph. D.
Harik, Ralph 1979. "Structural and semantic information extraction." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87407.
Valenzuela Escárcega, Marco Antonio. "Interpretable Models for Information Extraction." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/613348.
Perera, Pathirage Dinindu Sujan Udayanga. "Knowledge-driven Implicit Information Extraction." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1472474558.
Batista-Navarro, Riza Theresa Bautista. "Information extraction from pharmaceutical literature." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/information-extraction-from-pharmaceutical-literature(3f8322b6-8b8d-44eb-a8cd-899026b267b9).html.
Kushmerick, Nicholas. "Wrapper induction for information extraction /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/6867.
Johansson, Ronnie. "Information Acquisition in Data Fusion Systems." Licentiate thesis, KTH, Numerical Analysis and Computer Science, NADA, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1673.
By purposefully utilising sensors, for instance by a data fusion system, the state of some system-relevant environment might be adequately assessed to support decision-making. The ever increasing access to sensors offers great opportunities, but also incurs grave challenges. As a result of managing multiple sensors one can, e.g., expect to achieve a more comprehensive, resolved, certain and more frequently updated assessment of the environment than would be possible otherwise. Challenges include data association, treatment of conflicting information and strategies for sensor coordination.
We use the term information acquisition to denote the skill of a data fusion system to actively acquire information. The aim of this thesis is to instructively situate that skill in a general context, explore and classify related research, and highlight key issues and possible future work. It is our hope that this thesis will facilitate communication, understanding and future efforts for information acquisition.
The previously mentioned trend towards utilisation of large sets of sensors makes us especially interested in large-scale information acquisition, i.e., acquisition using many and possibly spatially distributed and heterogeneous sensors.
Information acquisition is a general concept that emerges in many different fields of research. In this thesis, we survey literature from, e.g., agent theory, robotics and sensor management. We, furthermore, suggest a taxonomy of the literature that highlights relevant aspects of information acquisition.
We describe a function, perception management (akin to sensor management), which realizes information acquisition in the data fusion process, and pertinent properties of its external stimuli, sensing resources, and system environment.
An example of perception management is also presented. The task is that of managing a set of mobile sensors that jointly track some mobile targets. The game theoretic algorithm suggested for distributing the targets among the sensors proves to be more robust to sensor failure than a measurement-accuracy-optimal reference algorithm.
Keywords: information acquisition, sensor management, resource management, information fusion, data fusion, perception management, game theory, target tracking
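The target-distribution task from the example can be sketched as a one-to-one assignment problem. This brute-force cost-minimal baseline stands in for the game-theoretic algorithm of the thesis and uses invented 1-D positions:

```python
from itertools import permutations

def best_assignment(sensor_pos, target_pos):
    """Brute-force cost-minimal one-to-one assignment of targets to sensors
    (cost = distance). The thesis uses a game-theoretic distribution; this is
    only a baseline to illustrate the task itself."""
    def cost(perm):
        return sum(abs(sensor_pos[i] - target_pos[j]) for i, j in enumerate(perm))
    return list(min(permutations(range(len(target_pos)), len(sensor_pos)), key=cost))

sensors = [0.0, 5.0, 9.0]
targets = [1.0, 6.0, 8.5]
assignment = best_assignment(sensors, targets)

# Robustness check: if the third sensor fails, re-assign with the survivors.
reassignment = best_assignment(sensors[:2], targets)
```

Re-running the assignment after a sensor failure is the kind of robustness test the example evaluates; the thesis's contribution is doing the distribution without this centralized exhaustive search.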
Nouranian, Saman. "Information fusion for prostate brachytherapy planning." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/58305.
Faculty of Applied Science, Department of Electrical and Computer Engineering, Graduate.
Dalmas, Tiphaine. "Information fusion for automated question answering." Thesis, University of Edinburgh, 2007. http://hdl.handle.net/1842/27860.
Oreshkin, Boris. "Distributed information fusion in sensor networks." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=86916.
For the distributed average consensus algorithm, a memory-based acceleration methodology is proposed and its convergence is investigated. For two important settings of this methodology, optimal values of the system parameters are determined, and the improvement with respect to the standard distributed average consensus algorithm is theoretically characterized. The theoretical improvement characterization matches well with the results of numerical experiments, revealing significant and well-scaling gains. A practical distributed on-line initialization scheme is devised. Numerical experiments reveal the feasibility of the proposed initialization scheme and the superior performance of the proposed methodology with respect to several existing acceleration approaches.
For the collaborative signal and information processing methodology, a number of theoretical performance guarantees are obtained. The collaborative signal and information processing framework consists in activating only a cluster of wireless sensors to perform the target tracking task in the cluster head using a particle filter. The optimal cluster is determined at every time instant, and a cluster-head hand-off is performed if necessary. To reduce communication costs, only an approximation of the filtering distribution is sent during hand-off, resulting in additional approximation errors. Time-uniform performance guarantees accounting for the additional errors are obtained in two settings: subsample approximation and parametric mixture approximation hand-off.
This thesis addresses the problem of designing and analysing distributed algorithms for the efficient aggregation and fusion of information in wireless sensor networks. These distributed algorithms address a number of drawbacks of centralized fusion approaches, such as the single point of failure, complex routing protocols, uneven power consumption across sensor nodes, inefficient use of the wireless channels, and limited scalability. These drawbacks of the centralized approach reduce the network's lifetime, the robustness of nodes to failures, and the network's capacity. Distributed algorithms mitigate these problems by using simple messaging protocols between nodes together with localized information processing. However, for these algorithms, the losses in accuracy and/or in the time needed to complete a task can be significant, which is why the design and analysis of fast and accurate distributed algorithms is important. This thesis addresses two specific problems associated with the analysis and design of such algorithms.
Regarding the distributed average consensus algorithm, a memory-based acceleration method is proposed and its convergence analysed. For the two important settings of this methodology, the optimal system parameter values are determined and the improvement over the basic consensus algorithm is characterized theoretically. This characterization matches the results of numerical experiments, which reveal significant and scalable gains. A distributed online initialization scheme is designed; numerical experiments show that it is feasible and that the proposed methodology outperforms several existing approaches.
For the collaborative signal and information processing methodology, a number of theoretical performance guarantees are obtained. This framework consists of activating only a cluster of wireless sensors to perform the object-tracking task at the cluster head using a particle filter. The optimal cluster is determined at each time interval and the cluster-head role is handed off as needed. To reduce communication costs, only an approximation of the filtering distribution is sent during the hand-off, which introduces additional errors. Time-uniform performance guarantees accounting for these additional errors are obtained in two settings.
Peacock, Andrew M. "Information fusion for improved motion estimation." Thesis, University of Edinburgh, 2001. http://hdl.handle.net/1842/428.
Hoang, Thi Bich Ngoc. "Information diffusion, information and knowledge extraction from social networks." Thesis, Toulouse 2, 2018. http://www.theses.fr/2018TOU20078.
The popularity of online social networks has rapidly increased over the last decade. According to Statista, approximately 2 billion users used social networks in January 2018, and this number is still expected to grow in the coming years. While serving their primary purpose of connecting people, social networks also play a major role in successfully connecting marketers with customers, famous people with their supporters, and people who need help with people willing to help. The success of online social networks relies mainly on the information the messages carry as well as on how fast they spread. Our research aims at modeling message diffusion and at extracting and representing information and knowledge from messages on social networks. Our first contribution is a model to predict the diffusion of information on social networks. More precisely, we predict whether a tweet is going to be diffused or not, and the level of the diffusion. Our model is based on three types of features: user-based, time-based, and content-based. Evaluated on various collections corresponding to tens of millions of tweets, our model significantly improves effectiveness (F-measure) compared to the state of the art, both when predicting whether a tweet is going to be retweeted and when predicting the level of retweeting. The second contribution of this thesis is an approach to extract information from microblogs. While several pieces of important information are included in a message about an event, such as location, time, and related entities, we focus on location, which is vital for several applications, especially geo-spatial applications and applications linked to events. We propose different combinations of existing methods to extract locations in tweets, targeting either recall-oriented or precision-oriented applications. We also define a model to predict whether a tweet contains a location or not.
We show that the precision of location-extraction tools improves significantly when they are applied only to the tweets we predict to contain a location, compared to applying them to all tweets. Our last contribution is a knowledge base that better represents information from a set of tweets on events. We combined a tweet collection with other Internet resources to build a domain ontology. The knowledge base aims at giving users a complete picture of the events referenced in the tweet collection (we considered the CLEF 2016 festival tweet collection).
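The thesis's actual diffusion model and learned weights are not given in the abstract; a toy illustration of how user-, time-, and content-based features might combine into a logistic retweet score follows. The feature choices and coefficients here are invented for the sketch:

```python
import math

def retweet_score(followers, posted_hour, has_hashtag, has_url, n_mentions):
    """Toy logistic score in (0, 1): higher means 'more likely retweeted'.

    Mirrors the three feature families named in the abstract:
    user-based (follower count), time-based (hour of posting),
    content-based (hashtags, URLs, mentions).  Weights are illustrative.
    """
    peak_hours = 17 <= posted_hour <= 22          # hypothetical evening bump
    z = (-4.0
         + 0.6 * math.log1p(followers)            # user-based
         + 0.7 * (1.0 if peak_hours else 0.0)     # time-based
         + 0.5 * has_hashtag + 0.8 * has_url      # content-based
         - 0.2 * n_mentions)
    return 1.0 / (1.0 + math.exp(-z))
```

In practice such weights would be learned from labelled collections (as the thesis does at the scale of tens of millions of tweets), not hand-set.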
Fidalgo, Luis Miguel. "Novel methods for droplet fusion, extraction and analysis in multiphase microfluidics." Thesis, University of Cambridge, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.611619.
Bellenger, Amandine. "Semantic Decision Support for Information Fusion Applications." PhD thesis, INSA de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00845918.
Cavanaugh, Andrew F. "Bayesian Information Fusion for Precision Indoor Location." Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/157.
Toledo, Testa Juan Ignacio. "Information extraction from heterogeneous handwritten documents." Doctoral thesis, Universitat Autònoma de Barcelona, 2019. http://hdl.handle.net/10803/667388.
The goal of this thesis is the extraction of information from totally or partially handwritten documents with a certain structure. We essentially work with two different application scenarios. The first scenario is modern, highly structured documents, such as forms. In these documents the semantic information is pre-defined in fields with a specific position in the document, and information extraction is equivalent to a transcription. The second scenario is loosely structured, fully handwritten documents where, besides transcribing, it is necessary to assign each handwritten word a semantic value from a known set of possible values. In both cases the quality of the transcription weighs heavily on the accuracy of the system. For that reason we propose neural-network-based models to transcribe the handwritten text. To tackle the challenge of loosely structured documents, we built a benchmark, composed of a dataset, a series of tasks, and a metric, which was presented to the scientific community as an international competition. We also propose different models based on convolutional and recurrent neural networks that are able to transcribe and assign different semantic labels to each handwritten word, that is, able to extract information.
The goal of this thesis is information extraction from totally or partially handwritten documents. Basically, we are dealing with two different application scenarios. The first scenario is modern, highly structured documents such as forms. In this kind of document, the semantic information is encoded in different fields with a pre-defined location in the document; therefore, information extraction becomes equivalent to transcription. The second application scenario is loosely structured, totally handwritten documents; besides transcribing them, we need to assign each handwritten word a semantic label from a set of known values. In both scenarios, transcription is an important part of the information extraction. For that reason, in this thesis we present two methods, based on neural networks, to transcribe handwritten text. In order to tackle the challenge of loosely structured documents, we have produced a benchmark, consisting of a dataset, a defined set of tasks, and a metric, that was presented to the community as an international competition. We also propose different models based on convolutional and recurrent neural networks that are able to transcribe and assign different semantic labels to each handwritten word, that is, to perform information extraction.
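The abstract does not name a decoding scheme; recurrent transcription models of this kind are commonly trained with CTC, whose greedy decoding collapses repeated per-frame labels and then drops blank symbols. A minimal sketch under that assumption (the thesis may use a different decoder):

```python
def ctc_greedy_decode(frame_labels, blank="-"):
    """Collapse a per-frame label sequence into a transcription:
    merge consecutive repeated labels, then remove blank symbols."""
    decoded = []
    prev = None
    for label in frame_labels:
        if label != prev and label != blank:
            decoded.append(label)
        prev = label
    return "".join(decoded)
```

For example, the per-frame output `"hh-ee-ll-ll-oo"` collapses to `"hello"`; the blank between the two `l` runs is what keeps the doubled letter from being merged away.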
Walessa, Marc. "Bayesian information extraction from SAR images." [S.l. : s.n.], 2001. http://deposit.ddb.de/cgi-bin/dokserv?idn=964273659.
Popescu, Ana-Maria. "Information extraction from unstructured web text /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/6935.
Williams, Dean Ashley. "Combining data integration and information extraction." Thesis, Birkbeck (University of London), 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.499152.
Duarte, Lucio Mauro. "Behaviour Model Extraction Using Context Information." Thesis, Imperial College London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.498466.