Theses / dissertations on the topic "Détection des robots web"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Browse the 50 best theses / dissertations for research on the topic "Détection des robots web".
Next to each source in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, when it is available in the metadata.
Browse theses / dissertations from a wide range of scientific disciplines and compile an accurate bibliography.
Chiapponi, Elisa. "Detecting and Mitigating the New Generation of Scraping Bots". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS490.
Every day an invisible war for data takes place between e-commerce websites and web scrapers. E-commerce websites own the data at the heart of the conflict and would like to provide it only to genuine users. Web scrapers aim to have unlimited and continuous access to the above-mentioned data to capitalize on it. To achieve this goal, scrapers send large amounts of requests to e-commerce websites, causing them financial problems. This has led the security industry to engage in an arms race against scrapers to create better systems to detect and mitigate their requests. At present, the battle continues, but scrapers appear to have the upper hand, thanks to the usage of Residential IP Proxies (RESIPs). In this thesis, we aim to shift the balance by introducing novel detection and mitigation techniques that overcome the limitations of current state-of-the-art methods. We propose a deceptive mitigation technique that lures scrapers into believing they have obtained their target data while they receive modified information. We present two new detection techniques based on network measurements that identify scraping requests proxied through RESIPs. Thanks to an ongoing collaboration with Amadeus IT Group, we validate our results on real-world operational data. Aware that scrapers will not stop looking for new ways to avoid detection and mitigation, this thesis provides additional contributions that can help in building the next defensive weapons for fighting scrapers. We propose a comprehensive characterization of RESIPs, the strongest weapon currently at the disposal of scrapers. Moreover, we investigate the possibility of acquiring threat intelligence on scrapers by geolocating them when they send requests through a RESIP.
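The network-measurement idea summarized in this abstract can be sketched in miniature: a request relayed through a residential proxy adds an extra hop between the transport handshake (terminated near the server) and the application-level round trip, so the two latencies diverge. The function below is an illustrative assumption, not the detector from the thesis; the 50 ms threshold and the median-gap feature are invented for the example.

```python
import statistics

def looks_proxied(tcp_rtts, app_rtts, gap_threshold_ms=50.0):
    """Flag a client as likely proxied (e.g. through a RESIP) when the
    application-level round trip consistently exceeds the TCP-level one.

    A direct client shows roughly the same latency at both levels; a
    proxy terminates TCP near the server while the real client sits one
    extra network hop away, inflating only the application round trip.
    """
    gap = statistics.median(app_rtts) - statistics.median(tcp_rtts)
    return gap > gap_threshold_ms

# Direct client: both round trips reach the same host.
print(looks_proxied([21, 23, 22], [24, 25, 23]))
# Proxied client: TCP ends at the proxy, the HTTP reply comes from farther away.
print(looks_proxied([20, 22, 21], [180, 175, 190]))
```

In practice such measurements are noisy, which is why a real detector would aggregate many samples per client before deciding.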
Wang, Xiao. "Détection de personnes multi-capteurs pour un robot mobile domestique". Paris 6, 2012. http://www.theses.fr/2012PA066548.
Rude, Howard Nathan. "Intelligent Caching to Mitigate the Impact of Web Robots on Web Servers". Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1482416834896541.
Gallastegi, Akaitz. "Web-based Real-Time Communication for Rescue Robots". Thesis, Linköpings universitet, Programvara och system, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-108777.
Elarbi, Boudihir Mohamed. "Contribution au guidage des robots mobiles par vision". Vandoeuvre-les-Nancy, INPL, 1992. http://docnum.univ-lorraine.fr/public/INPL_T_1992_ELARBI_BOUDIHIR_M.pdf.
Lavergne, Thomas. "Détection des textes non-naturels". Paris, Télécom ParisTech, 2009. http://www.theses.fr/2009ENST0074.
This thesis concerns unnatural language detection, especially in the context of fighting web spam. The main goal is to improve the quality of results produced by web search engines by automatically distinguishing between legitimate and fake content. The first part of the thesis focuses on the various kinds of fake content that can be found on the web, how they can be used to generate web spam, and the existing methods used to detect them. In the second part, the more general problem of the essence of unnatural texts is studied. Three definitions are proposed and illustrated through a taxonomy of such texts, the last one being a pragmatic definition usable in the context of automatic detection of unnatural texts. The last part describes detection methods adapted to the different kinds of unnatural texts found in web spam. These methods, based on statistical models, use the structure as well as the content of texts and are validated on both synthetic and real data.
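As an illustration of the statistical-model idea (not the models from this thesis), a character-bigram language model trained on a small sample of natural text will score fluent text noticeably higher than generated gibberish; the tiny corpus and Laplace smoothing below are assumptions of the sketch.

```python
import math
from collections import Counter

def train_bigrams(corpus):
    """Count character bigrams in a sample of natural text."""
    return Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))

def naturalness(text, bigrams, vocab=27 ** 2):
    """Average Laplace-smoothed log-probability of the text's bigrams;
    lower scores suggest generated or otherwise unnatural text."""
    total = sum(bigrams.values())
    logp = [math.log((bigrams[text[i:i + 2]] + 1) / (total + vocab))
            for i in range(len(text) - 1)]
    return sum(logp) / len(logp)

corpus = ("the quick brown fox jumps over the lazy dog and then the dog "
          "chases the fox over the hill while the sun sets in the west")
model = train_bigrams(corpus)
print(naturalness("the dog runs over the hill", model))   # familiar bigrams
print(naturalness("zqxv kjq wvxz qqjz vxkq", model))      # mostly unseen bigrams
```

A real detector would train on a much larger corpus and combine such scores with structural features, as the abstract notes.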
Segura, Serge. "Détection de collisions et définition de stratégies d'évitement d'obstacles dans un environnement de programmation hors-ligne pour robots". Montpellier 2, 1987. http://www.theses.fr/1987MON20025.
Arezki, Amine. "Détection de trajectoires et analyse de comportement pour l'assistance aux piétons". Versailles-St Quentin en Yvelines, 2012. http://www.theses.fr/2012VERS0029.
In this PhD thesis we present a method to provide ambient assistance in an urban environment, using a mobile agent, in order to anticipate assistance when needed. To that end, we must understand human behavior and the pedestrian's needs. We determine how to focus on a moving subject and how to use interaction between a mobile agent and a pedestrian to confirm the need to provide assistance. The trajectory type concept is used to define the first step of the analysis, called the approach step. By combining this step with the field information provided by the mobile agent, a certain type of assistance is ensured. In terms of observation, two different views are employed to detect assistance requirements: the Fixed Intelligent Device and the Ambient Intelligent Devices, both communicating wirelessly. The Fixed Intelligent Device is composed of a fixed camera standing at a very high vantage point, allowing the detection of possible motions, and an algorithm that classifies the trajectories using a neural network. This classification result is subsequently communicated to the mobile agent. In our thesis, a Touch Ambient Intelligent Device is represented by a mobile robot with three degrees of freedom, including a 3D camera (Kinect™) to detect human body poses and a touch screen tablet to interact with the subject. We conclude that human intervention may be required only in critical cases.
Serrour, Belkacem. "Détection et analyse de communautés dans les réseaux". Thesis, Lyon 1, 2010. http://www.theses.fr/2010LYO10332.
The study of the sub-structure of complex networks is of major importance to relate topology and functionality. Understanding the modular units (communities) of graphs is of utmost importance to grasping knowledge about the functionality and performance of such systems. A community is defined as a group of nodes such that connections between the nodes are denser than connections with the rest of the network. Generally, the members of one community share the same interest. Many efforts have been devoted to the analysis of the modular structure of networks. Most of these works fall into two parts: community detection and community analysis. Community detection consists in finding communities in networks without knowing their size and number, while community analysis deals with the study of the structural and semantic properties of the emerged communities and the understanding of the functionality and performance of the network. In this thesis, we are interested in the study of community structures in networks. We contribute to both community analysis and community detection. In the community analysis part, we study the communities of communication networks and the communities in web services. On the one hand, we study community emergence in communication networks. We propose a classification of the emerged community structures in a given network. We model the networks by graphs and characterize them by some parameters (network size, network density, number of resources in the network, number of providers in the network, etc.). We also establish a direct correlation between the network parameters and the emerged community structures. On the other hand, we study the communities in web service logs. We aim to discover the business protocol of services (sequences of messages exchanged between the service and a client to achieve a given goal). We analyze the log files and model them by graphs.
In our final tree graph (message graph), the paths represent the conversations (communities). In the community detection part, the main goal of our contribution is to determine communities using triangular motifs as building blocks. We propose an approach for triangle community detection based on modularity optimization, using spectral decomposition and optimization. The resulting algorithm is able to efficiently identify the best partition into communities of triangles of any given network, optimizing the corresponding modularity function.
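For reference, the modularity function that such detection methods optimize can be computed directly. The sketch below implements standard edge-based Newman modularity, Q = Σ_c [ L_c/m − (d_c/2m)² ], not the triangle-motif variant developed in the thesis; the two-triangle example graph is invented for illustration.

```python
from collections import Counter

def modularity(edges, communities):
    """Newman modularity Q = sum_c [ L_c/m - (d_c/(2m))^2 ] for an
    undirected graph given as an edge list and a node->community map."""
    m = len(edges)
    intra = Counter()   # L_c: edges with both endpoints inside community c
    degree = Counter()  # d_c: total degree of community c
    for u, v in edges:
        degree[communities[u]] += 1
        degree[communities[v]] += 1
        if communities[u] == communities[v]:
            intra[communities[u]] += 1
    return sum(intra[c] / m - (degree[c] / (2 * m)) ** 2 for c in degree)

# Two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(round(modularity(edges, part), 3))  # 0.357
```

Splitting the graph into its two triangles yields Q = 5/14 ≈ 0.357, a clearly positive value, which is what a community-detection algorithm maximizing Q would find on this graph.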
Fu, Wenhao. "Visual servoing for mobile robots navigation with collision avoidance and field-of-view constraints". Thesis, Evry-Val d'Essonne, 2014. http://www.theses.fr/2014EVRY0019/document.
This thesis is concerned with the problem of vision-based navigation for mobile robots in indoor environments. Many works address navigation along a visual path, namely appearance-based navigation. However, with this scheme, the robot motion is limited to the trained visual path. A potential collision during the navigation process can make the robot deviate from the current visual path, in which case the visual landmarks can be lost from the current field of view. To the best of our knowledge, few works consider collision avoidance and landmark loss in the framework of appearance-based navigation. We outline a mobile robot navigation framework that enhances the capability of the appearance-based method, especially in the case of collision avoidance and field-of-view constraints. Our framework introduces several technical contributions. First of all, motion constraints are taken into account in visual landmark detection to improve detection performance. Then, we model the obstacle boundary using a B-spline. The B-spline representation has no sharp irregularities and can generate a smooth motion for the collision avoidance task. Additionally, we propose a vision-based control strategy that can deal with complete target loss. Finally, we use spherical images to handle the ambiguous and infinite projections that arise from perspective projection. Real experiments demonstrate the feasibility and effectiveness of our framework and methods.
Meziou, Tarak Najah. "Système réactif pour l'évitement des obstacles en robotique mobile : architecture d'un contrôle d'exécution assurant l'interaction du système et d'une planification globale". Châtenay-Malabry, Ecole centrale de Paris, 1992. http://www.theses.fr/1992ECAP0249.
Akrout, Rim. "Analyse de vulnérabilités et évaluation de systèmes de détection d'intrusions pour les applications Web". Phd thesis, INSA de Toulouse, 2012. http://tel.archives-ouvertes.fr/tel-00782565.
Gastellier-Prevost, Sophie. "Vers une détection des attaques de phishing et pharming côté client". Phd thesis, Institut National des Télécommunications, 2011. http://tel.archives-ouvertes.fr/tel-00699627.
Liu, Jian. "Contribution à l'étude et à la réalisation d'un système de détection et de localisation en ligne des pannes franches d'un robot ou télémanipulateur". Aix-Marseille 3, 1988. http://www.theses.fr/1988AIX30065.
Texto completo da fonteMajorczyk, Frédéric. "Détection d'intrusions comportementale par diversification de COTS : application au cas des serveurs web". Phd thesis, Université Rennes 1, 2008. http://tel.archives-ouvertes.fr/tel-00355366.
Our work belongs primarily to the field of intrusion detection and also provides a degree of intrusion tolerance. Unlike classical anomaly-based detection methods, which require defining and building a reference model of the monitored entity's behavior, we followed an approach from the dependability field based on N-version programming, in which the reference model is implicit and is constituted by the other software components of the architecture. We propose using COTS components instead of specifically developed versions, since developing N versions is costly and reserved for safety-critical systems. Other works and projects have proposed architectures based on these ideas. Our contributions lie at several levels. In our general intrusion detection model, we took into account the specificities induced by using COTS in place of specifically developed versions and proposed two solutions to the problems they raise. We proposed two intrusion detection approaches based on this architecture: one following a black-box approach and the other a gray-box approach. Our gray-box method can, in addition, help the security administrator make a first diagnosis of the alerts. We implemented both approaches for web servers and experimentally evaluated the relevance and reliability of these two IDSes.
Hossen, Karim. "Inférence automatique de modèles d'applications Web et protocoles pour la détection de vulnérabilités". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM077/document.
In the last decade, model-based testing (MBT) approaches have shown their efficiency in the software testing domain, but a formal model of the system under test (SUT) is required and, most of the time, not available for several reasons such as cost, time, or rights. The goal of the SPaCIoS project is to develop a security testing tool using an MBT approach. The goal of this work, funded by the SPaCIoS project, is to develop and implement a model inference method for Web applications and protocols. From the inferred model, vulnerability detection can be done following the SPaCIoS model-checking method or by methods we have developed. We developed an inference algorithm adapted to Web applications and their properties. This method takes into account application data and their influence on the control flow. Using data mining algorithms, the inferred model is refined with optimized guards and output functions. We also worked on automating the inference. Active learning approaches require knowing the complete interface of the system in order to communicate with it. As this step can be time-consuming, it has been automated using a crawler and an interface extraction method optimized for inference. This crawler is also available as a standalone tool for third-party inference tools. In the complete inference algorithm, we merged the inference algorithm and the interface extraction to build an automatic procedure. We present the free software SIMPA, containing the algorithms, and show some of the results obtained on SPaCIoS case studies and protocols.
Majorczyk, Frédéric. "Détection d’intrusions comportementale par diversification de COTS : application au cas des serveurs web". Rennes 1, 2008. https://tel.archives-ouvertes.fr/tel-00355366.
Information systems' security is a fundamental issue. It is necessary to define a security policy for these systems and check that it is not violated. Preventive security mechanisms are generally insufficient. Intrusion detection systems (IDSes) can be used to detect violations of the security policy, that is, intrusions. Intrusion tolerance tools and techniques can also be used. Our work is in the intrusion detection field and allows some intrusion tolerance. In classical anomaly-based approaches, it is necessary to build a behavioral model of the observed entity. On the contrary, in our approach, the behavioral model is implicit and is composed of the other software components in the architecture. This approach comes from the dependability field and is based on N-version programming. We propose using COTS instead of specifically developed versions. Using COTS introduces some issues that we have taken into account in our general intrusion detection model, and we have proposed solutions to bypass these issues. We have proposed two intrusion detection approaches based on this architecture: the first one following a black-box approach and the second one following a gray-box approach. We have applied these approaches to web servers and evaluated the false positive and true positive rates of our IDSes.
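The implicit-reference idea described in this abstract, where each diversified COTS server's behavior is checked against the others, reduces in its simplest form to a majority comparison of outputs. The sketch below is a toy illustration under invented response summaries, not the comparison algorithm developed in the thesis.

```python
from collections import Counter

def detect_divergence(responses):
    """Given one response summary per diversified COTS server for the
    same request, return the servers whose output departs from the
    majority, which serves as the implicit behavioral reference.
    Raises if no strict majority exists (diagnosis is ambiguous)."""
    tally = Counter(responses.values())
    reference, votes = tally.most_common(1)[0]
    if votes <= len(responses) / 2:
        raise ValueError("no majority reference among diversified servers")
    return {srv for srv, out in responses.items() if out != reference}

# Three diverse web servers answering the same HTTP request:
print(detect_divergence({"apache": "200:1024B",
                         "nginx": "200:1024B",
                         "iis": "500:0B"}))  # divergent server -> raise an alert
```

The difficulty the thesis tackles is precisely that COTS servers legitimately differ in output details, so the summaries compared must abstract away those benign differences.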
Makiou, Abdelhamid. "Sécurité des applications Web : Analyse, modélisation et détection des attaques par apprentissage automatique". Thesis, Paris, ENST, 2016. http://www.theses.fr/2016ENST0084/document.
Web applications are the backbone of modern information systems. The Internet exposure of these applications continually generates new forms of threats that can jeopardize the security of the entire information system. To counter these threats, there are robust and feature-rich solutions based on well-proven attack detection models, each with its own advantages and limitations. Our work consists in integrating functionalities of several models into a single solution in order to increase the detection capacity. To achieve this objective, we define, in a first contribution, a classification of threats adapted to the context of Web applications. This classification also serves to solve some problems of scheduling analysis operations during the attack detection phase. In a second contribution, we propose a Web application firewall architecture based on two analysis models: the first is a behavioral analysis module, and the second uses the signature inspection approach. The main challenge to be addressed with this architecture is to adapt the behavioral analysis model to the context of Web applications. We respond to this challenge by modeling malicious behavior, making it possible to construct, for each attack class, its own model of abnormal behavior. To construct these models, we use classifiers based on supervised machine learning. These classifiers use learning datasets to learn the deviant behaviors of each class of attacks. A second obstacle, the availability of learning data, was thereby removed: in a final contribution, we defined and designed a platform for automatic generation of training datasets. The data generated by this platform are standardized and categorized for each class of attacks.
The learning data generation model we have developed is able to learn "from its own errors" continuously in order to produce higher-quality machine learning datasets.
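The signature-inspection side of such an architecture can be illustrated with one hand-written pattern per attack class. These regexes are invented for the example and are far weaker than a production rule set; they merely show how per-class signatures label a request parameter.

```python
import re

# One toy signature per attack class; real WAF rule sets are much larger.
SIGNATURES = {
    "sqli": re.compile(r"('|\")\s*(or|and)\s*('|\")?\d|union\s+select", re.I),
    "xss": re.compile(r"<\s*script|on\w+\s*=", re.I),
    "path_traversal": re.compile(r"\.\./|\.\.\\"),
}

def classify(param_value):
    """Return the set of attack classes whose signature matches the input."""
    return {name for name, rx in SIGNATURES.items() if rx.search(param_value)}

print(classify("1' OR '1'='1"))               # flagged as sqli
print(classify("<script>alert(1)</script>"))  # flagged as xss
print(classify("good morning"))               # clean -> empty set
```

In the architecture described above, inputs that no signature matches would then be handed to the behavioral module, whose per-class models catch deviations the signatures miss.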
Balde, Mamadou Habib. "Contribution à l'étude de l'assemblage et des robots parallèles : application à l'assemblage d'éléments complexes". Paris 11, 1989. http://www.theses.fr/1989PA112417.
Zhang, Zhongkai. "Vision-based calibration, position control and force sensing for soft robots". Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I001/document.
The modeling of soft robots, which theoretically have infinite degrees of freedom, is extremely difficult, especially when the robots have complex configurations. This difficulty of modeling leads to new challenges for the calibration and control design of such robots, but also to new opportunities, with possible new force sensing strategies. This dissertation aims to provide new and general solutions using modeling and vision. The thesis first presents a discrete-time kinematic model for soft robots based on the real-time Finite Element (FE) method. Then, vision-based simultaneous calibration of the sensor-robot system and the actuators is investigated. Two closed-loop position controllers are designed. In addition, to deal with the problem of image feature loss, a switched control strategy is proposed, combining an open-loop controller and a closed-loop controller. Using the soft robot itself as a force sensor is possible thanks to the deformable nature of soft structures. Two methods (marker-based and marker-free) of external force sensing for soft robots are proposed, based on the fusion of vision-based measurements and the FE model. With both methods, not only the intensities but also the locations of the external forces can be estimated. As a specific application, a cable-driven continuum catheter robot moving through contacts is modeled based on the FE method. The robot is then controlled by a decoupled control strategy that allows insertion and bending to be controlled independently. Both the control inputs and the contact forces along the entire catheter can be computed by solving a quadratic programming (QP) problem with a linear complementarity constraint (QPCC).
Almanza, Ojeda Dora Luz. "Détection et suivi d'objets mobiles perçus depuis un capteur visuel embarqué". Toulouse 3, 2011. http://thesesups.ups-tlse.fr/2339/.
This dissertation concerns the detection and tracking of mobile objects in a dynamic environment, using a camera embedded on a mobile robot. It is an important challenge because only a single camera is used to solve the problem. We must detect mobile objects in the scene by analyzing their apparent motion in images, while excluding the motion caused by the ego-motion of the camera. First, a spatio-temporal analysis of the image sequence based on sparse optical flow is proposed. The a contrario clustering method groups the dynamic points without using a priori information and without parameter tuning. The success of this method relies on the accumulation of sufficient information on the positions and velocities of these points. We call tracking time the time required to acquire the images analyzed to characterize the points. A probabilistic map is built in order to find the image areas with the highest probability of containing a mobile object; this map allows an active selection of new points close to the previously detected mobile regions, enlarging these regions. In a second step, an iterative approach is proposed to perform the detection-clustering-tracking process on image sequences acquired from a fixed camera for indoor or outdoor applications. An object is described by an active contour, updated so that the initial object model remains inside the contour. Finally, experimental results are presented on images acquired from a camera embedded on a mobile robot navigating in outdoor environments with rigid or non-rigid mobile objects; it is shown that the method can detect obstacles during navigation in a priori unknown environments, first at low speed, then at a more realistic speed, compensating for the robot's ego-motion in the images.
Khireddine, Mohammed-Salah. "Contribution à l'étude et à la réalisation d'un système centralisé de détection et de localisation en ligne des pannes des asservissements d'un robot industriel". Aix-Marseille 3, 1990. http://www.theses.fr/1990AIX30043.
Texto completo da fonteGastellier-Prevost, Sophie. "Vers une détection des attaques de phishing et pharming côté client". Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2011. http://www.theses.fr/2011TELE0027.
The development of online transactions and "always-connected" broadband Internet access is a great improvement for Internet users, who can now benefit from easy access to many services, regardless of the time or their location. The main drawback of this new marketplace is that it attracts attackers looking for easy and rapid profits. One major threat is known as a phishing attack. By using website forgery to spoof the identity of a company that offers financial services, phishing attacks trick Internet users into revealing confidential information (e.g. login, password, credit card number). Because most end-users check the legitimacy of a login website by looking at the visual aspect of the webpage displayed by the web browser, with no consideration for the visited URL or the presence and positioning of security components, attackers capitalize on this weakness and design near-perfect copies of legitimate websites, displayed through a fraudulent URL. To attract as many victims as possible, phishing attacks are most of the time carried out through spam campaigns. One popular method for detecting phishing attacks is to integrate an anti-phishing protection into the user's web browser (i.e. an anti-phishing toolbar), which makes use of two kinds of classification methods: blacklists and heuristic tests. The first part of this thesis is a study of the effectiveness and value of heuristic tests in differentiating legitimate from fraudulent websites. We conclude by identifying the decisive heuristics and discussing their life span. In more sophisticated versions of phishing attacks, i.e. pharming attacks, the threat is imperceptible to the user: the visited URL is the legitimate one and the visual aspect of the fake website is very similar to the original one. As a result, pharming attacks are particularly effective and difficult to detect.
They are carried out by exploiting DNS vulnerabilities at the client-side, in the ISP (Internet Service Provider) network or at the server-side. While many efforts aim to address this problem in the ISP network and at the server-side, the client-side remains excessively exposed. In the second part of this thesis, we introduce two approaches - intended to be integrated into the client’s web browser - to detect pharming attacks at the client-side. These approaches combine both an IP address check and a webpage content analysis, performed using the information provided by multiple DNS servers. Their main difference lies in the method of retrieving the webpage which is used for the comparison. By performing two sets of experimentations, we validate our concept
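The combined check described above, an IP address comparison plus a webpage content analysis across multiple DNS answers, can be sketched offline. The decision rule, threshold, IP addresses, and "MyBank"/"evil.example" pages below are all invented for the illustration; this is not the exact algorithm of the thesis.

```python
from difflib import SequenceMatcher

def pharming_suspected(ips_default, ips_reference, page_default,
                       page_reference, sim_threshold=0.8):
    """Cross-check the default DNS answer against a reference resolver:
    suspect pharming only when the IP sets are disjoint AND the pages
    retrieved through each answer differ markedly in content."""
    if set(ips_default) & set(ips_reference):
        return False  # at least one IP confirmed by the reference server
    similarity = SequenceMatcher(None, page_default, page_reference).ratio()
    return similarity < sim_threshold

legit = "<html><body>Welcome to MyBank, please log in.</body></html>"
fake = "<form action='http://evil.example/steal'>card number?</form>"
# Same IP from both resolvers: no suspicion, whatever the content.
print(pharming_suspected(["203.0.113.7"], ["203.0.113.7"], legit, legit))
# Disjoint IPs and divergent page content: raise a pharming alert.
print(pharming_suspected(["198.51.100.9"], ["203.0.113.7"], fake, legit))
```

Requiring both signals keeps false positives down, since load-balanced sites legitimately return different IPs while serving near-identical pages.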
Francis, Shince. "Remote Administration of an Autonomous Guided Vehicle through Web Based Wireless Interfaces". University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1060185932.
Idrissi, Saïd. "Contribution à l'étude et à la réalisation d'un système de détection et de localisation en ligne des pannes d'un axe intelligent intégrable dans une architecture robotique totalement décentralisée". Aix-Marseille 3, 1989. http://www.theses.fr/1989AIX30072.
Texto completo da fonteLegaspi, Ramos Xurxo. "Scraping Dynamic Websites for Economical Data : A Framework Approach". Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-57070.
Granata, Consuelo. "Contribution à la conception d'interfaces et de comportements interactifs pour des robots personnels". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00684772.
Texto completo da fonteAït-Bachir, Ali. "ArchiMed : un canevas pour la détection et la résolution des incompatibilités des conversations entre services web". Phd thesis, Université Joseph Fourier (Grenoble), 2008. http://tel.archives-ouvertes.fr/tel-00351255.
First, behavioral interfaces are modeled as automata. In this model, only the external (observable) behavior is considered; in other words, only message sending and receiving operations are described in the interfaces. Once the behavioral interfaces are described as automata, incompatibilities between the different definitions of a service's interface are detected. Incompatibility detection is automatic and covers elementary changes in the interfaces: adding, removing, and modifying operations. A visualization of the incompatibilities between interfaces is also made available.
The framework maintains the descriptions of the successive versions of the service's interface, together with the minimal set of mediators needed for clients to interact with the service through any of its versions. When a client initiates a conversation, the framework selects, if necessary, the available mediator that resolves the incompatibilities between the client and the service. Mediator selection is based on a similarity measure between interfaces, used to find the provided interface description that conforms to the client's required interface. The framework was validated experimentally on a test collection containing behavioral interface descriptions of services. A quantitative study and a comparison with similar works show the significant contribution of our proposal.
Aït-Bachir, Ali. "ArchiMed : un canevas pour la détection et la résolution des incompatibilités des conversations entre services web". Phd thesis, Grenoble 1, 2008. http://www.theses.fr/2008GRE10206.
Texto completo da fonte
Web service interactions rely on message exchanges; modelling Web services aims at describing messages both from the structural point of view and from the behavioural point of view. In this setting, a Web service's interface is defined as the set of messages it can receive and send, together with the inter-dependencies between these messages. When a Web service evolves, its interface is likely to be modified too. This leads to situations where the provided interface of a service no longer corresponds to the one its partners expect. In this thesis, we introduce techniques for the detection and resolution of incompatibilities. Our contribution is a framework named ArchiMed. We analyse versions of a service interface in order to detect changes that cause clients using any of the earlier versions not to interact properly with a later version. We focus on behavioural incompatibilities and adopt the notion of simulation as a basis for determining whether a new version of a service is behaviourally compatible with a previous one. Our technique does not simply check whether the new version of the service simulates the previous one; it goes on to identify the remaining incompatibilities. The framework has been tested on a collection of interfaces from standard business-to-business choreographies. Otherwise, the same service would have to expose as many provided interfaces as the collaborations it is involved in. Our solution consists in supplying a mediator capable of matching each required interface with the provided interface. Selection of the suitable mediator is based on a similarity measure between the provided interface and the interface required by the client
Cazabet, Rémy. "Détection de communautés dynamiques dans des réseaux temporels". Phd thesis, Université Paul Sabatier - Toulouse III, 2013. http://tel.archives-ouvertes.fr/tel-00874017.
Texto completo da fonte
Hernandez, Nicolas. "Description et détection automatique de structures de texte". Paris 11, 2004. http://www.theses.fr/2004PA112329.
Texto completo da fonte
Information Retrieval (IR) systems are not well adapted for text browsing and visualization (dynamic summarization), yet these remain necessary for users to evaluate the relevance of a document. Our work follows a Semantic Web perspective. We aim at annotating documents with abstract information about content description and discourse organization in order to give IR systems more capabilities. Descriptive information concerns both topic identification and the semantic and rhetorical classification of text extracts (with information such as "Our aim is...", "This paper deals with..."). We implement a system to identify topical linguistic expressions based on a robust anaphora system and lexical chain building. We also propose a method to automatically acquire meta-discursive material. We perform the detection of text structure through two complementary approaches. The first offers a top-down analysis based on the segmentation provided by lexical cohesion and by linguistic markers such as frame introducers. The second addresses local text organization through the detection of informational relations (coordination and subordination) between subsequent sentences
Brassard-Gourdeau, Éloi. "Toxicité et sentiment : comment l'étude des sentiments peut aider la détection de toxicité". Master's thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/37564.
Texto completo da fonte
Automatic toxicity detection of online content is a major research field nowadays. Moderators cannot manually filter all the messages posted every day, and users constantly find new ways to circumvent classic filters. In this master's thesis, I explore the benefits of sentiment detection for three major challenges of automatic toxicity detection: standard toxicity detection, making filters harder to circumvent, and predicting conversations at high risk of becoming toxic. The first two challenges are studied in the first article. Our main intuition is that it is harder for a malicious user to hide the toxic sentiment of a message than to change a few toxic keywords. To test this hypothesis, a sentiment detection tool is built and used to measure the correlation between sentiment and toxicity. Next, sentiment information is used as features to train a toxicity detection model, and the model is tested in both a classic and a subversive context. These tests show that sentiment information helps toxicity detection, especially in the presence of subversion. The third challenge is the subject of our second paper, whose objective is to determine whether the sentiments of the first messages of a conversation can help predict whether it will derail into toxicity. The same sentiment detection tool is used, in addition to other features developed in previous related work. Our results show that sentiment helps improve that task as well.
Goulet, Sylvain. "Techniques d'identification d'entités nommées et de classification non-supervisée pour des requêtes de recherche web à l'aide d'informations contenues dans les pages web visitées". Mémoire, Université de Sherbrooke, 2014. http://hdl.handle.net/11143/5387.
Texto completo da fonte
Cottret, Maxime. "Exploration visuelle d'environnement intérieur par détection et modélisation d'objets saillants". Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2007. http://tel.archives-ouvertes.fr/tel-00289380.
Texto completo da fonte
Le Tallec, Marc. "Compréhension de parole et détection des émotions pour robot compagnon". Thesis, Tours, 2012. http://www.theses.fr/2012TOUR4044.
Texto completo da fonte
Girond, Florian. "Mise en place d’un système d’information géographique pour la détection précoce et la prédiction des épidémies de paludisme à Madagascar". Thesis, La Réunion, 2017. http://www.theses.fr/2017LARE0012/document.
Texto completo da fonte
We describe a Malaria Early Warning System (MEWS) using various epidemic thresholds and a forecasting component, supported by recent technologies, to improve the performance of a sentinel MEWS. Malaria-related data from sentinel sites, collected by Short Message Service, are automatically stored in a database hosted on a server at the Institut Pasteur de Madagascar. Concomitantly, our system routinely and automatically acquires site-specific satellite weather data related to changes in malaria prevalence, such as temperature, rainfall and the Normalized Difference Vegetation Index (NDVI). A malaria control intervention database has also been set up. This system has already demonstrated its ability to detect a malaria outbreak in the southeastern part of Madagascar in 2014. Subsequently, we conducted a study to assess the relationship between the effectiveness over time of mass campaigns of long-lasting insecticidal nets (LLINs) and the malaria outbreaks identified in Madagascar from 2009 to 2015 through the sentinel surveillance system. This study showed that the difference between efficacy and effectiveness may result in gaps in service coverage during the subsequent years, contributing to a malaria rebound well before the replacement of the LLINs, and it highlights the need for a continuous LLIN distribution mechanism. This work aims to maximize the usefulness of a sentinel surveillance system to predict and detect epidemics in limited-resource environments, to guide any changes in the orientation of malaria control programs, and to provide practical examples and suggestions for use in other systems or settings
Azough, Ahmed. "Modèle sémantique de la vidéo pour la description, la détection et la recherche des événements visuels". Thesis, Lyon 1, 2010. http://www.theses.fr/2010LYO10055.
Texto completo da fonte
This thesis explores the use of tools supporting data semantics in the field of multimedia. The first contribution concerns the generation of high-level descriptions: we propose a description language that allows the high-level definition of events and objects from low-level features. The second contribution is the exploration of certain types of uncertainty reasoning in the context of multimedia semantics: we propose a semantic language (based on fuzzy conceptual graphs) for the description of videos and define the underlying reasoning mechanisms. The third contribution relates to semantic indexing and retrieval in multimedia databases: we propose a query language based on deductive databases for the expression of spatiotemporal and semantic queries
Edouard, Amosse. "Détection et analyse d’événement dans les messages courts". Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4079/document.
Texto completo da fonte
In recent years, the Web has shifted from a read-only medium, where most users could only consume information, to an interactive medium allowing every user to create, share and comment on information. The downside of social media as an information source is that the texts are often short, informal and lack contextual information. On the other hand, the Web also contains structured Knowledge Bases (KBs) that can be used to enrich user-generated content. This dissertation investigates the potential of exploiting information from Linked Open Data KBs to detect, classify and track events on social media, in particular Twitter. More specifically, we address three research questions: i) How to extract and classify messages related to events? ii) How to cluster events into fine-grained categories? iii) Given an event, to what extent can user-generated content on social media contribute to the creation of a timeline of sub-events? We provide methods that rely on Linked Open Data KBs to enrich the context of social media content; we show that supervised models can achieve good generalisation capabilities through semantic linking, thus mitigating overfitting; and we rely on graph theory to model the relationships between named entities (NEs) and the other terms in tweets in order to cluster fine-grained events. Finally, we use in-domain ontologies and local gazetteers to identify relationships between actors involved in the same event and to create a timeline of sub-events. We show that enriching the NEs in the text with information provided by LOD KBs improves the performance of both supervised and unsupervised machine learning models
Luong, Phuc Hiep. "Gestion de l'évolution d'un Web sémantique d'entreprise". Phd thesis, École Nationale Supérieure des Mines de Paris, 2007. http://tel.archives-ouvertes.fr/tel-00198718.
Texto completo da fonte
Pellegrino, Giancarlo. "Détection d'anomalies logiques dans les logiciels d'entreprise multi-partis à travers des tests de sécurité". Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0064.
Texto completo da fonte
Multi-party business applications are distributed computer programs implementing collaborative business functions. These applications are one of the main targets of attackers, who exploit vulnerabilities in order to perform malicious activities. The most prevalent classes of vulnerabilities are the consequence of insufficient validation of user-provided input. However, the less-known class of logic vulnerabilities has recently attracted the attention of researchers. Depending on the availability of software documentation, two testing techniques can be used: design verification via model checking, and black-box security testing. However, the former offers no support for testing real implementations and the latter lacks the sophistication to detect logic flaws. In this thesis, we present two novel security testing techniques to detect logic flaws in multi-party business applications that tackle the shortcomings of the existing techniques. First, we present the verification via model checking of two security protocols. We then address the challenge of extending the results of the model checker to automatically test protocol implementations. Second, we present a novel black-box security testing technique that combines model inference, extraction of workflow and data flow patterns, and an attack-pattern-based test case generation algorithm. Finally, we discuss the application of the technique developed in this thesis in an industrial setting. We used these techniques to discover previously unknown design errors in the SAML SSO and OpenID protocols, and ten logic vulnerabilities in eCommerce applications allowing an attacker to pay less or shop for free
Cappelle, Cindy. "Localisation de véhicules et détection d'obstacles : apport d'un modèle virtuel 3D urbain". Thesis, Lille 1, 2008. http://www.theses.fr/2008LIL10119/document.
Texto completo da fonte
This thesis deals with the ego-localization of intelligent vehicles and with obstacle detection using a virtual 3D city model. Vehicle localization uses several sources of information: a GPS receiver, proprioceptive sensors (odometers and a gyrometer), a video camera and a virtual 3D city model. The proprioceptive sensors allow the dead-reckoning position and orientation of the vehicle to be estimated continuously. This dead-reckoning estimate of the pose is corrected by GPS measurements. Moreover, a 3D geographical observation is constructed to compensate for the drift of the dead-reckoning localization when GPS measurements are unavailable for a long time. The 3D geographical observation is based on the matching between the virtual 3D city model and the images acquired by the camera. Experimental results illustrate the developed approach. The contribution of a virtual 3D city model is also studied for the detection and localization of obstacles. Once the vehicle is localized in the 3D model, infrastructure obstacles such as buildings are known and localized. In order to detect other obstacles such as vehicles and pedestrians, the real image acquired by the camera and the virtual image extracted from the virtual 3D model are compared, considering that this kind of obstacle appears in the real image but is absent from the virtual image. With the depth information available from the 3D model, the detected obstacles are then localized. Experimental results are compared with Lidar measurements
Kaske, Axel. "Contribution à la détection des bords de route imprécis : implantation sur le robot ROMANE". Vandoeuvre-les-Nancy, INPL, 1997. http://www.theses.fr/1997INPL039N.
Texto completo da fonte
NARAYANAN, SUGAN. "APPLICATION OF WEB SERVICES FOR REMOTE ACCESS OF BEARCAT III ROBOT USING THE .NET FRAMEWORK". University of Cincinnati / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1060199420.
Texto completo da fonteCherifi, Chantal. "Classification et Composition de Services Web : Une Perspective Réseaux Complexes". Phd thesis, Université Pascal Paoli, 2011. http://tel.archives-ouvertes.fr/tel-00652852.
Texto completo da fonte
Gadek, Guillaume. "Détection d'opinions, d'acteurs-clés et de communautés thématiques dans les médias sociaux". Thesis, Normandie, 2018. http://www.theses.fr/2018NORMIR18/document.
Texto completo da fonte
Online social networks have taken a huge place in the informational space and are often used for advertising, e-reputation, propaganda, or even manipulation, whether by individuals, companies or states. The amount of information makes human exploitation difficult, while the need for social network analysis remains unsatisfied: trends must be extracted from the posted messages, user behaviours must be characterised, and the social structure must be identified. To tackle this problem, we propose a system providing analysis tools on three levels. First, message analysis aims to determine the opinions the messages bear. Then, user accounts are characterised and evaluated through the combination of a behavioural profiling method, the study of node importance and position in social graphs, and engagement and influence measures. Finally, user communities are detected and evaluated. For this last challenge, we introduce thematic cohesion scores, complementing the topological, graph-based measures of group quality. This system is then applied to two corpora, extracted from two different online social media. The first consists of messages published on Twitter, gathering every activity performed by a set of 5,000 accounts over a long period. The second stems from a Tor-based social network named Galaxy2 and includes every public action performed on the platform during its uptime. We evaluate the relevance of our system on these two datasets, showing the complementarity of the user-account characterisation tools (influence, behaviour and role) and the user-account communities (interaction strength, thematic cohesion), enriching the social-graph exploitation with textual content elements
Daass, Bilal. "Approches informationnelles pour une navigation autonome collaborative de robots d'exploration de zones à risques". Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I054.
Texto completo da fonte
In recent years, there has been growing interest in providing an accurate estimate of the state of a dynamic system for a wide range of applications. In this work, we target systems built up from several collaborative subsystems integrating various heterogeneous sensors. We introduce a filter concept that combines the advantages of both Kalman and informational filters to achieve a low computational load. To handle systems whose measurement covariances are incomplete or unknown, a multi-sensor fusion based on covariance intersection is analysed in terms of calculation burden. Three multi-sensor fusion architectures are then considered. A fine analysis of the distribution of the computational load of the filter and of the covariance intersection algorithm is performed over the different components of these architectures. To make the system fault tolerant, informational statistical methods are developed. They are applicable to any method based on the generalized likelihood ratio and lead to an adaptive threshold for this ratio. The technique has been implemented using two types of control charts for the fast detection of sensor failures. Our theoretical approaches are validated on a system of collaborative mobile robots. We integrate a diagnosis and fault-detection phase based on the integration of these informational statistical methods into the fusion and estimation process, the latter being composed of a Bayesian filter and the covariance intersection. The main objective is to ensure that this system provides safe, accurate and fault-tolerant autonomous navigation. Finally, we present a proof-of-concept method for the nondestructive evaluation of materials in close proximity to the robot. In particular, we introduce a microwave sensor to characterize the interaction between the electromagnetic wave and the material under test.
This technique, known as radar, has attracted growing interest in academic laboratories and in common applications related to speed measurement. Nevertheless, adapting it to collaborative mobile robots for the contactless characterization of materials, especially in harsh environments, remains challenging. The latter consists in determining the material characteristics from embedded microwave sensors
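Covariance intersection, named in the abstract above, is a standard fusion rule for combining estimates whose cross-correlation is unknown. As a minimal illustrative sketch (not the thesis implementation), the following Python function fuses two estimates with diagonal covariances, choosing the weight by grid search to minimise the trace of the fused covariance:

```python
def ci_fuse(xa, Pa, xb, Pb, steps=100):
    """Covariance intersection for two estimates with diagonal covariances.

    xa, xb: mean vectors (lists); Pa, Pb: per-component variances (lists).
    The fused estimate satisfies, component-wise:
        1/P = w/Pa + (1-w)/Pb
        x   = P * (w*xa/Pa + (1-w)*xb/Pb)
    with the weight w chosen on a grid to minimise the trace of P.
    Returns (fused mean, fused diagonal covariance, chosen w).
    """
    best = None
    for i in range(steps + 1):
        w = i / steps
        # Fused variances (harmonic interpolation of the two covariances).
        P = [1.0 / (w / pa + (1.0 - w) / pb) for pa, pb in zip(Pa, Pb)]
        # Fused means, weighted by the information (inverse variance).
        x = [p * (w * a / pa + (1.0 - w) * b / pb)
             for p, a, pa, b, pb in zip(P, xa, Pa, xb, Pb)]
        tr = sum(P)  # trace of the fused covariance
        if best is None or tr < best[0]:
            best = (tr, x, P, w)
    return best[1], best[2], best[3]
```

For scalar states the trace-optimal weight degenerates to 0 or 1; the diagonal multi-dimensional case admits an interior optimum when each sensor is more accurate along a different axis, as when fusing odometric and GPS estimates.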
Bousbia, Nabila. "Analyse des traces de navigation des apprenants dans un environnement de formation dans une perspective de détection automatique des styles d'apprentissage". Paris 6, 2011. http://www.theses.fr/2011PA066011.
Texto completo da fonte
Debure, Jonathan. "Détection de comportements et identification de rôles dans les réseaux sociaux". Electronic Thesis or Diss., Paris, CNAM, 2021. http://www.theses.fr/2021CNAM1290.
Texto completo da fonte
Social networks (SN) are omnipresent in our lives today. Not all users behave the same way on these networks: some have low activity, rarely posting messages and following few users, while others, at the other extreme, are highly active, with many followers and regular posts. The important role of these popular SN users makes them the target of many applications, for example content monitoring or advertising. After a study of these users' metadata aimed at detecting abnormal accounts, we present an approach for detecting users who are becoming popular. Our approach is based on modelling the evolution of popularity in the form of frequent patterns, which describe popularity-gaining behaviours. We propose a pattern-matching model that can be used on a data stream, and we show its scalability and performance by comparing it to classic models. Finally, we present a clustering approach based on PageRank. This work allows us to identify groups of users sharing the same role, using the interaction graphs
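The PageRank centrality on which the clustering approach above relies can be sketched as a plain power iteration over a directed interaction graph. This is a generic textbook implementation, not the thesis pipeline; `d` is the usual damping factor and dangling nodes redistribute their mass uniformly:

```python
def pagerank(edges, d=0.85, iters=100):
    """Power-iteration PageRank on a directed graph given as (src, dst) edges."""
    nodes = sorted({n for e in edges for n in e})
    out = {u: [] for u in nodes}
    for s, t in edges:
        out[s].append(t)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}  # uniform initial distribution
    for _ in range(iters):
        # Teleportation term: every node receives (1-d)/n.
        new = {u: (1.0 - d) / n for u in nodes}
        # Mass held by dangling nodes (no out-links) is spread uniformly.
        dangling = sum(rank[u] for u in nodes if not out[u])
        for u in nodes:
            for v in out[u]:
                new[v] += d * rank[u] / len(out[u])
        for u in nodes:
            new[u] += d * dangling / n
        rank = new
    return rank
```

On an interaction graph where edges point from commenters to the accounts they interact with, accounts with many incoming interactions accumulate rank, which is one common signal for separating popular accounts from ordinary ones.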
Makarov, Maria. "Contribution à la modélisation et la commande robuste de robots manipulateurs à articulations flexibles. Applications à la robotique interactive". Phd thesis, Supélec, 2013. http://tel.archives-ouvertes.fr/tel-00844738.
Texto completo da fonte