Dissertations / Theses on the topic 'Intelligence artificielle – Mesures de sécurité'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Intelligence artificielle – Mesures de sécurité.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Hemmer, Adrien. "Méthodes de détection pour la sécurité des systèmes IoT hétérogènes." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0020.
This thesis concerns new detection methods for the security of heterogeneous IoT systems, and fits within the framework of the SecureIoT European project. We first proposed a solution exploiting process mining together with pre-processing techniques in order to build behavioral models and identify anomalies in heterogeneous systems. We then evaluated this solution on datasets coming from different application domains: connected cars, industry 4.0, and assistance robots. This solution makes it possible to build models that are more easily understandable. It provides better detection results than other common methods, but may incur a longer detection time. In order to reduce this time without degrading detection performance, we then extended our method with an ensemble approach, which combines the results from several detection methods used simultaneously. In particular, we compared different score aggregation strategies and evaluated a feedback mechanism for dynamically adjusting the sensitivity of the detection. Finally, we implemented the solution as a prototype that has been integrated into a security platform developed in collaboration with other European industrial partners.
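The score-aggregation step of the ensemble approach described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the thesis's actual code: the function names, the three strategies, and the 0.5 alert threshold are all illustrative assumptions.

```python
def aggregate_scores(scores, strategy="mean", weights=None):
    """Combine per-detector anomaly scores in [0, 1] into one score.

    Illustrative strategies: unweighted mean, max (most pessimistic
    detector wins), and a weighted mean.
    """
    if strategy == "mean":
        return sum(scores) / len(scores)
    if strategy == "max":
        return max(scores)
    if strategy == "weighted":
        return sum(w * s for w, s in zip(weights, scores)) / sum(weights)
    raise ValueError(f"unknown strategy: {strategy}")

def is_anomalous(scores, threshold=0.5, **kwargs):
    """Raise an alert when the aggregated score crosses the threshold."""
    return aggregate_scores(scores, **kwargs) >= threshold
```

A feedback mechanism such as the one evaluated in the thesis could then tune `threshold` dynamically from operator feedback.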
Doniat, Christophe. "Contribution à l'élaboration d'une méthodologie d'analyse systématique des vols centrée facteur humain : le système S-ethos." Aix-Marseille 3, 1999. http://www.theses.fr/1998AIX30081.
Duchene, Fabien. "Detection of web vulnerabilities via model inference assisted evolutionary fuzzing." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM022/document.
Testing is a viable approach for detecting implementation bugs which have a security impact, a.k.a. vulnerabilities. When the source code is not available, it is necessary to use black-box testing techniques. We address the problem of automatically detecting a certain class of vulnerabilities (Cross-Site Scripting, a.k.a. XSS) in web applications in a black-box test context. We propose an approach for inferring models of web applications and fuzzing from such models and an attack grammar. We infer control-plus-taint flow automata, from which we produce slices, which narrow the fuzzing search space. Genetic algorithms are then used to schedule the malicious inputs which are sent to the application. We incorporate a test verdict by performing a double taint inference on the browser parse tree and combining this with taint-aware vulnerability patterns. Our implementations LigRE and KameleonFuzz outperform current open-source black-box scanners. We discovered 0-day XSS (i.e., previously unknown vulnerabilities) in web applications used by millions of users.
Le, Coz Adrien. "Characterization of a Reliability Domain for Image Classifiers." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG109.
Deep neural networks have revolutionized the field of computer vision. These models learn a prediction task from examples. Image classification involves identifying the main object present in an image. Despite the very good performance of neural networks on this task, they often fail unexpectedly. This limitation prevents them from being used in many applications. The goal of this thesis is to explore methods for defining a reliability domain that would clarify the conditions under which a model is trustworthy. Three aspects have been considered. The first is qualitative: generating synthetic extreme examples helps illustrate the limits of a classifier and better understand what causes it to fail. The second aspect is quantitative: selective classification allows the model to abstain in cases of high uncertainty, and calibration helps better quantify prediction uncertainty. Finally, the third aspect involves semantics: multimodal models that associate images and text are used to provide textual descriptions of images likely to lead to incorrect or, conversely, correct predictions.
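The selective classification idea mentioned in the abstract (abstaining under high uncertainty) can be sketched in a few lines. This is a generic illustration using the common maximum-softmax-probability rule, not the thesis's specific method; the 0.8 threshold is an arbitrary assumption.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = np.exp(logits - np.max(logits, axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

def selective_predict(logits, threshold=0.8):
    """Return the predicted class index, or None (abstain) when the
    model's top softmax probability falls below the threshold."""
    probs = softmax(np.asarray(logits, dtype=float))
    return int(probs.argmax()) if probs.max() >= threshold else None
```

Raising the threshold trades coverage (fewer answered inputs) for reliability on the inputs the model does answer.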
Smache, Meriem. "La sécurité des réseaux déterministes de l’Internet des objets industriels (IIoT)." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEM033.
Time synchronization is a crucial requirement for the IEEE 802.15.4e-based Industrial Internet of Things (IIoT). It is provided by the Time-Slotted Channel Hopping (TSCH) mode of IEEE 802.15.4e. TSCH synchronization enables low-power, high-reliability wireless networking. However, TSCH synchronization resources are an obvious target for cyber-attacks: they can be manipulated by attackers to paralyze all network communications. In this thesis, we aim to provide a vulnerability analysis of the TSCH synchronization asset. We propose novel detection metrics based on the internal process of the TSCH state machine of every node, without requiring any additional communication or the capture and analysis of packet traces. We then design and implement novel self-detection and self-defence techniques embedded in every node, taking into account the intelligence and learning ability of the attacker, the legitimate node, and the real-time industrial network interactions. The experimental results show that the proposed mechanisms can protect against synchronization attacks.
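One way to picture a node-local detection metric of the kind described above is to watch the node's own clock-correction values and flag a sample that deviates strongly from its recent history. This is a loose sketch under stated assumptions (running mean plus k-sigma rule, 10-sample warm-up); the thesis's actual metrics on the TSCH state machine are more elaborate.

```python
import statistics

def sync_alarm(corrections, k=3.0, warmup=10):
    """Flag each clock-correction sample whose deviation from the running
    mean of past samples exceeds k standard deviations."""
    alarms, history = [], []
    for c in corrections:
        if len(history) >= warmup:
            mu = statistics.fmean(history)
            sd = statistics.pstdev(history) or 1e-9  # avoid zero division
            alarms.append(abs(c - mu) > k * sd)
        else:
            alarms.append(False)  # not enough history yet
        history.append(c)
    return alarms
```

The appeal of such a rule in this setting is that it needs no extra communication: every input is already produced by the node's own synchronization machinery.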
Chbib, Fadlallah. "Enhanced Cross Layer and Secure Architecture for Connected Vehicles." Thesis, Troyes, 2021. http://www.theses.fr/2021TROY0038.
Vehicular Ad hoc NETworks (VANETs) are deployed to minimize the risk of road accidents as well as to improve passenger comfort. This thesis addresses the problem of dropped and delayed packets in VANETs by reducing data exchange time, improving the packet delivery ratio, and securing the vehicular architecture. First, we propose a novel method to avoid congestion on the control channel in order to guarantee real-time transfer and the reliability of urgent safety messages. In addition, we extend the proposed method by using a neural network with various parameters, such as message priority, road sensitivity, vehicle type, and buffer state, to reduce the time needed to exchange safety data. Second, we propose two routing protocols based on the signal-to-interference ratio (SIR). In both, our goal is to maximize the overall SIR between source and destination so as to select the optimal path. In the first protocol we evaluate the SIR level directly, while in the second we use a Markov chain model to predict it. Finally, we protect these protocols from various attacks through three anti-attack algorithms. In the first algorithm, we create a key-value variable to detect fabrication of the source address at an intermediate node. In the second, we create a buffer and check it periodically in order to catch a malicious node acting on the destination field. In the last one, we detect attacks at the SIR level.
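The SIR-based path-selection idea can be sketched very compactly. One plausible reading of maximizing the "overall SIR" of a path is to maximize its bottleneck (worst-link) SIR, which is the assumption made here; the thesis's Markov-chain SIR prediction is not reproduced.

```python
def best_path(paths):
    """Pick the candidate path whose worst per-link SIR (the bottleneck)
    is largest. Each path is a list of per-link SIR values in dB."""
    return max(paths, key=lambda links: min(links))
```

With predicted rather than measured SIR values, the same selection rule applies; only the inputs change.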
Ben, Saad Sabra. "Security architectures for network slice management for 5G and beyond." Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS023V2.pdf.
Network slicing, enabled by new technologies such as Network Functions Virtualization (NFV) and Software-Defined Networking (SDN), is one of the main pillars of fifth-generation and beyond (B5G) networks. In B5G settings, the number of coexisting slices with varying degrees of complexity and very diverse lifespans, resource requirements, and performance targets is expected to explode. This creates significant challenges for zero-touch slice management and orchestration, including security, fault management, and trust. In addition, network slicing opens the business market to new stakeholders, namely the vertical (or tenant), the network slice provider, and the infrastructure provider. In this context, there is a need to ensure not only secure interaction between these actors, but also that each actor delivers the expected service to meet the network slice requirements. Therefore, new trust architectures should be designed that are able to identify and detect new forms of slicing-related attacks in real time, while securely and automatically managing Service Level Agreements (SLAs) among the involved actors. In this thesis, we devise new security architectures tailored to network-slicing-ready networks (B5G), relying heavily on blockchain and Artificial Intelligence (AI) to enable secure and trusted network slice management.
Bertin, Bruno. "Système d'acquisition et de traitement des signaux pour la surveillance et le diagnostic de système complexe." Compiègne, 1986. http://www.theses.fr/1986COMPI241.
Picot, Marine. "Protecting Deep Learning Systems Against Attack : Enhancing Adversarial Robustness and Detection." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG017.
Over the last decade, Deep Learning has been the source of breakthroughs in many different fields, such as Natural Language Processing, Computer Vision, and Speech Recognition. However, Deep Learning-based models are now recognized to be extremely sensitive to perturbations, especially when the perturbation is well designed and generated by a malicious agent. This weakness of Deep Neural Networks tends to prevent their use in critical applications, where sensitive information is available or where the system interacts directly with people's everyday lives. In this thesis, we focus on protecting Deep Neural Networks against malicious agents in two main ways. The first method aims at protecting a model from attacks by increasing its robustness, i.e., the ability of the model to predict the right class even under threat. We observe that the output of a Deep Neural Network forms a statistical manifold and that the decision is taken on this manifold. We leverage this knowledge by using the Fisher-Rao measure, which computes the geodesic distance between two probability distributions on the statistical manifold to which they belong. We exploit the Fisher-Rao measure to regularize the training loss and increase model robustness. We then adapt this method to another critical application: smart grids, which, due to monitoring and various service needs, rely on cyber components such as a state estimator, making them sensitive to attacks. We therefore build robust state estimators using Variational AutoEncoders and an extension of our proposed method to the regression case. The second method we focus on for protecting Deep Learning-based models is the detection of adversarial samples. By augmenting the model with a detector, it is possible to increase the reliability of decisions made by Deep Neural Networks. Multiple detection methods are available nowadays but often rely on heavy training and ad-hoc heuristics.
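For categorical output distributions such as a classifier's softmax, the Fisher-Rao geodesic distance mentioned above has a closed form, d(p, q) = 2 arccos(Σᵢ √(pᵢ qᵢ)). The sketch below computes that quantity; how the thesis folds it into the training loss as a regularizer is not reproduced here.

```python
import math

def fisher_rao_distance(p, q):
    """Fisher-Rao geodesic distance between two categorical
    distributions p and q on the probability simplex:
    d(p, q) = 2 * arccos(sum_i sqrt(p_i * q_i))."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return 2.0 * math.acos(min(1.0, bc))  # clamp for rounding noise
```

The distance is 0 for identical distributions and reaches its maximum, pi, for distributions with disjoint support.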
In our work, we make use of a simple statistical tool called data depth to build efficient supervised (i.e., attacks are provided during training) and unsupervised (i.e., training can rely only on clean samples) detection methods.
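To give a concrete feel for data depth, here is the one-dimensional halfspace (Tukey) depth: the smaller of the fractions of reference points on either side of a query point. This is a textbook illustration, not the thesis's detector; the 0.05 depth threshold is an arbitrary assumption.

```python
def tukey_depth_1d(x, sample):
    """Halfspace (Tukey) depth of point x with respect to a 1-D sample:
    the smaller of the fractions of sample points on either side of x.
    Low depth means x is atypical for the sample."""
    left = sum(1 for s in sample if s <= x)
    right = sum(1 for s in sample if s >= x)
    return min(left, right) / len(sample)

def flag_adversarial(x, sample, tau=0.05):
    """Unsupervised-style flag: reject inputs whose depth in the clean
    sample falls below tau."""
    return tukey_depth_1d(x, sample) < tau
```

In practice the depth would be computed on feature representations rather than raw scalars, but the principle, low depth means suspicious, is the same.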
Ajayi, Idowu Iseoluwa. "Enhanced Physical Layer Security through Frequency and Spatial Diversity." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS227.
Physical layer security (PLS) is an emerging paradigm that focuses on using the properties of wireless communication, such as noise, fading, dispersion, interference, and diversity, to provide security between legitimate users in the presence of an eavesdropper. Since PLS uses signal processing and coding techniques, it takes place at the physical layer and hence can guarantee secrecy irrespective of the computational power of the eavesdropper. This makes it an interesting approach to complement legacy cryptography, whose security premise is that the encryption algorithm is computationally too hard for an eavesdropper to break. Advances in quantum computing have shown, however, that attackers have access to supercomputers, and relying on encryption alone will not be enough. In addition, the recent rapid advancement of wireless communication technologies has seen the emergence and adoption of technologies such as the Internet of Things, Ultra-Reliable and Low-Latency Communication, massive Machine-Type Communication, and Unmanned Aerial Vehicles. Most of these technologies are decentralized, limited in computational and power resources, and delay sensitive. This makes PLS a very interesting alternative for providing security in such settings. To this end, in this thesis, we study the limitations to the practical implementation of PLS and propose solutions to address these challenges. First, we investigate the energy-efficiency challenge of PLS by artificial noise (AN) injection in a massive Multiple-Input Multiple-Output (MIMO) context. The large precoding matrix in massive MIMO also contributes to a transmit signal with a high Peak-to-Average Power Ratio (PAPR). This motivated us to propose a novel algorithm, referred to as PAPR-Aware-Secure-mMIMO. In this scheme, instantaneous Channel State Information (CSI) is used to design a PAPR-aware AN that provides security while simultaneously reducing the PAPR.
This leads to energy-efficient secure massive MIMO. The performance is measured in terms of secrecy capacity, Symbol Error Rate (SER), PAPR, and Secrecy Energy Efficiency (SEE). Next, we consider PLS by channel adaptation. These PLS schemes depend on the accuracy of the instantaneous CSI and are ineffective when the CSI is inaccurate. However, CSI can be inaccurate in practice due to factors such as noisy CSI feedback and outdated CSI. To address this, we start by proposing a PLS scheme that uses precoding and diversity to provide PLS. We then study the impact of imperfect CSI on PLS performance and conclude by proposing low-complexity autoencoder neural networks to denoise the imperfect CSI and restore optimal PLS performance. The proposed autoencoder models are referred to as DenoiseSecNet and HybDenoiseSecNet, respectively. The performance is measured in terms of secrecy capacity and Bit Error Rate (BER). Finally, we study the performance of PLS under finite-alphabet signaling. Many works model performance assuming that the channel inputs are Gaussian distributed. However, Gaussian signals have high detection complexity because they take a continuum of values and have unbounded amplitudes. In practice, discrete channel inputs are used because they help to maintain moderate peak transmission power and receiver complexity. However, they introduce constraints that significantly affect PLS performance; hence the related contribution in this thesis. We propose the use of dynamic keys to partition modulation spaces in such a way that the partitioning benefits the legitimate receiver and not the eavesdropper. These keys are based on the main channel, which is independent of the eavesdropper's channel, and using them for partitioning leads to larger decision regions for the intended receiver but smaller ones for the eavesdropper. The scheme is referred to as Index Partitioned Modulation (IPM). The performance is measured in terms of secrecy capacity, mutual information, and BER.
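The PAPR metric that recurs in this abstract has a simple definition: the ratio of the peak instantaneous power of the transmit signal to its average power, usually quoted in dB. A minimal sketch:

```python
import numpy as np

def papr_db(x):
    """Peak-to-Average Power Ratio of a (complex) baseband signal, in dB:
    10 * log10(max |x|^2 / mean |x|^2)."""
    power = np.abs(np.asarray(x)) ** 2
    return 10.0 * np.log10(power.max() / power.mean())
```

A constant-envelope signal has a PAPR of 0 dB; summing many subcarriers, as in massive MIMO/OFDM transmit signals, produces occasional large peaks and hence a high PAPR, which is what the PAPR-aware artificial noise design targets.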
Li, Chuanqi. "Caractérisation automatique et semi-automatique des discontinuités des piliers de roche dure." Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALI045.
Hard rock pillars are unique rock structures that play an ever-increasing role in maintaining underground space stability in metal mines. However, collapse, rock burst, local large-area caving, and other adverse engineering disasters are frequently observed on pillars whose discontinuity information is unknown. Research on characterizing discontinuities to improve mining safety is therefore worth carrying out. Moreover, with the popularization and application of artificial intelligence in mining, it is necessary to automate the characterization of pillar discontinuity information. This dissertation aims to characterize the discontinuities of hard rock pillars using automatic or semi-automatic methods. The main algorithms are written in MATLAB, Python, and C++. First, a non-contact measurement technique, photogrammetry-based Structure from Motion (SfM), is used to obtain discontinuity information represented by images. Then, a 3D pillar model is reconstructed to export point cloud data for detecting discontinuity sets and their corresponding planes, and for calculating discontinuity orientation with an automatic method. Next, the image data are used to build deep learning models for extracting fracture traces. Skeletonization, quantitative description, and approximation are used to quantify trace length, dip angle, density, and intensity. Finally, fracture trace spacing is characterized using a semi-automatic method: the extracted fracture traces are disconnected using an angle-threshold algorithm and classified using a novel grouping algorithm, and three spacing indices can be calculated from scanlines set by users. The presented work investigates the discontinuity parameters of hard rock pillars, thus providing real data for pillar support design, stability evaluation, and failure simulation analysis.
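The scanline-based spacing idea at the end of the abstract reduces to measuring the gaps between consecutive points where fracture traces cross a user-defined line. The sketch below computes three simple spacing statistics; these are illustrative indices, not necessarily the three defined in the dissertation.

```python
def scanline_spacings(intersections):
    """Spacing statistics from the positions (e.g., in metres) where
    fracture traces intersect a scanline: distances between
    consecutive intersection points, after sorting."""
    pts = sorted(intersections)
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    return {"mean": sum(gaps) / len(gaps), "min": min(gaps), "max": max(gaps)}
```

Running several scanlines at different orientations, as the semi-automatic method allows, gives spacing estimates per discontinuity set rather than a single global figure.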
Shahid, Mustafizur Rahman. "Deep learning for Internet of Things (IoT) network security." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAS003.
The growing Internet of Things (IoT) introduces new security challenges for network activity monitoring. Most IoT devices are vulnerable because of a lack of security awareness from device manufacturers and end users. As a consequence, they have become prime targets for malware developers who want to turn them into bots. Contrary to general-purpose devices, an IoT device is designed to perform very specific tasks. Hence, its networking behavior is very stable and predictable, making it well suited to data analysis techniques. The first part of this thesis therefore focuses on leveraging recent advances in deep learning to develop network monitoring tools for the IoT. Two types of network monitoring tools are explored: IoT device type recognition systems and IoT Network Intrusion Detection Systems (NIDS). For IoT device type recognition, supervised machine learning algorithms are trained to classify network traffic and determine which IoT device the traffic belongs to. The IoT NIDS consists of a set of autoencoders, each trained for a different IoT device type. The autoencoders learn the legitimate networking behavior profile and detect any deviation from it. Experiments using network traffic data produced by a smart home show that the proposed models achieve high performance. Despite these promising results, training and testing machine learning based network monitoring systems requires a tremendous amount of IoT network traffic data, yet very few IoT network traffic datasets are publicly available. Physically operating thousands of real IoT devices can be very costly and can raise privacy concerns. In the second part of this thesis, we propose to leverage Generative Adversarial Networks (GANs) to generate bidirectional flows that look as if they were produced by a real IoT device. A bidirectional flow consists of the sequence of the sizes of individual packets along with a duration.
Hence, in addition to generating packet-level features, which are the sizes of individual packets, our generator implicitly learns to comply with flow-level characteristics, such as the total number of packets and bytes in a bidirectional flow or the total duration of the flow. Experimental results using data produced by a smart speaker show that our method allows us to generate high-quality, realistic-looking synthetic bidirectional flows.
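The autoencoder NIDS principle from the first part, learn to reconstruct legitimate traffic, then flag inputs that reconstruct poorly, can be illustrated with the simplest possible "autoencoder": a linear one, which is equivalent to PCA. This is a deliberately reduced stand-in for the thesis's neural autoencoders; function names and the use of SVD are my own choices.

```python
import numpy as np

def fit_linear_ae(X, k=1):
    """'Train' a linear autoencoder on clean traffic feature vectors via
    PCA: keep the top-k principal directions as encoder/decoder weights."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def reconstruction_error(x, mu, components):
    """Anomaly score: distance between an input and its reconstruction."""
    z = (x - mu) @ components.T          # encode
    x_hat = mu + z @ components          # decode
    return float(np.linalg.norm(x - x_hat))
```

Inputs resembling the training traffic reconstruct almost perfectly (low score); traffic from a compromised device deviates from the learned profile and scores high, triggering detection once a threshold is set.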
Querrec, Ronan. "Les Systèmes Multi-Agents pour les Environnements Virtuels de Formation : Application à la sécurité civile." Brest, 2003. http://www.theses.fr/2002BRES2037.
This study concerns virtual environments for training in operational conditions. The main idea developed is that these environments are heterogeneous and open multi-agent systems. The MASCARET model is proposed to organize the interactions between the agents and to give them reactive, cognitive, and social abilities in order to simulate the physical and social environment. The physical environment represents, in a realistic way, the phenomena that learners and teachers have to take into account. The social environment is simulated by agents executing collaborative and adaptive tasks: they carry out, as a team, procedures that they must adapt to the environment. Users participate in the training environment through their avatars. To validate our model, the SecuRevi application for firefighter training was developed.
Ghidalia, Sarah. "Etude sur les mesures d'évaluation de la cohérence entre connaissance et compréhension dans le domaine de l'intelligence artificielle." Electronic Thesis or Diss., Bourgogne Franche-Comté, 2024. http://www.theses.fr/2024UBFCK001.
This thesis investigates the concept of coherence within intelligent systems, aiming to assess how coherence can be understood and measured in artificial intelligence, with a particular focus on the pre-existing knowledge embedded in these systems. This research is funded as part of the European H2020 RESPONSE project and is set in the context of smart cities, where assessing the consistency between AI predictions and real-world data is a fundamental prerequisite for policy initiatives. The main objective of this work is to meticulously examine consistency in the field of artificial intelligence and to conduct a thorough exploration of prior knowledge. To this end, we conduct a systematic literature review to map the current landscape, focusing on the convergence and interaction between machine learning and ontologies, and highlighting in particular the algorithmic techniques employed. In addition, a comparative analysis positions our research in the broader context of important work in the field. An in-depth study of different knowledge integration methods is undertaken to analyze how consistency can be assessed based on the learning techniques employed. The overall quality of artificial intelligence systems, with particular emphasis on consistency assessment, is also examined. The whole study is then applied to the coherence evaluation of models with respect to the representation of physical laws in ontologies. We present two case studies, one on predicting the motion of a harmonic oscillator and the other on estimating the lifetime of materials, to highlight the importance of respecting physical constraints in consistency assessment. In addition, we propose a new method for formalizing knowledge within an ontology and evaluate its effectiveness. This research aims to provide new perspectives on the evaluation of machine learning algorithms by introducing a coherence evaluation method.
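For the harmonic-oscillator case study mentioned above, one natural physics-based coherence check is energy conservation: an undamped oscillator's mechanical energy E = ½(v² + ω²x²) must stay constant along any trajectory a model predicts. The sketch below scores a predicted trajectory by the relative spread of its energy; this is an illustrative check, not the thesis's actual evaluation method.

```python
def energy_consistency(xs, vs, omega=1.0):
    """Coherence score for a predicted harmonic-oscillator trajectory:
    relative spread of the mechanical energy E = 0.5*(v^2 + (omega*x)^2),
    which physics requires to be constant (0 = perfectly consistent)."""
    energies = [0.5 * (v * v + (omega * x) ** 2) for x, v in zip(xs, vs)]
    e_mean = sum(energies) / len(energies)
    return (max(energies) - min(energies)) / e_mean
```

A purely data-driven model with small pointwise errors can still score badly here, which is exactly the kind of incoherence with prior knowledge the thesis seeks to measure.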
This thesis aspires to make a substantial contribution to the field of artificial intelligence by highlighting the critical role of consistency in the development of reliable and relevant intelligent systems.
Ribas, Santos Eduardo. "Contribution au diagnostic qualitatif des procédés en intelligence artificielle." Vandoeuvre-les-Nancy, INPL, 1996. http://docnum.univ-lorraine.fr/public/INPL_T_1996_RIBAS_SANTOS_E.pdf.
Shrivastwa, Ritu Ranjan. "Enhancements in Embedded Systems Security using Machine Learning." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT051.
The list of connected (IoT) devices is growing longer with time, and so is their vulnerability to targeted attacks originating from network or physical penetration, popularly known as Cyber-Physical Security (CPS) attacks. While security sensors and obfuscation techniques exist to counteract such attacks and enhance security, it is possible to fool these classical countermeasures with sophisticated attack equipment and methodologies, as shown in recent literature. Additionally, end-node embedded system design is bound by area and required to be scalable, making it difficult to attach complex sensing mechanisms against cyber-physical attacks. The solution may lie in an Artificial Intelligence (AI) security core (soft or hard) that monitors data behaviour internally from various components. The AI core can also monitor overall device behaviour, including attached sensors, to detect any outlier activity and provide a smart sensing approach to attacks. AI is still not widely accepted in the hardware security domain due to the probabilistic behaviour of advanced deep learning techniques, although there have been works showing practical implementations. This work aims to establish a proof of concept and build trust in AI for security through a detailed analysis of different Machine Learning (ML) techniques and their use cases in hardware security, followed by a series of case studies providing a practical framework and guidelines for using AI on various embedded security fronts. Applications include PUF predictability assessment, sensor fusion, Side-Channel Attacks (SCA), Hardware Trojan detection, control flow integrity, and adversarial AI.
Fenet, Serge. "Vers un paradigme de programmation pour les applications distribuées basé sur le comportement des insectes sociaux : application à la sécurité des réseaux." Lyon 1, 2001. http://www.theses.fr/2001LYO10261.
Bonte, Mathieu. "Influence du comportement de l'occupant sur la performance énergétique du bâtiment : modélisation par intelligence artificielle et mesures in situ." Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2495/.
The building sector plays a major role in global warming. In France, it is responsible for about 40% of energy consumption and about 33% of carbon emissions. In this context, building designers try to improve building energy performance, often using building energy modeling (BEM) software to predict future energy use. For several years now, researchers have observed a gap between actual and predicted energy performance. Several causes have been pointed out, such as uncertainties in the physical properties of building materials and the lack of precision of fluid dynamics models. One of the main causes, however, may be poor assumptions in the modeling of occupant behavior. The occupant is often considered passive in building simulation hypotheses, whereas numerous papers show that occupants act on the building they are in and on their own personal characteristics. The work presented here intends to characterize occupant behavior and its influence on energy use. In the first part of the manuscript, we assess the individual impact of several actions using the design of experiments (DOE) methodology. Actions such as operating windows, blinds, or the thermostat are investigated separately. We show that two opposite extreme behaviors (economical and wasteful) can lead to a significant difference in building energy use; moreover, a factor of two in total energy use is observed between passive and active behaviors. The second part focuses on an experimental approach: the thermal and visual environments of four offices were monitored during a year, and online questionnaires about comfort and behavior were submitted to the office occupants. Through a statistical analysis, we estimate the probabilities of acting on windows, blinds, and clothing insulation as functions of physical variables or thermal sensation. The final part of the thesis deals with the development of an occupant behavior model called OASys (Occupant Actions System), running under the TRNSYS software.
The model is based on an artificial intelligence algorithm and is intended to predict occupant interactions with the thermostat, clothing insulation, windows, blinds, and the lighting system based on thermal and visual sensation. Results from OASys are compared to results from the literature through various case studies for partial validation. They also confirm the significant impact of occupant behavior on building energy performance.
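Probabilistic action models of the kind estimated in the second part of this thesis are often expressed as logistic curves over an environmental variable. The sketch below is a generic illustration with made-up parameter values (midpoint 26 °C, slope 0.8), not the probabilities actually fitted from the monitored offices.

```python
import math

def p_open_window(temp_c, t0=26.0, slope=0.8):
    """Logistic probability that an occupant opens a window, increasing
    with indoor temperature. t0 is the temperature at which the
    probability is 0.5; slope controls how sharp the transition is."""
    return 1.0 / (1.0 + math.exp(-slope * (temp_c - t0)))
```

A simulation engine coupled to such curves draws an action at each time step, which is how stochastic occupant behavior gets injected into an energy model like the one OASys feeds into TRNSYS.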
Luang, Aphay Jean Siri. "Quelle confiance pour améliorer la sécurité du système d'information ? Contribution à une modélisation de la sécurité des systèmes d'information socio-techniques." Compiègne, 2004. http://www.theses.fr/2004COMP1527.
Information security rests on substantial technical means whose organisational and social integration is often relegated to awareness-raising steps. Our investigations show that learning security is part of a global and situated strategic learning process. The practice of security is indexed to a system of constraints, to sometimes divergent individual and collective objectives, and to intentions. Compared with the priorities that individuals build for themselves, security thus takes on a variable weight that may be negotiated at any moment. Security is therefore not a process but a learning of arrangements that actors and threats operate around the apparent but non-exclusive form that is the organization. Steering security presupposes the permanent construction and updating of a representation of the regulations of the complex information system. This construction work requires investing the local margins of appropriation with trust, thus making it possible to turn the organisation's richness to account.
Yaich, Mohamed Reda. "Adaptiveness and Social-Compliance in Trust Management - A Multi-Agent Based approach." Thesis, Saint-Etienne, EMSE, 2013. http://www.theses.fr/2013EMSE0717/document.
Virtual communities (VCs) are socio-technical systems wherein distributed individuals (human and/or artificial) are grouped together around common objectives and goals. In such systems, participants collaborate massively with each other by sharing their private resources and knowledge. A collaboration always bears the risk that one partner exhibits uncooperative or malicious behaviour; thus, trust is a critical issue for the success of such systems. The work presented in this dissertation addresses the problem of trust management in open and decentralised virtual communities. To address this problem, we propose an Adaptive and Socially-Compliant Trust Management System (ASC-TMS). The novelty of ASC-TMS lies in its ability to exhibit social-awareness and context-awareness features. Social-awareness refers to the ability of the trust management system (TMS) to handle the social nature of VCs by avoiding trust evaluations that are collectively harmful, while context-awareness refers to its ability to handle the dynamic nature of VCs by making trust evaluations that are always in adequacy with the context in which they are undertaken. The contributions made in this thesis thus constitute an additional step towards the automation of trust assessment. We accordingly provide a novel trust management system that assists members of open and decentralised virtual communities in their trust decisions. The system has been implemented and deployed using the JaCaMo multi-agent platform, and we illustrate its applicability on a real-life open innovation virtual community scenario. Finally, ASC-TMS has been experimentally evaluated using the multi-agent based Repast simulation platform. The preliminary results show that the use of our system significantly improves the stability of the virtual communities in which it has been deployed.
Morissette, Jean-François. "Algorithmes de gestion de ressources en temps-réel : positionnement et planification lors d'attaques aériennes." Thesis, Université Laval, 2004. http://www.theses.ulaval.ca/2004/22184/22184.pdf.
Full textKaplan, Caelin. "Compromis inhérents à l'apprentissage automatique préservant la confidentialité." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4045.
Full textAs machine learning (ML) models are increasingly integrated into a wide range of applications, ensuring the privacy of individuals' data is becoming more important than ever. However, privacy-preserving ML techniques often result in reduced task-specific utility and may negatively impact other essential factors like fairness, robustness, and interpretability. These challenges have limited the widespread adoption of privacy-preserving methods. This thesis aims to address these challenges through two primary goals: (1) to deepen the understanding of key trade-offs in three privacy-preserving ML techniques (differential privacy, empirical privacy defenses, and federated learning); (2) to propose novel methods and algorithms that improve utility and effectiveness while maintaining privacy protections. The first study in this thesis investigates how differential privacy impacts fairness across groups defined by sensitive attributes. While prior work suggested that differential privacy could exacerbate unfairness in ML models, our experiments demonstrate that selecting an optimal model architecture and tuning hyperparameters for DP-SGD (Differentially Private Stochastic Gradient Descent) can mitigate fairness disparities. Using standard ML fairness datasets, we show that group disparities in metrics like demographic parity, equalized odds, and predictive parity are often reduced, or remain negligible, when compared to non-private baselines, challenging the prevailing notion that differential privacy worsens fairness for underrepresented groups. The second study focuses on empirical privacy defenses, which aim to protect training data privacy while minimizing utility loss. Most existing defenses assume access to reference data, an additional dataset from the same or a similar distribution as the training data. However, previous works have largely neglected to evaluate the privacy risks associated with reference data. 
To address this, we conducted the first comprehensive analysis of reference data privacy in empirical defenses. We proposed a baseline defense method, Weighted Empirical Risk Minimization (WERM), which allows for a clearer understanding of the trade-offs between model utility, training data privacy, and reference data privacy. In addition to offering theoretical guarantees on model utility and the relative privacy of training and reference data, WERM consistently outperforms state-of-the-art empirical privacy defenses in nearly all relative privacy regimes. The third study addresses the convergence-related trade-offs in Collaborative Inference Systems (CISs), which are increasingly used in the Internet of Things (IoT) to enable smaller nodes in a network to offload part of their inference tasks to more powerful nodes. While Federated Learning (FL) is often used to jointly train models within CISs, traditional methods have overlooked the operational dynamics of these systems, such as heterogeneity in serving rates across nodes. We propose a novel FL approach explicitly designed for CISs, which accounts for varying serving rates and uneven data availability. Our framework provides theoretical guarantees and consistently outperforms state-of-the-art algorithms, particularly in scenarios where end devices handle high inference request rates. In conclusion, this thesis advances the field of privacy-preserving ML by addressing key trade-offs in differential privacy, empirical privacy defenses, and federated learning. The proposed methods provide new insights into balancing privacy with utility and other critical factors, offering practical solutions for integrating privacy-preserving techniques into real-world applications. These contributions aim to support the responsible and ethical deployment of AI technologies that prioritize data privacy and protection
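The weighted-ERM idea behind WERM can be sketched in a few lines. This is a toy formulation of my own, not the thesis's exact objective: a single weight w splits the empirical risk between the training set and the reference set, here for logistic regression trained by plain gradient descent. The function name and parameters are illustrative.

```python
import numpy as np

def werm_logistic(X_train, y_train, X_ref, y_ref, w=0.5, lr=0.1, epochs=200):
    """Weighted ERM sketch: weight w on the training-data loss, (1 - w) on the
    reference-data loss. Shifting w trades training-data privacy against
    reference-data privacy while the model still fits both sources."""
    d = X_train.shape[1]
    theta = np.zeros(d)
    for _ in range(epochs):
        grad = np.zeros(d)
        for X, y, weight in ((X_train, y_train, w), (X_ref, y_ref, 1.0 - w)):
            p = 1.0 / (1.0 + np.exp(-X @ theta))      # sigmoid predictions
            grad += weight * X.T @ (p - y) / len(y)   # weighted logistic-loss gradient
        theta -= lr * grad
    return theta
```

Setting w close to 1 recovers ordinary ERM on the training data; w close to 0 trains essentially on the reference data only.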
Es-Saidi, Soukaina. "Optimisation de la réponse optique de réseaux diffractifs métalliques appliqués à la sécurité des documents." Thesis, Troyes, 2020. http://www.theses.fr/2020TROY0016.
Full textSecurity holograms based on sub-wavelength gratings (SWGs) are increasingly used not only to protect sensitive documents, but also to combat the reprographic technologies used in counterfeiting. The aim of the present work is to design optical security devices producing visual and chromatic effects, based on the generation of structural colors, that are easily recognizable but difficult to counterfeit and compatible with high-tech foil production. To this end, we study the optical response of one- and two-dimensional asymmetric SWGs fabricated by laser interferometric lithography and replicated at larger scales on polymer film using roll-to-roll processes. An in-depth physical analysis of the resonance mechanisms generated by metallic and hybrid metal-dielectric SWGs allows us to understand and tailor their chromatic response. We also demonstrate that hybrid SWGs open new design perspectives and enhance the quality of the perceived colors. The research evidence presented in this contribution clearly shows that the use of modern optimization tools, prior to fabrication, provides an efficient way to tailor and optimize the resonant response of diffraction gratings. We demonstrate that the multi-objective approach outperforms single-objective strategies and opens the possibility of increasing the complexity of the diffractive structures used for color reproduction. We emphasize that Artificial Intelligence tools constitute an efficient alternative to traditional, time-consuming electromagnetic methods
Khatoun, Rida. "Système multi-agents et architecture pair à pair pour la détection d'attaques de déni de service distribuées." Troyes, 2008. http://www.theses.fr/2008TROY0015.
Full textThe Internet has ultimately become the support for all types of network services, which are increasing in number, interacting, and thereby introducing an important dimension of complexity. These services also introduce vulnerabilities due to their location and implementation, making the Internet an attractive target for attacks. In this context, denial-of-service attacks are among the most popular ones, and they are relatively easy to implement. Attack streams are generated simultaneously from several attack machines spread all over the Internet, thereby achieving cooperation among a large number of machines. Such attacks have important economic consequences. Many solutions have been proposed to solve this problem, but they remain incomplete, both for economic and technical reasons and because of the need for cooperation between the different operators of the Internet. Our objective, in this context, is to respond with a solution based on a distributed architecture of cooperative agents that detects intrusions and attacks. The agents are implemented on all the edge routers in an ISP domain. Attack detection is carried out by sharing the agents' knowledge about traffic. For efficient routing among agents we used a peer-to-peer architecture. This solution has been validated concretely over a real network, integrating the well-known Snort sensor in our intelligent agents and using Pastry as the peer-to-peer protocol for routing information among agents
Aimé, Xavier. "Gradients de prototypicalité, mesures de similarité et de proximité sémantique : une contribution à l'Ingénierie des Ontologies." Phd thesis, Université de Nantes, 2011. http://tel.archives-ouvertes.fr/tel-00660916.
Full textMkhida, Abdelhak. "Contribution à l'évaluation de la sûreté de fonctionnement des Systèmes Instrumentés de Sécurité à Intelligence Distribuée." Electronic Thesis or Diss., Vandoeuvre-les-Nancy, INPL, 2008. http://www.theses.fr/2008INPL083N.
Full textThe incorporation of intelligent instruments in safety loops leads towards the concept of intelligent safety, and such systems become “Intelligent Distributed Safety Instrumented Systems” (IDSIS). The justification for using these instruments in safety applications is not fully proven, and the dependability evaluation of such systems is a difficult task. The work achieved in this thesis deals with the modelling and performance evaluation, with respect to dependability, of structures that embed intelligence in the instruments constituting Safety Instrumented Systems (SIS). In the modelling of the system, the functional and dysfunctional aspects coexist, and a dynamic approach using Stochastic Activity Networks (SAN) is proposed to overcome the difficulties mentioned above. The introduction of performance indicators highlights the effect of integrating intelligence levels in safety applications. The Monte-Carlo method is used to assess the dependability parameters in compliance with the safety standards related to SIS (IEC 61508 & IEC 61511). We have proposed a method and associated tools to approach this evaluation by simulation and thus provide assistance in designing Safety Instrumented Systems (SIS) integrating some features of intelligent tools
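Monte-Carlo assessment of a dependability parameter such as the average probability of failure on demand (PFDavg, central to IEC 61508) can be illustrated with a deliberately simplified single-channel model. This is a generic sketch with hypothetical parameters, far simpler than the thesis's SAN models:

```python
import random

def pfd_monte_carlo(failure_rate, proof_test_interval, n_trials=100_000, seed=0):
    """Crude Monte-Carlo estimate of PFDavg for one channel: failures are
    exponential with the given rate and are revealed only at periodic proof
    tests, so the channel is unavailable from failure until the next test.
    The classic analytic approximation is lambda * T / 2."""
    rng = random.Random(seed)
    downtime = 0.0
    for _ in range(n_trials):
        t_fail = rng.expovariate(failure_rate)
        if t_fail < proof_test_interval:
            downtime += proof_test_interval - t_fail  # dangerous undetected state
    return downtime / (n_trials * proof_test_interval)
```

For example, with a failure rate of 1e-5 per hour and a one-year (8760 h) proof-test interval, the estimate lands near the analytic value lambda*T/2 of about 0.044.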
Dandrieux, Jean-Pierre. "Contributions à l'analyse de la complexité de problèmes de résolution de contraintes." Lille 1, 2000. https://pepite-depot.univ-lille.fr/RESTREINT/Th_Num/2000/50376-2000-210.pdf.
Full text
Hirsch, Gérard. "Équations de relation floue et mesures d'incertain en reconnaissance de formes." Nancy 1, 1987. http://www.theses.fr/1987NAN10030.
Full textCiarletta, Laurent. "Contribution à l'évaluation des technologies de l'informatique ambiante." Nancy 1, 2002. http://www.theses.fr/2002NAN10234.
Full textComputer science and networks are more and more embedded into our daily life. Pervasive or ubiquitous computing is at the crossroads of four areas: networking (connecting the elements), personal computing (providing services), embedded computing (improving software and hardware miniaturization), and human-computer interaction (where artificial intelligence provides the needed cleverness). This document introduces this emerging technology and the tools, architectures and methods that were developed during the course of my PhD: the Layered Pervasive Computing model, EXiST (the evaluation and distributed simulation platform), and the VPSS security architecture. They are first steps towards the resolution of the security, standardization, integration, and convergence issues of the technologies at play. Some prototypes and implementations, such as the Aroma Adapter (providing ad hoc "intelligence" to electronic devices), a smart conference room, and a version of EXiST working with intelligent agents, are also detailed
Goyat, Yann. "Estimation précise des trajectoires de véhicule par un système optique." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2008. http://tel.archives-ouvertes.fr/tel-00399848.
Full text
Full textLeurent, Edouard. "Apprentissage par renforcement sûr et efficace pour la prise de décision comportementale en conduite autonome." Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I049.
Full textIn this Ph.D. thesis, we study how autonomous vehicles can learn to act safely and avoid accidents, despite sharing the road with human drivers whose behaviors are uncertain. To explicitly account for this uncertainty, informed by online observations of the environment, we construct a high-confidence region over the system dynamics, which we propagate through time to bound the possible trajectories of nearby traffic. To ensure safety under such uncertainty, we resort to robust decision-making and act by always considering the worst-case outcomes. This approach guarantees that the performance reached during planning is at least achieved for the true system, and we show by end-to-end analysis that the overall sub-optimality is bounded. Tractability is preserved at all stages, by leveraging sample-efficient tree-based planning algorithms. Another contribution is motivated by the observation that this pessimistic approach tends to produce overly conservative behaviors: imagine you wish to overtake a vehicle, what certainty do you have that they will not change lane at the very last moment, causing an accident? Such reasoning makes it difficult for robots to drive amidst other drivers, merge into a highway, or cross an intersection — an issue colloquially known as the “freezing robot problem”. Thus, the presence of uncertainty induces a trade-off between two contradictory objectives: safety and efficiency. How to arbitrate this conflict? The question can be temporarily circumvented by reducing uncertainty as much as possible. For instance, we propose an attention-based neural network architecture that better accounts for interactions between traffic participants to improve predictions. But to actively embrace this trade-off, we draw on constrained decision-making to consider both the task completion and safety objectives independently. Rather than a unique driving policy, we train a whole continuum of behaviors, ranging from conservative to aggressive. 
This provides the system designer with a slider allowing them to adjust the level of risk assumed by the vehicle in real time
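The worst-case reasoning described in this abstract can be sketched as a tiny minimax planner: score each action by its worst-case return over a set of plausible dynamics models, then act greedily with respect to that lower bound. This is a toy illustration under assumed deterministic candidate models, not the thesis's tree-based algorithms:

```python
def robust_value(state, depth, models, actions, reward, gamma=0.9):
    """Pessimistic planning sketch: the value of a state is the best action's
    worst-case discounted return over all candidate dynamics models, so the
    performance computed here is guaranteed for whichever model is the truth."""
    if depth == 0:
        return 0.0
    best = float("-inf")
    for a in actions:
        # worst case over every plausible model of the next state's return
        worst = min(
            reward(m(state, a))
            + gamma * robust_value(m(state, a), depth - 1, models, actions, reward, gamma)
            for m in models
        )
        best = max(best, worst)
    return best
```

As the abstract notes, this pessimism is exactly what produces overly conservative ("frozen") behaviour when the model set is too large.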
Gouel, Matthieu. "Internet-Scale Route Tracing Capture and Analysis." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS160.
Full textThe Internet is one of the most remarkable human creations, enabling communication among about two thirds of the global population. This network of networks spans the entire globe and is managed in a highly decentralized way, making it impossible to fully comprehend at the IP level. Nonetheless, for over two decades, researchers have been devising new techniques, developing new tools, and creating new platforms to capture and provide more precise and comprehensive maps of the Internet's topology. These efforts support network operators in industry and other researchers in improving core features of the Internet such as its connectivity, performance, security, and neutrality. This thesis presents new contributions that improve the scalability of Internet topology measurement. It introduces a state-of-the-art measurement platform that enables the use of high-speed probing techniques for IP route tracing at Internet scale, as well as a reinforcement-learning approach to maximize the discovery of the Internet topology. Because the analysis of the collected route tracing data requires additional metadata, the evolution of IP address geolocation over a 10-year period in a widely used proprietary database is examined, and lessons are provided to avoid biases in studies using this database. Finally, a large-scale analysis framework is developed to effectively exploit the large volume of collected data and metadata augmented from other sources, such as IP address geolocation, to produce insightful studies at the Internet scale. This work aims to considerably improve the study of the Internet topology by providing tools to collect and analyze large amounts of Internet topology data, allowing researchers to better understand how the Internet is structured and how it evolves over time, and leading to a more comprehensive understanding of this complex system
Delefosse, Thierry. "Stratégies de recherche d'Informations émergentes pour la compréhension de grands volumes documentaires numérisées : application à la sécurité des systèmes d'information." Thesis, Paris Est, 2008. http://www.theses.fr/2008PEST0224.
Full textYang, Mingqiang. "Extraction d'attributs et mesures de similarité basées sur la forme." Phd thesis, INSA de Rennes, 2008. http://tel.archives-ouvertes.fr/tel-00335083.
Full textMotta, Mariane Prado. "Contribution à l’étude de systèmes de surveillance de l'usinage basés sur des méthodes d‘apprentissage machine et des mesures de vibrations, efforts et température de coupe." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0296.
Full textMachining is an economically important manufacturing process that relies on the use of a sharpened cutting tool to mechanically cut and remove material from a part to achieve a desired geometry. Given the ever-increasing demands for quality, product variability and cost reduction, tool condition and workpiece quality monitoring systems based on artificial intelligence (AI) techniques are a potential solution for more reliable and economical manufacturing processes. Recent developments in the field of AI have shown great potential to transform the manufacturing domain with advanced tools dedicated to data analysis and modeling. In particular, supervised machine learning (ML) algorithms are a powerful tool for modeling complex relationships between input and output variables based on a dataset containing examples, i.e., input-output pairs. Nevertheless, one of the main drawbacks of these modeling techniques is that a large amount of data, usually obtained through experiments (often long and expensive to perform), is required to train accurate and reliable models. This fact limits the applicability of such models in an industrial context. Considering this context, this study aims to contribute to the identification of methodologies for developing ML models dedicated to machining monitoring under industrial conditions, in which time and resources for experiments are often limited. For this purpose, the study takes advantage of the fact that, although experiments can be onerous, in industry it is common that, before starting large-scale machining production with a new tool or material, setup experiments are performed to determine the most appropriate cutting parameters for that production. 
Given this need, this thesis investigates the predictive performance that can be achieved when data obtained from these setup experiments are used to generate predictive models for machining monitoring. More precisely, setup experiments following the standardized Couple Tool-Material protocol (NF E 66-520) are considered. In an effort to obtain good predictive performance with a limited amount of experimental data, sensors measuring cutting forces, temperature and vibrations are chosen as the instrumentation for the monitoring system to be developed, given their close relationship with the kinematics of the machining process. In this respect, special attention is given to the feature engineering step, that is, the process of transforming the available raw data, for example the signals recorded by the sensors, into features, i.e., information that more accurately represents the problem underlying the predictive model. Finally, since in industry changes of cutting tool reference can occur quite often, it is also investigated whether the models developed for a given target tool can be applied to other slightly different tools (variations in nose radius, substrate and coating) and whether, for training ML models, using a larger database containing observations related not only to the target tool but also to other tools slightly different from it is more advantageous than using a smaller database specific to the target tool
Babari, Raouf. "Estimation des conditions de visibilité météorologique par caméras routières." Phd thesis, Université Paris-Est, 2012. http://tel.archives-ouvertes.fr/tel-00786898.
Full textCiguene, Richardson. "Génération automatique de sujets d'évaluation individuels en contexte universitaire." Electronic Thesis or Diss., Amiens, 2019. http://www.theses.fr/2019AMIE0046.
Full textThis PhD work focuses on the evaluation of learning, and especially the automatic generation of assessment tests in universities. We rely on a base of source questions to create test questions through algorithms that are able to construct differentiated assessment tests. This research has made it possible to develop a metric that measures this differentiation, and to propose algorithms aimed at maximizing total differentiation over test collections while minimizing the number of necessary patterns. The average performance of the latter depends on the number of patterns available in the source database (compared to the number of items desired in the tests) and on the size of the generated collections. We focused on the differentiation achievable in very small collections of tests, and propose methodological directions to optimize the distribution of these differentiated tests to cohorts of students while respecting the teacher's constraints. Future work will take into account the level of difficulty of a test as a new constraint, relying in part on the statistical and semantic data collected after each test. The goal is to be able to maximize differentiation while keeping equity between the tests of a collection, for an optimized distribution during examination events
Qamar, Ali Mustafa. "Mesures de similarité et cosinus généralisé : une approche d'apprentissage supervisé fondée sur les k plus proches voisins." Phd thesis, Grenoble, 2010. http://www.theses.fr/2010GRENM083.
Full textAlmost all machine learning problems depend heavily on the metric used. Many works have shown that learning the metric structure from the data is a far better approach than assuming a simple geometry based on the identity matrix. This has paved the way for a new research theme called metric learning. Most of the works in this domain have based their approaches on distance learning only. However, other works have shown that similarity should be preferred over distance metrics when dealing with textual datasets as well as with non-textual ones. Being able to efficiently learn appropriate similarity measures, as opposed to distances, is thus of high importance for various collections. While several works have partially addressed this problem for different applications, to our knowledge no previous work has fully addressed it in the context of learning similarity metrics for kNN classification. This is exactly the focus of the current study. In the case of information filtering systems, where the aim is to filter an incoming stream of documents into a set of predefined topics with little supervision, cosine-based category-specific thresholds can be learned. Learning such thresholds can be seen as a first step towards learning a complete similarity measure. This strategy was used to develop online and batch algorithms for information filtering during the INFILE (Information Filtering) track of the CLEF (Cross Language Evaluation Forum) campaign in 2008 and 2009. However, provided enough supervised information is available, as is the case in classification settings, it is usually beneficial to learn a complete metric as opposed to learning thresholds. To this end, we developed numerous algorithms for learning complete similarity metrics for kNN classification. An unconstrained similarity learning algorithm called SiLA is developed, in which the normalization is independent of the similarity matrix. 
SiLA encompasses, among others, the standard cosine measure, as well as the Dice and Jaccard coefficients. SiLA is an extension of the voted perceptron algorithm and allows learning different types of similarity functions (based on diagonal, symmetric or asymmetric matrices). We then compare SiLA with RELIEF, a well-known feature re-weighting algorithm. It has recently been suggested by Sun and Wu that RELIEF can be seen as a distance metric learning algorithm optimizing a cost function which is an approximation of the 0-1 loss. We show here that this approximation is loose, and propose a stricter version closer to the 0-1 loss, leading to a new, and better, RELIEF-based algorithm for classification. We then focus on a direct extension of the cosine similarity measure, defined as a normalized scalar product in a projected space. The associated algorithm is called the generalized Cosine simiLarity Algorithm (gCosLA). All of the algorithms are tested on many different datasets. A statistical test, the s-test, is employed to assess whether the results are significantly different. gCosLA performed statistically much better than SiLA on many of the datasets. Furthermore, SiLA and gCosLA were compared with many state-of-the-art algorithms, illustrating their well-foundedness
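A minimal sketch of similarity-based kNN with a parameterized cosine, in the spirit of SiLA and gCosLA, is shown below. This is illustrative only: the actual algorithms learn the matrix A via perceptron-style updates and handle symmetric and asymmetric matrices, while here A is a fixed diagonal (represented by a vector) and the function names are my own.

```python
import numpy as np

def gcos_sim(x, y, A):
    """Generalized cosine sketch: bilinear form x'Ay normalized in the
    A-geometry; with A = ones this reduces to the standard cosine."""
    num = (x * A) @ y
    return num / (np.sqrt((x * A) @ x) * np.sqrt((y * A) @ y) + 1e-12)

def knn_predict(x, X, labels, A, k=3):
    """Classify x by majority vote among its k most *similar* neighbours
    (note: highest similarity, not smallest distance)."""
    sims = np.array([gcos_sim(x, xi, A) for xi in X])
    top = np.argsort(-sims)[:k]                        # k largest similarities
    vals, counts = np.unique(labels[top], return_counts=True)
    return vals[np.argmax(counts)]
```

Learning would then consist of adjusting the entries of A so that same-class pairs score higher than different-class pairs.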
Mobarek, Iyad. "Conception d'un système national des équipements médicaux automatiques pour améliorer la performance et réduire les coûts d'investissements et d'exploitations des dispositifs médicaux." Compiègne, 2006. http://www.theses.fr/2006COMP1623.
Full textThis thesis describes the different phases of developing, implementing and evaluating a unique Clinical Engineering System (CES) based in Jordan. This includes establishing and then automating all technical issues and information related to medical equipment in 29 hospitals, 685 health centers, 332 dental clinics, 348 pediatric and mother care clinics and 23 blood banks. Every piece of medical equipment was assigned an identity code that can be recognized through a bar-code scanning system, and all other involved parameters (hospitals, personnel, spare parts, workshops, etc.) are likewise coded comprehensively. The fully automated CES presents a powerful system, implemented over a network covering the different locations of the Directorate of Biomedical Engineering (DBE) at the Ministry of Health all over the country; it is the first comprehensive CES to be implemented at the national level, and the automated system can read and report in both Arabic and English. Compared to international figures, the developed clinical engineering system has enhanced the performance of medical equipment, including its availability (uptime), up to the best available international levels at extremely low cost. The complete system proved to be an invaluable tool to manage, control and report all the different parameters concerning medical equipment in the considered clinical engineering system. The system was evaluated and found to be reliable, effective and unique compared to internationally available systems, and it presents a successful model for other countries
Piolle, Guillaume. "Agents utilisateurs pour la protection des données personnelles : modélisation logique et outils informatiques." Phd thesis, Grenoble 1, 2009. https://theses.hal.science/tel-00401295.
Full textUsage in the domain of multi-agent systems has evolved so as to integrate human users more closely in the applications. Manipulation of private information by autonomous agents has then called for an adapted protection of personal data. This work first examines the legal context of privacy protection and the various computing methods aiming at personal data protection. Surveys show a significant need for AI-based solutions, allowing both reasoning on the regulations themselves and automatically adapting an agent's behaviour to these regulations. The Privacy-Aware (PAw) agent model and the Deontic Logic for Privacy, designed to deal with regulations coming from multiple authorities, are proposed here in this perspective. The agent's normative reasoning component analyses its heterogeneous context and provides a consistent policy for dealing with personal information. PAw agent then automatically controls its own usage of the data with regard to the resulting policy. In order to enforce policies in a remote manner, we study the different distributed application architectures oriented towards privacy protection, several of them based on the principles of Trusted Computing. We propose a complementary one, illustrating a different possible usage of this technology. Implementation of the PAw agent allows demonstration of its principles over three scenarios, thus showing the adaptability of the agent to its normative context and the influence of the regulations over the behaviour of the application
Yakan, Hadi. "Security of V2X communications in 3GPP - 5G cellular networks." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG077.
Full textThe introduction of 5G networks has brought significant technical improvements; a new era of Vehicle-to-Everything (V2X) communications has emerged, offering new and advanced safety, efficiency, and other driving-experience applications in Intelligent Transport Systems (ITS). However, with new features come new security challenges, especially in the realm of Vehicle-to-Network (V2N) communications. This thesis focuses on the application of misbehavior detection in V2X communications within 5G networks. First, we introduce a novel misbehavior detection system integrated with the 5G core (5GC) network to detect and prevent V2X attacks. Then, we propose a collaboration scheme between detection nodes to improve detection results in 5G edge networks. Last, we leverage Federated Learning to enable distributed training, and we assess its performance on a wide variety of V2X attacks
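Federated training of the kind mentioned here typically relies on a server-side aggregation of client models. A minimal FedAvg-style aggregation step (a generic sketch, not this thesis's system) averages client parameters weighted by local dataset size:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg server step sketch: combine client model parameter vectors into
    a global model, weighting each client by its local dataset size so that
    raw data never leaves the clients."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)                 # (n_clients, n_params)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)
```

In a misbehavior-detection setting, each detection node would train locally on its own observed V2X traffic and only share model updates with the aggregator.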
Bennani, Youssef. "Caractérisation de la diversité d'une population à partir de mesures quantifiées d'un modèle non-linéaire. Application à la plongée hyperbare." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4128/document.
Full textThis thesis proposes a new method for nonparametric density estimation from censored data, where the censoring regions can have arbitrary shapes and are elements of partitions of the parametric domain. This study was motivated by the need to estimate the distribution of the parameters of a biophysical model of decompression, in order to predict the risk of decompression sickness. In this context, the observations correspond to quantified counts of bubbles circulating in the blood of a set of divers who explored a variety of diving profiles (depth, duration); the biophysical model predicts the gas volume produced along a given diving profile for a diver with known biophysical parameters. In a first step, we point out the limitations of the classical nonparametric maximum-likelihood estimator. We propose several methods for its computation and show that it suffers from several problems: in particular, it concentrates the probability mass in a few regions only, which makes it inappropriate for describing a natural population. We then propose a new approach relying both on the maximum-entropy principle, to ensure a convenient regularity of the solution, and on the maximum-likelihood criterion, to guarantee a good fit to the data. It consists in searching for the probability law with maximum entropy whose maximum deviation from the empirical averages is set by maximizing the data likelihood. Several examples illustrate the superiority of our solution over the classical nonparametric maximum-likelihood estimator, in particular concerning generalization performance.
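The core of the approach described above is a maximum-entropy law constrained by empirical averages. A toy sketch of that core (grid, moment value, and the exact-constraint simplification are all assumptions; the thesis instead relaxes the constraint by a likelihood-tuned tolerance):

```python
import numpy as np
from scipy.optimize import brentq

# Maximum-entropy distribution on a discrete grid whose mean matches an
# empirical average. The maxent solution is a Gibbs law p_i ∝ exp(lam*x_i);
# we solve for the multiplier lam so that the moment constraint holds.
grid = np.linspace(0.0, 1.0, 21)   # support of the density (illustrative)
empirical_mean = 0.3               # moment estimated from the data (invented)

def gibbs(lam):
    """Gibbs distribution on the grid for a given Lagrange multiplier."""
    w = np.exp(lam * grid)
    return w / w.sum()

# Root-find the multiplier that makes the model mean equal the data mean.
lam_star = brentq(lambda lam: gibbs(lam) @ grid - empirical_mean, -50.0, 50.0)
p_maxent = gibbs(lam_star)
```

Unlike the classical nonparametric MLE criticized in the abstract, the resulting law spreads mass smoothly over the whole support instead of concentrating it in a few regions.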
Chelouati, Mohammed. "Contributions to safety assurance of autonomous trains." Electronic Thesis or Diss., Université Gustave Eiffel, 2024. http://www.theses.fr/2024UEFL2014.
Full textThe deployment of autonomous trains raises many questions and challenges, particularly concerning the required safety level, which must be globally at least equivalent to that of existing systems, along with how to achieve it. Conventionally, ensuring the safety of a global railway system or a defined subsystem includes analyzing risks and effectively handling dangerous situations. Therefore, for any technical railway system, whether conventional, automatic, or autonomous, an acceptable level of safety must be ensured. In the context of autonomous trains, safety challenges include aspects related to the use of artificial intelligence models, the transfer of tasks and responsibilities from the driver to automatic decision-making systems, and issues related to autonomy, such as mode transitions and the management of degraded modes. Thus, the safety demonstration methodology for autonomous trains must take into account the risks generated by all these aspects. In other words, it must define all the safety activities (related to the introduction of autonomy and artificial intelligence systems) complementary to a conventional safety demonstration. In this context, this dissertation proposes three main contributions towards the development of a safety assurance methodology for autonomous trains. First, we establish a high-level framework for structuring and presenting safety arguments for autonomous trains. This framework follows a goal-based approach represented with the graphical Goal Structuring Notation (GSN). Then, we propose a model for the situational awareness of the automated driving system of an autonomous train, integrating the process of dynamic risk assessment. This model enables the automated driving system to perceive, understand, anticipate, and adapt its behavior to unknown situations while making safe decisions. The model is illustrated through a case study related to obstacle detection and avoidance.
Finally, we develop a decision-making approach based on dynamic risk assessment. It relies on Partially Observable Markov Decision Processes (POMDPs) and aims to ensure continuous environmental monitoring to guarantee operational safety, particularly collision prevention. The approach maintains an acceptable level of risk through continuous estimation and updating of the train's operational state and of the environmental perception data.
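The POMDP-based idea of maintaining a risk estimate from uncertain perception can be sketched with a two-state belief filter: the train tracks the probability that an obstacle is present and brakes when that probability exceeds an acceptable-risk threshold. States, sensor model, and threshold are all invented for illustration, not taken from the thesis:

```python
# Bayesian belief update over {clear, obstacle} from noisy sensor
# readings, with a decision rule that brakes once the estimated
# collision risk exceeds the acceptable threshold.

SENSOR_MODEL = {            # P(observation | true state), illustrative values
    "clear":    {"no_echo": 0.9, "echo": 0.1},
    "obstacle": {"no_echo": 0.2, "echo": 0.8},
}
RISK_THRESHOLD = 0.3        # acceptable probability of an obstacle ahead

def update_belief(p_obstacle, observation):
    """One Bayes-filter step on the belief that an obstacle is present."""
    num = SENSOR_MODEL["obstacle"][observation] * p_obstacle
    den = num + SENSOR_MODEL["clear"][observation] * (1.0 - p_obstacle)
    return num / den

def decide(p_obstacle):
    """Risk-based decision: keep the estimated risk below the threshold."""
    return "brake" if p_obstacle > RISK_THRESHOLD else "proceed"

belief = 0.05               # prior: the track is most likely clear
for obs in ["echo", "echo"]:        # two consecutive positive detections
    belief = update_belief(belief, obs)
action = decide(belief)
```

After two positive detections the belief rises above the threshold and the rule commands braking; a full POMDP would additionally optimize the action over future belief trajectories rather than thresholding the current one.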
Porphyre, Vincent. "Comment concilier le développement des systèmes d'élevage porcin et l'amélioration de la qualité des produits animaux : modélisation multi-agents appliquée au secteur de l'élevage porcin à Madagascar pour la conception et l'évaluation de scénarii de lutte contre la cysticercose." Thesis, La Réunion, 2019. http://www.theses.fr/2019LARE0025.
Full textPorcine cysticercosis, a neglected tropical parasitic disease due to Taenia solium, with a cycle involving humans and pigs, is responsible for 50,000 deaths each year, mainly in the developing countries. Our PhD work has tried to explore the epidemiological situation of this disease in the swine population of Madagascar and to understand the determinants explaining its prevalence in the epidemiological and economic context of the country. As a first step, abattoir surveys estimated an apparent prevalence of 4.6% [4.2-5.0%] at the national level and a corrected prevalence of 21.03% [19.18-22.87%] taking into account the sensitivity of the method (veterinary inspection by macroscopic observation). In a second step, we modeled the environment-animal-human link in the context of Malagasy highlands where pig farming is semi-intensified but porcine cysticercosis remains endemic. Our multi-agent model, developed under Cormas, allowed us to model the simplified behaviors of human and animal actors as well as health and environmental processes. A multivariate sensitivity analysis helped us better understand the model's responses to the input parameters used. It was sensitive primarily to parameters describing (i) the exposure of animals to food contaminated with T. solium eggs, including the distribution of non-farmer-controlled feed and access to contaminated environment, and (ii) the infectious capacity of T. solium eggs, their excretion and survival in the environment. This exploratory approach allowed us to identify the important parameters, highlighting the research needs to be carried out to reinforce the likelihood of the model results and help us to test the impact of the control scenarios against cysticercosis in pig production areas characteristic of the country's situation
Taheri, Sojasi Yousef. "Modeling automated legal and ethical compliance for trustworthy AI." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS225.
Full textThe advancements in artificial intelligence have led to significant legal and ethical issues related to privacy, bias, accountability, etc. In recent years, many regulations have been put in place to limit or mitigate the risks associated with AI. Compliance with these regulations is necessary for the reliability of AI systems and to ensure that they are being used responsibly. In addition, reliable AI systems should also be ethical, ensuring alignment with ethical norms. Compliance with applicable laws and adherence to ethical principles are essential for most AI applications. We investigate this problem from the point of view of AI agents: in other words, how an agent can ensure the compliance of its actions with legal and ethical norms. We are interested in approaches based on logical reasoning to integrate legal and ethical compliance into the agent's planning process. The specific domain in which we pursue our objective is the processing of personal data, i.e., the agent's actions involve the use and processing of personal data. A regulation that applies in this domain is the General Data Protection Regulation (GDPR). In addition, the processing of personal data may entail certain ethical risks with respect to privacy or bias. We address this issue through a series of contributions presented in this thesis. We start with the issue of GDPR compliance. We adopt Event Calculus with Answer Set Programming (ASP) to model agents' actions and use it for planning and checking compliance with the GDPR. A policy language is used to represent the GDPR obligations and requirements. Then we investigate the issue of ethical compliance. A pluralistic ordinal utility model is proposed that allows one to evaluate actions based on moral values. This model is based on multiple criteria and uses voting systems to aggregate evaluations on an ordinal scale. We then integrate this utility model and the legal compliance framework in a Hierarchical Task Network (HTN) planner.
In this contribution, legal norms are treated as hard constraints and ethical norms as soft constraints. Finally, as a last step, we further explore the possible combinations of legal and ethical compliance within the planning agent and propose a unified framework. This framework captures the interactions and conflicts between legal and ethical norms and is tested in a use case with AI systems managing the delivery of health-care items.
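The hard/soft split described above can be sketched in a few lines: plans that violate a legal norm are discarded outright, while ethical norms only penalize a plan's score. The plans, the consent rule, and the penalty are hypothetical stand-ins, not the thesis's HTN formalization:

```python
# Legal norms as hard constraints (filter), ethical norms as soft
# constraints (ranking penalty). All plans and norms are invented.

plans = [
    {"name": "A", "actions": ["collect_consent", "deliver_item"]},
    {"name": "B", "actions": ["deliver_item"]},                       # no consent
    {"name": "C", "actions": ["collect_consent", "share_data", "deliver_item"]},
]

def legally_compliant(plan):
    """Hard constraint: GDPR-style consent must precede data processing."""
    return "collect_consent" in plan["actions"]

def ethical_penalty(plan):
    """Soft constraint: sharing personal data costs moral utility."""
    return 1 if "share_data" in plan["actions"] else 0

admissible = [p for p in plans if legally_compliant(p)]   # drop illegal plans
best = min(admissible, key=ethical_penalty)               # rank the rest ethically
```

Plan B is eliminated by the hard legal filter; among the remaining plans, A beats C because it incurs no ethical penalty, mirroring the hard-versus-soft asymmetry.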
Mokhtari, Aimed. "Diagnostic des systèmes hybrides : développement d'une méthode associant la détection par classification et la simulation dynamique." Phd thesis, INSA de Toulouse, 2007. http://tel.archives-ouvertes.fr/tel-00200034.
Full text
Le, Vinh Thinh. "Security and Trust in Mobile Cloud Computing." Electronic Thesis or Diss., Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1148.
Full textLiving in the cyber era, we see dozens of new technologies born every day with the promise of making human life more comfortable, convenient, and safe. Among them, mobile computing has risen to become an essential part of daily life. Mobile devices have become our best companions in daily activities, serving us from simple activities like entertainment to complicated ones such as business operations. Given these important roles, mobile devices deserve to operate in an environment they can trust, so that they can serve us better. In this thesis, we investigate ways to secure mobile devices, from the primitive security level (trusted platforms) to a sophisticated one (bio-inspired intelligence). More precisely, after addressing the challenges of mobile cloud computing (MCC), we study a real-world case of mobile cloud computing, in terms of energy efficiency and performance, and propose a demonstration of a particular MCC model, called the Droplock system. Moreover, taking advantage of Trusted Platform Module functionality, we introduce a novel remote attestation scheme to secure mobile devices in the context of a mobile-cloud-based solution. To enhance the security level, we combine fuzzy logic with an ant colony system to assess trust and reputation, securing another mobile cloud computing model based on the cloudlet notion.