Theses on the topic "Efficacité des algorithmes"
Cite a source in APA, MLA, Chicago, Harvard and many other citation styles
Browse the top 45 dissertations (master's and doctoral theses) on the research topic "Efficacité des algorithmes".
Zhang, Jian Ping. "Contrôle de flux pour le service ABR des réseaux ATM : équité et efficacité". Versailles-St Quentin en Yvelines, 1998. http://www.theses.fr/1998VERS0010.
Petit, Franck. "Efficacité et simplicité dans les algorithmes distribués auto-stabilisants de parcours en profondeur de jeton". Amiens, 1998. http://www.theses.fr/1998AMIE0103.
Hamdaoui, Mohamed. "Optimisation multicritère de l’efficacité propulsive de mini-drônes à ailes battantes par algorithmes évolutionnaires". Paris 6, 2009. http://www.theses.fr/2009PA066448.
Doncel, Josu. "Efficiency of distributed queueing games and of path discovery algorithms". Thesis, Toulouse, INSA, 2015. http://www.theses.fr/2015ISAT0007/document.
This thesis deals with the efficiency of distributed resource-sharing algorithms and of online path discovery algorithms. In the first part of the thesis, we analyse a game in which users pay for using a shared resource. The resource allocated to a user is directly proportional to its payment. Each user wants to minimize its payment while ensuring a certain quality of service. This problem is modelled as a non-cooperative resource-sharing game. Due to the lack of analytical expressions for the underlying queuing discipline, we are able to give the solution of the game only under some assumptions. For the general case, we develop an approximation based on a heavy-traffic result and we validate its accuracy numerically. In the second part, we study the efficiency of load-balancing games, i.e., we compare the loss in performance of non-cooperative decentralized routing with centralized routing. We show that the Price of Anarchy (PoA) is a very pessimistic measure, since it is attained only in pathological cases. In most scenarios, distributed implementations of load balancing perform nearly as well as the optimal centralized implementation. In the last part of the thesis, we analyse the optimal path discovery problem in complete graphs. In this problem, the values of the edges are unknown but can be queried. For a given function applied to paths, the goal is to find a best-value path from a source to a given destination while querying the fewest edges. We propose the query ratio as an efficiency measure for algorithms that solve this problem. We prove a lower bound for any algorithm that solves this problem and propose an algorithm with query ratio strictly less than 2.
Belmega, Elena Veronica. "Problèmes d'allocations de ressources dans les réseaux MIMO sans fil distribués". Paris 11, 2010. http://www.theses.fr/2010PA112259.
In this thesis manuscript, the main objective is to study wireless networks where the node terminals are equipped with multiple antennas. Rising topics such as self-optimizing networks, green communications and distributed algorithms have been approached mainly from a theoretical perspective. To this aim, we have used a diversified spectrum of tools from Game Theory, Information Theory, Random Matrix Theory and Learning Theory in Games. We start our analysis with the study of the power allocation problem in distributed networks. The transmitters are assumed to be autonomous and capable of allocating their powers to optimize their Shannon achievable rates. A non-cooperative game-theoretical framework is used to investigate the solution to this problem. Distributed algorithms which converge towards the optimal solution, i.e., the Nash equilibrium, have been proposed. Two different approaches have been applied: iterative algorithms based on the best-response correspondence, and reinforcement learning algorithms. Another major issue is related to the energy-efficiency aspect of the communication. In order to achieve high transmission rates, the power consumption is also high. In networks where power consumption is the bottleneck, the Shannon achievable rate is no longer a suitable performance metric. This is why we have also addressed the problem of optimizing an energy-efficiency function.
Tan, Pauline. "Précision de modèle et efficacité algorithmique : exemples du traitement de l'occultation en stéréovision binoculaire et de l'accélération de deux algorithmes en optimisation convexe". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX092/document.
This thesis is split into two relatively independent parts. The first part is devoted to the binocular stereovision problem, specifically to occlusion handling. An analysis of this phenomenon leads to a regularity model which includes a convex visibility constraint. The resulting energy functional is minimized by convex relaxation. The occluded areas are then detected thanks to the horizontal slope of the disparity map, and densified. Another method with occlusion handling was proposed by Kolmogorov and Zabih. Because of its efficiency, we adapted it to two auxiliary problems encountered in stereovision, namely the densification of sparse disparity maps and the subpixel refinement of pixel-accurate maps. The second part of this thesis studies two convex optimization algorithms, for which an acceleration is proposed. The first one is the Alternating Direction Method of Multipliers (ADMM); a slight relaxation in the parameter choice is shown to enhance the convergence rate. The second one is an alternating proximal descent algorithm, which allows a parallel approximate resolution of the Rudin-Osher-Fatemi (ROF) pure denoising model in the colour-image case. A FISTA-like acceleration is also proposed.
Machart, Pierre. "Coping with the Computational and Statistical Bipolar Nature of Machine Learning". Phd thesis, Aix-Marseille Université, 2012. http://tel.archives-ouvertes.fr/tel-00771718.
Djeddour, Khédidja. "Estimation récursive du mode et de la valeur modale d'une densité : test d'ajustement de loi". Versailles-St Quentin en Yvelines, 2003. http://www.theses.fr/2003VERS0016.
Testo completoBeltaief, Slim. "Algorithmes optimaux de traitement de données pour des systèmes complexes d'information et télécommunication dans un environnement incertain". Thesis, Normandie, 2017. http://www.theses.fr/2017NORMR056/document.
This thesis is devoted to the problem of nonparametric estimation for continuous-time regression models. We consider the problem of estimating an unknown periodic function S. This estimation is based on observations generated by a stochastic process; these observations may be in continuous or discrete time. To this end, we construct a series of projection estimators and thus approximate the unknown function S by a finite Fourier series. In this thesis we consider the estimation problem in the adaptive setting, i.e., in the situation when the regularity of the function S is unknown. We develop a new adaptive method based on the model selection procedure proposed by Konev and Pergamenshchikov (2012). First, this procedure gives us a family of estimators; then we choose the best possible one by minimizing a cost function. We also give an oracle inequality for the risk of our estimators and establish the minimax convergence rate.
Luo, Jia. "Algorithmes génétiques parallèles pour résoudre des problèmes d'ordonnancement de tâches dynamiques de manière efficace en prenant en compte l'énergie". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30001.
Due to new government legislation, customers' environmental concerns and the continuously rising cost of energy, energy efficiency has become an essential parameter of industrial manufacturing processes in recent years. Most efforts considering energy issues in scheduling have focused on static scheduling, but real-world scheduling problems are dynamic, with uncertain new jobs arriving during execution. In this thesis, two energy-efficient dynamic scheduling problems are studied. Model I analyses the total tardiness and the makespan under a peak power limitation in a flexible flow shop with newly arriving jobs. A periodic complete rescheduling approach is adopted to represent the optimization problem. Model II concerns minimizing total tardiness and total energy consumption in a job shop with new urgent arriving jobs. An event-driven schedule repair approach is used to update the schedule. As an adequate renewed scheduling plan needs to be obtained within a short response time in a dynamic environment, two parallel Genetic Algorithms (GAs) are proposed to solve these two models respectively. Parallel GA I is a CUDA-based hybrid model consisting of an island GA at the upper level and a fine-grained GA at the lower level; it combines the merits of the two hierarchical layers and takes full advantage of CUDA's compute capability. Parallel GA II is a dual heterogeneous design composed of a cellular GA and a pseudo GA; the islands with these two different structures increase population diversity and can be parallelized on GPUs simultaneously with a multi-core CPU. Finally, numerical experiments show that our approaches not only solve the problems flexibly, but also yield competitive results while reducing computation time.
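The island model named in this abstract can be pictured with a minimal, generic sketch; this is not the CUDA implementation of the thesis, and the OneMax fitness, operators and parameters below are illustrative assumptions:

```python
import random

def island_ga(fitness, n_islands=2, pop_size=20, genes=10,
              generations=60, migrate_every=10, seed=1):
    """Toy island-model GA maximizing `fitness` over fixed-length bit strings."""
    rnd = random.Random(seed)
    islands = [[[rnd.randint(0, 1) for _ in range(genes)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for gen in range(generations):
        for isl in islands:
            isl.sort(key=fitness, reverse=True)
            elite = isl[:pop_size // 2]              # truncation selection
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = rnd.sample(elite, 2)
                cut = rnd.randrange(1, genes)
                child = a[:cut] + b[cut:]            # one-point crossover
                i = rnd.randrange(genes)
                child[i] ^= rnd.random() < 0.1       # rare bit-flip mutation
                children.append(child)
            isl[:] = elite + children
        if gen % migrate_every == 0:                 # ring migration of bests
            best = [max(isl, key=fitness) for isl in islands]
            for i, isl in enumerate(islands):
                isl[-1] = best[(i - 1) % n_islands][:]
    return max((ind for isl in islands for ind in isl), key=fitness)

best = island_ga(sum)   # OneMax: fitness is the number of ones
```

The islands evolve independently and only exchange their best individuals periodically, which is what makes the model easy to parallelize across GPU blocks or CPU cores.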
Harbaoui Dridi, Imen. "Optimisation heuristique pour la résolution du m-PDPTW statique et dynamique". Thesis, Ecole centrale de Lille, 2010. http://www.theses.fr/2010ECLI0031/document.
Nowadays, the goods transport problem occupies an important place in the economic life of modern societies. The PDPTW (Pickup and Delivery Problem with Time Windows) is one in which a large number of researchers have taken an interest. It is a vehicle routing optimization problem in which transport requests between suppliers and customers must be met while satisfying precedence and capacity constraints. The research developed in this thesis concerns the resolution of the PDPTW with multiple vehicles (m-PDPTW), treated in two cases: static and dynamic. We have proposed several approaches to solving the m-PDPTW, based on genetic algorithms, multicriteria optimization and lower bounds, in order to minimize criteria such as the number of vehicles, the total travel cost and the total tardiness. Computational results indicate that the proposed approach gives good results, with a total tardiness equal to zero at a tolerable cost.
Chaari, Tarek. "Un algorithme génétique pour l'ordonnancement robuste : application au problème d'ordonnancement du flow shop hybride". Phd thesis, Université de Valenciennes et du Hainaut-Cambresis, 2010. http://tel.archives-ouvertes.fr/tel-00551511.
Allen, Benoît. "Optimisation d'échangeurs de chaleur : condenseur à calandre, réseau d'échangeurs de chaleur et production d'eau froide". Thesis, Université Laval, 2010. http://www.theses.ulaval.ca/2010/27364/27364.pdf.
Testo completoXu, Yanni. "Optimization of the cutting-related processes for consumer-centered garment manufacturing". Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I015.
This work aims to optimize garment production and resolve the dilemma between personalization and cost in the context of mass customization. First, practical mass customization methods for the cutting-related processes (including sizing) are proposed, adapted from the industrial practice of traditional mass production. Owing to their good performance in terms of personalization and cost, additional sizes are adopted in the further optimization of specific cutting-related processes, i.e., sizing, cutting order planning and marker making, using exact methods and artificial intelligence techniques. A genetic algorithm searches for the best set of additional sizes, integer programming is employed for the best cutting order plan (i.e., the lay planning with the corresponding markers), and a multi-linear regression and a neural network are applied to estimate marker lengths. The proposed mass customization methods are shown to be efficient. The underlying indirect relationship between personalization and cost is established, and with the help of the optimized cutting-related processes, the balance between personalization and cost is demonstrated. The estimation of marker length reduces the marker-making workload and provides marker lengths for cutting cost estimation with high efficiency and acceptable accuracy. All of the above enable garment production to shift from mass production to mass customization.
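The marker-length estimation step can be sketched with a toy multi-linear regression; the features (piece count, total piece area) and the figures below are invented for illustration and are not the thesis data:

```python
import numpy as np

# Hypothetical training set: (number of garment pieces, total piece area in m2)
# against the observed marker length in metres.
X = np.array([[10, 3.2], [14, 4.1], [20, 6.0], [25, 7.4], [30, 9.1]], float)
y = np.array([2.1, 2.7, 3.9, 4.8, 5.9])

A = np.c_[X, np.ones(len(X))]                 # append an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit

def predict_marker_length(pieces, area):
    """Estimate the marker length for a new cutting order."""
    return float(coef @ [pieces, area, 1.0])
```

Such a fitted model lets cutting cost be estimated without actually making the marker, which is the efficiency gain the abstract refers to.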
Zaarour, Farah. "Channel estimation algorithms for OFDM in interference scenarios". Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10105/document.
The scarcity of the radio spectrum and the increasing demand for bandwidth make it vital to optimize spectrum use. While maximum efficiency should be attained, a minimal interference level should be maintained. OFDM has been selected as the modulation scheme in several wireless standards. Channel estimation is a fundamental task in OFDM, and it becomes even more challenging in the presence of interference. In this thesis, our aim is to propose channel estimation algorithms for OFDM systems in the presence of interference, where conventional channel estimators designed for OFDM fail. First, we consider the cognitive radio environment and propose a novel channel estimation framework for fast time-varying channels in OFDM with narrowband interference (NBI). This is accomplished through an expectation-maximization (EM) based algorithm, a formulation that allows us to obtain a closed-form expression for the estimation of the noise power. In this thesis, we are particularly interested in a very recent superimposed pilot scheme for OFDM (DNSP). DNSP assures interference-free pilots at the expense of data interference. Given the novelty of DNSP, a suitable receiver has to be designed for it. We first propose a low-complexity interference canceler (IC) for slow time-varying channels with DNSP; the performance of the proposed IC is guaranteed when the channel estimation error is small. As another contribution, we extend the design of the approximated IC for DNSP so as to take channel estimation errors into account. Finally, we consider robust channel estimation, which can be viewed as one of the perspectives of this thesis.
Ben Jmaa Chtourou, Yomna. "Implémentation temps réel des algorithmes de tri dans les applications de transports intelligents en se basant sur l'outil de synthèse haut niveau HLS". Thesis, Valenciennes, 2019. http://www.theses.fr/2019VALE0013.
Intelligent transport systems play an important role in minimizing accidents, traffic congestion and air pollution. Among these systems is the avionics domain, which in several cases uses sorting algorithms, one of the important operations for real-time embedded applications. However, technological evolution is moving towards ever more complex architectures to meet application requirements. In this respect, designers find an ideal solution in reconfigurable computing, based on heterogeneous CPU/FPGA architectures that combine multi-core processors (CPUs) and FPGAs, offering high performance and adaptability to the real-time constraints of the application. The main objective of my work is to develop hardware implementations of sorting algorithms on the heterogeneous CPU/FPGA architecture, using the high-level synthesis (HLS) tool to generate the RTL design from the behavioural description. This step requires additional effort on the part of the designer in order to obtain an efficient hardware implementation, using several optimizations with different use cases (software, optimized and non-optimized hardware) and for several permutations/vectors produced with a permutation generator based on the Lehmer method. To evaluate performance, we measured the runtime, standard deviation and number of resources used by the sorting algorithms, considering data sizes ranging from 8 to 4096 items, and compared the performance of these algorithms. These algorithms will be integrated into decision-support applications, such as flight-plan planning.
Talbourdet, Fabien. "Développement d'une démarche d’aide à la connaissance pour la conception de bâtis performants". Thesis, Vaulx-en-Velin, Ecole nationale des travaux publics, 2014. http://www.theses.fr/2014ENTP0010/document.
Both the aspirations of users and improvements in thermal regulations require that the comfort and energy efficiency of new buildings improve. In addition to these requirements, regulations are strengthening in many fields such as acoustics, fire safety and mechanical performance. The combined effect of these factors makes it increasingly hard to design buildings. This thesis presents a knowledge-aid approach for designing high-performance buildings based on an optimization method. The approach aims to give architects and design offices, at the beginning of the design stage, clear knowledge of the potential of a project (through the exploration of various options), allowing them to design the best possible high-performance buildings. This potential is evaluated using external and internal geometric parameters as well as the energy characteristics of buildings. The approach also allows them to assess geometries and design solutions intended for their projects. The approach is applied to an office building in Lyon, France. For the tested case, it quickly obtains efficient solutions and also finds, for some parameters, values that yield efficient solutions on part of the Pareto front or on the entire front. This application also shows that there may be solutions which are close in terms of energy needs and cost but very different in their design parameters. This issue could affect the robustness of the approach but highlights a new problem; the thesis thus lays the foundation for a new study on this topic.
Abdelli, Wassim. "Modélisation du rayonnement électromagnétique de boîtiers de blindage par sources équivalentes : application aux matériaux composites". Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112093.
The modelling of composite materials is a field of study attracting increasing interest. Indeed, the popularization of such materials requires the development of new models in order to better understand their behaviour. The automotive and aerospace industries strive to optimize material selection based on the specificities of each application, in order to reduce the weight of equipment and to provide better mechanical and thermal characteristics. Composite materials have also been presented as a potential alternative to metals for electromagnetic shielding. Their generalization in this context is nevertheless hampered by a relative lack of knowledge of their electromagnetic behaviour. It is therefore necessary to have methodologies to evaluate the shielding effectiveness of composite enclosures and to identify the corresponding mechanisms and parameters. Moreover, the deployment of these alternative materials on a larger scale is hindered by other constraints, related mainly to the difficulty of a complete 3D analysis of complex systems including composite enclosures. In fact, the topological complexity of certain components greatly complicates their integration into existing electromagnetic simulation tools. Furthermore, the scale ratio between the different levels (system, composite enclosure, electronic card, circuit, component) is very large; this disparity of scale considerably complicates the geometrical discretization of the entire system. The combination of these constraints leads to real difficulties for EMC engineers, which is why it is necessary to develop efficient models to facilitate the 3D analysis of the complete host system. This work is therefore divided into two parts. First, we present a methodology to calculate the shielding effectiveness of composite enclosures of electronic equipment; the goal is to evaluate the potential of these materials in terms of electromagnetic shielding and to identify the main contributing factors. Second, in order to ensure the compliance of complex electronic systems incorporating composite shielding enclosures with the stringent requirements of EMC, we propose a methodology for modelling the radiation of electronic devices. This modelling, based on genetic algorithms, replaces the radiating devices and enclosures (especially composite ones) with a set of elementary dipoles. The equivalent "black box" model is thus representative of the entire structure in terms of high-frequency electromagnetic radiation and is easily integrated into the mesh of host structures. This multipolar model provides spatial and frequency predictions of the electric and magnetic fields, making it possible, among other things, to compute the shielding effectiveness of the radiating enclosure in space, thereby giving a way to quantify its disruptive impact on its environment. Moreover, this approach simplifies the 3D analysis of a complete system comprising composite enclosures by controlling the EM behaviour at all levels: system, enclosures, cards, circuits and components.
Belmega, Elena Veronica. "Problèmes d'allocation de ressources dans les réseaux MIMO sans fil distribués". Phd thesis, Université Paris Sud - Paris XI, 2010. http://tel.archives-ouvertes.fr/tel-00556223.
Testo completoAl-Qaseer, Firas Abdulmajeed. "Scheduling policies considering both production duration and energy consumption criteria for environmental management". Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC028/document.
We present the challenges of environmental management and underline the importance of an energy-saving policy for companies. We propose a model to determine the energy balance of manufacturing by integrating the different productive and non-productive phases, and define two objectives: minimizing production time and energy consumption. We apply this model to the scheduling of flexible job-shop workshops. To determine the optimal solution we use two types of methods. The first is genetic algorithms: we propose different types of algorithms to solve this multicriteria problem, for example evolving two populations, one minimizing the energy consumed and the other the production time, and crossing them to achieve the overall objective. The second is constraint programming: we find the optimal solution by developing a double tree to evaluate the energy consumed and the production time, building the algorithm either from the tasks to be performed on the machines or from the machines that will perform the tasks. We discuss the construction of the Pareto front to obtain the best solution. We finish by comparing the different approaches and discussing their relevance for problems of different sizes, and we offer several improvements and some leads for future research.
Valenti, Giacomo. "Secure, efficient automatic speaker verification for embedded applications". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS471.
This industrial CIFRE PhD thesis addresses automatic speaker verification (ASV) issues in the context of embedded applications. The first part of the thesis focuses on more traditional problems and topics. The first work investigates the minimum enrolment data requirements for a practical, text-dependent short-utterance ASV system. Contributions in part A of the thesis consist of a statistical analysis whose objective is to isolate text-dependent factors and prove they are consistent across different sets of speakers: for very short utterances, the influence of a specific text content on system performance can be considered a speaker-independent factor. Part B of the thesis focuses on neural network-based solutions. While it was clear that neural networks and deep learning were becoming state of the art in several machine learning domains, their use in embedded solutions was hindered by their complexity. Contributions described in the second part of the thesis comprise blue-sky, experimental research which tackles the substitution of hand-crafted, traditional speaker features in favour of operating directly on the audio waveform, and the search for optimal network architectures and weights by means of genetic algorithms. This work is the most fundamental contribution: lightweight, neuro-evolved network structures which are able to learn from the raw audio input.
Zou, Hang. "Goal oriented communications : the quantization problem". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG021.
The classic paradigm for designing a transmitter (encoder) and a receiver (decoder) is to design these elements by ensuring that the information reconstructed by the receiver is sufficiently close to the information that the transmitter formatted for sending over the communication medium. This is referred to as a criterion of fidelity or of reconstruction quality (measured, for example, in terms of distortion, bit error rate, packet error rate or communication outage probability). The problem with the classic paradigm is that it can lead to an unjustified investment in communication resources (oversizing of data storage space, a very high-speed and expensive communication medium, very fast components, etc.) and can even make exchanges more vulnerable to attacks. The reason is that the classic approach, based on the fidelity criterion, will typically lead in wireless networks to exchanges excessively rich in information, too rich with regard to the decision that the recipient of the information has to take; in the simplest case, this decision may even be binary, indicating that in theory a single bit of information could suffice. As it turns out, the engineer does not currently have at his disposal a methodology to design a transmitter-receiver pair suited to the intended use (or uses) of the recipient. Therefore, a new communication paradigm, named goal-oriented communication, is proposed to overcome this problem. The ultimate objective of goal-oriented communications is to achieve certain tasks or goals instead of merely improving the accuracy of the reconstructed signal; tasks are generally characterized by utility or cost functions to be optimized. In the present thesis, we focus on the quantization problem of goal-oriented communication, i.e., goal-oriented quantization.
We first formulate the goal-oriented quantization problem formally. Second, we propose an approach to solve the problem when only realizations of the utility function are available; a special scenario with extra knowledge about the regularity properties of the utility functions is treated as well. Third, we extend high-resolution quantization theory to the goal-oriented quantization problem and propose implementable schemes to design a goal-oriented quantizer. Fourth, the goal-oriented quantization problem is studied in the framework of games in strategic form; it is shown that goal-oriented quantization can improve the overall performance of the system when the famous Braess paradox exists. Finally, the Nash equilibrium of a multi-user multiple-input multiple-output multiple-access channel game, with energy efficiency as the utility, is studied and achieved by different methods.
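The contrast between fidelity-driven and goal-oriented quantization can be made concrete with a one-bit toy example; the threshold task and all numbers are illustrative assumptions, not the schemes proposed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 10_000)       # source samples on [0, 1]

def decision(v):                        # the receiver only needs this binary task
    return v > 0.7

# Fidelity-driven 1-bit quantizer: split at the median, reconstruct at cell means
recon_fid = np.where(x < 0.5, 0.25, 0.75)

# Goal-oriented 1-bit quantizer: split at the decision boundary instead
recon_goal = np.where(x < 0.7, 0.35, 0.85)

err_fid = np.mean(decision(recon_fid) != decision(x))    # task error ≈ 0.2
err_goal = np.mean(decision(recon_goal) != decision(x))  # task error = 0
```

Both quantizers spend the same single bit, but only the goal-oriented one aligns its cell boundary with the decision the recipient actually has to take.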
Abdelli, Abdenour. "Optimisation multicritère d'une chaîne éolienne passive". Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2007. http://tel.archives-ouvertes.fr/tel-00553540.
Mousset, Stéphane. "Estimation de la vitesse axiale à partir d'une séquence réelle d'images stéréoscopiques". Rouen, 1997. http://www.theses.fr/1997ROUES048.
Testo completoLahiani, Nouha. "Méthode hybride d'affectation des ressources humaines pour l'amélioration de la performance de la maintenance". Electronic Thesis or Diss., Paris 8, 2015. http://www.theses.fr/2015PA080037.
In this thesis, a decision-making tool for the maintenance management process, based on the assignment of human resources, is proposed in order to improve maintenance performance. Optimal maintenance performance is indispensable to guarantee the productivity and competitiveness of manufacturing companies. The proposed approach provides a framework of different possible levers to measure, evaluate, improve and optimize maintenance performance. The human resources assignment problem is considered, taking into account different constraints such as human resources availability, competences, and the management of the urgency degree of intervention requests. The proposed method is based on a discrete-event simulation model, providing a better representation of the maintenance service and better comprehension thanks to performance indicators. To improve and ultimately optimize the model, a simulation-based Pareto optimization method is introduced. The optimization module was coded as independent programs in order to provide control over the simulation-based optimization process. The proposed simulation-based optimization method finds good solutions in a reasonable amount of time. Applying this technique to an industrial case study, we show that it is more effective in detecting real faults than existing alternatives. The approach can be extended to cover other domains and other types of simulation models.
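Pareto optimization of the kind mentioned in this abstract rests on filtering non-dominated solutions; a minimal generic sketch, in which the objective vectors are invented examples rather than thesis results:

```python
def pareto_front(points):
    """Keep the non-dominated points, all objectives to be minimized."""
    def dominates(q, p):
        # q dominates p if q is no worse everywhere and differs somewhere
        return all(qi <= pi for qi, pi in zip(q, p)) and q != p
    return [p for p in points if not any(dominates(q, p) for q in points)]

# e.g. (maintenance cost, mean time to repair) for candidate assignments
candidates = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
front = pareto_front(candidates)   # -> [(1, 5), (2, 3), (4, 1)]
```

The quadratic scan is fine for the handfuls of simulated candidates a simulation-optimization loop produces per iteration; larger archives would call for non-dominated sorting.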
Boudargham, Nadine. "Competent QoS-aware and energy efficient protocols for body sensor networks". Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCD007.
Testo completoBody Sensor Networks (BSNs) are formed of medical sensors that gather physiological and activity data from the human body and its environment, and send them wirelessly to a personal device like Personal Digital Assistant (PDA) or a smartphone that acts as a gateway to health care. Collaborative Body Sensor Networks (CBSNs) are collection of BSNs that move in a given area and collaborate, interact and exchange data between each other to identify group activity, and monitor the status of single and multiple persons.In both BSN and CBNS networks, sending data with the highest Quality of Service (QoS) and performance metrics is crucial since the data sent affects people’s life. For instance, the sensed physiological data should be sent reliably and with minimal delay to take appropriate actions before it is too late, and the energy consumption of nodes should be preserved as they have limited capacities and they are expected to serve for a long period of time. The QoS in BSNs and CBSNs largely depends on the choice of the Medium Access Control (MAC) protocols, the adopted routing schemes, and the efficient and accuracy of anomaly detection.The current MAC, routing and anomaly detection schemes proposed for BSNs and CBSNs in the literature present many limitations and open the door toward more research and propositions in these areas. Thus this thesis work focuses on three main axes. The first axe consists in studying and designing new and robust MAC algorithms able to address BSNs and CBSNs' challenges. 
Standard MAC protocols are compared in high-traffic BSNs and a new MAC protocol is proposed for such environments; then an emergency-aware MAC scheme is presented to address the dynamic traffic requirements of BSNs, ensuring delivery of emergency data within strict delay requirements and energy efficiency of nodes during regular observations; moreover, a traffic- and mobility-aware MAC scheme is proposed for CBSNs to address both traffic and mobility requirements of these networks. The second axis consists of proposing a thorough and efficient routing scheme suitable for BSNs and CBSNs. First, different routing models are compared for CBSNs and a new routing scheme is proposed with the aim of reducing the delay of data delivery and increasing the network throughput and the energy efficiency of nodes. The proposed scheme is then adapted to BSNs' requirements to become a solid solution for the challenges faced by this network. The third axis involves proposing an adaptive sampling approach that guarantees high accuracy in the detection of emergency cases, while at the same time ensuring high energy efficiency of the sensors. In the three axes, the performance of the proposed schemes is qualitatively compared to existing algorithms in the literature; then simulations are carried out with respect to different performance metrics and under different scenarios to assess their efficiency and ability to face BSNs' and CBSNs' challenges. Simulation results demonstrate that the proposed MAC, routing and anomaly detection schemes outperform the existing algorithms and present strong solutions that satisfy BSNs' and CBSNs' requirements
Xiong, Haoyi. "Near-optimal mobile crowdsensing : design framework and algorithms". Thesis, Evry, Institut national des télécommunications, 2015. http://www.theses.fr/2015TELE0005/document.
Testo completoNowadays, there is an increasing demand to provide real-time environment information, such as air quality, noise level and traffic conditions, to citizens in urban areas for various purposes. The proliferation of sensor-equipped smartphones and the mobility of people are making Mobile Crowdsensing (MCS) an effective way to sense and collect information at a low deployment cost. In MCS, instead of deploying static sensors in urban areas, people with mobile devices play the role of mobile sensors to sense the information of their surroundings, and the communication network (3G, WiFi, etc.) is used to transfer data for MCS applications. Typically, an MCS application (or task) not only requires each participant's mobile device to be capable of receiving sensing tasks, performing sensing and returning sensed results to a central server; it also requires recruiting participants, assigning sensing tasks to them, and collecting sensed results that well represent the characteristics of the target sensing region. In order to recruit sufficient participants, the organizer of the MCS task should consider the energy consumption caused by MCS applications for each individual participant, as well as privacy issues; furthermore, the organizer should give each participant a certain amount of incentives as encouragement. In addition, in order to collect sensed results that well represent the target region, the organizer needs to ensure the sensing data quality of the sensed results, e.g., their accuracy and spatial-temporal coverage. 
With energy consumption, privacy, incentives and sensing data quality in mind, in this thesis we have studied four optimization problems of mobile crowdsensing and conducted the following four research works: • EEMC - In this work, the MCS task is split into a sequence of sensing cycles, and we assume each participant is given an equal amount of incentive for joining in each sensing cycle; further, given the target region of the MCS task, the task aims at collecting an expected number of sensed results from the target region in each sensing cycle. Thus, in order to minimize the total incentive payments and the total energy consumption of the MCS task while meeting the predefined data collection goal, we propose EEMC, which selects a minimal number of anonymous participants to join in each sensing cycle of the MCS task while ensuring that a minimum number of participants return sensed results. • EMC3 - In this work, we follow the same sensing cycle and incentive assumptions as EEMC; however, given a target region consisting of a set of subareas, the MCS task aims at collecting sensed results covering each subarea of the target region in each sensing cycle (namely, the full coverage constraint). Thus, in order to minimize the total incentive payments and the total energy consumption of the MCS task under the full coverage constraint, we propose EMC3, which selects a minimal number of anonymous participants to join in each sensing cycle of the MCS task while ensuring that at least one participant returns sensed results from each subarea. 
• CrowdRecruiter - In this work, we assume each participant is given an equal amount of incentive for joining in all sensing cycles of the MCS task; further, given a target region consisting of a set of subareas, the MCS task aims at collecting sensed results from a predefined percentage of subareas in each sensing cycle (namely, the probabilistic coverage constraint). Thus, in order to minimize the total incentive payments under the probabilistic coverage constraint, we propose CrowdRecruiter, which recruits a minimal number of participants for the whole MCS task while ensuring that the selected participants return sensed results from at least a predefined percentage of subareas in each sensing cycle. • CrowdTasker - In this work, we assume each participant is given a varied amount of incentives according to [...]
Hamini, Abdallah. "News algorithms for green wired and wireless communications". Phd thesis, INSA de Rennes, 2013. http://tel.archives-ouvertes.fr/tel-00903356.
Testo completoBen, Ayed Ramzi. "Eco-conception d’une chaine de traction ferroviaire". Thesis, Ecole centrale de Lille, 2012. http://www.theses.fr/2012ECLI0009/document.
Testo completoWith the introduction of environmental standards such as ISO 14001, the concerns of manufacturers in the railway industry are increasingly oriented toward the design of green products. One important issue when designing such products is controlling the cost impact and evaluating the price consumers agree to pay for a reduced environmental footprint. Eco-design of a railway train presents several challenges for the designer. The first is the complexity of the life cycle analysis (LCA) of its components. The second is the need to consider several environmental impacts at the design stage, given their number. Finally, railway components have different models with different granularities that can be used in the eco-design process. To overcome these problems, we propose a two-step method. The first step is to simplify the LCA of the railway train using environmental management software, taking the opportunity to build a malleable model that computes eleven impacts. The second step is to aggregate these impacts into a single indicator, which is later used as the environmental criterion in the eco-design process. In order to apply optimization tools, the eco-design problem is expressed as an optimization problem. Optimization algorithms solve this problem and find the optimal set of compromises between the environmental criterion and the cost of the railway product. This set of compromises is given as a graph called the Pareto front. In our work the cost is represented by the mass of the component, and several optimization algorithms have been adapted to serve in the eco-design process
Murad, Nour Mohammad. "Synchronisation, diversité et démodulation de l'interface DS-WCDMA du système de troisième génération de radio mobile : l'U. M. T. S". École Nationale Supérieure des télécommunications, 2001. http://www.theses.fr/2001ENST0020.
Testo completoLahiani, Nouha. "Méthode hybride d'affectation des ressources humaines pour l'amélioration de la performance de la maintenance". Thesis, Paris 8, 2015. http://www.theses.fr/2015PA080037.
Testo completoIn this thesis, a decision-making tool for the maintenance management process, based on the assignment of human resources, is proposed in order to improve maintenance performance. Optimal maintenance performance is indispensable to guarantee the productivity and competitiveness of manufacturing companies. The proposed approach provides a framework of the different possible levers to measure, evaluate, improve and optimize maintenance performance. The human resources assignment problem is considered, taking into account different constraints such as human resources availability, competences, and the urgency degree of intervention requests. The proposed method is based on a discrete event simulation model, providing a better representation of the maintenance service and better comprehension thanks to performance indicators. To improve and ultimately optimize the model, a simulation-based Pareto optimization method is introduced. The optimization module was coded as independent programs in order to provide control over the simulation-based optimization process. The proposed simulation-based optimization method finds good solutions in a reasonable amount of time. Applying this technique to an industrial case study, we show that it is more effective in detecting real faults than existing alternatives. The approach can be extended to cover other domains and other types of simulation models
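The simulation-based Pareto optimization above hinges on keeping only the non-dominated solutions among the simulated alternatives. As a generic illustration (a sketch, not code from the thesis; the objectives and values are invented), a minimal non-dominated filter could look like:

```python
def pareto_front(points):
    """Keep only non-dominated points (every objective is minimized).

    A point p is dominated if some other point q is no worse on every
    objective and differs from p on at least one.
    """
    return [p for p in points
            if not any(all(qi <= pi for qi, pi in zip(q, p)) and q != p
                       for q in points)]

# Hypothetical (cost, delay) pairs produced by a simulation model
evaluated = [(1, 5), (2, 4), (3, 3), (4, 2), (2, 6), (5, 5)]
front = pareto_front(evaluated)   # -> [(1, 5), (2, 4), (3, 3), (4, 2)]
```

Here (2, 6) and (5, 5) are dropped because some other alternative is at least as good on both objectives; the remaining points form the set of compromises a decision maker would choose from.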
Raffray, Guilhem. "Outils d'aide à la décision pour la conception de procédés agroalimentaires au Sud : application au procédé combiné de séchage, cuisson et fumage de produits carnés". Thesis, Montpellier, SupAgro, 2014. http://www.theses.fr/2014NSAM0066/document.
Testo completoFood process design is a complex activity, given the wide diversity of existing products and processes and the plurality of production contexts. The designer must meet requirements derived from critical stakes from human, sanitary, economic, environmental and cultural points of view. In southern countries, rapid population growth drives the need for more industrial processes able to valorize traditional products. Savings in development time and extra expenses are mainly determined by the quality of design choices made from the early stage of the design process, called embodiment design. Multiple criteria decision analysis (MCDA) techniques are used for this purpose, enabling the evaluation and critique of any technological concept. In a specific context, it is possible to generate the Pareto set of a concept, composed of the most efficient possible alternatives. Indeed, every design alternative is defined by design (or decision) variables, which are the degrees of freedom for dimensioning the system considered. Our case study focuses on a technological innovation that performs hot smoking using radiant plates (for sanitary purposes). It is aimed at the production of traditional hot-smoked catfish, widely consumed in West and Central Africa. This is a multicriteria design problem, since many objectives concerning product quality, production and energy performance have to be satisfied. In a first work, the mass reduction of catfish dried in hot air was modeled from empirical measurements. In particular, this model takes into account the influence of the drying air conditions (temperature, velocity and relative humidity) on the calculation of the mass fluxes of evaporation and drips. After that, a global simulation model of the radiant plate hot-smoking process was developed from a previous work. 
Some key phenomena (pressure losses, air recycling, thermal regulation) were described, as they can strongly impact process performance. The resulting observation model predicts the performance of any design alternative defined by a set of 8 design variables. In a final work, expert knowledge and preferences were introduced mathematically into a multiobjective optimization tool by means of desirability functions. Every performance variable is converted into a desirability index (expressing the level of satisfaction) and then aggregated into a single global desirability index (thus defining a global objective function). The optimal design of the concept is found using a genetic algorithm. This multiobjective optimization method yielded very satisfactory design solutions for the radiant plate hot-smoking process. Moreover, the analysis of a wide range of Pareto-optimal solutions gave a better understanding of the concept's strengths and weaknesses, making it possible to suggest targeted improvements to the current radiant plate smoking technology. It is also noticeable that the current simulation model can easily be adapted to other products. For a generalization of such multiobjective methods to the design of food processes, efforts should be made to gather expert criteria and other relevant functional data
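The desirability-based aggregation described above can be sketched with the classic Derringer-Suich construction: each performance variable is mapped to a [0, 1] desirability index, and the indices are combined with a geometric mean. The bounds and scores below are illustrative assumptions, not values from the thesis:

```python
import math

def desirability(y, y_best, y_worst, s=1.0):
    """One-sided 'smaller is better' desirability: 1 at y_best, 0 at y_worst."""
    if y <= y_best:
        return 1.0
    if y >= y_worst:
        return 0.0
    return ((y_worst - y) / (y_worst - y_best)) ** s

def global_desirability(indices):
    """Aggregate individual desirabilities with a geometric mean; a single
    zero (an unacceptable performance) drives the global index to zero."""
    return math.prod(indices) ** (1.0 / len(indices))

# Hypothetical scores for one design alternative: drying time and energy use
d_time = desirability(95.0, y_best=60.0, y_worst=120.0)
d_energy = desirability(8.0, y_best=5.0, y_worst=15.0)
d_global = global_desirability([d_time, d_energy])
```

The geometric mean is the usual choice here precisely because any fully unsatisfactory objective (index 0) makes the whole alternative unacceptable, which a plain average would hide.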
Ballout, Ali. "Apprentissage actif pour la découverte d'axiomes". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4026.
Testo completoThis thesis addresses the challenge of evaluating candidate logical formulas, with a specific focus on axioms, by synergistically combining machine learning with symbolic reasoning. This innovative approach facilitates the automatic discovery of axioms, primarily in the evaluation phase of generated candidate axioms. The research aims to solve the issue of efficiently and accurately validating these candidates in the broader context of knowledge acquisition on the semantic Web.Recognizing the importance of existing generation heuristics for candidate axioms, this research focuses on advancing the evaluation phase of these candidates. Our approach involves utilizing these heuristic-based candidates and then evaluating their compatibility and consistency with existing knowledge bases. The evaluation process, which is typically computationally intensive, is revolutionized by developing a predictive model that effectively assesses the suitability of these axioms as a surrogate for traditional reasoning. This innovative model significantly reduces computational demands, employing reasoning as an occasional "oracle" to classify complex axioms where necessary.Active learning plays a pivotal role in this framework. It allows the machine learning algorithm to select specific data for learning, thereby improving its efficiency and accuracy with minimal labeled data. The thesis demonstrates this approach in the context of the semantic Web, where the reasoner acts as the "oracle," and the potential new axioms represent unlabeled data.This research contributes significantly to the fields of automated reasoning, natural language processing, and beyond, opening up new possibilities in areas like bioinformatics and automated theorem proving. 
By effectively marrying machine learning with symbolic reasoning, this work paves the way for more sophisticated and autonomous knowledge discovery processes, heralding a paradigm shift in how we approach and leverage the vast expanse of data on the semantic Web
Nguyen, Hong Diep. "Efficient algorithms for verified scientific computing : Numerical linear algebra using interval arithmetic". Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2011. http://tel.archives-ouvertes.fr/tel-00680352.
Testo completoDa, Costa Fontes Fábio Francisco. "Optimization Models and Algorithms for the Design of Global Transportation Networks". Thesis, Artois, 2017. http://www.theses.fr/2017ARTO0206/document.
Testo completoThe development of efficient network structures for freight transport is a major concern for the current global market. Demands need to be transported quickly and should meet customer needs within a short period of time. Traffic congestion and delays must be minimized, CO2 emissions must be controlled, and affordable transport costs have to be offered to customers. The hub-and-spoke structure is a common network model used in both regional and intercontinental transportation, which offers economies of scale by aggregating demands at hub nodes. However, delays, traffic congestion and long delivery times are drawbacks of this kind of network. In this thesis, a new concept, called the "sub-hub", is added to the classic hub-and-spoke network structure. In the proposed network models, economies of scale and shorter alternative paths are implemented, thus minimizing transport cost and delivery time. A sub-hub can be viewed as a connection point between two routes from distinct, close regions. Transshipments are possible inside sub-hubs without the need to pass through hub nodes. This way, congestion can be avoided and, consequently, delays are minimized. Four binary integer linear programming models for the hub location and routing problem were developed in this thesis. Networks with sub-hubs and networks without sub-hubs, taking into account circular hub routes or direct connections between hubs, are compared. These models are composed of four sub-problems (location, allocation, service design and routing), which makes them hard to solve. A cutting-plane approach was used to solve small instances of the problem, while a Variable Neighborhood Decomposition Search (VNDS) composed of exact methods (a matheuristic) was developed to solve large instances. The VNDS explores each sub-problem with different operators. Models with sub-hubs provide major benefits, thus promoting the development of more competitive networks
Delespierre, Tiba. "Etude de cas sur architectures à mémoires distribuées : une maquette systolique programmable et l'hypercube d'Intel". Paris 9, 1987. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1987PA090073.
Testo completoColombet, Laurent. "Parallélisation d'applications pour des réseaux de processeurs homogènes ou hétérogènes". Grenoble INPG, 1994. http://tel.archives-ouvertes.fr/tel-00005084.
Testo completoThe aim of this thesis is to study and develop efficient methods for parallelizing scientific applications on parallel computers with distributed memory. In the first part we present the PVM (Parallel Virtual Machine) and MPI (Message Passing Interface) communication libraries. They allow programs to be implemented on most parallel machines, but also on heterogeneous computer networks. This chapter illustrates the problems faced when trying to evaluate the performance of networks with heterogeneous processors. To evaluate such performance we modified and adapted the concepts of speed-up and efficiency to account for heterogeneity. The second part deals with a study of parallel application libraries such as ScaLAPACK and with the development of communication-masking techniques. The general concept is based on communication anticipation, in particular by pipelining message-sending operations. Experimental results on Cray T3D and IBM SP1 machines validate the theoretical studies performed on the basic algorithms of the libraries discussed above
Danloup, Nicolas. "Les problèmes de collectes et livraisons avec collaboration et transbordements : modélisations et méthodes approchées". Thesis, Artois, 2016. http://www.theses.fr/2016ARTO0203/document.
Testo completoCollaborative logistics has recently become an important element for many companies seeking to improve their supply chain efficiency. In this thesis, we study pickup and delivery problems to improve supply chain efficiency through collaborative transportation. The thesis was part of the European project SCALE (Step Change in Agri-food Logistics Ecosystem). Firstly, two metaheuristics are proposed and studied to solve the Pickup and Delivery Problem with Transshipments. These metaheuristics are compared with works from the literature, and the results of several instances are improved. Secondly, a mathematical model for a pickup and delivery problem (PDVRP) is proposed. This model is used to study the benefits of collaboration on transportation. It is applied to random data and to a case study from SCALE with real data. Finally, a model for a particular PDVRP is presented, in which shipments have to cross exactly two transshipment nodes between their pickup and delivery points. This problem is inspired by a second case study carried out during the SCALE project. This highlights the importance of collaboration and transshipment in the field of goods transportation
Bouzid, Salah Eddine. "Optimisation multicritères des performances de réseau d’objets communicants par méta-heuristiques hybrides et apprentissage par renforcement". Thesis, Le Mans, 2020. http://cyberdoc-int.univ-lemans.fr/Theses/2020/2020LEMA1026.pdf.
Testo completoThe deployment of Communicating Things Networks (CTNs), with continuously increasing densities, needs to be optimal in terms of quality of service, energy consumption and lifetime. Determining the optimal placement of the nodes of these networks, relative to the different quality criteria, is an NP-hard problem. Faced with this NP-hardness, especially for indoor environments, existing approaches either focus on optimizing a single objective while neglecting the other criteria, or adopt an expensive manual solution. Finding new approaches to solve this problem is therefore required. Accordingly, in this thesis we propose a new approach which automatically generates a deployment that guarantees optimality in terms of performance and robustness to possible topological failures and instabilities. The proposed approach is based, on the one hand, on modeling the deployment problem as a constrained multi-objective optimization problem and solving it with a hybrid algorithm combining genetic multi-objective optimization with weighted-sum optimization and, on the other hand, on integrating reinforcement learning to optimize energy consumption and extend the network lifetime. To apply this approach, two tools are developed. The first, called MOONGA (Multi-Objective Optimization of wireless Network approach based on Genetic Algorithm), automatically generates the placement of nodes while optimizing the metrics that define the QoS of the CTN: connectivity, m-connectivity, coverage, k-coverage, coverage redundancy and cost. The MOONGA tool considers constraints related to the architecture of the deployment space, the network topology, the specifics of the application and the preferences of the network designer. 
The second optimization tool is named R2LTO (Reinforcement Learning for Life-Time Optimization), a new routing protocol for CTNs based on distributed reinforcement learning, which determines the optimal routing path in order to guarantee energy efficiency and extend the network lifetime while maintaining the required QoS
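As a toy sketch of the distributed reinforcement-learning idea behind a protocol like R2LTO (the topology, energy costs, parameters and update rule here are invented for illustration, not taken from the protocol), tabular Q-learning can learn minimum-energy routes toward a sink node:

```python
import random

random.seed(0)

# Toy topology: node -> forwarding neighbours, with a per-hop energy cost
neighbours = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}
cost = {(0, 1): 4.0, (0, 2): 1.0, (1, 2): 1.0, (1, 3): 1.0,
        (2, 3): 2.0, (3, 4): 1.0}
SINK = 4

Q = {(u, v): 0.0 for u, nbrs in neighbours.items() for v in nbrs}
alpha, eps = 0.5, 0.2                      # learning rate, exploration rate

for _ in range(2000):
    u = random.choice([0, 1, 2])           # start an episode at a sensor node
    while u != SINK:
        nbrs = neighbours[u]
        if random.random() < eps:          # explore a random next hop
            v = random.choice(nbrs)
        else:                              # exploit the cheapest estimate
            v = min(nbrs, key=lambda w: Q[(u, w)])
        # Bellman update toward hop cost plus cheapest continuation
        target = 0.0 if v == SINK else min(Q[(v, w)] for w in neighbours[v])
        Q[(u, v)] += alpha * (cost[(u, v)] + target - Q[(u, v)])
        u = v

# Greedy route from node 0 to the sink after learning
route, u = [0], 0
while u != SINK:
    u = min(neighbours[u], key=lambda w: Q[(u, w)])
    route.append(u)
```

With enough episodes the greedy route converges to the cheapest path under these toy costs (0 → 2 → 3 → 4 rather than the expensive first hop to node 1); in a real protocol the same update would run at each node using only local neighbour information.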
Gonnet, Jean-Paul. "Optimisation des Canalisations Electriques et des Armoires de Distribution". Phd thesis, 2005. http://tel.archives-ouvertes.fr/tel-00137973.
Testo completoIn order to take this phenomenon into account from the design stage (through a dedicated software tool), a suitable modeling method is introduced. While finite element methods are well suited to electromechanical conversion devices, connections are more naturally modeled with the PEEC (Partial Element Equivalent Circuit) method.
Coupled with optimizers, this method proves very effective for improving the design of conductors, both in the arrangement of busbars to counter proximity effects and, as shown here, in the shape of their cross-sections to minimize the skin effect through coupling with genetic algorithms. The tools developed thus give access to a significant margin of improvement that has so far been little explored. To adapt to the devices under study, part of which is surrounded by metal enclosures, an extension of the method (named 'µPEEC') that takes into account the magnetization of ferromagnetic sheets is proposed.
For the thorny problem of choosing an objective function, life cycle analysis and the search for the lowest environmental impact can guide the trade-off between material cost and accepted Joule losses. An extrapolation of the achievable gains is proposed.
Martel, Yannick. "Efficacité de l’algorithme EM en ligne pour des modèles statistiques complexes dans le contexte des données massives". Thesis, 2020. http://hdl.handle.net/1866/25477.
Testo completoThe EM algorithm (Dempster et al., 1977) yields a sequence of estimators that converges to the maximum likelihood estimator for missing data models whose maximum likelihood estimator is not directly tractable. The EM algorithm is remarkable given its numerous applications in statistical learning. However, it may suffer from its computational cost. Cappé and Moulines (2009) proposed an online version of the algorithm for models whose likelihood belongs to the exponential family, which provides a gain in computational efficiency on large data sets. However, the conditional expected value of the sufficient statistic is often intractable for complex models and/or when the missing data are high-dimensional. In those cases, it is replaced by an estimator. Many questions then arise naturally: do the convergence results pertaining to the initial estimator hold when the expected value is substituted by an estimator? In particular, does the asymptotic normality property remain in this case? How does the variance of the estimator of the expected value affect the asymptotic variance of the EM estimator? Are Monte Carlo and MCMC estimators suitable in this situation? Could variance reduction tools such as control variates provide variance relief? These questions are tackled by means of examples involving latent data models. The main contributions of this master's thesis are the presentation of a unified framework for stochastic approximation EM algorithms, an illustration of the impact that the estimation of the conditional expected value has on the variance, and the introduction of online EM algorithms which reduce the additional variance stemming from the estimation of the conditional expected value.
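The online EM recursion of Cappé and Moulines replaces the batch E-step with a stochastic-approximation update of the expected sufficient statistics, one observation at a time. A minimal sketch on a two-component Gaussian mixture with known unit variances (the model, step-size schedule and burn-in length are illustrative choices, not the thesis's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data stream: mixture 0.3 * N(-2, 1) + 0.7 * N(2, 1)
n = 20000
comp = rng.random(n) < 0.3
x = np.where(comp, rng.normal(-2.0, 1.0, n), rng.normal(2.0, 1.0, n))

w, mu = 0.5, np.array([-1.0, 1.0])         # weight of component 0, means
s_w = w                                    # running estimate of E[1{Z=0}]
s_x = mu * np.array([w, 1 - w])            # running estimate of E[1{Z=k} X]
burn_in = 50

for t, xt in enumerate(x, start=1):
    gamma = t ** -0.6                      # step size with exponent in (1/2, 1]
    # E-step on the single new observation: responsibility of component 0
    p0 = w * np.exp(-0.5 * (xt - mu[0]) ** 2)
    p1 = (1.0 - w) * np.exp(-0.5 * (xt - mu[1]) ** 2)
    r0 = p0 / (p0 + p1)
    # Stochastic-approximation update of the sufficient statistics
    s_w = (1 - gamma) * s_w + gamma * r0
    s_x = (1 - gamma) * s_x + gamma * np.array([r0 * xt, (1 - r0) * xt])
    if t > burn_in:                        # M-step: closed form in the statistics
        w = s_w
        mu = s_x / np.array([s_w, 1 - s_w])
```

Each observation is touched once, so the cost is linear in the stream length; the questions raised in the abstract arise exactly when the responsibility `r0` is itself intractable and must be estimated, e.g. by Monte Carlo.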
Boisvert-Beaudry, Gabriel. "Efficacité des distributions instrumentales en équilibre dans un algorithme de type Metropolis-Hastings". Thesis, 2019. http://hdl.handle.net/1866/23794.
Full text
In this master's thesis, we are interested in a new class of informed proposal distributions for Metropolis-Hastings algorithms. These new proposals, called balanced proposals, are obtained by adding information about the target density to an uninformed proposal distribution. A Markov chain generated by a balanced proposal is reversible with respect to the target density, with no acceptance probability needed, in two extreme cases: the local case, where the proposal variance tends to zero, and the global case, where it tends to infinity. Balanced proposals must be approximated to be usable in practice. We show that the local case leads to the Metropolis-adjusted Langevin algorithm (MALA), while the global case leads to a small modification of the MALA. These results are used to build a new algorithm that generalizes the MALA through an additional parameter. Depending on the value of this parameter, the new algorithm uses a locally balanced proposal, a globally balanced proposal, or an interpolation between the two. We then study the optimal choice of this parameter as a function of the dimension of the target distribution under two regimes: the asymptotic regime and a finite-dimensional regime. Simulations illustrate the theoretical results. Finally, we apply the new algorithm to a Bayesian logistic regression problem and compare its efficiency to existing algorithms. The results are satisfying from both a theoretical and a computational standpoint.
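As an illustrative sketch of the local limiting case mentioned in the abstract (not the thesis's implementation), the MALA uses a Langevin drift in the proposal and corrects it with a Metropolis-Hastings accept/reject step; on a standard normal target it might be written as:

```python
import numpy as np

def mala(log_pi, grad_log_pi, x0, step, n_iter, seed=0):
    """Metropolis-adjusted Langevin algorithm: Langevin-drifted Gaussian
    proposal plus an MH correction. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iter, x.size))

    def log_q(xp, xc):
        # log-density (up to a constant) of the Langevin proposal
        # N(xc + (step/2) * grad log pi(xc), step * I)
        mean = xc + 0.5 * step * grad_log_pi(xc)
        return -np.sum((xp - mean) ** 2) / (2 * step)

    for i in range(n_iter):
        prop = (x + 0.5 * step * grad_log_pi(x)
                + np.sqrt(step) * rng.standard_normal(x.size))
        # MH ratio for the asymmetric proposal: pi(y) q(x|y) / (pi(x) q(y|x))
        log_alpha = (log_pi(prop) - log_pi(x)
                     + log_q(x, prop) - log_q(prop, x))
        if np.log(rng.random()) < log_alpha:
            x = prop
        samples[i] = x
    return samples

# target: standard normal in 2 dimensions
samples = mala(lambda z: -0.5 * z @ z, lambda z: -z,
               x0=np.zeros(2), step=0.5, n_iter=5000)
```

The balanced-proposal generalization studied in the thesis would replace this specific drifted-Gaussian proposal with a family interpolating between the locally and globally balanced limits.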
Augustyniak, Maciej. "Estimation du modèle GARCH à changement de régimes et son utilité pour quantifier le risque de modèle dans les applications financières en actuariat". Thesis, 2013. http://hdl.handle.net/1866/10826.
Full text
The Markov-switching GARCH model is the foundation of this thesis. It offers rich dynamics for modelling financial data by allowing a GARCH structure with time-varying parameters. This flexibility is unfortunately undermined by a path-dependence problem which has prevented maximum likelihood estimation of this model since its introduction almost 20 years ago. The first half of this thesis solves this problem by developing two original estimation approaches that allow the maximum likelihood estimator of the Markov-switching GARCH model to be computed. The first method combines the Monte Carlo expectation-maximization algorithm with importance sampling, while the second generalizes previously proposed approximations of the model known as collapsing procedures. This generalization establishes a novel relationship in the econometric literature between particle filtering and collapsing procedures, providing the missing link needed to justify the validity of the collapsing approach for estimating the Markov-switching GARCH model. The second half of the thesis is motivated by the financial crisis of the late 2000s, during which numerous institutional failures occurred because risk exposures were inappropriately measured. Using 78 different econometric models, including many generalizations of the Markov-switching GARCH model, it is shown that model risk plays an important role in the measurement and management of long-term investment risk in the context of variable annuities. Although the finance literature has devoted substantial research to advanced models for improving pricing and hedging performance, approaches for measuring dynamic hedging effectiveness have evolved little. This thesis offers a methodological contribution in this area by proposing a statistical framework, based on regression analysis, for measuring the effectiveness of dynamic hedges for long-term investment guarantees.
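To make the regression idea concrete, one simple regression-based effectiveness measure (an illustrative stand-in, not necessarily the exact framework proposed in the thesis) is the R² from regressing liability changes on hedge-portfolio gains: an R² near 1 means the hedge tracks the liability closely.

```python
import numpy as np

def hedge_effectiveness(liability_changes, hedge_gains):
    """R^2 of an OLS regression of liability changes on hedge gains.
    Illustrative effectiveness measure only."""
    X = np.column_stack([np.ones_like(hedge_gains), hedge_gains])
    beta, *_ = np.linalg.lstsq(X, liability_changes, rcond=None)
    resid = liability_changes - X @ beta
    ss_res = resid @ resid
    ss_tot = np.sum((liability_changes - liability_changes.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# synthetic example: a well-hedged book, where hedge gains explain
# most of the variation in liability changes
rng = np.random.default_rng(0)
gains = rng.normal(size=1000)
liab = 0.9 * gains + rng.normal(scale=0.3, size=1000)
eff = hedge_effectiveness(liab, gains)
```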