Theses on the topic "Plan de données programmable" (programmable data plane)
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 41 dissertations / theses for your research on the topic "Plan de données programmable".
Next to every source in the list of references, you will find an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a pdf and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Jose, Matthews. "In-network real-value computation on programmable switches". Electronic Thesis or Diss., Université de Lorraine, 2023. http://docnum.univ-lorraine.fr/ulprive/DDOC_T_2023_0057_JOSE.pdf.
The advent of new-generation programmable switch ASICs has compelled the network community to rethink the operation of networks. The ability to reconfigure the dataplane packet-processing logic without changing the underlying hardware and the introduction of stateful memory primitives have resulted in a surge of interest and of use cases that can be offloaded onto the dataplane. However, programmable switches still do not support real-value computation, forcing the use of external servers or middleboxes to perform these operations. To fully realize the capability of in-network processing, our contributions add support for real-value operations on the switch. This is achieved by leveraging mathematical lookup tables to build pipelines that compute real-value functions. We start by developing procedures for computing basic elementary operations, keeping in mind the constraints and limitations of a programmable switch. These procedures are a combination of lookup tables and native operations provided by the switch. A given function is decomposed into a representation that highlights its constituent elementary operations and the dependencies between them. A pipeline is constructed by stitching together the predefined procedures for each elementary operation based on this representation. Several reduction and resource-optimization techniques are also applied before the final pipeline is deployed onto the switch. This process was further extended to scale across multiple switches in the network, enabling even larger functions to be deployed. The project was the first to investigate a generic framework for building pipelines for real-value computation. Our prototype on a Barefoot Tofino switch shows the efficiency of our system for in-network computation of different types of operations and its application to in-network logistic regression models used for classification problems and to time-series functions like ARIMA for DDoS detection. Our evaluations show that reaching a relative error below 5% or even 1% is possible with a low amount of resources, making this a viable approach to support complex functions and algorithms.
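The lookup-table technique described in this abstract can be illustrated with a short sketch. The fixed-point scaling, table sizes and the log/exp decomposition below are assumptions chosen for clarity, not the thesis's actual pipeline-construction framework:

```python
import math

# Hypothetical illustration: approximate real-value multiplication on a device
# that only has integer add and exact-match tables, via a * b = 2**(log2 a + log2 b).
SCALE = 1 << 10          # fixed-point scale for table keys/values (assumption)
TABLE_BITS = 12          # 2**12 log-table entries, standing in for a memory budget

# Match-action-style tables, precomputed "at compile time".
log_table = {k: round(math.log2(k / SCALE) * SCALE)
             for k in range(1, 1 << TABLE_BITS)}
exp_table = {k: round(2 ** (k / SCALE) * SCALE)
             for k in range(-(10 * SCALE), 10 * SCALE)}

def lut_multiply(a: float, b: float) -> float:
    """Multiply two positive reals (within table range) with lookups + one add."""
    ka, kb = round(a * SCALE), round(b * SCALE)
    s = log_table[ka] + log_table[kb]      # the only ALU operation: integer add
    return exp_table[s] / SCALE

x, y = 3.7, 1.9
approx = lut_multiply(x, y)
print(approx, x * y, abs(approx - x * y) / (x * y))  # relative error around 1e-3
```

Table size trades off directly against relative error, which is consistent with the abstract's observation that sub-5% or sub-1% error is reachable with modest resources.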
Soni, Hardik. "Une approche modulaire avec délégation de contrôle pour les réseaux programmables". Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4026/document.
Network operators face great challenges in terms of cost and complexity when incorporating new communication technologies (e.g., 4G, 5G, fiber) and keeping up with the increasing demands of new network services addressing emerging use cases. Softwarizing network operations using the Software-Defined Networking (SDN) and Network Function Virtualization (NFV) paradigms can simplify the control and management of networks and provide network services in a cost-effective way. SDN decouples control and data traffic processing in the network and centralizes control traffic processing to simplify network management, but may face scalability issues for the same reasons. NFV decouples the hardware and software of network appliances for cost-effective operation of network services, but faces performance degradation due to data traffic processing in software. In order to address the scalability and performance issues in SDN/NFV, we propose in the first part of the thesis a modular network control and management architecture, in which the SDN controller delegates part of its responsibilities to specific network functions instantiated in network devices at strategic locations in the infrastructure. We have chosen to focus on a modern application using an IP multicast service for live video streaming (e.g., Facebook Live or Periscope) that illustrates well the SDN scalability problems. Our solution exploits the benefits of the NFV paradigm to address the scalability issue of the centralized SDN control plane by offloading the processing of multicast-specific control traffic to Multicast Network Functions (MNFs) implemented in software and executed in an NFV environment at the edge of the network. Our approach provides smart, flexible and scalable group management and leverages the centralized control of SDN for the Lazy Load Balance Multicast (L2BM) traffic-engineering policy in software-defined ISP networks. Evaluating this approach is tricky, as real-world SDN testbeds are costly and not easily available to the research community, so we designed a tool that leverages the huge amount of resources available in the grid to easily emulate such scenarios. Our tool, called DiG, takes physical resource constraints (memory, CPU, link capacity) into account to provide a realistic evaluation environment with controlled conditions. Our NFV-based approach requires multiple application-specific functions (e.g., MNFs) to control and manage the network devices and process the related data traffic independently. Ideally, these specific functions should be implemented directly on hardware programmable routers. In this case, new routers must be able to execute multiple independently developed programs. The packet-level programming language P4, one of the promising SDN-enabling technologies, allows applications to program their data traffic processing on P4-compatible network devices. In the second part of the thesis, we propose a novel approach to deploy and execute multiple independently developed and compiled application programs on the same network device. This solution, called P4Bricks, allows multiple applications to control and manage their data traffic independently. P4Bricks merges the programmable blocks (parsers/deparsers and packet-processing pipelines) of P4 programs according to the processing semantics (parallel or sequential) provided at deployment time.
Mabrouk, Lhoussein. "Contribution à l'implémentation des algorithmes de vision avec parallélisme massif de données sur les architectures hétérogènes CPU/GPU". Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT009.
Mixture of Gaussians (MoG) and Compressive Sensing (CS) are two common algorithms in many image and audio processing systems. Their combination, CS-MoG, was recently used for detecting mobile objects through background subtraction. However, the implementations of CS-MoG presented in previous works do not take full advantage of the evolution of heterogeneous architectures. This thesis proposes two contributions for the efficient implementation of CS-MoG on heterogeneous parallel CPU/GPU architectures. These technologies nowadays offer great programming flexibility, which allows optimizing performance as well as energy efficiency. Our first contribution consists in offering the best acceleration-precision compromise on CPU and GPU. The second is a new adaptive approach for data partitioning, allowing the CPUs and GPUs to be fully exploited. Whatever their relative performance, this approach, called the Optimal Data Distribution Cursor (ODDC), aims to ensure automatic balancing of the computational load by estimating the optimal proportion of data that has to be assigned to each processor, taking its computing capacity into account. This approach updates the partitioning online, which makes it possible to take into account any influence of the irregularity of the processed images' content. In terms of mobile objects, we mainly target vehicles, whose detection presents some challenges, but in order to generalize our approach we also test scenes containing other types of targets. Experimental results, on different platforms and datasets, show that the combination of our two contributions makes it possible to reach 98% of the maximal possible performance on these platforms. These results can also benefit other algorithms whose calculations are performed independently on small grains of data.
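As a rough illustration of the adaptive partitioning idea (not the thesis's actual ODDC formulas), the following sketch re-estimates the CPU/GPU split of each frame from measured processing times, moving the cursor toward the split that equalizes finishing times:

```python
# Minimal sketch of online CPU/GPU load balancing; update rule and smoothing
# factor are illustrative assumptions.

def update_split(p_gpu: float, t_cpu: float, t_gpu: float, alpha: float = 0.5) -> float:
    """Re-estimate the fraction of each frame sent to the GPU.

    p_gpu : current fraction of pixels processed on the GPU
    t_cpu, t_gpu : measured processing times (s) of the last frame's shares
    alpha : smoothing factor so the cursor adapts without oscillating
    """
    thr_cpu = (1.0 - p_gpu) / t_cpu          # per-frame throughput of each device
    thr_gpu = p_gpu / t_gpu
    p_ideal = thr_gpu / (thr_cpu + thr_gpu)  # split proportional to throughput
    return (1 - alpha) * p_gpu + alpha * p_ideal

p = 0.5                                       # start with an even split
for t_cpu, t_gpu in [(0.040, 0.010), (0.035, 0.012), (0.030, 0.011)]:
    p = update_split(p, t_cpu, t_gpu)
    print(f"GPU share -> {p:.2f}")            # converges toward the faster device
```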
Castanié, Laurent. "Visualisation de données volumiques massives : application aux données sismiques". Thesis, Vandoeuvre-les-Nancy, INPL, 2006. http://www.theses.fr/2006INPL083N/document.
Seismic reflection data are a valuable source of information for the three-dimensional modeling of subsurface structures in the exploration-production of hydrocarbons. This work focuses on the implementation of visualization techniques for their interpretation. We face both qualitative and quantitative challenges. It is indeed necessary to consider (1) the particular nature of seismic data and the interpretation process and (2) the size of the data. Our work addresses these two distinct aspects: 1) From the qualitative point of view, we first highlight the main characteristics of seismic data. Based on this analysis, we implement a volume visualization technique adapted to the specificity of the data. We then focus on the multimodal aspect of interpretation, which consists in combining several sources of information (seismic and structural). Depending on the nature of these sources (volumes only, or both volumes and surfaces), we propose two different visualization systems. 2) From the quantitative point of view, we first define the main hardware constraints involved in seismic interpretation. Driven by these constraints, we implement a generic memory management system. Initially able to couple visualization and data processing on massive data volumes, it is then improved and specialized to build a dynamic system for distributed memory management on PC clusters. This latter version, dedicated to visualization, makes it possible to manipulate regional-scale seismic data (100-200 GB) in real time. The main aspects of this work are studied both in the scientific context of visualization and in the application context of geosciences and seismic interpretation.
Buslig, Leticia. "Méthodes stochastiques de modélisation de données : application à la reconstruction de données non régulières". Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4734/document.
Saade, Julien. "Encodage de données programmable et à faible surcoût, limité en disparité et en nombre de bits identiques consécutifs". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM079/document.
Thanks to their routing simplicity and their advantages over parallel links in terms of noise, EMI (Electro-Magnetic Interference), area and power consumption, High-Speed Serial Links (HSSLs) are found in almost all of today's Systems-on-Chip (SoCs), connecting different components: the main chip to its Inputs/Outputs (I/Os), the main chip to a companion chip, Inter-Processor Communication (IPC), etc. Serial memory might even be the successor of current DDR memories. However, going from parallel links to high-speed serial links presents many challenges; HSSLs must run at higher speeds, reaching many gigabits per second, to maintain the same end-to-end throughput as parallel links and to satisfy the exponential increase in the demand for throughput. The signal's attenuation over copper increases with the frequency, requiring more equalization and filtering techniques, thereby increasing design complexity and power consumption. One way to optimize the design at high speeds is to embed the clock within the data, because a clock line means more routing surface and can also be a source of high EMI. Another good reason to use an embedded clock is that the skew (time mismatch between the clock and the data lanes) becomes hard to control at high frequencies. Transitions must then be ensured inside the data sent on the line for the receiver to be able to synchronize and recover the data correctly. In other words, the number of Consecutive Identical Bits (CIBs), also called the Run Length (RL), must be reduced or bounded to a certain limit. Another characteristic that must be bounded or reduced in the data sent on a HSSL is the difference between the number of '0' bits and '1' bits, called the Running Disparity (RD). Big differences between 1's and 0's could shift the signal from the reference line. This phenomenon, known as Base-Line Wander (BLW), can increase the BER (Bit Error Rate) and require filtering or equalization techniques at the receiver, increasing its complexity and power consumption. In order to ensure a bounded Run Length and Running Disparity, the data to be transmitted is generally encoded; this procedure is also called line coding. Over time, many encoding methods have been presented and used in standards; some have very good characteristics but at the cost of many additional bits, also called bandwidth overhead, while others have low or no overhead but do not ensure the same RL and RD bounds, thus requiring more analog design complexity and increasing power consumption. In this thesis, we propose a novel programmable line coding that can achieve the desired RL and RD bounds with a very low overhead, down to 10 times lower than the existing encodings for the same bounds. First, we show how to obtain a very low-overhead RL-limited line coding; second, we propose a very low-overhead method that bounds the RD; and then we show how both techniques can be combined to build a low-overhead, Run-Length-limited and Running-Disparity-bounded line coding.
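The two quantities this line coding bounds can be stated precisely in a few lines of code; the definitions below are the generic ones, not the thesis's encoder:

```python
# Run length (longest run of consecutive identical bits) and running disparity
# (cumulative excess of 1s over 0s) of a bit word.
from itertools import groupby

def run_length(bits: str) -> int:
    """Longest run of consecutive identical bits (RL)."""
    return max(len(list(g)) for _, g in groupby(bits))

def running_disparity(bits: str) -> list:
    """Cumulative disparity after each bit: +1 for '1', -1 for '0' (RD)."""
    rd, out = 0, []
    for b in bits:
        rd += 1 if b == "1" else -1
        out.append(rd)
    return out

word = "1111100011"
print(run_length(word))                        # 5 -> would violate an RL-4 bound
print(max(map(abs, running_disparity(word))))  # worst-case |RD| along the word
```

An encoder with bounded RL and RD guarantees that these two measurements never exceed the chosen limits on any transmitted word, which is what keeps clock recovery and base-line wander under control.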
Karam, Sandrine. "Application de la méthodologie des plans d'expériences et de l'analyse de données à l'optimisation des processus de dépôt". Limoges, 2004. http://aurore.unilim.fr/theses/nxfile/default/8f5029e5-3ced-4997-973f-2cc455e74bf1/blobholder:0/2004LIMO0044.pdf.
Traditionally, a scientist carries out experiments sequentially, varying the parameters one after the other. This method gives results but is very expensive in time, because it inevitably requires a great number of experiments. This is why it is important to help the scientist plan experiments with data analysis and experimental design methods. Data analysis makes it possible to collect, summarize and present data so as to optimize the way the next experiments are implemented. By using experimental design, the scientist knows how to plan experiments. This experimental approach helps him structure his research differently, validate his own assumptions, better understand the studied phenomena, and solve the problems. The objective of this thesis is to present, through examples of applications related to the development of deposition processes, the elaboration and the interest of these methods.
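As a minimal illustration of the experimental-design methodology mentioned above, the following sketch enumerates a two-level full factorial design; the factors and levels are invented:

```python
# Two-level full factorial design: every combination of factor levels is a run,
# instead of varying one parameter at a time.
from itertools import product

factors = {"temperature": (150, 250), "pressure": (1.0, 2.5), "flow": (10, 30)}
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for run, settings in enumerate(design, 1):   # 2**3 = 8 runs
    print(run, settings)
```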
Lopez, Philippe. "Comportement mécanique d'une fracture en cisaillement, analyse par plan d'expériences des données mécaniques et morphologiques connues sur une fracture". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ57923.pdf.
Lopez, Philippe. "Comportement mécanique d'une fracture en cisaillement : analyse par plan d'expériences des données mécaniques et morphologiques connues sur une fracture". Thèse, Bordeaux 1, 2000. http://constellation.uqac.ca/949/1/12127341.pdf.
Texto completoPlessis, Gaël. "Utilisation du formalisme flou pour l'identification modale à partir de données expérimentales incertaines". Valenciennes, 2001. https://ged.uphf.fr/nuxeo/site/esupversions/0201670a-9b60-45b4-979f-f37adefecde6.
Modal identification with uncertain data is addressed as a global problem. This study establishes that, in general, knowledge about a mechanical system is not sufficient to describe only one possible experimental representation of this system. It is proposed to take this representation uncertainty into account in the modal identification process. An original method, based on the possibilistic approach, is developed to derive the uncertainty of modal solutions from the uncertain representation of the system. First, a possibilistic description of experimental dynamical responses is given in order to take experimental data uncertainty into account. Second, these fuzzy complex responses are introduced into the identification process in order to compute the uncertainty of the estimated modal solutions. Three approaches are considered. Each one brings new difficulties that are then addressed in order to formulate the next solution. These approaches are finally synthesized into a new one, which is numerically validated and capable of propagating data uncertainty into the estimated modal solutions. Finally, this method is exploited in an experimental context. The study is organised following an experimental layout in order to compare the statistical-analysis solutions with those of the fuzzy method. Results from both approaches are consistent.
Madelenat, Jill. "La consommation énergétique du secteur tertiaire marchand : le cas de la France avec données d’enquête à plan de sondage complexe". Thesis, Paris 10, 2016. http://www.theses.fr/2016PA100183/document.
This thesis deals with energy consumption in French tertiary buildings. We adopt an empirical approach based on a French national survey, the Tertiary Buildings Energy Consumption Survey (l'Enquête sur les Consommations d'Énergie dans le Tertiaire, ECET). From a literature review, we present the econometric analyses of the drivers of energy consumption in buildings (residential and tertiary). This review highlights the coexistence of two estimation methods. We discuss these methods, and then detail the consensus and debates on the effects of every driver that has already been analyzed in the literature. Because the data we use are complex sample survey data, we describe the statistical tools that must be used to analyze this type of data, and then present the still-controversial issue of econometric modeling based on survey data. We then use our database to produce a first statistical description of energy consumption in tertiary buildings. This description is based on a nomenclature that we establish to obtain information at the subsector level. Finally, we use all the methods and approaches identified previously to study the drivers of the tertiary buildings' energy demand by implementing an econometric analysis on the ECET data. This leads to a double reading of our results, both as elements of knowledge on the impact of each driver on energy consumption and as material for comparing the different methods in use.
Beal, Aurélie. "Description et sélection de données en grande dimension". Electronic Thesis or Diss., Aix-Marseille, 2015. http://www.theses.fr/2015AIXM4305.
Technological progress has now made many experiments (or simulations) possible, along with the consideration of a large number of parameters, resulting in (very) high-dimensional matrices that require the development of new tools to assess and visualize the data and, if necessary, to reduce the dimension. The quality of the information provided by all the points of a database or an experimental design can be assessed using distance-based criteria that describe the uniformity of their repartition in a multidimensional space. Among visualization methods, Curvilinear Component Analysis has the advantage of projecting high-dimensional data into a two-dimensional space while respecting the local topology. This also enables the detection of clusters of points or of gaps. The dimension reduction is based on a judicious selection of subsets of points or variables, via dedicated algorithms. The performance of these methods was assessed on case studies from QSAR, spectroscopy and numerical simulation.
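One family of distance-based uniformity criteria mentioned in this abstract is easy to sketch; the maximin criterion below is a generic example, not necessarily the exact criterion used in the thesis:

```python
# Maximin criterion: the larger the smallest pairwise distance of a point set,
# the more uniformly the points fill the space.
import numpy as np

def maximin(points: np.ndarray) -> float:
    """Smallest pairwise Euclidean distance of a point set (to be maximized)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return d[np.triu_indices(len(points), k=1)].min()

rng = np.random.default_rng(0)
random_set = rng.random((20, 5))          # 20 points in [0,1]^5
clustered = random_set.copy()
clustered[1] = clustered[0] + 0.01        # two nearly coincident points
print(maximin(random_set), maximin(clustered))  # the clustered set scores much lower
```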
Chemmi, Malika. "La marge de manoeuvre de l'expert-comptable dans le plan de restructuration de l’emploi des entreprises : le poids des lois et le choc des données". Thesis, Paris 13, 2014. http://www.theses.fr/2014PA131034.
Our doctoral thesis studies a domain dominated both by very binding laws and by data that are difficult to analyze. It sits at the intersection between the analysis of the existing situation and the forecast of future tendencies. Its object is the works council. Regarding a project of reorganization of a firm, the elected representatives can be assisted by a chartered accountant. Yet what can be the weight of his report? Can he really modify or cancel a restructuring plan? The powers devolved to him by the law are restricted, because he cannot act and alert the judicial authorities directly. At the same time, we supposed that a restructuring plan could only be in agreement with the law and the regulations, because management would not take the risk of proceeding to "dry" dismissals if the firm were not in trouble. It is true that in the majority of cases the chartered accountant cannot question a restructuring plan. He can generally supply information to the elected representatives allowing them to negotiate an exit bonus. Nevertheless, through the study of a real case, we were able to demonstrate that, following the report of a chartered accountant, a project of dismissal was questioned and cancelled. This was possible because a drop in turnover following the loss of a customer could not, by itself, justify a reduction of staff. The staff representatives became key players in the management of the company.
Flageul, Cédric. "Création de bases de données fines par simulation directe pour les effets de la turbulence sur les transferts thermiques pariétaux". Thesis, Poitiers, 2015. http://www.theses.fr/2015POIT2281/document.
This study focuses on turbulent heat transfer in the turbulent channel flow configuration. Our Direct Numerical Simulations are performed using the open-source code Incompact3d. As our target is to produce data for RANS model validation, the budgets of the turbulent heat fluxes and of the temperature variance are extracted. Two-point correlations for the temperature and the wall-normal heat flux are also presented to deepen our analysis. Regarding the thermal field, two configurations are considered: with and without conjugate heat transfer (thermal coupling between the fluid and solid domains). For conjugate heat transfer cases, a novel compatibility condition, expressed in spectral space, connects the temperature and the wall-normal heat flux at the fluid-solid interface. For non-conjugate cases, our study is limited to boundary conditions that impose a linear combination of the temperature and the wall-normal heat flux at the wall using constant coefficients (Dirichlet, Neumann, Robin). For such simple boundary conditions, a novel compatibility condition is obtained which connects the wall value of the temperature variance and the wall-normal part of the associated dissipation rate. On the one hand, this condition highlights the limitations of an imposed temperature or heat flux at the wall. On the other, it allows us to build tailored Robin boundary conditions able to satisfactorily reproduce the present conjugate heat-transfer results in the channel flow configuration.
Chaignon, Paul. "Software Datapaths for Multi-Tenant Packet Processing". Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0062/document.
Multi-tenant networks enable applications from multiple, isolated tenants to communicate over a shared set of underlying hardware resources. The isolation provided by these networks is enforced at the edge: end hosts demultiplex packets to the appropriate virtual machine, copy data across memory isolation boundaries, and encapsulate packets in tunnels to isolate traffic over the datacenter's physical network. Over the last few years, the growing demand for high-performance network interfaces has pressured cloud providers to build more efficient multi-tenant networks. While many turn to specialized, hard-to-upgrade hardware devices to achieve high performance, in this thesis we argue that significant performance improvements are attainable in end-host multi-tenant networks using commodity hardware. We advocate for a consolidation of network functions on the host and an offload of specific tenant network functions to the host. To that end, we design Oko, an extensible software switch that eases the consolidation of network functions. Oko includes an extended flow-caching algorithm to support its runtime extension with limited overhead. Extensions are isolated from the software switch to prevent failures on the path of packets. By avoiding costly redirections to separate processes and virtual machines, Oko halves the running cost of network functions on average. We then design a framework that enables tenants to offload network functions to the host. Executing tenant network functions on the host promises large performance improvements but raises evident isolation concerns. We extend the technique used in Oko to provide memory isolation and devise a mechanism to fairly share the CPU among offloaded network functions with limited interruptions.
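The flow-caching-with-extensions idea can be sketched as follows; the data structures and names are illustrative assumptions, not Oko's actual implementation:

```python
# Toy sketch: cached flow entries carry the subset of filter programs that must
# run on matching packets, so classification cost is paid once per flow.
from typing import Callable

FlowKey = tuple  # (src, dst, sport, dport)
Filter = Callable[[bytes], bool]              # returns True to drop the packet

flow_cache: dict = {}

def classify(key: FlowKey):
    """Slow path: full table lookup deciding output port and relevant filters."""
    filters = [lambda pkt: len(pkt) > 1500]   # e.g. an MTU-check extension
    return ("port1", filters)

def process(key: FlowKey, pkt: bytes):
    if key not in flow_cache:
        flow_cache[key] = classify(key)       # slow path, once per flow
    port, filters = flow_cache[key]           # fast path afterwards
    if any(f(pkt) for f in filters):          # run only this flow's extensions
        return None                           # dropped by an extension
    return port

key = ("10.0.0.1", "10.0.0.2", 1234, 80)
print(process(key, b"x" * 100))    # 'port1'
print(process(key, b"x" * 2000))   # None: dropped by the filter
```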
Sasportas, Raphaël. "Etude d'architectures dédiées aux applications temps réel d'analyse d'images par morphologie mathématique". Paris, ENMP, 2002. http://www.theses.fr/2002ENMP1082.
Mora, Pascal. "Étude et caractérisation de la fiabilité de cellules mémoire non volatiles pour des technologies CMOS et BICMOS avancées". Grenoble INPG, 2007. http://www.theses.fr/2007INPG0065.
Today, "Flash-like" memory solutions compatible with CMOS technologies are in great demand. However, their integration in digital technologies is more and more difficult due to physical barriers related to the non-volatility of the structure. Indeed, several process steps are not optimized for this type of device and induce reliability issues. In this context, the thesis consists of three major axes of work. First, we studied the failure mechanisms. The second axis is the evaluation of the impact of both the processes and the architecture on cell reliability. The last objective is to improve the test structures and the analysis methods. A focus is placed on the data-retention aspect through a thorough study of the fast charge loss phenomenon, a critical issue for the reliability of embedded non-volatile memories. The technological solutions proposed make it possible to push forward the limits of the integration of this kind of memory.
Thevenin, Mathieu. "Conception et validation d'un processeur programmable de traitement du signal à faible consommation et à faible empreinte silicium : application à la vidéo HD sur téléphone mobile". Phd thesis, Université de Bourgogne, 2009. http://tel.archives-ouvertes.fr/tel-00504704.
Carbillet, Thomas. "Monitoring en temps réel de la vitesse de déplacement sur dispositif connecté : modélisation mathématique sur plateforme mobile interfacée avec une base de données d'entraînement et d'audit physiologique". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM013/document.
The improvement of running performance has become a major topic lately; we are getting closer to running a marathon in under 2 hours. However, few professionals work transversally on pre-race and in-race preparation for the general public. Training plans are based on trainers' experience and are often not custom-made, which exposes runners to injury risk and motivation loss. The current analysis of training plans seems to have reached a limit, and the aim of BillaTraining® is to go beyond it by connecting research with the general public of runners. This PhD has two main goals. The first is to contribute to research about running. After gathering and formatting training and race data from different origins, we tried to isolate and describe how humans run marathons, including 2.5 to 4-hour performances. We studied acceleration, speed and heart-rate time series, among other things, with the idea of understanding the different running strategies. The second is the development of a web application embracing the three steps of the BillaTraining® method. The first step is an energetic audit, a 30-minute running session guided by the runner's sensations. The second step is the energetic radar, which presents the results of the audit. The last step is a tailor-made training plan built according to the runner's objectives. In order to come up with a solution, we had to bring together Physiology, Mathematics and Computer Science. The knowledge we had in Physiology was based on Professor Véronique Billat's past and current research. This research is now part of BillaTraining® and is central to the growth of the company. We used Mathematics to describe physiological phenomena through Statistics. By applying the Ornstein-Uhlenbeck model, we found that humans are able to run at an even acceleration. By using the PELT (Pruned Exact Linear Time) method, we automated changepoint detection in time series. Finally, Computer Science allowed a communication between Physiology and Mathematics for research, as well as marketing training tools at the forefront of innovation.
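Changepoint detection with PELT, as mentioned in this abstract, can be sketched with the open-source ruptures library; the synthetic speed trace, cost model and penalty below are assumptions, not the thesis's settings:

```python
# PELT changepoint detection on a synthetic running-speed series.
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(1)
# Synthetic speed trace (m/s sampled every second): warm-up, steady pace, surge.
speed = np.concatenate([
    rng.normal(2.8, 0.15, 300),
    rng.normal(3.4, 0.15, 600),
    rng.normal(3.9, 0.15, 300),
])

algo = rpt.Pelt(model="l2", min_size=60).fit(speed)
breakpoints = algo.predict(pen=5)   # indices where the mean speed changes
print(breakpoints)                  # expected near [300, 900, 1200]
```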
Aido, Laetitia. "Optimisation de la fabrication de l'indole-2 carboxylate d'éthyle : automatisation de la réaction de Wislicenus et Thoma et de la réaction d'hydrogénation de Brehm, étude de différents systèmes de commande : corrélateur logique (...) microprocesseur". Paris 6, 1986. http://www.theses.fr/1986PA066370.
Lamer, Antoine. "Contribution à la prévention des risques liés à l'anesthésie par la valorisation des informations hospitalières au sein d'un entrepôt de données". Thesis, Lille 2, 2015. http://www.theses.fr/2015LIL2S021/document.
Introduction: Hospital Information Systems (HIS) manage and register, every day, millions of data items related to patient care: biological results, vital signs, drug administrations, care processes... These data, stored by operational applications, provide remote access and a comprehensive picture of the Electronic Health Record. They may also be used for other purposes, such as clinical research or public health, particularly when integrated in a data warehouse. Some studies have highlighted a statistical link between compliance with quality indicators related to the anesthesia procedure and patient outcome during the hospital stay. In the University Hospital of Lille, these quality indicators, as well as patient comorbidities during the post-operative period, can be assessed with data collected by applications of the HIS. The main objective of this work is to integrate data collected by operational applications in order to carry out clinical research studies. Methods: First, the quality of the data registered by the operational applications is evaluated with methods proposed in the literature or developed in this work. Data quality problems highlighted by the evaluation are then managed during the integration step of the ETL process. New data are computed and aggregated in order to provide indicators of quality of care. Finally, two studies demonstrate the usability of the system. Results: Pertinent data from the HIS have been integrated into an anesthesia data warehouse. This system stores data about hospital stays and interventions (drug administrations, vital signs...) since 2010. Aggregated data have been developed and used in two clinical research studies. The first study highlighted a statistical link between the induction and patient outcome. The second study evaluated compliance with quality indicators of ventilation and the impact on comorbidity. Discussion: The data warehouse and the cleaning and integration methods developed as part of this work allow statistical analyses to be performed on more than 200,000 interventions. This system can be implemented with other applications used in the CHRU of Lille, but also with the Anesthesia Information Management Systems used by other hospitals.
Saif, Abdulqawi. "Experimental Methods for the Evaluation of Big Data Systems". Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0001.
In the era of big data, many systems and applications are created to collect, store, and analyze massive data in multiple domains. Although these big data systems are subjected to multiple evaluations during their development life-cycle, academia and industry encourage further experimentation to ensure their quality of service and to understand their performance under various contexts and configurations. However, the experimental challenges of big data systems are not trivial. While many pieces of research still employ legacy experimental methods to face such challenges, we argue that experimentation activity can be improved by proposing flexible experimental methods. In this thesis, we address particular challenges to improve experimental context and observability for big data experiments. We first enable experiments to customize the performance of their environmental resources, encouraging researchers to perform scalable experiments over heterogeneous configurations. We then introduce two experimental tools, IOscope and MonEx, to improve observability. IOscope performs low-level observations on the I/O stack to detect potential performance issues in target systems, showing that high-level evaluation techniques should be accompanied by such complementary tools to understand systems' performance. In contrast, the MonEx framework works at higher levels to facilitate experimental data collection. MonEx opens directions for practicing experiment-based monitoring independently of the underlying experimental environments. We finally apply statistics to improve experimental designs, reducing the number of experimental scenarios and obtaining a refined set of experimental factors as fast as possible. All contributions complement each other to facilitate the experimentation activity, covering almost all phases of big data experiments' life-cycle.
Dubuc, Carole. "Vers une amélioration de l’analyse des données et une optimisation des plans d’expérience pour une analyse quantitative du risque en écotoxicologie". Thesis, Lyon 1, 2013. http://www.theses.fr/2013LYO10039/document.
In ecotoxicology, the effects of toxic compounds on living organisms are usually measured at the individual level, in the laboratory and according to standards. This ensures the reproducibility of bioassays and the control of environmental factors. Bioassays, for acute or chronic toxicity, generally address the survival, reproduction and growth of organisms. The statistical analysis of standardized bioassays classically leads to the estimation of critical effect concentrations used in risk assessment. Nevertheless, several methods/models are used to determine a critical effect concentration, and these methods/models are more or less adapted to the type of data. The first aim of this work is to select the most adapted methods/models in order to improve data analysis and thus the estimation of critical effect concentrations. Usually, data sets are built from standard bioassays and so follow recommendations about exposure duration, number and range of tested concentrations, and number of individuals per concentration. These recommendations may not be the best adapted to each critical effect concentration and each method/model. That is why the second aim of this work is to optimize the experimental design in order to improve the estimation of critical effect concentrations for a fixed cost, or at least to reduce the waste of time and organisms.
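The estimation of a critical effect concentration such as the EC50 is classically done by fitting a dose-response model; the two-parameter log-logistic fit below is a generic sketch on invented data, not the thesis's models:

```python
# Fitting a log-logistic dose-response curve to bioassay data to estimate EC50.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(conc, ec50, slope):
    """Two-parameter log-logistic: fraction of effect vs. concentration."""
    return 1.0 / (1.0 + (conc / ec50) ** (-slope))

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])        # mg/L (invented)
effect = np.array([0.02, 0.08, 0.30, 0.65, 0.90, 0.98])  # observed fraction

(ec50, slope), cov = curve_fit(log_logistic, conc, effect, p0=(1.0, 1.0))
se = np.sqrt(np.diag(cov))
print(f"EC50 = {ec50:.2f} mg/L (SE {se[0]:.2f}), slope = {slope:.2f}")
```

Optimizing the design then amounts to choosing exposure durations, concentration ranges and group sizes that minimize the standard error of such estimates for a fixed experimental cost.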
Soyez, Olivier. "Stockage dans les systèmes pair à pair". Phd thesis, Université de Picardie Jules Verne, 2005. http://tel.archives-ouvertes.fr/tel-00011443.
Texto completoDans un premier temps, nous avons créé un prototype Us et conçu une interface utilisateur, nommée UsFS, de type système de fichiers. Un procédé de journalisation des données est inclus dans UsFS.
Ensuite, nous nous sommes intéressés aux distributions de données au sein du réseau Us. Le but de ces distributions est de minimiser le dérangement occasionné par le processus de reconstruction pour chaque pair. Enfin, nous avons étendu notre schéma de distribution pour gérer le comportement dynamique des pairs et prendre en compte les corrélations de panne.
Modeley, Derek. "Etude des états doublement excités de H- et des processus de seuil dans les collisions H-/gaz rare par spectroscopie électronique à zéro degré". Paris 6, 2003. http://www.theses.fr/2003PA066458.
Pois, Véronique. "La Caune de l'Arago (Pyrénées-Orientales) : visualisation spatiale, en coupe et en plan, du matériel archéologique par interrogation de la base de données matériel paléontologique et préhistorique. Conséquences sur l'interprétation du mode de vie de l'homme de Tautavel". Paris, Muséum national d'histoire naturelle, 1998. http://www.theses.fr/1998MNHN0001.
Texto completoMichon, Philippe. "Vers une nouvelle architecture de l'information historique : L'impact du Web sémantique sur l'organisation du Répertoire du patrimoine culturel du Québec". Mémoire, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/8776.
Ben Taleb, Romain. "Modélisation et optimisation des actifs pour l'aide à la prise de décision stratégique dans les entreprises". Electronic Thesis or Diss., Ecole nationale des Mines d'Albi-Carmaux, 2024. http://www.theses.fr/2024EMAC0001.
The tools and methods used to assist strategic decision-making, particularly in SMEs, face several limitations: they are primarily deterministic, based on past data, and framed by an almost exclusively accounting-and-financial approach. However, strategic decisions in a company are future-oriented activities, highly subject to uncertainty, which aim to maximize the value generated by the company, whether financial or not. In this context, the research question addressed in this thesis is: how can business leaders be assisted in making prospective strategic decisions in a context subject to uncertainty? In terms of contributions, we first propose a conceptual framework based on a meta-model that represents a company according to a logic of assets and value. This modeling is then enriched with a causality diagram that establishes the dynamics existing between the assets that create value. To illustrate the applicability of this conceptual framework, an approach is proposed using experimental designs based on a simulation model on the one hand and a Mixed Integer Programming optimization model on the other. A set of experiments validates the relevance of the proposal, notably by identifying the consequences of the decisions made on each asset in terms of value generated for the company.
Schramm, Catherine. "Intégration des facteurs prédictifs de l'effet d'un traitement dans la conception et l'analyse des essais cliniques de petite taille : application à la maladie de Huntington". Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066610/document.
Huntington's disease is neurodegenerative, genetic, rare and multifaceted, and has a long evolution, inducing heterogeneity in the conditions and progression of the disease. Current biotherapy trials are performed on small samples of patients, with a heterogeneous treatment effect measurable only in the long term. Identifying markers of disease progression and of treatment response may help to better understand and improve the results of biotherapy studies in Huntington's disease. We developed a clustering method for treatment efficacy on longitudinal data in order to identify treatment responders and non-responders. Our method combines a linear mixed model with two slopes and a classical clustering algorithm. The mixed model generates random effects associated with treatment response, specific to each patient. The clustering algorithm is used to define subgroups according to the values of the random effects. Our method is robust for small samples. Finding subgroups of responders may help to define predictive markers of treatment response, which would then be used to give the most appropriate treatment to each patient. We discussed the integration of (i) predictive markers in the design of future clinical trials, assessing their impact on the power of the study; and (ii) prognostic markers of disease progression, by studying the COMT polymorphism as a prognostic marker of cognitive decline in Huntington's disease. Finally, we evaluated the learning effect of neuropsychological tasks measuring cognitive abilities, and showed how a double baseline in a clinical trial could take it into account when the primary outcome is cognitive decline.
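The two-step method described in this abstract can be sketched as follows: fit a linear mixed model with per-patient random slopes, then cluster the random effects. The simulated data and column names are assumptions for illustration:

```python
# Mixed model + clustering of random slopes to separate responders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
rows = []
for pid in range(40):
    responder = pid < 15                                  # 15 responders by design
    slope = rng.normal(-1.0 if responder else 0.0, 0.2)   # per-patient decline rate
    for t in range(6):
        rows.append({"patient": pid, "time": t,
                     "score": 50 + slope * t + rng.normal(0, 1)})
data = pd.DataFrame(rows)

# Random intercept and random slope on time for each patient.
fit = smf.mixedlm("score ~ time", data, groups=data["patient"],
                  re_formula="~time").fit()
slopes = np.array([re["time"] for re in fit.random_effects.values()]).reshape(-1, 1)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(slopes)
print(np.bincount(labels))   # two subgroups, expected sizes near 15 and 25
```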
Idris, Jamal. "Accidents géotechniques des tunnels et des ouvrages souterrains - Méthodes analytiques pour le retour d'expérience et la modélisation numérique". Thesis, Vandoeuvre-les-Nancy, INPL, 2007. http://www.theses.fr/2007INPL070N/document.
The instability of underground works is an important cause of accidents during their construction and operation. Experience feedback from previous accidents is one of the methods used to improve the prevention of such accidents during the design and construction of new underground projects. A bibliographical search enabled us to establish a database of tunnel and underground-construction accidents throughout the world. This database currently contains 230 cases related to the construction and operation phases; each case is characterized by several variables associated with the instability phenomena and with the geometrical and geomechanical characteristics of the concerned structure. The causes and consequences of the instability phenomena were also analysed, especially those related to the particular geological context and the geotechnical characteristics of the surrounding ground. The established database enabled us to carry out several analyses of instability phenomena, such as a factorial correspondence analysis, which aims to discover the relations between instability phenomena and their explanatory variables. This study also proposes two representative numerical models of vaulted tunnels supported by masonry. Based on numerical simulation and the experimental design technique, it analyses the mechanical behaviour of the masonry support and its evolution over time; the influence of certain mechanical parameters of the masonry was quantified and evaluated by various analysis methods, such as multivariate analysis of variance and linear modelling by multiple regression.
Rojas, Castro Dalia Marcela. "The RHIZOME architecture : a hybrid neurobehavioral control architecture for autonomous vision-based indoor robot navigation". Thesis, La Rochelle, 2017. http://www.theses.fr/2017LAROS001/document.
The work described in this dissertation is a contribution to the problem of autonomous indoor vision-based mobile robot navigation, which is still a vast ongoing research topic. It addresses the problem by trying to reconcile the differences found among state-of-the-art control architecture paradigms and navigation strategies. Hence, the author proposes the RHIZOME architecture (Robotic Hybrid Indoor-Zone Operational ModulE): a unique robotic control architecture capable of creating a synergy of different approaches by merging them into a neural system. The interactions of the robot with its environment and the multiple neural connections allow the whole system to adapt to navigation conditions. The RHIZOME architecture preserves all the advantages of behavior-based architectures, such as rapid responses to unforeseen problems in dynamic environments, while combining them with the a priori knowledge of the world used in deliberative architectures. However, this knowledge is used only to corroborate the dynamic visual perception information and embedded knowledge, instead of directly controlling the actions of the robot as most hybrid architectures do. The information is represented by a sequence of artificial navigation signs, expected to be found in the navigation path, leading to the final destination. Such a sequence is provided to the robot either by means of a program command or by letting it extract the sequence itself from a floor plan; the latter implies the execution of a floor-plan analysis process. Consequently, in order to take the right decision during navigation, the robot processes both sets of information, compares them in real time and reacts accordingly. When navigation signs are not present in the environment as expected, the RHIZOME architecture builds new reference places from landmark constellations extracted from these places and learns them. Thus, during navigation, the robot can use this new information to reach its final destination by overcoming unforeseen situations. The overall architecture has been implemented on the NAO humanoid robot. Real-time experimental results during indoor navigation, under both deterministic and stochastic scenarios, show the feasibility and robustness of the proposed unified approach.
Manzon, Diane. "Développement de nouveaux outils en plans d'expériences adaptés à l'approche quality by design (QbD) dans un contexte biologique". Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0223.
The quality by design (QbD) approach is a recent concept initiated by quality control which has led to new requirements from regulatory authorities, particularly in the pharmaceutical industry. Among these, the guideline ICH Q8 explains that quality should not be tested on finished products but should be integrated throughout, from design to finished product. This approach is characterised by different steps, one of which is risk assessment, which mainly involves identifying the critical parameters of a process or formulation. To do this, experiments must be carried out, and suitable experimental designs enable the studied phenomena to be modelled and response surfaces to be represented in the experimental space to be explored. In this space, the Food and Drug Administration recommends delimiting a sub-space, called the "Design Space", characterised by a certain probability that the output parameters comply with the specifications. This Design Space usually has an arbitrary geometric shape, which means that the acceptable variation range of a parameter will depend on the value of another parameter. To overcome this constraint, and thus define a "Proven Acceptable Independent Range" for each parameter studied, we used and adapted different methods. Their respective performance, in terms of defining an acceptable variation range for each parameter independently, was tested in different case studies.
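The probabilistic definition of the Design Space can be sketched by Monte Carlo over a grid of process parameters; the quadratic model, specification limit and 95% quality target below are invented for illustration:

```python
# Monte Carlo estimation of P(in-spec) over a parameter grid to delimit a
# probabilistic design space.
import numpy as np

rng = np.random.default_rng(3)

def response(x1, x2):
    """Assumed fitted response-surface model (coded units in [-1, 1])."""
    return 80 + 6 * x1 + 4 * x2 - 5 * x1**2 - 3 * x1 * x2

SPEC_MIN, SIGMA, N_MC = 82.0, 1.5, 2000   # spec limit, residual SD, draws

grid = np.linspace(-1, 1, 21)
design_space = []
for x1 in grid:
    for x2 in grid:
        draws = response(x1, x2) + rng.normal(0, SIGMA, N_MC)
        if (draws >= SPEC_MIN).mean() >= 0.95:   # quality target: P(in-spec) >= 95%
            design_space.append((round(x1, 2), round(x2, 2)))

print(len(design_space), "of", grid.size**2, "grid points are in the design space")
```

A "Proven Acceptable Independent Range" then corresponds to the largest axis-aligned rectangle that fits inside this (usually irregular) region, so each parameter can vary independently within its range.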
Hossain, Mohaimenul. "Green Metrics to Improve Sustainable Networking". Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0201.
Achieving energy efficiency has in recent times become a major concern of networking research, due to the ever-escalating power consumption and CO2 emissions produced by large data networks. This problem is becoming more and more challenging because of the drastic traffic increase of the last few years, and traffic is expected to increase even more in the coming years. Efficient energy-aware strategies could overturn this situation by reducing electricity consumption and mitigating the environmental impact of data transmission networks. However, CO2 emissions and energy consumption cannot be considered proportional if the means of electricity production differ. This research work focuses on reducing the environmental impact of data transmission networks by implementing energy-aware routing, where unused network devices are put to sleep or shut down and high-capacity links are adapted to the demand. Alongside energy, this work introduces two different metrics, namely the carbon emission factor and the non-renewable energy usage percentage, which are considered as objective functions for designing a green network. A centralized approach, Software-Defined Networking (SDN), is used to solve this problem, as it allows flexible programmability. We propose a routing technique using a genetic algorithm that minimizes the number of network elements required and at the same time adapts the bandwidth capacity while satisfying the incoming traffic load. Different from existing related works, we focus on optimizing not only energy consumption but also carbon emissions and non-renewable energy consumption in SDN, in order to close this important gap in the literature and provide solutions compatible with operational backbone networks. Complementing the general aim of improving the environmental impact of data transmission networks, this research also covers important related features such as realistic large demand sizes, network performance, and Quality of Service (QoS) requirements. At the same time, this work focuses on network stability and analyzes the impact on stability of implementing a green solution. We propose a penalty and filtering mechanism which helps to find an optimal balance between stability and green networking. By using realistic input data, significant numbers of switched-off links and nodes are reached, which demonstrates the effectiveness of our algorithm. The obtained results validate the importance of considering environmental factors rather than energy alone. Results also show the trade-off between environmental and performance concerns, considering a couple of performance indicators. Moreover, it is shown that the penalty and filtering mechanism is an effective approach to avoid incoherent configurations and improve the stability of the system. As a whole, the research and contributions reported in this manuscript stand as a valuable step on the road to sustainable networking.
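The genetic-algorithm formulation can be sketched through its fitness function, which is where the energy and carbon metrics enter; the toy topology, coefficients and encoding below are assumptions, and the selection/crossover operators are omitted:

```python
# Toy fitness for green routing: a genome switches a subset of links off; its
# score mixes power draw, carbon-emission factor, and a feasibility penalty.
import random

LINKS = ["A-B", "B-C", "A-C", "C-D", "B-D"]
POWER_W = {"A-B": 50, "B-C": 40, "A-C": 60, "C-D": 45, "B-D": 55}
CO2_FACTOR = {"A-B": 0.9, "B-C": 0.1, "A-C": 0.9, "C-D": 0.5, "B-D": 0.1}

def connected_with(active):
    """Crude feasibility check: every node still reachable from node 'A'."""
    seen, frontier = {"A"}, ["A"]
    while frontier:
        n = frontier.pop()
        for l in active:
            u, v = l.split("-")
            for a, b in ((u, v), (v, u)):
                if a == n and b not in seen:
                    seen.add(b); frontier.append(b)
    return seen == {"A", "B", "C", "D"}

def fitness(genome, w_co2=2.0):
    """Lower is better: energy + weighted carbon; infeasible genomes penalized."""
    active = [l for l, g in zip(LINKS, genome) if g]
    if not connected_with(active):
        return float("inf")
    return sum(POWER_W[l] * (1 + w_co2 * CO2_FACTOR[l]) for l in active)

population = [[random.randint(0, 1) for _ in LINKS] for _ in range(50)]
best = min(population, key=fitness)
print(best, fitness(best))   # a connected sub-topology favoring low-carbon links
```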
Lardin, Pauline. "Estimation de synchrones de consommation électrique par sondage et prise en compte d'information auxiliaire". Phd thesis, Université de Bourgogne, 2012. http://tel.archives-ouvertes.fr/tel-00842199.
Couffignal, Camille. "Variabilité de la réponse pharmacologique, modélisation et influence des plans expérimentaux". Electronic Thesis or Diss., Université Paris Cité, 2021. http://www.theses.fr/2021UNIP5250.
The increasing number of patients with chronic diseases, most of whom are subject to long-term treatment, justifies the exploration and characterisation of phenotypic and genetic factors of pharmacological response. The identification and estimation of the pharmacokinetic-pharmacodynamic variability involved in the response to a treatment are essential steps in this exploration to achieve precision medicine. We studied the re-introduction of β-blockers in a prospective multicentre cohort of patients on chronic β-blocker therapy who underwent cardiac surgery. With a landmark analysis, we showed the efficacy of reintroducing β-blockers 72 hours after cardiac surgery on the occurrence of atrial fibrillation. We modelled, using a population approach, the serum, erythrocyte and urine concentration data of once-daily sustained-release lithium in bipolar patients under treatment for at least two years. A clinical research protocol was then written, with an optimization of sampling times based on the pharmacokinetic model obtained, to characterise inter- and intra-individual variability and to identify predictive factors of the prophylactic response to lithium. We evaluated, by simulation, the impact of the crossover versus parallel design, as well as of the choice of statistical model during the analysis, in pharmacogenetic studies evaluating two treatments (candidate and reference) when a genetic polymorphism does or does not increase the efficacy of the candidate treatment compared to the reference. The results of this simulation study show that the choice of the model and the choice of the experimental design strongly affect not only the type I error and the power to detect a gene-treatment interaction, but also the correct allocation of the treatment. This work reinforces the need to use adequate statistical tools and experimental designs in the analysis of a clinical trial or pharmacoepidemiological study to characterize and quantify the variability of the pharmacological response.
Seddiki, Sélim. "Contribution au développement du détecteur de Vertex de l'expérience CBM et étude de faisabilité des mesures du flot elliptique des particules à charme ouvert". Phd thesis, Université de Strasbourg, 2012. http://tel.archives-ouvertes.fr/tel-00862654.
Lelo, Emmanuel Bernard. "Problème des données manquantes dans un plan d'analyse de variance à mesures répétées". Thèse, 2003. http://hdl.handle.net/1866/14582.
Texto completoMichal, Victoire. "Estimation multi-robuste efficace en présence de données influentes". Thèse, 2019. http://hdl.handle.net/1866/22553.
Vallée, Audrey-Anne. "Estimation de la variance en présence de données imputées pour des plans de sondage à grande entropie". Thèse, 2014. http://hdl.handle.net/1866/11120.
Variance estimation in the case of item nonresponse treated by imputation is the main topic of this work. Treating the imputed values as if they were observed may lead to substantial under-estimation of the variance of point estimators. Classical variance estimators rely on the availability of the second-order inclusion probabilities, which may be difficult (or even impossible) to calculate. We propose to study the properties of variance estimators obtained by approximating the second-order inclusion probabilities. These approximations are expressed in terms of first-order inclusion probabilities and are usually valid for high-entropy sampling designs. The results of a simulation study evaluating the properties of the proposed variance estimators in terms of bias and mean squared error are presented.
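One classical approximation of this kind is Hájek's, valid for high-entropy designs: π_kl ≈ π_k π_l [1 − (1 − π_k)(1 − π_l)/d] with d = Σ_k π_k(1 − π_k) over the population. The sketch below plugs it into a Horvitz-Thompson-type variance estimator; the data are simulated and Poisson sampling merely stands in for a high-entropy design, so this is illustrative rather than the thesis's exact estimators:

```python
# Approximate HT variance using Hájek's approximation of second-order
# inclusion probabilities.
import numpy as np

rng = np.random.default_rng(4)
N = 500
pi_U = rng.uniform(0.05, 0.5, N)          # first-order probs, whole population
d = np.sum(pi_U * (1 - pi_U))             # population-level quantity in Hájek's formula

sampled = rng.random(N) < pi_U            # Poisson draw as a high-entropy stand-in
pi = pi_U[sampled]
y = rng.normal(50, 10, sampled.sum())     # responses observed on the sample only
w = y / pi                                # expanded values y_k / pi_k

# Hájek approximation of pi_kl for sampled pairs.
pij = np.outer(pi, pi) * (1 - np.outer(1 - pi, 1 - pi) / d)
delta = (pij - np.outer(pi, pi)) / pij    # HT variance weights, off-diagonal
np.fill_diagonal(delta, 1 - pi)           # diagonal terms: (1 - pi_k)

var_total = w @ delta @ w                 # estimated variance of the HT total
print(f"HT total = {w.sum():.0f}, approx variance = {var_total:.0f}")
```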