Theses on the topic "Memory optimisation"
Create a precise citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses for your research on the topic "Memory optimisation".
Next to every source in the list of references there is an "Add to bibliography" button. Press the button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organise your bibliography correctly.
Forrest, B. M. "Memory and optimisation in neural network models". Thesis, University of Edinburgh, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.384164.
Fargus, Alexander. "Optimisation of correlation matrix memory prognostic and diagnostic systems". Thesis, University of York, 2015. http://etheses.whiterose.ac.uk/9032/.
Uzor, Chigozirim. "Compact dynamic optimisation algorithm". Thesis, De Montfort University, 2015. http://hdl.handle.net/2086/13056.
Beyler, Jean-Christophe. "Dynamic software memory access optimization : Dynamic low-cost reduction of memory latencies by binary analyses and transformations". Université Louis Pasteur (Strasbourg) (1971-2008), 2007. http://www.theses.fr/2007STR13171.
This thesis concerns the development of dynamic approaches for controlling the hardware/software couple. More precisely, the work presented here aims to minimize program execution times on mono- or multi-processor architectures by anticipating memory accesses through dynamic prefetching of useful data into cache memory, in a way that is entirely transparent to the user. The developed systems consist of a dynamic analysis phase, in which memory access latencies are measured; a binary transformation phase, in which data-prefetching instructions are inserted into the binary code for accesses evaluated as worth optimizing; a dynamic analysis phase assessing the efficiency of the optimizations; and finally a cancellation phase for transformations evaluated as inefficient. Every phase applies individually to every memory access, and may apply several times if an access's behavior varies during the execution of the target software.
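The four-phase cycle described in this abstract (measure, transform, re-evaluate, cancel) can be sketched as a simple decision procedure. This is only an illustrative reading of the abstract; the function name, the input shape, and the latency threshold are assumptions, not the thesis' implementation.

```python
# Hedged sketch of the analyze/optimize/re-evaluate/cancel cycle described
# above. All names and the threshold value are illustrative assumptions.

def adaptive_prefetch_cycle(accesses, latency_threshold=100.0):
    """Decide, per memory access site, whether an inserted prefetch stays.

    `accesses` maps a site id to (latency_before, latency_after), i.e. the
    measured latencies without and with the inserted prefetch instruction.
    """
    decisions = {}
    for site, (before, after) in accesses.items():
        if before <= latency_threshold:
            decisions[site] = "left untouched"      # access is already fast
        elif after < before:
            decisions[site] = "prefetch kept"       # transformation pays off
        else:
            decisions[site] = "prefetch cancelled"  # revert inefficient change
    return decisions

result = adaptive_prefetch_cycle({
    "load_a": (250.0, 80.0),   # slow access, prefetch helps
    "load_b": (40.0, 45.0),    # fast access, never instrumented
    "load_c": (300.0, 310.0),  # prefetch hurts, transformation reverted
})
```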
Maalej, Kammoun Maroua. "Low-cost memory analyses for efficient compilers". Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1167/document.
This thesis was motivated by the emergence of massively parallel processing and supercomputing, which tend to make computer programming extremely performance-driven. Speedup, power consumption, and the efficiency of both software and hardware are nowadays the main concerns of the information systems community. Handling memory in a correct and efficient way is a step toward less complex and better performing programs and architectures. This thesis falls into this context and contributes to the memory analysis and compilation fields in both theoretical and experimental aspects. Besides a deep study of the current state of the art of memory analyses and their limitations, our theoretical results consist in designing new algorithms that recover part of the imprecision that published techniques still show. Among the present limitations, we focus our research on pointer arithmetic, to disambiguate pointers within the same data structure. We develop our analyses in the abstract interpretation framework. The key idea behind this choice is correctness and scalability: two requisite criteria for analyses to be embedded in compiler construction. The first alias analysis we design is based on the range lattice of integer variables: given a pair of pointers defined from a common base pointer, they are disjoint if their offsets cannot have values that intersect at runtime. The second pointer analysis we develop is inspired by the Pentagon abstract domain: we conclude that two pointers do not alias whenever we are able to build a strict relation between them, valid at the program points where the two variables are simultaneously alive. In a third algorithm, we combine the first and second analyses and enhance them with a coarse-grained but efficient analysis to deal with unrelated pointers. We implement these analyses on top of the LLVM compiler. We experiment and evaluate their performance based on two metrics: the number of disambiguated pairs of pointers compared to the compiler's common analyses, and the optimizations further enabled thanks to the extra precision they introduce.
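The core of the first analysis described above (pointers from a common base are disjoint if their offset ranges cannot intersect) reduces to an interval-disjointness test. A minimal sketch, with names and the inclusive-interval representation as assumptions:

```python
# Illustrative sketch, not the thesis implementation: two pointers derived
# from the same base pointer do not alias if the ranges of their possible
# byte offsets cannot intersect at runtime.

def ranges_disjoint(range_a, range_b):
    """Each range is an inclusive (lo, hi) interval of byte offsets."""
    lo_a, hi_a = range_a
    lo_b, hi_b = range_b
    return hi_a < lo_b or hi_b < lo_a

# p = base + i with i in [0, 3];  q = base + j with j in [8, 15]
disjoint_pq = ranges_disjoint((0, 3), (8, 15))       # provably no alias
may_alias = not ranges_disjoint((0, 10), (8, 15))    # offsets may coincide
```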
Munns, Joseph. "Optimisation and applications of a Raman quantum memory for temporal modes of light". Thesis, Imperial College London, 2018. http://hdl.handle.net/10044/1/63867.
Alsaiari, Mabkhoot Abdullah. "High throughput optimisation of functional nanomaterials and composite structures for resistive switching memory". Thesis, University of Southampton, 2018. https://eprints.soton.ac.uk/422863/.
Sahakyan, Marina. "Optimisation des mises à jours XML pour les systèmes main-memory: implémentation et expériences". Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00641579.
Kaeslin, Alain E. "Performance Optimisation of Discrete-Event Simulation Software on Multi-Core Computers". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191132.
SIMLOX is a commercial software product developed by Systecon AB, whose core is a discrete-event simulation engine for analysing maintenance solutions for complex technical systems. To handle large problems, the simulation is executed in parallel, which in theory should give nearly linear scaling with the number of threads. The performance improvement observed in practice was, however, very limited, so a thorough scalability analysis was carried out in this project. Using a low-overhead profiling tool and microarchitecture analysis, the causes were identified: atomic operations creating heavy communication overhead, poor locality leading to fragmentation in physical-address translation and poor use of the TLB cache, and certain CPU-intensive bottlenecks. Optimizations were then implemented and tested to avoid the identified problems. The tested solutions include eliminating expensive operations, more efficient memory management through scalable memory allocators, and data structures with better locality and hence better cache utilisation. Verification on real test cases showed speedups of at least 6.75x on an 8-core processor, and most cases showed speedups by a factor greater than 7.2. The optimizations also yielded a speedup of at least 1.5x for sequential single-threaded execution. The conclusion is thus that nearly linear scaling with the number of cores is achievable for this type of discrete-event simulation.
Laga, Arezki. "Optimisation des performance des logiciels de traitement de données sur les périphériques de stockage SSD". Thesis, Brest, 2018. http://www.theses.fr/2018BRES0087/document.
The growing volume of data poses a real challenge to data processing software like DBMSs (DataBase Management Systems) and to data storage infrastructure. New technologies have emerged in order to face the data volume challenges. In this thesis we considered the emerging new external memories, i.e. flash-memory-based storage devices named SSDs (Solid State Drives). SSD storage devices offer a performance gain compared to traditional magnetic devices. However, SSDs come with a new performance model that calls for I/O cost optimization in data processing and management algorithms. We proposed in this thesis an I/O cost model to evaluate data processing algorithms; this model considers mainly the SSD I/O performance and the data distribution. We also proposed a new external sorting algorithm, MONTRES. This algorithm includes optimizations that reduce the I/O cost when the volume of data is greater than the allocated memory space by an order of magnitude. We finally proposed a data prefetching mechanism, Lynx, which makes use of a machine learning technique to predict and anticipate future accesses to the external memory.
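To give a feel for the kind of I/O cost model this abstract mentions, here is a back-of-the-envelope page-transfer count for a classic multi-way external merge sort. This is a generic textbook model, not the thesis' MONTRES model (which also accounts for SSD read/write asymmetry and data distribution); the function name and page-based accounting are assumptions.

```python
import math

# Generic I/O cost sketch for external merge sort: count page reads + writes.
# Not the thesis' cost model; illustrative only.

def external_sort_io_cost(n_pages, buffer_pages):
    """Total page transfers (reads + writes) for multi-way external sort."""
    if n_pages <= buffer_pages:
        return 2 * n_pages  # fits in memory: one read pass + one write pass
    runs = math.ceil(n_pages / buffer_pages)          # initial sorted runs
    merge_passes = math.ceil(math.log(runs, buffer_pages - 1))
    return 2 * n_pages * (1 + merge_passes)           # run creation + merges

# 1000-page file, 10 buffer pages: 100 runs, 3 merge passes, 8000 transfers
cost = external_sort_io_cost(1000, 10)
```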
Chekaf, Mustapha. "Capacité de la mémoire de travail et son optimisation par la compression de l'information". Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCC010/document.
Simple span tasks are commonly used to measure short-term memory, while complex span tasks are usually considered typical measures of working memory. Because complex span tasks were designed to create a concurrent task, the average span is usually lower (4 ± 1 items) than in simple span tasks (7 ± 2 items). One possible reason for measuring higher spans during simple span tasks is that, in the absence of a concurrent task, participants can profit from the spare time between stimuli to detect and recode regularities in the stimulus series, and such regularities can be used to pack a few stimuli into 4 ± 1 chunks. Our main hypothesis was that information compression in immediate memory is an excellent indicator for studying the relationship between immediate-memory capacity and fluid intelligence. The idea is that both depend on the efficiency of information processing and, more precisely, on the interaction between storage and processing. We developed various span tasks measuring a chunking capacity, in which the compressibility of the memoranda was estimated using different algorithmic complexity metrics. The results showed that compressibility can be used to predict working-memory performance, and that fluid intelligence is well predicted by the ability to compress information. We conclude that the ability to compress information in working memory is the reason why both manipulation and retention of information are linked to intelligence.
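The abstract estimates the compressibility of stimulus series with algorithmic-complexity metrics. As a rough, hedged stand-in for such a metric (not the one used in the thesis), a general-purpose compressor's ratio can serve as a proxy for how regular, and hence how chunkable, a sequence is:

```python
import zlib

# Compression ratio as a crude proxy for algorithmic complexity: a regular
# stimulus series compresses better than an irregular one. Illustrative
# stand-in only, not the thesis' metrics.

def compressibility(sequence):
    """Ratio of compressed size to raw size (lower = more compressible)."""
    raw = "".join(sequence).encode()
    return len(zlib.compress(raw)) / len(raw)

regular = compressibility(["AB"] * 32)  # highly regular series of stimuli
irregular = compressibility(
    ["AQ", "ZK", "MP", "XD", "RW", "CJ", "LT", "BN"] * 4)  # more varied
```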
Brunie, Hugo. "Optimisation des allocations de données pour des applications du Calcul Haute Performance sur une architecture à mémoires hétérogènes". Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0014/document.
High Performance Computing, which brings together all the players responsible for improving the computing performance of scientific applications on supercomputers, aims to achieve exaflopic performance. This race for performance is today characterized by the manufacture of heterogeneous machines in which each component is specialized. Among these components, system memories specialize too, and the trend is toward an architecture composed of several memories with complementary characteristics. The question then arises of how to use these new machines, whose practical performance depends on the placement of application data across the different memories; trading code changes against performance is challenging. In this thesis, we developed a formulation of the problem of data allocation on a heterogeneous memory architecture. In this formulation, we showed the benefit of a temporal analysis of the problem: many previous studies were based solely on a spatial approach, and this result highlights their weakness. From this formulation, we developed an offline profiling tool that approximates the coefficients of the objective function, in order to solve the allocation problem and optimize the allocation of data on a composite architecture comprising two main memories with complementary characteristics. In order to reduce the amount of code changes needed to execute an application according to the allocation strategy recommended by our toolbox, we developed a tool that can automatically redirect data allocations from minimal source-code instrumentation. The performance gains obtained on mini-applications representative of the scientific applications coded by the community make it possible to assert that intelligent data allocation is necessary to fully benefit from heterogeneous memory resources. On some problem sizes, the gain between a naive data-placement strategy and an educated data-allocation one can reach up to a ×3.75 speedup.
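The allocation problem sketched in this abstract can be illustrated with a simple greedy baseline: given per-buffer profiling of the time saved when a buffer lives in the fast memory, fill the fast memory's capacity by benefit density. The thesis formulates this as a proper optimization problem and also exploits temporal information; the names, inputs, and the greedy rule below are illustrative assumptions.

```python
# Hedged sketch of heterogeneous-memory data placement: a greedy baseline,
# not the thesis' formulation. buffers: list of (name, size, time_saved).

def allocate_to_fast_memory(buffers, capacity):
    """Return the buffer names placed in fast memory, greedily by density."""
    placed, used = [], 0
    # Highest benefit per unit of capacity first.
    for name, size, saved in sorted(buffers, key=lambda b: b[2] / b[1],
                                    reverse=True):
        if used + size <= capacity:
            placed.append(name)
            used += size
    return placed

plan = allocate_to_fast_memory(
    [("grid", 6, 30.0),   # density 5.0
     ("halo", 2, 20.0),   # density 10.0
     ("log", 5, 1.0)],    # density 0.2
    capacity=8)
```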
Puma, Sébastien. "Optimisation des apprentissages : modèles et mesures de la charge cognitive". Thesis, Toulouse 2, 2016. http://www.theses.fr/2016TOU20058/document.
Learning allows us to gain the knowledge necessary to adapt to the world. Cognitive load theory (CLT) takes into consideration the cognitive resources invested during school learning. However, two main limitations can be identified, one theoretical and one methodological. From a theoretical perspective, the models of working memory (WM) that CLT invokes to describe the cognitive resources used during learning do not take time into account. The other limit is methodological: CLT does not offer measures of cognitive load that are both reliable and dynamic. Taking these limitations into consideration, we suggest the use of physiological measurement and a newer WM model, the TBRS (Time-Based Resource Sharing) model. Physiological measurement is a means to analyze the temporal variations implied by cognitive load, while the TBRS model takes into account the temporal variation of attentional focus allocation. However, the TBRS model had not yet been used with meaningful items, which can be gathered into chunks. Thus, the aim of the present work is to study the benefits of using physiological measurement and the TBRS model with CLT. To address the question of cognitive load measurement, a first experiment used a task included in the recruitment selection process of the ENAC (École Nationale de l'Aviation Civile). During the experiment, cerebral activity (EEG) and eye movements (eye-tracking) were recorded. Another series of four experiments addressed the use of the TBRS model in CLT. They began by replicating a previous study using the TBRS model (exp. 2 and 3), replacing the items to be held in memory with items that could be chunked; the other two experiments extended these results.
Finally, a sixth experiment used physiological measures to assess cognitive load variations while participants performed a protocol similar to the previous experiments. The results of these six experiments show that the TBRS model and physiological measurement are consistent with CLT and also complement its findings.
Carpov, Sergiu. "Scheduling for memory management and prefetch in embedded multi-core architectures". Compiègne, 2011. http://www.theses.fr/2011COMP1962.
This PhD thesis is devoted to the study of several combinatorial optimization problems which arise in the field of parallel embedded computing. Optimal memory management and related scheduling problems for dataflow applications executed on massively multi-core processors are studied. Two memory access optimization techniques are considered: data reuse and prefetch. Memory access management is instantiated into three combinatorial optimization problems. In the first problem, a prefetching strategy for dataflow applications is investigated so as to minimize the application execution time. This problem is modeled as a hybrid flow shop under precedence constraints, an NP-hard problem. A heuristic resolution algorithm, together with two lower bounds, is proposed so as to conservatively, though fairly tightly, estimate the distance to optimality. The second problem concerns optimal prefetch management strategies for branching structures (data-controlled tasks). Several objective functions, as well as prefetching techniques, are examined; in all these cases polynomial resolution algorithms are proposed. The third problem studied consists in ordering a set of tasks so as to minimize the number of times memory data are fetched, thereby optimizing data reuse across the set of tasks. This problem being NP-hard, a result we have established, we propose two heuristic algorithms. The optimality gap of the heuristic solutions is estimated using exact solutions, the latter obtained with a branch-and-bound method we have proposed.
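The third problem above (ordering tasks to minimize re-fetches of shared data) admits a very simple greedy baseline: always schedule next the task that overlaps most with the data already loaded. This nearest-neighbour-style heuristic is only illustrative of the problem, not one of the thesis' two algorithms.

```python
# Hedged sketch: greedy task ordering for data reuse. Illustrative baseline,
# not the thesis' heuristics. task_data: dict task -> set of data items read.

def greedy_reuse_order(task_data):
    remaining = dict(task_data)
    order = [min(remaining)]            # deterministic start: smallest name
    current = remaining.pop(order[0])   # data currently resident in memory
    while remaining:
        # Next task: the one sharing the most items with resident data
        # (ties broken by task name for determinism).
        best = max(sorted(remaining), key=lambda t: len(remaining[t] & current))
        order.append(best)
        current = remaining.pop(best)
    return order

order = greedy_reuse_order({
    "t1": {"a", "b"},
    "t2": {"c", "d"},
    "t3": {"b", "c"},
})
```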
Saidi, Selma. "Optimisation des transferts de données sur systèmes multiprocesseurs sur puce". Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00875582.
Texto completoCabout, Thomas. "Optimisation technologique et caractérisation électrique de mémoires résistives OxRRAM pour applications basse consommation". Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4778/document.
Today, the non-volatile memory market is dominated by charge-storage-based technologies. However, this technology is reaching its scaling limits, and solutions to continue miniaturization face important technological roadblocks. Thus, to continue scaling towards advanced nodes, new non-volatile solutions are being developed. Among them, oxide-based resistive memories (OxRRAM) are intensively studied. Based on the resistance switching of a Metal/Insulator/Metal stack, this technology shows promising performance and scaling perspectives, but it is not yet mature and still suffers from a lack of physical understanding of the switching mechanisms. The results presented in this thesis aim to contribute to the development of OxRRAM technology. In a first part, an analysis of the different materials constituting the RRAM allows us to compare unipolar and bipolar switching modes and to select the bipolar one, which benefits from lower programming voltages and better performance. The identified memory stack, TiN/HfO2/Ti, was then integrated in a 1T1R structure in order to evaluate the performance and limitations of this structure. The operation of the 1T1R structure has been carefully studied, and good endurance and retention performance is demonstrated. Finally, in the last part, the thermal activation of the switching characteristics has been studied in order to provide some understanding of the underlying physical mechanisms. The Reset operation is found to be triggered by the local temperature, while retention performance depends on the Set temperature.
Agharben, El Amine. "Optimisation et réduction de la variabilité d’une nouvelle architecture mémoire non volatile ultra basse consommation". Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEM013.
The global semiconductor market is experiencing steady growth driven by the development of consumer electronics and the rise of the non-volatile memory market. The importance of these memory products has been accentuated since the beginning of the 2000s by the introduction of nomadic products such as smartphones and, more recently, the Internet of Things. Because of their performance and reliability, Flash technologies are currently the standard for non-volatile memory. However, the high cost of microelectronic equipment makes it impossible to amortize it over a single technological generation. This encourages the industry to adapt equipment from an older generation to more demanding manufacturing processes. This strategy is not without consequences for the spread of the physical characteristics (geometric dimensions, thicknesses, ...) and electrical characteristics (currents, voltages, ...) of the devices. In this context, the subject of my thesis is "Optimization and reduction of the variability of a new ultra-low-power non-volatile memory architecture". This study aims to continue the work begun by STMicroelectronics on the improvement, study and implementation of Run-to-Run (R2R) control loops on a new ultra-low-power memory cell. In order to ensure the implementation of a relevant regulation, it is essential to be able to simulate the influence of the manufacturing process on the electrical behavior of the cells, using statistical tools as well as electrical characterization.
Barci, Marinela. "Caractérisation électrique et optimisation technologique des mémoires résistives Conductive Bridge Memory (CBRAM) afin d’optimiser la performance, la vitesse et la fiabilité". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT022/document.
Flash technology is approaching its scaling limits, so the demand for novel memory technologies is increasing. Promising replacement candidates are the emerging non-volatile technologies such as Conductive Bridge RAM (CBRAM), oxide-based resistive RAM (OxRAM), Magnetic Random Access Memory (MRAM) and Phase Change Memory (PCRAM). In particular, CBRAM is based on a simple Metal-Insulator-Metal (MIM) structure and presents several advantages compared to the other technologies: it is non-volatile, i.e. it keeps the information when the power is off; it is scalable down to the 10 nm technology node; it can be easily integrated into the Back End of Line (BEOL); and it offers high operation speed at low voltages and a low cost per bit. Nevertheless, the demands for the industrialization of CBRAM are very stringent, and issues related to device reliability still have to be faced. In this thesis we analyze two generations of CBRAM technology, each addressing a specific application market. The first part of the PhD is dedicated to the electrical study of Cu-based/GdOx structures, which present the advantages of very stable data retention, resistance to soldering reflow, and good endurance behavior. This CBRAM family mainly addresses high-temperature applications such as automotive. To fulfill the specification requirements, doping of the metal oxide and bilayers are integrated to decrease the forming voltage and increase the programming window; better endurance performance is also achieved. The second part is dedicated to a new CBRAM technology with a simple MIM structure. In this case, the device shows a fast operation speed of 20 ns at low voltages of 2 V, combined with satisfying endurance and data retention. This technology seems compatible with the growing Internet of Things (IoT) market. In summary, the main objective of the PhD research was to study the reliability of embedded CBRAM devices in terms of forming, endurance and data retention.
Some methodologies were developed, and the electrical set-up was modified and adapted to specific measurements. Physical models were developed to explain and better fit the experimental results. Based on the obtained results, we demonstrate that CBRAM technology is highly promising for future NVM applications.
Abid, Fatma. "Contribution à la robustesse et à l'optimisation fiabiliste des structures : Uncertainty of shape memory alloy micro-actuator using generalized polynomial chaos method ; Numerical modeling of shape memory alloy problem in presence of perturbation : application to Cu-Al-Zn-Mn specimen ; An approach for the reliability-based design optimization of shape memory alloy structure ; Surrogate models for uncertainty analysis of micro-actuator". Thesis, Normandie, 2019. http://www.theses.fr/2019NORMIR24.
The design of economical systems has led to many advances in the fields of modeling and optimization, allowing the analysis of more and more complex structures. However, optimized designs can suffer from uncertain parameters and may fail to meet certain reliability criteria. To ensure the proper functioning of a structure, it is important to take uncertainty into account; this study is called reliability analysis. The integration of reliability analysis into optimization problems is a recent discipline that introduces reliability criteria into the search for the optimal configuration of structures: this is the domain of reliability-based design optimization (RBDO). The RBDO methodology aims to account for the propagation of uncertainties into the mechanical performance by relying on a probabilistic modeling of input-parameter fluctuations. In this context, this thesis focuses on robust analysis and reliability-based optimization of complex mechanical problems. It is important to consider the uncertain parameters of the system to ensure a robust design. The objective of the RBDO method is to design a structure that establishes a good compromise between cost and reliability assurance. As a result, several methods, such as the hybrid method and the optimum safety factor method, have been developed to achieve this goal. To address the complexity of mechanical problems with uncertain parameters, dedicated methodologies such as meta-modeling have been developed to build mechanical surrogate models that satisfy both the efficiency and the precision requirements.
Saadane, Sofiane. "Algorithmes stochastiques pour l'apprentissage, l'optimisation et l'approximation du régime stationnaire". Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30203/document.
In this thesis, we study several stochastic algorithms with different purposes, and this is why we start this manuscript by giving historical results to define the framework of our work. Then, we study a bandit algorithm due to the work of Narendra and Shapiro, whose objective was to determine, among a choice of several sources, which one is the most profitable without spending too much time on the wrong ones. Our goal is to understand the weakness of this algorithm in order to propose an optimal procedure for a quantity measuring the performance of a bandit algorithm, the regret. In our results, we propose an algorithm called NS over-penalized, which allows us to obtain a minimax regret bound. A second work is to understand the convergence in law of this process. The particularity of the algorithm is that it converges in law toward a non-diffusive process, which makes the study more intricate than in the standard case. We use coupling techniques to study this process and propose rates of convergence. The second work of this thesis falls within the scope of optimization of a function using a stochastic algorithm. We study a stochastic version of the so-called heavy ball method with friction. The particularity of this algorithm is that its dynamics are based on all the past of the trajectory. The procedure relies on a memory term which dictates the behavior of the procedure by the form it takes. In our framework, two types of memory are investigated: polynomial and exponential. We start with general convergence results in the non-convex case. In the case of strongly convex functions, we provide upper bounds for the rate of convergence. Finally, a convergence-in-law result is given in the case of exponential memory. The third part is about the McKean-Vlasov equations, which were first introduced by Anatoly Vlasov and first studied by Henry McKean in order to model the distribution function of plasma.
Our objective is to propose a stochastic algorithm to approach the invariant distribution of the McKean-Vlasov equation. Methods in the case of diffusion processes (and some more general processes) are known, but the particularity of the McKean-Vlasov process is that it is strongly non-linear. Thus, we have to develop an alternative approach. We introduce the notion of asymptotic pseudo-trajectory in order to get an efficient procedure.
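The heavy ball method with exponential memory described in the abstract can be sketched as a gradient iteration whose memory term is an exponential moving average of past gradients. This is a simplified illustrative reading; the step sizes, noise model, and function names are assumptions, not the thesis' procedure.

```python
import random

# Hedged sketch of a stochastic heavy-ball iteration with exponential memory
# of past (noisy) gradients. Parameters are illustrative.

def heavy_ball(grad, x0, steps=2000, lr=0.01, momentum=0.9,
               noise=0.01, seed=0):
    rng = random.Random(seed)
    x, m = x0, 0.0
    for _ in range(steps):
        g = grad(x) + rng.gauss(0.0, noise)    # noisy gradient oracle
        m = momentum * m + (1 - momentum) * g  # exponential memory term
        x -= lr * m
    return x

# Minimize the strongly convex f(x) = (x - 3)^2, gradient 2(x - 3).
x_star = heavy_ball(lambda x: 2 * (x - 3), x0=10.0)
```

For the strongly convex quadratic above, the iterate settles near the minimizer x = 3, matching the convergence behaviour the abstract describes for strongly convex functions.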
Zaourar, Lilia Koutchoukali. "Recherche opérationnelle et optimisation pour la conception testable de circuits intégrés complexes". Grenoble, 2010. http://www.theses.fr/2010GRENM055.
This thesis is a research contribution at the interface of operations research and microelectronics. It considers the use of combinatorial optimization techniques for Design For Test (DFT) of Integrated Circuits (ICs). With the growing complexity of current ICs, both the quality and the cost of manufacturing testing have become important parameters in the semiconductor industry. To ensure proper functioning of an IC, the testing step is more than ever a crucial and difficult step in the overall IC manufacturing process. To answer market requirements, chip testing should be fast and effective in uncovering defects; for this, it becomes essential to address the test phase from the design steps of the IC. In this context, DFT techniques and methodologies aim at improving the testability of ICs. In previous research works, several problems of optimization and decision making were derived from the microelectronics domain; most of these contributions dealt with combinatorial optimization problems for placement and routing during IC design. In this thesis, a higher design level is considered: the DFT problem is analyzed at the Register Transfer Level (RTL), before the logic synthesis process starts. This thesis is structured in three parts. In the first part, preliminaries and basic concepts of operations research, IC design and manufacturing are introduced; then both our approach and the solution tools used in the rest of this work are presented. In the second part, the problem of optimizing the insertion of scan chains is considered. Currently, "internal scan" is a widely adopted DFT technique for sequential digital designs, in which the design flip-flops are connected in a daisy-chain manner with full controllability and observability from primary inputs and outputs.
In this part of the research work, different algorithms are developed to provide an automated and optimal solution during the generation of an RTL scan architecture, where several parameters are considered: area, test time and power consumption, in full compliance with functional performance. This problem has been modeled as the search for short chains in a weighted graph, and the solution methods used are based on finding Hamiltonian chains of minimal length. This work was accomplished in collaboration with DeFacTo Technologies, an EDA start-up close to Grenoble. The third part deals with the problem of sharing BIST (Built-In Self-Test) blocks for testing memories. The problem can be formulated as follows: given memories of various types and sizes, and sharing rules for serial and parallel wrappers, identify solutions that associate a wrapper with each memory while minimizing the area, power consumption and test time of the IC. To solve this problem, we designed a prototype called Memory BIST Optimizer (MBO), consisting of two resolution steps and a validation phase. The first step creates compatibility groups in accordance with the abstraction and sharing rules, which depend on the technologies. The second phase uses genetic algorithms for multi-objective optimization in order to obtain a set of non-dominated solutions. Finally, the validation phase verifies that the provided solution is valid and displays all solutions through a graphical or textual interface, allowing the user to choose the solution that fits best. The MBO tool is currently integrated into an industrial flow within STMicroelectronics.
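The scan-chain problem above (short Hamiltonian chains through flip-flops in a weighted graph) has a classic cheap baseline: the nearest-neighbour heuristic. The sketch below only illustrates the problem shape; it is not one of the thesis' algorithms, and the graph, names, and weights are assumptions.

```python
# Hedged sketch: nearest-neighbour heuristic for a short chain visiting all
# flip-flops in a weighted graph. Illustrative baseline only.

def nearest_neighbour_chain(dist, start):
    """dist: dict of dicts of pairwise wire lengths; returns visiting order."""
    order, current = [start], start
    todo = set(dist) - {start}
    while todo:
        # Always extend the chain to the closest unvisited flip-flop
        # (ties broken by name for determinism).
        current = min(sorted(todo), key=lambda v: dist[current][v])
        order.append(current)
        todo.remove(current)
    return order

wire = {
    "ff0": {"ff1": 1, "ff2": 4, "ff3": 9},
    "ff1": {"ff0": 1, "ff2": 2, "ff3": 7},
    "ff2": {"ff0": 4, "ff1": 2, "ff3": 3},
    "ff3": {"ff0": 9, "ff1": 7, "ff2": 3},
}
chain = nearest_neighbour_chain(wire, "ff0")
```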
Alrammal, Muath. "Algorithms for XML stream processing : massive data, external memory and scalable performance". Phd thesis, Université Paris-Est, 2011. http://tel.archives-ouvertes.fr/tel-00779309.
Texto completoGu, Xiaojun. "Optimization of Shape Memory Alloy Structures with Respect to Fatigue". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLY012/document.
This thesis presents a comprehensive and efficient structural optimization approach for shape memory alloys (SMAs) with respect to fatigue. The approach consists of three steps. First, the development of a suitable constitutive model capable of predicting, with good accuracy, the stabilized thermomechanical stress state of an SMA structure subjected to multiaxial non-proportional cyclic loading; the dependence of the saturated residual strain on temperature and loading rate is discussed. In order to overcome numerical convergence problems in situations where the phase transformation process presents little or no positive hardening, the large time increment method (LATIN) is used in combination with the ZM (Zaki-Moumni) model to simulate SMA structures instead of conventional incremental methods. Second, a shakedown-based fatigue criterion analogous to the Dang Van model for elastoplastic metals is derived for SMAs to predict whether an SMA structure subjected to high-cycle loading will undergo fatigue. The proposed criterion computes a fatigue factor at each material point, indicating its degree of safety with respect to high-cycle fatigue. Third, a structural optimization approach is presented that can be used to improve the fatigue lifetime estimated with the proposed fatigue criterion. The prospects of this work include the validation of the optimization approach with experimental data.
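The per-point fatigue factor mentioned in the second step can be illustrated with a Dang Van-style scalar check, where a factor below 1 marks a safe material point. The thesis derives a dedicated shakedown-based criterion for SMAs; the formula, names, and numbers below are only an elastoplastic-style illustration.

```python
# Hedged illustration of a Dang Van-style high-cycle fatigue factor:
# factor < 1 means the material point is safe. Not the thesis' SMA criterion.

def fatigue_factor(shear_amplitude, hydrostatic_stress, tau_limit, alpha):
    """(tau_a + alpha * p) / tau_limit, in consistent stress units (MPa)."""
    return (shear_amplitude + alpha * hydrostatic_stress) / tau_limit

safe = fatigue_factor(80.0, 50.0, tau_limit=200.0, alpha=0.3)     # 0.475
unsafe = fatigue_factor(190.0, 120.0, tau_limit=200.0, alpha=0.3)  # 1.13
```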
Damaj, Rabih. "Inférence statistique sur le processus de Mino". Thesis, Lorient, 2015. http://www.theses.fr/2015LORIS369/document.
The subject of this PhD thesis is statistical inference on the Mino process, which we define as a one-memory self-exciting point process whose intensity has a special form. We begin with a general description of self-exciting point processes and present methods used to estimate the intensity parameters of these processes. We then consider the special case of a one-memory self-exciting point process used in signal processing, which we call the Mino process. This process can be interpreted as a renewal process whose interarrival times follow a special distribution that we study in detail. In order to estimate the parameters of a Mino process intensity, we use the maximum likelihood method, solving the likelihood equations with a Newton-Raphson algorithm. We show the efficiency of the method on simulated data. The convergence of the Newton-Raphson algorithm, and the existence and uniqueness of the maximum likelihood estimators, are proved. Lastly, we construct a hypothesis test to assess whether a point process is self-exciting or not.
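The Newton-Raphson step used to solve likelihood equations can be sketched on a toy case. This is not the Mino-process likelihood itself (which is more involved); the sketch below uses i.i.d. exponential data, whose score equation has a known closed-form root, so the iteration can be checked:

```python
def newton_mle(score, score_deriv, theta0, tol=1e-10, max_iter=100):
    """Solve the likelihood equation score(theta) = 0 by Newton-Raphson."""
    theta = theta0
    for _ in range(max_iter):
        step = score(theta) / score_deriv(theta)
        theta -= step
        if abs(step) < tol:
            break
    return theta

# Toy check: i.i.d. exponential data with rate lam. The score function is
# n/lam - sum(x), whose root has the closed form n / sum(x).
data = [0.5, 1.2, 0.3, 2.0]
n, s = len(data), sum(data)
lam_hat = newton_mle(lambda l: n / l - s, lambda l: -n / l ** 2, theta0=0.5)
```

For a real self-exciting intensity, `score` and `score_deriv` would be the first and second derivatives of the log-likelihood with respect to each parameter.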
Jacquelin, Mathias. "Memory-aware algorithms : from multicores to large scale platforms". PhD thesis, Ecole normale supérieure de lyon - ENS LYON, 2011. http://tel.archives-ouvertes.fr/tel-00662525.
Castro, Márcio. "Optimisation de la performance des applications de mémoire transactionnelle sur des plates-formes multicoeurs : une approche basée sur l'apprentissage automatique". Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM074/document.
Multicore processors are now a mainstream approach to deliver higher performance to parallel applications. In order to develop efficient parallel applications for those platforms, developers must take care of several aspects, ranging from the architectural to the application level. In this context, Transactional Memory (TM) appears as a programmer-friendly alternative to traditional lock-based concurrency for those platforms. It allows programmers to write parallel code as transactions, which are guaranteed to execute atomically and in isolation regardless of possible data races. At runtime, transactions are executed speculatively and conflicts are solved by re-executing the conflicting transactions. Although TM intends to simplify concurrent programming, the best performance can only be obtained if the underlying runtime system matches the application and platform characteristics. The contributions of this thesis concern the analysis and improvement of the performance of TM applications based on Software Transactional Memory (STM) on multicore platforms. Firstly, we show that the TM model makes the performance analysis of TM applications a daunting task. To tackle this problem, we propose a generic and portable tracing mechanism that gathers specific TM events, allowing us to better understand the performance obtained. The traced data can be used, for instance, to discover whether the TM application presents points of contention or whether the contention is spread out over the whole execution. Our tracing mechanism can be used with different TM applications and STM systems without any changes to their original source code. Secondly, we address the performance improvement of TM applications on multicores. We point out that thread mapping is very important for TM applications and can considerably improve the overall performance achieved.
To deal with the large diversity of TM applications, STM systems and multicore platforms, we propose an approach based on Machine Learning to automatically predict suitable thread mapping strategies for TM applications. During a prior learning phase, we profile several TM applications running on different STM systems to construct a predictor. We then use the predictor to perform static or dynamic thread mapping in a state-of-the-art STM system, making it transparent to the users. Finally, we perform an experimental evaluation and we show that the static approach is fairly accurate and can improve the performance of a set of TM applications by up to 18%. Concerning the dynamic approach, we show that it can detect different phase changes during the execution of TM applications composed of diverse workloads, predicting thread mappings adapted for each phase. On those applications, we achieve performance improvements of up to 31% in comparison to the best static strategy
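The prediction step described above maps a profiled application to a thread-mapping strategy. A nearest-neighbour flavour of such a predictor can be sketched as follows; the feature set and strategy labels here are hypothetical placeholders, not the ones actually used in the thesis:

```python
def predict_mapping(profile, training):
    """1-nearest-neighbour prediction: return the mapping strategy of the
    training profile closest (squared Euclidean distance) to the new one."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda t: dist(t[0], profile))[1]

# Hypothetical profiled features: (abort ratio, mean tx length, LLC miss ratio).
training = [
    ((0.05, 100.0, 0.02), "scatter"),       # low contention: spread threads out
    ((0.60, 100.0, 0.05), "compact"),       # high contention: share caches
    ((0.10, 5000.0, 0.40), "round-robin"),  # memory-bound long transactions
]
strategy = predict_mapping((0.55, 120.0, 0.04), training)
```

In a real system the features would be normalized and the learner would be trained offline, then queried statically or at phase changes during execution.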
Gou, Changjiang. "Task Mapping and Load-balancing for Performance, Memory, Reliability and Energy". Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN047.
This thesis focuses on multi-objective optimization problems arising when running scientific applications on high performance computing platforms and streaming applications on embedded systems. These optimization problems are all proven to be NP-complete, hence our efforts are mainly on designing efficient heuristics for general cases, and proposing optimal solutions for special cases. Some scientific applications are commonly modeled as rooted trees. Due to the size of temporary data, processing such a tree may exceed the local memory capacity. A practical solution on a multiprocessor system is to partition the tree into many subtrees, and run each on a processor equipped with a local memory. We studied how to partition the tree into several subtrees such that each subtree fits in local memory and the makespan is minimized, when communication costs between processors are accounted for. Then, a practical tree-scheduling problem arising in parallel sparse matrix solvers is examined. The objective is to minimize the factorization time by exhibiting good data locality and load balancing. The proportional mapping technique is a widely used approach to solve this resource-allocation problem. It achieves good data locality by assigning the same processors to large parts of the task tree. However, it may limit load balancing in some cases. Based on proportional mapping, a dynamic scheduling algorithm is proposed. It relaxes the data locality criterion to improve load balancing. The performance of our approach has been validated by extensive experiments with the parallel sparse matrix direct solver PaStiX. Streaming applications often appear in the video and audio domains. They are characterized by a series of operations on streaming data, and a high throughput. A Multi-Processor System on Chip (MPSoC) is a multi/many-core embedded system that integrates many specific cores through a high speed interconnect on a single die.
Such systems are widely used for multimedia applications. Many MPSoCs are battery-operated. Such a tight energy budget intrinsically calls for an efficient schedule to meet the intensive computation demands. Dynamic Voltage and Frequency Scaling (DVFS) can save energy by decreasing the frequency and voltage, at the price of increased failure rates. Another technique to reduce the energy cost and meet the reliability target consists in running multiple copies of tasks. We first model applications as linear chains and study how to minimize the energy consumption under throughput and reliability constraints, using DVFS and duplication techniques on MPSoC platforms. Then, in a follow-up study with the same optimization goal, we model streaming applications as series-parallel graphs, which are more complex than simple chains and more realistic. The target platform has a hierarchical communication system with two levels, which is common in embedded systems and high performance computing platforms. Reliability is guaranteed either by running tasks at the maximum speed or by triplication of tasks. Several efficient heuristics are proposed to tackle this NP-complete optimization problem.
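The DVFS-versus-duplication trade-off mentioned above can be illustrated with commonly used analytical models: dynamic power growing roughly as the cube of the frequency (voltage scaled with frequency), and transient-fault rate growing exponentially as frequency drops. The constants below are illustrative only, not taken from the thesis:

```python
import math

def exec_time(work, f):
    """Execution time scales inversely with the operating frequency."""
    return work / f

def energy(work, f, p_static=0.1):
    """Energy = (dynamic + static) power x time; dynamic power ~ f^3."""
    return (f ** 3 + p_static) * exec_time(work, f)

def failure_rate(f, lam0=1e-6, d=4.0, f_min=0.4, f_max=1.0):
    """Transient-fault rate grows exponentially as frequency decreases."""
    return lam0 * 10 ** (d * (f_max - f) / (f_max - f_min))

def reliability(work, f):
    """Probability that one execution completes without a transient fault."""
    return math.exp(-failure_rate(f) * exec_time(work, f))

work = 1.0
# One copy at full speed vs. two copies (duplication) at a lower frequency:
# the duplicated task fails only if both copies fail.
e_single, r_single = energy(work, 1.0), reliability(work, 1.0)
e_dup = 2 * energy(work, 0.6)
r_dup = 1 - (1 - reliability(work, 0.6)) ** 2
```

With these (made-up) constants, two slow copies consume less energy than one fast copy while achieving at least as good reliability, which is exactly the kind of trade-off the heuristics have to explore.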
Longnos, Florian. "Etude et optimisation des performances électriques et de la fiabilité de mémoires résistives à pont conducteur à base de chalcogénure/Ag ou d'oxyde métallique/Cu". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT046.
Non-volatile memory technology has recently become the key driver for growth in the semiconductor business, and an enabler for new applications and concepts in the field of information and communication technologies (ICT). In order to overcome the limitations of Flash memory in terms of scalability, power consumption and fabrication complexity, the semiconductor industry is currently assessing alternative solutions. Among them, Conductive Bridge Memories (CBRAM) rely on the resistance switching of a solid electrolyte induced by the migration and redox reactions of metallic ions. This technology is appealing due to its simple two-terminal structure and its promising performance in terms of low power consumption and program/erase speed. Furthermore, CBRAM is a memory technology that can be easily integrated with standard CMOS technology in the back end of line (BEOL). In this work we study the electrical performance and reliability of two different CBRAM technologies, using respectively a chalcogenide (GeS2) and a metal oxide as electrolyte. We first focus on GeS2-based CBRAM, where the effect of doping the GeS2 electrolyte with Ag and Sb is extensively investigated through electrical characterization analysis. The physical mechanisms governing the switching kinetics and the thermal stability are also addressed by means of electrical measurements, an empirical model and first-principles calculations. The influence of the different set/reset programming conditions is studied on a metal-oxide-based CBRAM technology. Based on this analysis, the programming conditions able to maximize the memory window, improve the endurance and minimize the variability are determined.
Alonso, Thierry. "Caractérisation par essais DMA et optimisation du comportement thermomécanique de fils de NiTi - Application à une aiguille médicale déformable". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAI028/document.
Many medical procedures use needles. A solution is proposed to control and modify needle trajectory during its insertion. This steerable needle must be able to avoid an obstacle and reach the target with more accuracy. The solution uses a Nickel-Titanium (NiTi) shape memory alloy. A new experimental method is proposed to characterize NiTi wires. This method is based on an experimental device which allows performing Dynamic Mechanical Analysis (DMA) during a tensile test or during a temperature sweep under stress. DMA measurements can detect many phenomena: elasticity, phase transformation, reorientation, plasticity. Results for a commercial NiTi wire are presented and analyzed. Storage modulus evolution analysis shows multistage phase transformations for which the stress-temperature diagram has been established. Values of elastic modulus are determined for austenite, martensite and R phase. Estimation models are proposed to determine storage modulus evolution during a tensile test with DMA and during a temperature sweep under stress with DMA. The last part of this work studies the effect of heat treatment on a cold-worked NiTi wire. A range of heat treatments was performed. Thermomechanical treatment effects were investigated both with tensile tests and with temperature sweeps under stress with DMA.
Hubert, Quentin. "Optimisation de mémoires PCRAM pour générations sub-40 nm : intégration de matériaux alternatifs et structures innovantes". PhD thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-01061795.
Azzaz, Mourad. "Optimisation des mémoires résistives OxRAM à base d’oxydes métalliques pour intégration comme mémoires embarquées dans un nœud technologique CMOS avancé". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAT052.
The integration of embedded Flash memories on advanced CMOS technology nodes such as 28nm leads to serious compatibility problems with the new manufacturing steps, such as the high-permittivity gate dielectric or the use of a metal gate. The addition of a conventional double-gate device such as the one used for Flash appears to be very expensive in terms of the number of masks and additional manufacturing steps. Many alternatives have emerged: phase change memories (PCRAM), magnetic memories (MRAM) and resistive memories (OxRAM). However, the high programming current of PCRAM memories and the contamination risks associated with the materials used for MRAM memories are the weak points of these technologies. On the other hand, OxRAM memories are particularly attractive for integration as CMOS embedded memory: the materials used (metal oxide dielectrics such as HfO₂ or Ta₂O₅) are compatible with the CMOS manufacturing process, and their low programming voltages, due to filamentary conduction, are an advantage. In this thesis, an in-depth optimization of the memory stack making up the OxRAM memory cell is performed, in order to integrate it into a memory array. Thus, various top and bottom electrodes and various switching oxides have been studied in order to better control and improve the variability of the resistive states of the OxRAM memory cell. An evaluation of the reliability and of the main memory performances in terms of forming voltage, memory window, endurance and thermal stability was performed for each memory stack through electrical characterizations. These assessments highlighted efficient memory stacks which have been integrated into a 16Kb demonstrator. Finally, a study of the variability of the resistive states, as well as of their degradation mechanisms during endurance and thermal stability tests, was carried out through simple models and atomistic simulations (ab initio calculations).
Koussa, Badreddin. "Optimisation des performances d'un système de transmission multimédia sans fil basé sur la réduction du PAPR dans des configurations réalistes". Thesis, Poitiers, 2014. http://www.theses.fr/2014POIT2260/document.
In this thesis, we are interested in optimizing the performance of multimedia transmission systems, with an original contribution combining RF circuit imperfections, represented by the power amplifier (PA) nonlinearities, and the transmission channel distortions. The studied system uses the OFDM technique, which is the most widespread multicarrier modulation in recent radio communication systems. However, its major drawback is its high PAPR, which degrades the transmission quality due to the PA nonlinearities. To reduce the PAPR, we first propose to improve the Tone Reservation (TR) method in terms of convergence speed and PAPR reduction, by studying several optimization algorithms. We show that the conjugate gradient algorithm provides the best performance while respecting the frequency specifications of the IEEE 802.11a standard. Thereafter, the TR method has been evaluated experimentally in the presence of a commercial PA (SZP-2026Z) using a measurement bench. It is shown that the TR method improves the quality of service (QoS), with an 18% reduction in PA power consumption. The experimental study led to the choice of a realistic PA model taking memory effects into account. This PA model has been integrated into a SISO simulation chain that also includes a realistic channel model. This chain is used to evaluate the performance of the TR method under realistic transmission conditions. Finally, we propose to apply the TR method in a closed-loop MIMO-OFDM chain dedicated to the transmission of scalable multimedia content in a realistic context with the IEEE 802.11n standard. This study presents a new contribution: the evaluation of the TR method for improving the visual quality of transmitted JPWL images, considering at the same time the multimedia content, the PA nonlinearity and the channel transmission distortions.
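The PAPR metric that the TR method minimizes is easy to compute on a synthetic OFDM symbol. The sketch below uses a naive inverse DFT and random QPSK data purely for illustration; it shows the metric itself, not the TR algorithm:

```python
import cmath
import math
import random

def ifft_naive(X):
    """Naive inverse DFT (O(N^2)) -- fine for a small illustrative symbol."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    powers = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

random.seed(0)
# Hypothetical 64-subcarrier OFDM symbol carrying random QPSK data.
qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
X = [random.choice(qpsk) for _ in range(64)]
papr = papr_db(ifft_naive(X))
```

Tone reservation would then add a correcting signal carried only on reserved subcarriers (found here by conjugate gradient) so that the peak of the time-domain sum, and hence this PAPR value, is reduced.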
Bonarota, Matthieu. "Optimisation de la programmation d’un cristal dopé aux ions de terres rares, opérant comme processeur analogique d’analyse spectrale RF, ou de stockage d’information quantique". Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112324/document.
The development of a quantum memory for light involves the most fundamental aspects of the light-matter interaction. To store the quantum information carried by light, the material has to be able to remain in a state of quantum superposition. The storage time is limited by the lifetime of this state, characterized by the coherence time. The first experiments involved the well-known cold atomic vapors. More recently, Rare Earth Ion doped Crystals (REIC) have drawn attention because of their remarkably long coherence time, combined with a large interaction bandwidth. Specific protocols have been proposed to make the most of these properties. We have opted for a promising spin-off of the well-known photon echo, named the Atomic Frequency Comb (AFC, proposed in 2008), based on the transmission of the incoming field through a spectrally periodic absorption profile. The first chapters of this manuscript present this protocol and our work aimed at improving its efficiency (the probability of capturing and retrieving the incoming information), increasing its bandwidth and multiplexing capacity, and measuring its noise. The following chapters present a new protocol, proposed in our group during this thesis, called Revival Of Silenced Echo (ROSE). This protocol, similar to the photon echo, has been demonstrated and characterized experimentally. It seems very promising in terms of efficiency, bandwidth and noise.
Morisset, Robin. "Compiler optimisations and relaxed memory consistency models". Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE050/document.
Modern multiprocessor architectures and programming languages exhibit weakly consistent memories. Their behaviour is formalised by the memory model of the architecture or programming language; it precisely defines which write operation can be returned by each shared-memory read. This is not always the latest store to the same variable, because of optimisations in the processors such as speculative execution of instructions, the complex effects of caches, and optimisations in the compilers. In this thesis we focus on the C11 memory model that is defined by the 2011 edition of the C standard. Our contributions are threefold. First, we focused on the theory surrounding the C11 model, formally studying which compiler optimisations it enables. We show that many common compiler optimisations are allowed, but, surprisingly, some important ones are forbidden. Secondly, building on our results, we developed a random testing methodology for detecting when mainstream compilers such as GCC or Clang perform an incorrect optimisation with respect to the memory model. We found several bugs in GCC, all promptly fixed. We also implemented a novel optimisation pass in LLVM that looks for special instructions that restrict processor optimisations - called fence instructions - and eliminates the redundant ones. Finally, we developed a user-level scheduler for lightweight threads communicating through first-in first-out single-producer single-consumer queues. This programming model is known as Kahn process networks, and we show how to implement it efficiently using C11 synchronisation primitives. This shows that despite its flaws, C11 can be usable in practice.
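The redundant-fence elimination idea can be sketched on a toy instruction stream: a fence is unnecessary when an earlier fence is still "fresh", i.e. no load or store has occurred since, because the earlier fence already provides the required ordering. This is a deliberate simplification of the actual LLVM pass, written over a made-up mini-IR:

```python
def eliminate_redundant_fences(instrs):
    """Drop a fence when a previous fence is still in effect (no load or
    store has happened in between).
    Toy IR: each instruction is 'fence', 'load X', 'store X' or 'op'."""
    out = []
    for ins in instrs:
        if ins == "fence":
            i = len(out) - 1
            # walk back over non-memory instructions looking for a fence
            while i >= 0 and not out[i].startswith(("load", "store")):
                if out[i] == "fence":
                    break
                i -= 1
            if i >= 0 and out[i] == "fence":
                continue  # redundant: an equivalent fence is already in place
        out.append(ins)
    return out

prog = ["store x", "fence", "op", "fence", "load y", "fence"]
optimized = eliminate_redundant_fences(prog)
```

The real pass must additionally respect the different C11 fence strengths (acquire, release, sequentially consistent) and control flow, which this sketch ignores.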
Idrissi, Aouad Maha. "Conception d'algorithmes hybrides pour l'optimisation de l'énergie mémoire dans les systèmes embarqués et de fonctions multimodales". Thesis, Nancy 1, 2011. http://www.theses.fr/2011NAN10029/document.
Memory is a major consumer of energy, a sensitive issue especially in embedded systems. The global optimization of multimodal functions is also a difficult problem because of the large number of local optima of these functions. In this thesis, I present various new hybrid and distributed algorithms to solve these two optimization problems. These algorithms are compared with conventional methods used in the literature and the results obtained are encouraging. On the one hand, these results show a reduction in memory energy consumption of about 76% to more than 98% on our benchmarks. On the other hand, in the case of global optimization of multimodal functions, our hybrid algorithms converge more often to the global optimum. Distributed and cooperative versions of these new hybrid algorithms are also proposed; they are faster than their respective sequential versions.
Nunes, Sampaio Diogo. "Profile guided hybrid compilation". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM082/document.
The end of chip frequency scaling, due to heat dissipation limitations, made manufacturers search for an alternative to sustain the growth of processing capacity. The chosen solution was to increase hardware parallelism, by packing multiple independent processors in a single chip, in a Multiple-Instruction Multiple-Data (MIMD) fashion, each with special instructions to operate over a vector of data, in a Single-Instruction Multiple-Data (SIMD) manner. Such a paradigm change brought software developers the convoluted task of producing efficient and scalable applications. Programming languages and associated tools evolved to aid this task for newly developed applications. But automated optimizations capable of coping with such complex new hardware, starting from legacy, single-threaded applications, are still lacking. To apply code transformations, developers and compilers alike must ensure that they are not changing the expected behaviour of the application by producing unexpected results. But syntactically poor code, such as the use of pointer parameters with multiple possible indirections, complex loop structures, or incomplete code, makes it very hard to extract application behavior solely from the source code, in what is called static analysis. To cope with the lack of information extracted from the source code, many tools have been developed and much research has been done on how to use dynamic analyses, which profile the application based on run-time information, to fill in the missing information. The combination of static and dynamic information to characterize an application is called hybrid analysis. This work advocates the use of hybrid analyses to enable optimizations on loops, the regions where most computation is done.
It proposes a framework capable of statically applying some complex loop transformations that would previously be considered unsafe, by ensuring their safe use during run-time with a lightweight test. The proposed framework uses application execution profiling to help the static loop optimizer to: 1) identify and classify program hot-spots, so as to focus only on regions vital for the execution time; 2) guide the optimizer in understanding the overall loop behavior, so as to reduce the search space of valid loop transformations; 3) using the instructions' memory access functions, statically build a lightweight run-time test that determines, based on the program parameter values, whether a given optimization is safe to use or not. Its applicability is shown by performing complex loop transformations on a variety of loops, obtained from applications of different fields, and by demonstrating that the run-time overhead is insignificant compared to the loop execution time or the gained performance, in the vast majority of cases.
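The lightweight run-time test of point 3 can be illustrated with a simple interval-overlap check on the element ranges two unit-stride accesses will touch: if the ranges are disjoint, the transformed version is safe; otherwise the code falls back to the original loop. This is a unit-stride simplification of the general access-function test, with a plain Python list standing in for an address space:

```python
def regions_overlap(base_a, base_b, n):
    """Run-time disambiguation: do the element ranges [base_a, base_a+n) and
    [base_b, base_b+n) intersect? Cheap compared to re-running the loop."""
    return base_a < base_b + n and base_b < base_a + n

def copy_loop(mem, dst, src, n):
    if regions_overlap(dst, src, n):
        # Ranges may alias: fall back to the original, safe sequential loop.
        for i in range(n):
            mem[dst + i] = mem[src + i]
    else:
        # Disjoint ranges: the transformed version (here a bulk copy) is safe.
        mem[dst:dst + n] = mem[src:src + n]

mem = list(range(10))
copy_loop(mem, 0, 5, 3)   # disjoint ranges: takes the optimized path
```

The guard is evaluated once per loop invocation, which is why its overhead is negligible next to the loop body itself.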
Roche, Gilles. "L'angiographie dynamique des membres inferieurs : evaluation, optimisation et protocoles". Rennes 1, 1992. http://www.theses.fr/1992REN1M036.
Texto completoNguyen, Kim Thanh. "Optimisation et conception d’une prothèse de membre inférieur : matériaux, simulations et prototypage". Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST046.
It is proposed in this PhD work to develop an optimal design of a prosthetic part suitable for additive fabrication, based on material engineering and structural design, as well as manufacturing and testing/characterization. The objective is to find a way to obtain a functional prosthesis satisfying all the operational requirements in terms of material strength and human/structure matching. Finite element-based simulations are also carried out to help in the design process. The work focuses first on numerical simulations and then on experiments. Today, numerical simulation has developed strongly alongside additive manufacturing and materials science. These new methods make it possible to innovate in the field of prosthesis design. For example, the combination of numerical simulation and optimization, associated with the use of innovative materials, allows designing prosthetic systems with the desired properties to cover the degraded functions of the patient. Experimental work is carried out to identify the interaction between the prosthetic socket and the stump. The stump's contact pressure and the socket's stress are measured using an electronic circuit. The prosthetic socket is fabricated using an additive manufacturing technique. The stump model is also designed and manufactured based on additive fabrication, and a 1 cm silicone layer is added on the outer surface of the stump. Keywords: FE Simulation, Additive Fabrication, Composites, Optimization.
Naaim, Alexandre. "Modélisation cinématique et dynamique avancée du membre supérieur pour l’analyse clinique". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1014/document.
Soft Tissue Artefact (STA) is one of the most important limitations when measuring upper limb kinematics through marker-based motion capture techniques, especially for the scapula. Multi-Body Optimisation (MBO) has already been proposed to correct STA when measuring lower limb kinematics and can easily be adapted for the upper limb. For this purpose, the joint kinematic constraints should be as anatomical as possible. The aim of this thesis was thus to define and validate an anatomical upper limb kinematic model that could be used both to correct STA through the use of MBO and for future musculoskeletal model developments. For this purpose, a model integrating closed-loop models of the forearm and of the scapular belt has been developed, including a new anatomically based model of the scapulothoracic joint. This model constrains the scapula plane to be tangent to an ellipsoid modelling the thorax. All these models were confronted with typical models extracted from the literature through cadaveric and in vivo intracortical pin studies. All models generated similar errors when evaluating their ability to mimic the bone kinematics and to correct STA. However, the new forearm and scapulothoracic models were more interesting when considering further musculoskeletal developments: the forearm model allows considering both the ulna and the radius, and the scapulothoracic model better represents the constraint existing between the thorax and the scapula. This thesis allowed developing a complete anatomical upper limb kinematic chain. Although the STA correction obtained was not as good as expected, the use of this approach for future musculoskeletal models has been validated.
TRUONG, DAN NAM. "Optimisations logicielles de la localite : le placement precis des donnees en memoire". Rennes 1, 1998. http://www.theses.fr/1998REN10094.
Dakhil, Nawfal. "Analyse et optimisation des interactions membre/prothèse dans les cas d'amputation des membres inférieurs". Thesis, Aix-Marseille, 2020. http://www.theses.fr/2020AIXM0330.
There are nearly 20,000 new amputations each year in France. Although technical solutions exist to support patients and restore their mobility, clinical complications are frequent and too many patients still give up using them. Prosthetists lack precise indicators and sometimes have to see their patients several times before obtaining an acceptable solution. This research project, in the field of physical medicine and rehabilitation, aims to shed new light on the mechanical interactions between the stump and the prosthesis in order to optimize the design phase. In the background, we consider the potential benefits that the latest 3D printing and personalization techniques can bring. The first stage of this work concerned the analysis of the state of the art: the epidemiology and etiology of amputees, the different types of existing trans-tibial prostheses, their manufacturing methods, and finally the digital approaches developed for their improvement. The second step enabled the development of a finite element biomechanical model of a stump, built from a patient's residual limb, coupled with an idealized personalized socket. A measurement campaign on 8 patients made it possible to compare the experimental and numerical pressure values at the interface between the stump and the socket. The last step was devoted to the study of the socket reduction technique used by prosthetists. In conclusion of this work, recommendations of good practice are proposed.
Josnin, Matthieu Planchon Bernard. "Optimisation du temps d'hospitalisation des patients présentant une thrombose veineuse profonde des membres inférieurs". [S.l.] : [s.n.], 2007. http://castore.univ-nantes.fr/castore/GetOAIRef?idDoc=26121.
Delespierre, Tiba. "Etude de cas sur architectures à mémoires distribuées : une maquette systolique programmable et l'hypercube d'Intel". Paris 9, 1987. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1987PA090073.
Texto completoGlaudin, Lilian. "Stratégies multicouche, avec mémoire, et à métrique variable en méthodes de point fixe pour l'éclatement d'opérateurs monotones et l'optimisation". Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS119.
Several apparently unrelated strategies coexist to implement algorithms for solving monotone inclusions in Hilbert spaces. We propose a synthetic framework for fixed point construction which makes it possible to capture various algorithmic approaches, clarify and generalize their asymptotic behavior, and design new iterative schemes for nonlinear analysis and convex optimization. Our methodology, which is anchored on an averaged quasinonexpansive operator composition model, allows us to advance the theory of fixed point algorithms on several fronts, and to impact their application fields. Numerical examples are provided in the context of image restoration, where we propose a new viewpoint on the formulation of variational problems
Guemann, Matthieu. "Vers un contrôle sensori-moteur bio-inspiré des prothèses myoélectriques du membre supérieur". Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0273.
The loss of autonomy caused by upper limb amputation affects a young and active population in France. The physical and psychological consequences raise technical, scientific and clinical issues. The prevalence of upper limb amputation is so low that this affection is considered a rare disease. Today's prostheses offer new possibilities of motion, but they are still limited in their command process. Current controls of these prostheses are non-intuitive and complex, leading to a high abandonment rate. Research in this field highlights that, to be fully functional and used by patients, prostheses should be able to (i) generate reflex responses, and (ii) feed back the lost sensations. In this thesis, we aimed to explore these two aspects: reflex responses and sensory substitution. The first part of this work investigates the regulation of the motor command through a spinal network that represents the low-level sensorimotor loops. We have tested this network connected to a musculoskeletal model of an arm with the goal of producing movements with multiple amplitudes and durations. The network's capacities were tested using three optimization algorithms, allowing us to explore the behavioral space (i.e. the ensemble of movements produced by the neuromechanical simulations). Although very simplified, this system was capable of producing biologically acceptable movements in the presence of gravity. This simple neural network produced a rich ensemble of behaviors, each given movement being possibly achieved with different combinations of parameter values. This type of network seems to be a good candidate to make the link between basic descending commands, such as recorded muscle activity (EMG), and prosthesis motions. The other part of the thesis focused on sensory substitution. We built a vibrotactile device giving the subject feedback on the elbow angle.
We found that both patients and non-amputee subjects scored well on spatial discrimination with vibrotactile stimulation, and we showed that all of them were able to control a virtual arm guided only by vibrotactile feedback during reaching tasks. However, adding proprioceptive feedback was not found to improve performance compared with visual information alone. Yet it is important to stress that it did not deteriorate performance either. Furthermore, the control involving both types of feedback was preferred by the participants. Taken together, this work provides useful information for improving the myoelectric control of prostheses, while aiming to approach a natural and intuitive control of movement.
Gasparutto, Xavier. "Modélisation articulaire pour la cinématique et la dynamique du membre inférieur". Thesis, Lyon 1, 2013. http://www.theses.fr/2013LYO10247/document.
The main objective of this work is to overcome the most common hypotheses used in kinematics (lower-pair mechanical joints) and inverse dynamics computation (joints without resistance), including the estimation of muscular forces. Kinematics is addressed in the first part of the thesis using "geometric" kinematic models consisting of simple elements (sphere, plane, shaft) that model the anatomical structures. These models correspond to constraints in the kinematic computation (especially in multi-body optimization). The work consisted in introducing deformable ligaments using a penalty-based method. It has been shown that this method, used with a generic geometric model, improves the estimation of knee kinematics from skin markers compared with more classical methods, and introduces physiological couplings between the degrees of freedom. Model personalization is also considered thanks to the flexibility of the method. The influence of passive structure actions during gait is studied in the second part of the thesis. The work consisted in a local and a global study of those actions. The local study showed that the influence of the ligament passive moments on joint contact and musculo-tendon forces is limited. The global study showed that the passive moments of all peri-articular structures contribute to the motor moments during gait, and that the passive ligament moments available in the literature are not appropriate. The long-term objective of these studies is to develop a multi-scale approach to lower limb modeling. The proposed articular modeling (with more complex joints) allows better interaction between the different modeling scales (rigid multi-body vs. finite elements)
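The penalty-based handling of deformable ligaments described in this abstract can be sketched as a soft term added to the marker-tracking cost of a kinematic optimization. The one-degree-of-freedom knee, the insertion points, rest length and penalty weight below are illustrative assumptions, not the thesis's model.

```python
import numpy as np
from scipy.optimize import minimize

L0 = 0.12    # ligament rest length (m), illustrative
W_LIG = 1e3  # penalty weight on ligament elongation

def marker_residual(q, measured):
    """Squared distance between a predicted skin marker (on a unit shank) and its measurement."""
    predicted = np.array([np.sin(q[0]), -np.cos(q[0])])
    return np.sum((predicted - measured) ** 2)

def ligament_elongation(q):
    """Length change of a ligament joining a fixed femoral point to a tibial point."""
    femoral = np.array([0.05, 0.02])
    tibial = 0.13 * np.array([np.sin(q[0]), -np.cos(q[0])])
    return np.linalg.norm(tibial - femoral) - L0

def cost(q, measured):
    # Soft (penalty) constraint: the ligament may deform, but deformation
    # is penalized, instead of being imposed as a rigid kinematic constraint.
    return marker_residual(q, measured) + W_LIG * ligament_elongation(q) ** 2

measured = np.array([np.sin(0.4), -np.cos(0.4)])  # synthetic marker measurement
res = minimize(cost, x0=[0.0], args=(measured,))
```

The recovered flexion angle compromises between tracking the marker and keeping the ligament near its rest length, which is how the penalty couples the marker data to the modeled anatomical structure.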
Rostami, Mostafa. "Contribution à l'étude dynamique de la phase unipodale de la marche sagittale, et étude expérimentale du comportement dynamique d'un membre locomoteur anthropomorphe de robot bipède". Poitiers, 1999. http://www.theses.fr/1999POIT2281.
Calvin, Christophe. "Minimisation du sur-coût des communications dans la parallélisation des algorithmes numériques". Phd thesis, Grenoble INPG, 1995. http://tel.archives-ouvertes.fr/tel-00005034.
Mazure-Bonnefoy, Alice. "Modèle cinématique et dynamique tridimensionnel du membre inférieur : Estimation des forces musculaires et des réactions articulaires au cours de la phase d'appui de la marche". Phd thesis, Université Claude Bernard - Lyon I, 2006. http://tel.archives-ouvertes.fr/tel-00567644.
Novytskyi, Dimitri. "Méthodes géométriques pour la mémoire et l'apprentissage". Phd thesis, Université Paul Sabatier - Toulouse III, 2007. http://tel.archives-ouvertes.fr/tel-00285602.