Dissertations on the topic "Méthodologie basée sur les ondes"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 50 dissertations for research on the topic "Méthodologie basée sur les ondes".
Next to every work in the list of references, an "Add to bibliography" option is available. Use it, and a bibliographic reference for the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, when the relevant parameters are available in the metadata.
Browse dissertations from a wide range of fields of specialization and compile your bibliography correctly.
Chatry, Benoit. „Contribution à l'évaluation d'agression électromagnétique. Méthodologie basée sur l'utilisation des éléments finis et des couches PML“. Limoges, 1998. http://www.theses.fr/1998LIMO0001.
Cui, Dongze. „A Contribution to Vibroacoustics of Highly Heterogeneous Metastructures through Wave Finite Element Scheme“. Electronic Thesis or Diss., Ecully, Ecole centrale de Lyon, 2024. http://www.theses.fr/2024ECDL0031.
The research aims to extend existing studies to heterogeneous metastructures with high-contrast and high-dissipation features. The multi-scale dynamics, vibroacoustic indicators, wave coupling effects, and high-order waves of heterogeneous metastructures are investigated within wave-based frameworks. Wave-based models for Highly Contrasted Structures (HCS) and Highly Dissipative Structures (HDS) are explored. Various methods for computing the vibroacoustic indicators, such as the wavenumber space, Damping Loss Factor (DLF), and Sound Transmission Loss (STL), are reviewed. Special attention is placed on the Asymptotic Homogenization Method (AHM), which exploits the Zig-Zag model and a homogenization technique to predict the multi-scale dynamics of HCS through the bending wavenumbers. In addition, the analytical Transfer Matrix Method (TMM) and its generalization to complex structures by the Finite Element (FE) model (General Transfer Matrix Method, GTMM), the semi-analytical General Laminate Model (GLM) employing Mindlin's displacement theory, and the numerical Wave Finite Element (WFE) scheme are presented. The robustness and accuracy of AHM and GLM are evaluated by comparing the wavenumber space and DLF with the reference WFE method. The Nonlinear Eigenvalue Problem (NEP) arising in the WFE scheme for waves propagating in varying directions is solved by a Contour Integral (CI) solver, and the complex wavenumbers are tracked based on an energy-continuity criterion in the frequency domain. The validity limits of AHM and GLM are verified. The feasibility of applying the WFE method to sandwich structures with non-homogeneous components is shown using the classical FE-based Power Input Method (PIM-FEM). The WFE framework is extended to accurately predict the global DLF of HDS. It starts by deriving the forced responses of a Unit Cell (UC) representative of the periodic structure when excited by an impinging wave, and then computes the DLF of the wave via the power balance equation. By employing the Bloch expansion, the response to a point force applied to the periodic structure is decomposed in the Brillouin zone, allowing the prediction of the total response via integration over the wavenumber space. The global DLF is derived based on the principle of PIM. For HDS, GLM results are exploited to validate the wave DLF, and the PIM-FEM approach is provided as a reference for the global DLF. The shrinking influence of bending waves on the DLF estimation for HDS is discussed, as well as the importance of Bloch mode orders. Sound transmission coefficients can be exploited to depict the contribution of the wavenumber space to the STL of heterogeneous metastructures. The WFE method is applied to study the wave coupling mechanisms influencing the sound insulation performance of HCS and HDS, as well as the importance of symmetric motion for sandwich structures with a very thick soft core. The same approach is applied to waveguides with complex cross-sections to investigate the effect of wave coupling and high-order waves on accurate STL estimation by the analytical TMM, WFE, and GTMM approaches. Special attention is paid to curved periodic structures, and the bending-membrane coupling mechanisms influencing their STL are also investigated.
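The wave finite element idea at the core of this scheme can be illustrated in one dimension: the unit cell's dynamic stiffness is condensed onto its boundary degrees of freedom, and the Bloch condition turns free wave propagation into an eigenvalue problem for λ = e^{-ikL}. The following is a minimal toy sketch (spring-mass values arbitrary), not the author's implementation:

```python
import numpy as np

# Toy 1-D unit cell: two springs k around an interior lumped mass m; the
# boundary DOFs are the cell ends, the interior DOF gets condensed out.
k, m, L = 1e6, 0.1, 0.01   # stiffness [N/m], mass [kg], cell length [m]

def wavenumbers(omega):
    K = k * np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
    M = np.diag([0., m, 0.])
    D = K - omega**2 * M                       # dynamic stiffness of the cell
    b, i = [0, 2], [1]                         # boundary / interior DOF indices
    Dc = D[np.ix_(b, b)] - D[np.ix_(b, i)] @ D[np.ix_(i, b)] / D[1, 1]
    # Bloch conditions u_R = lam*u_L, f_R = -lam*f_L give the quadratic
    # Dc_LR*lam^2 + (Dc_LL + Dc_RR)*lam + Dc_RL = 0 for one DOF per side.
    lam = np.roots([Dc[0, 1], Dc[0, 0] + Dc[1, 1], Dc[1, 0]]).astype(complex)
    return 1j * np.log(lam) / L                # k such that e^{-ikL} = lam

for f in (100.0, 500.0, 1000.0):               # frequency sweep [Hz]
    print(f, np.round(wavenumbers(2 * np.pi * f), 3))
```

In the thesis the same structure appears with full FE unit-cell matrices and many DOFs per side, which is what turns this scalar quadratic into the nonlinear eigenvalue problem mentioned above.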
Nguyen, Minh Duc. „Méthodologie de test de systèmes mobiles : Une approche basée sur les scénarios“. Phd thesis, Université Paul Sabatier - Toulouse III, 2009. http://tel.archives-ouvertes.fr/tel-00459848.
Nguyen, Minh Duc. „Méthodologie de test de systèmes mobiles : une approche basée sur les scénarios“. Toulouse 3, 2009. http://thesesups.ups-tlse.fr/775/.
Advances in wireless networking have enabled the development of mobile computing applications, whose unique characteristics raise new challenges for verification. This dissertation elaborates on testing technology for mobile systems. As a first step, a review of the state of the art is performed together with a case study - a group membership protocol (GMP) in mobile ad hoc settings - that allowed us to gain insights into testing problems. We then present a testing approach based on scenario descriptions. We note that scenario interactions must take the spatial configurations of nodes into account as first-class concepts. To cover the new specificities of mobile systems in description languages, we introduce extensions that focus on spatio-temporal relations between nodes and on broadcast communication. The processing of the spatial aspect led to the development of the GraphSeq tool, which analyzes test traces in order to identify occurrences of the successive spatial patterns described in an abstract scenario. The application of GraphSeq (support for the implementation of test cases, verification of trace coverage) is illustrated with the analysis of simulator outputs and execution traces of the GMP case study.
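The core task GraphSeq performs - deciding whether the successive spatial patterns of an abstract scenario occur, in order, in a recorded trace - can be sketched in a few lines. This is a hypothetical illustration of the idea, not the actual tool:

```python
from typing import Callable, Sequence

Snapshot = dict[str, tuple[float, float]]      # node id -> 2-D position

def within(snap: Snapshot, a: str, b: str, d: float) -> bool:
    """Spatial predicate: nodes a and b are within communication range d."""
    (xa, ya), (xb, yb) = snap[a], snap[b]
    return (xa - xb) ** 2 + (ya - yb) ** 2 <= d ** 2

def matches(trace: Sequence[Snapshot],
            patterns: Sequence[Callable[[Snapshot], bool]]) -> bool:
    """True if the patterns occur in order (gaps allowed) along the trace."""
    i = 0
    for snap in trace:
        if i < len(patterns) and patterns[i](snap):
            i += 1
    return i == len(patterns)

# Scenario: two nodes must first be connected, then move out of range.
trace = [{"n1": (0, 0), "n2": (5, 0)},
         {"n1": (0, 0), "n2": (8, 0)},
         {"n1": (0, 0), "n2": (20, 0)}]
print(matches(trace, [lambda s: within(s, "n1", "n2", 10),
                      lambda s: not within(s, "n1", "n2", 10)]))   # True
```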
Chesné, Lou. „Vers une nouvelle méthodologie de conception des bâtiments, basée sur leurs performances bioclimatiques“. Phd thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00825646.
Passama, Robin. „Conception et développement de contrôleurs de robots - Une méthodologie basée sur les composants logiciels“. Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2006. http://tel.archives-ouvertes.fr/tel-00084351.
Lallier-Daniels, Dominic. „Méthodologie de conception numérique d'un ventilateur hélico-centrifuge basée sur l'emploi du calcul méridien“. Mémoire, Université de Sherbrooke, 2012. http://hdl.handle.net/11143/6186.
Paludetto, Mario. „Sur la commande de procédés industriels : une méthodologie basée objets et réseaux de pétri“. Toulouse 3, 1991. http://www.theses.fr/1991TOU30219.
Guillou, Anne-Claire. „Synthèse architecturale basée sur le modèle polyédrique : validation et extensions de la méthodologie MMAlpha“. Rennes 1, 2003. http://www.theses.fr/2003REN10160.
Der volle Inhalt der QuelleKerbrat, Olivier. „Méthodologie de conception d'outillages modulaires hybrides basée sur l'évaluation quantitative de la complexité de fabrication“. Phd thesis, Ecole centrale de nantes - ECN, 2009. http://tel.archives-ouvertes.fr/tel-00439589.
Tooling can thus advantageously be designed with a dual approach: modular and hybrid. The tooling is no longer seen as a single part but as a three-dimensional puzzle, with different modules manufactured separately and then assembled. The modular approach makes it possible to take into account the different variants of a single family of parts to be produced, by facilitating a quick change of the tooling's parts. The hybrid approach makes it possible to choose the most suitable manufacturing process for each module of the tooling. We focused on machining processes based on material removal as well as on rapid manufacturing processes based on material addition. These additive technologies are reaching maturity and, although a high level of quality is still difficult to obtain, the possibility of producing shapes that are difficult or even impossible to machine by material removal makes these processes very attractive.
This thesis work therefore consists in developing a design methodology for modular hybrid tooling. The method first makes it possible to analyze the manufacturing complexity of the tooling during its design. Then, in order to reduce the manufacturing complexity (and consequently to cut production time and cost at equal quality), a new tooling design is proposed by applying the modular and hybrid viewpoints. The manufacturing complexity of this new tooling is then analyzed and compared with the first one, in order to quantify the gains brought by our modular hybrid approach.
A software prototype was therefore developed and implemented in a CAD package to demonstrate how the methodology can be used during the tooling design phase. It was tested on various test parts and industrial tooling, in particular within the EMOA project (Excellence dans la Maîtrise de l'Ouvrant Automobile haut de gamme) led by PSA Peugeot-Citroën.
Alby, Emmanuel. „Élaboration d'une méthodologie de relevé d'objets architecturaux : contribution basée sur la combinaison de techniques d'acquisition“. Phd thesis, Université Henri Poincaré - Nancy I, 2006. http://tel.archives-ouvertes.fr/tel-00132784.
Alby, Emmanuel. „Elaboration d'une méthodologie de relevé d'objets architecturaux : contribution basée sur la combinaison de techniques d'acquisition“. Nancy 1, 2006. https://tel.archives-ouvertes.fr/tel-00132784.
The external survey of an architectural work is a way to create a representation of the building in its current state of conservation. Two remote acquisition techniques differ in their effectiveness and in the quality of the data produced: photogrammetry and laser scanning. Both techniques depend on optical principles: what cannot be seen cannot be measured. Combining them can improve data quality, but unmeasured zones always remain and therefore cannot be represented. In order to solve this problem, we put forward the hypothesis that architectural knowledge may make it possible to reconstruct these zones during the modeling process. This study proposes a modeling process based on the combination of the two techniques and on the integration of the available architectural knowledge, whether from documentation or from the construction rules of built works. Since an architectural work is complex and the data abundant, dividing the modeling process into several distinct stages appears necessary. We suggest dividing the modeling process according to the levels of detail frequently used to represent architecture, and we define a process that uses the information progressively. Our approach thus consists in integrating dimensional data with architectural documentation, in order to develop a modeling process that yields a model as complete as possible.
Kerbrat, Olivier. „Méthodologie de conception d'outillages modulaires hybrides basée sur l'évaluation quantitative de la complexité de fabrication“. Ecole Centrale de Nantes, 2009. http://www.theses.fr/2009ECDN0005.
El Gamoussi, Sarah. „Proposition d'une méthodologie d'amélioration du Processus de Développement de Produits basée sur une approche Lean“. Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC052/document.
The industrial world currently faces frequent, sometimes abrupt, changes due to scientific and technical progress and to increasingly demanding customer needs. To deal with these disruptions and remain competitive, manufacturers develop methods to control and improve their processes for greater performance. Lean management is one such method, and it has proven its effectiveness in controlling and improving processes (especially manufacturing processes). However, the Product Development Process (PDP) remains one of the most important industrial challenges, since it is where PDP actors take the major decisions related to strategy, organization, delay, and development and manufacturing costs. That is why methods to control and improve this process have to be developed. Nevertheless, the complexity of the PDP makes a direct application of existing Lean methods difficult. This research project therefore aims at proposing a methodology for controlling and improving the PDP, based on a Lean approach, in order to manage the creation and transformation of value in this process. The methodology adapts in particular the first two principles of Lean, namely the definition of value and its mapping in the given process. We propose a framework for defining value that can be adapted to evolutions in the company's strategy. This definition tries to exhaustively take into account all PDP stakeholders, so that the value is shared by everyone. We also propose a value-mapping tool for the PDP that complies with PDP specificities and with the proposed value definition. This work was tested with the industrial partner, Exxelia Technologies.
Dericquebourg, Thomas. „Méthodologie de conception préliminaire robuste des assemblages vissés basée sur des modèles de pré-dimensionnement“. Toulouse, INSA, 2009. http://www.theses.fr/2009ISAT0031.
In the current context of continually shrinking design-process durations, it is fundamental to make relevant choices upstream of projects so as to avoid late reconsiderations, which are time- and resource-consuming, and to enable efficient concurrent engineering. The first steps of the design process are characterized by uncertainties and by the indetermination of the main parameters. As a result, the first choices have to be robust against the potential variations of data and parameters that may appear all along the design process. In the automotive industry, bolted joints are widespread since they are practical for setting up removable junctions between subassemblies. However, tools for pre-dimensioning them are lacking. Although numerous models do exist, they essentially handle single-bolt joints or specific applications, or they are too time-consuming to be used in a preliminary phase. That is why a robust approach to the preliminary design of bolted joints has been developed. This approach is firstly based on the development of pre-dimensioning models: a generic local model dedicated to describing a single-bolt joint, and a global model to take multi-bolted joints into consideration. The reduced-time requirement was considered throughout, so that these models could then be integrated in a preliminary robust design strategy. The robust approach uses the local pre-dimensioning model to describe the complex behavior of a bolted joint in a very short time. This model is generic enough to handle several kinds of parts, and its validity field has been clearly defined so that its limits are known. The methodology then relies on the global model, which takes multi-bolted joints, and thus their mutual interactions, into account. This approach has been validated on a specific category of bolted joints. Eventually, a robust pre-design strategy was built from the previous models. This strategy consists in scanning different potential configurations of a multi-bolted joint assembly and in estimating the associated risks and projected costs for each configuration. It uses optimized designs of experiments as well as an innovative clustering approach to aggregate solutions, giving shape recommendations for the assembly and guiding the subsequent detail-design step. Overall, the methodology provides a set of relevant information concerning the failure risks and projected costs of possible solutions in a reduced time. These data can then help the designer make the best choice of configuration for a bolted-joint assembly in preliminary design, with risk and cost control.
Kallel, Asma. „Modélisation et conception d'une antenne plasma à balayage basée sur des ondes de fuite“. Phd thesis, Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2514/.
In this work, a beam-scanning leaky-wave antenna working at a fixed operating frequency and constructed from a grounded plasma layer is proposed. The radiation angle can be tuned through the plasma electron density, which is controlled by the power. A 2D theoretical model based on a canonical structure is proposed to study the leaky waves. The antenna parameters (plasma thickness, length and permittivity) are dimensioned using this theoretical model at 10 GHz, and a microwave source is chosen to excite the antenna. The scanning range of about 60° needs a plasma reaching an electron density of. In a second step, an inductively coupled plasma source is chosen since it meets the dimensioning requirements. Measurements of the plasma parameters confirm the requirements. Finally, the antenna prototype is designed.
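Two textbook relations connect the quoted quantities: the collisionless Drude permittivity ε_r = 1 − (f_p/f)², with plasma frequency f_p ∝ √n_e, and the leaky-wave scan law θ ≈ arcsin(β/k0). A back-of-the-envelope sketch (electron densities illustrative, not the thesis values):

```python
import numpy as np

e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12     # SI constants

def drude_permittivity(n_e, f):
    """Collisionless plasma permittivity at frequency f, density n_e [m^-3]."""
    f_p = np.sqrt(n_e * e**2 / (eps0 * m_e)) / (2 * np.pi)   # plasma frequency
    return 1.0 - (f_p / f) ** 2

f0 = 10e9                                          # operating frequency [Hz]
for n_e in (1e17, 5e17, 1e18):                     # illustrative densities
    print(f"n_e = {n_e:.0e} m^-3 -> eps_r = {drude_permittivity(n_e, f0):+.2f}")
# Changing n_e (via the injected power) changes eps_r, hence the leaky-mode
# phase constant beta, hence the beam angle theta = arcsin(beta / k0).
```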
Kallel, Asma. „Modélisation et conception d'une antenne plasma à balayage basée sur des ondes de fuite“. Phd thesis, Toulouse 3, 2014. http://oatao.univ-toulouse.fr/14142/1/kallel.pdf.
Lecomte, Stéphane. „Méthodologie de conception basée sur les modèles de haut niveau pour les systèmes de radio logicielle“. Rennes 1, 2011. http://www.theses.fr/2011REN1S142.
In this thesis, we propose a hardware/software co-design methodology for software radio systems, and more generally for flexible embedded electronic systems, that addresses the new design challenges they impose and improves productivity. Our co-design methodology is based on high-level UML/MARTE modeling (MARTE being the UML extension dedicated to hardware modeling). Following model-driven architecture (derived from model-driven engineering), our methodology starts from high-level models and goes down to the hardware implementation (generation of VHDL code) by successive transformation rules and iterative refinements of the models. For that, we defined an intermediate modeling level, the Execution Modeling Level, which focuses on hardware/software partitioning and on architecture exploration for the design of the hardware platform. To complete the generation of the hardware description language associated with this methodology, we recommend coupling a co-design methodology based on high-level models with behavioral synthesis. The approach is illustrated with a MIMO decoder example. Finally, in the software-radio context, we propose an extension of the methodology that takes the flexibility of embedded systems into account, incorporating an architecture defined at Supélec to manage reconfiguration. Executing the high-level models on a real radio platform validated our approach.
Pham, Hoang Anh. „Coordination de systèmes sous-marins autonomes basée sur une méthodologie intégrée dans un environnement Open-source“. Electronic Thesis or Diss., Toulon, 2021. http://www.theses.fr/2021TOUL0020.
This thesis studies the coordination of autonomous underwater robots in the context of coastal seabed exploration or facility inspection. Investigating an integrated methodology, we have created a framework to design and simulate low-cost underwater robot controls under model assumptions of increasing complexity (linear; nonlinear; and finally nonlinear with uncertainties). Using this framework, we have studied algorithms for formation control, collision avoidance between robots, and obstacle avoidance for a group of underwater robots. More precisely, we first consider underwater robot models as linear systems of simple-integrator type, from which we build a formation controller using consensus and avoidance algorithms. We then extend these algorithms to the nonlinear dynamic model of a BlueROV robot in an iterative design process. A Radial Basis Function neural network, whose convergence and stability are already proven, is then integrated with the algebraic controller to estimate and compensate for uncertainties in the robot model. Finally, we present simulation results and real basin tests to validate the proposed concepts. This work also aims to convert a remotely operated ROV into an autonomous ROV-AUV hybrid.
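The simple-integrator starting point can be made concrete: with consensus-based formation control, each robot steers the relative-position error toward a desired offset, x_i' = Σ_j a_ij((x_j − x_i) − (d_j − d_i)). A minimal sketch (graph, offsets and gains arbitrary), not the thesis code:

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])                  # line communication graph 1-2-3
d = np.array([[0., 0.], [2., 0.], [4., 0.]])  # desired offsets: a line, 2 m apart
x = np.random.default_rng(0).uniform(0, 10, (3, 2))   # random 2-D start positions

dt = 0.05
for _ in range(400):                          # Euler integration of x' = u
    u = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            u[i] += A[i, j] * ((x[j] - x[i]) - (d[j] - d[i]))
    x += dt * u

print(np.round(x[1] - x[0], 3), np.round(x[2] - x[1], 3))  # both -> [2. 0.]
```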
Gacem, Amina. „Méthodologie d’évaluation de performances basée sur l’identification de modèles de comportements : applications à différentes situations de handicap“. Versailles-St Quentin en Yvelines, 2013. http://www.theses.fr/2013VERS0053.
Performance assessment is an important process for identifying the abilities and limits of a person. Currently, assessment requires the mediation of a specialist (doctor, therapist, etc.) who must perform analyses and tests to reach a decision that remains subjective. In the literature, several works propose assessment methods based on performance criteria: a quantitative, and therefore objective, evaluation, usually based on statistical analysis. In this work, a new performance-assessment methodology is proposed. It is based on the identification of reference behaviours, which are then used as references for the evaluation of other people. The identification of reference behaviours is an essential element of our work and relies on classification methods, of which we tested two. The first is Fuzzy C-means, which allows a thorough search for reference behaviours, although behaviours are represented by proxy criteria. The second is Hidden Markov Models, which offer a time-series analysis based on temporal behaviour variation, although the training phase of this method is not easy to determine. The methodology has been applied to different applications designed for disabled people: driving an electric wheelchair, driving an automobile, and using pointing devices (mouse, trackball, joystick, etc.). In each application, a protocol and an ecological situation are defined in order to evaluate participants on different platforms involving functional control interfaces (joystick, mouse, steering wheel, etc.). Statistical tools are then used to analyze the data and provide a first interpretation of behaviours. In each of the studied applications, our methodology automatically identifies different reference behaviours, and the assessment of people, carried out by comparison with these reference behaviours, identifies different levels of expertise and illustrates the evolution of learning during the assessment. The proposed evaluation methodology is an iterative process: the population of experienced people can be enriched by adding people who become stable after assessment, which in turn allows the search for new reference behaviours.
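The first of the two classifiers is standard enough to sketch: Fuzzy C-means alternates membership and centroid updates until the fuzzy partition stabilizes. A compact generic version (fuzzifier m = 2, synthetic data), not the study's implementation:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Plain FCM: returns (cluster centers, membership matrix U of shape c x n)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                         # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2) + 1e-12
        U = 1.0 / d2 ** (1.0 / (m - 1.0))      # inverse-distance memberships
        U /= U.sum(axis=0)
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(np.round(centers, 2))                    # two reference "behaviours"
```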
Pierret, Jean-Dominique. „Méthodologie et structuration d'un outil de découverte de connaissances basé sur la littérature biomédicale : une application basée sur l'exploitation du MeSH“. Toulon, 2006. http://tel.archives-ouvertes.fr/tel-00011704.
The information available in bibliographic databases is dated, validated by a long process, and therefore not very innovative. Bibliographic databases are usually consulted in a Boolean way, and the result of a query is a set of already-known references that bring no additional novelty. In 1985, Don Swanson proposed an original method to draw innovative information out of bibliographic databases. His reasoning is based on the systematic use of the biomedical literature to draw latent connections between different well-established pieces of knowledge, and he demonstrated the unsuspected potential of bibliographic databases for knowledge discovery. The value of his work did not lie in the nature of the available information but in the methodology he used: a general methodology applied mainly to validated and structured information, that is, bibliographic information. We propose to test the robustness of Swanson's theory by setting out methods inspired by it; these methods led to the same conclusions as Swanson's. We then explain how we developed a knowledge-discovery system based on the literature available from public biomedical information sources.
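Swanson's reasoning (the ABC model) is simple to state: if concept A co-occurs with intermediate terms B in one literature, and those B terms co-occur with C in a disjoint literature, then the unreported A-C link is a discovery candidate. A toy sketch over co-occurrence sets, with Swanson's famous fish-oil/Raynaud example as placeholder data:

```python
# Toy co-occurrence data: term -> set of terms it appears with in the literature.
cooccurs = {
    "fish_oil": {"blood_viscosity", "platelet_aggregation", "vascular_reactivity"},
    "raynaud": {"blood_viscosity", "platelet_aggregation"},
}

def abc_candidates(a: str, c: str, cooccurs: dict) -> set:
    """B-terms linking A and C; a direct A-C co-occurrence disqualifies the pair."""
    if c in cooccurs.get(a, set()):
        return set()               # link already known: nothing left to discover
    return cooccurs.get(a, set()) & cooccurs.get(c, set())

print(abc_candidates("fish_oil", "raynaud", cooccurs))
# {'blood_viscosity', 'platelet_aggregation'} -> candidate hidden A-C connection
```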
Hobeika, Christelle. „Méthodologie de vérification automatique basée sur l'utilisation des tests structurels de transition avec insertion de registres à balayage“. Mémoire, École de technologie supérieure, 2011. http://espace.etsmtl.ca/931/1/HOBEIKA_Christelle.pdf.
Leroux-Beaudout, Renan. „Méthodologie de conception de systèmes de simulations en entreprise étendue, basée sur l'ingénierie système dirigée par les modèles“. Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30089.
This manuscript presents a methodology for the design of "early" simulations in the extended enterprise, based on model-driven systems engineering. The goal is to allow the system architect to explore alternative solutions and to verify and/or validate the system architecture being designed against the user requirements. The methodology is divided into two complementary axes: the method itself (new) and the means of execution, without which there can be no simulation. The new method is based on the following principle: start from the user requirements to create the system architecture model, then derive the simulation architecture, develop the executable models, and run the simulation against verification and/or validation objectives. By doing this, potential differences of interpretation between the system architecture model and the simulation models are removed, or at least reduced, compared with a traditional approach. The method is organized as a matrix: the columns represent the actors, while the rows correspond to the different steps of the MBSE method used by the system architect for the product, including refinement steps. The actors are the system architect for the product (SyA); a first new actor introduced by this method, the system architect for the simulation (SiA); the developers of the simulation executable models (SMD); and a second new actor in charge of executing the simulation (SEM), of analyzing its quality, and of producing results exploitable by the system architect for the product. Because the method relies on a matrix structure, the SyA can request simulations either in depth, to specify a particular point of the model, or in breadth, to check that functions agree with one another. With this matrix approach, the SyA can also reuse functions already defined during the upstream or downstream stages of previous decompositions, saving time and cost and increasing confidence. The second axis of the methodology is the realization of an extended enterprise (EE) cosimulation platform, which is a project in itself. Starting from a proposed requirements specification, a functional and physical architecture was defined with MBSE. The architecture of this platform can be modified according to the simulation needs expressed by the architect of the simulation; this is one of his prerogatives. The proposal introduces a third new actor: the Infrastructure Project Manager (IPM), who coordinates the realization of the cosimulation platform within his company. For an EE of federated type, that is, from contractor to subcontractor, two further actors are introduced: the IPM supervisor, whose role is to link the IPMs in order to solve administrative and interconnection problems, and the person responsible for running the simulations, who coordinates their implementation with each partner's SEM, launches them, and returns the results to all partners.
Rojas, Jhojan Enrique. „Méthodologie d’analyse de fiabilité basée sur des techniques heuristiques d’optimisation et modèles sans maillage : applications aux systèmes mécaniques“. Thesis, Rouen, INSA, 2008. http://www.theses.fr/2008ISAM0003/document.
Structural engineering designs must satisfy performance criteria such as safety, functionality and durability, generally established in the pre-design phase. Traditionally, engineering designs use deterministic information about dimensions, material properties and external loads. However, the structural behaviour of complex models needs to take different kinds and levels of uncertainty into account. This analysis is preferably carried out in terms of probabilities, since estimating the probability of failure is crucial in structural engineering. Reliability is the probability related to the perfect operation of a structural system throughout its functional lifetime under normal operating conditions, and a major interest of reliability analysis is to find the best compromise between cost and safety. Aiming to eliminate the main difficulties of traditional reliability methods such as the First- and Second-Order Reliability Methods (FORM and SORM), this work proposes the so-called Heuristic-based Reliability Method (HBRM). The heuristic optimization techniques used in this method are Genetic Algorithms, Particle Swarm Optimization and Ant Colony Optimization. The HBRM does not require an initial guess of the design solution, because it is based on multidirectional search, and it does not need to compute the partial derivatives of the limit state function with respect to the random variables. These functions are evaluated using analytical, semi-analytical and numerical models, implemented with the Ritz method (in MATLAB®), the finite element method (in MATLAB® and ANSYS®) and the element-free Galerkin (EFG) method (in MATLAB®). The combination of these reliability analyses, optimization procedures and modelling methods constitutes the reliability-based design methodology proposed in this work. These numerical tools were used to evaluate its advantages and disadvantages for specific applications and to demonstrate the applicability and robustness of this alternative approach. Good agreement was observed between the results of two- and three-dimensional applications in statics, stability and dynamics. These numerical examples explore explicit and implicit multiple limit state functions of several random variables. Deterministic validation and stochastic analyses linked to the Muscolino perturbation method provide the basis for reliability analysis of 2-D and 3-D fluid-structure interaction problems, and the methodology is applied to an industrial structure through modal synthesis. Results for laminated composite plates modelled by the EFG method are compared with their counterparts obtained by finite elements. Finally, an extension to reliability-based design optimization is proposed using the optimal safety factors method, with numerical applications that perform weight minimization under a target reliability index using mesh-based and meshless models.
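The HBRM principle - replacing gradient-based FORM iterations with a heuristic search - can be sketched as follows: in standard normal space, minimize ||u|| on the limit state g(u) = 0 (here via a small particle swarm with a penalty), giving the reliability index β and Pf ≈ Φ(−β). A schematic with a linear toy limit state, not the thesis implementation:

```python
import numpy as np
from scipy.stats import norm

def g(u):
    """Toy limit state in standard normal space; failure when g(u) <= 0."""
    return 4.0 - u[..., 0] - 0.5 * u[..., 1]

def pso_beta(g, dim=2, n=40, iters=200, seed=1):
    """Particle swarm search of the most probable failure point (no gradients)."""
    rng = np.random.default_rng(seed)
    cost = lambda u: np.linalg.norm(u, axis=-1) + 1e3 * np.abs(g(u))  # penalty
    x = rng.normal(0.0, 3.0, (n, dim))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), cost(x)
    for _ in range(iters):
        gbest = pbest[np.argmin(pcost)]
        r1, r2 = rng.random((2, n, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        c = cost(x)
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
    return np.linalg.norm(pbest[np.argmin(pcost)])

beta = pso_beta(g)
print(beta, norm.cdf(-beta))       # analytic: beta = 4/sqrt(1.25) ~ 3.578
```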
Correa, Matthieu. „Développement et validation d'une méthodologie basée sur la mécanomyographie pour analyser les variations d'effort et la fatigue musculaire“. Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASW005.
Today, the prevention of musculoskeletal disorders is a major public-health challenge, and among the risk factors for the development of these disorders, fatigue plays a predominant role. Electromyography (EMG) is the reference method for detecting fatigue in the field, but it has several functional limitations for routine use in professional settings. Mechanomyography (MMG), which measures the vibrations of active muscle fibers, could be a relevant alternative for measuring muscle activity and the associated fatigue; however, MMG data incorporate movement accelerations, which is the method's main limitation. In this thesis, we first compared EMG and MMG data under isometric conditions. We then demonstrated that MMG is more sensitive than EMG to variations in dynamic effort during controlled, constant-velocity load lifting. We also proposed a new adaptive motion-artifact filtering method, which was compared to a traditional filtering method. Finally, we showed that MMG, like EMG, can detect upper-limb muscle fatigue during repetitive load handling. Certain frequency bands of the MMG signal should be considered for optimal detection of muscle fatigue in the field. All these results were discussed and compared with the literature to define the best framework for using the MMG method to better prevent workplace injuries.
Marteau, Hubert. „Une méthode d'analyse de données textuelles pour les sciences sociales basée sur l'évolution des textes“. Tours, 2005. http://www.theses.fr/2005TOUR4028.
This PhD thesis aims at giving sociologists a data-processing tool for the analysis of semi-directive open interviews. The proposed tool performs two steps: an indexing of the interviews followed by a classification. Usual indexing methods rely on a general statistical analysis and are suited to texts with rich content and structure (literary texts, scientific texts, etc.). Interview transcripts have less vocabulary and structure than such texts (on the order of 1000 words). On the basis of the assumption that sociological membership strongly shapes the form of speech, we propose various methods to evaluate the structure and evolution of the texts. The methods attempt to find new representations of the texts (image, signal) and to extract values from these representations. The selected classification is tree-based classification (NJ), which has low complexity and respects distances, making it a good solution to support classification.
Nguyen, Cong Tin. „Implémentation semi-automatique du protocole ELDA et contribution à une méthodologie de développement des protocoles d'application basée sur ESTELLE“. Clermont-Ferrand 2, 1993. http://www.theses.fr/1993CLF21487.
Dubois, Florentine. „Une méthodologie de conception de modèles analytiques de surface et de puissance de réseaux sur puce hautement paramétriques basée sur une méthode d'apprentissage automatique“. Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00877956.
Dubois, Florentine. „Une méthodologie de conception de modèles analytiques de surface et de puissance de réseaux sur puce hautement paramétriques basée sur une méthode d’apprentissage automatique“. Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM026/document.
In the last decade, networks-on-chip (NoCs) have emerged as an efficient and flexible interconnect solution to handle the increasing number of processing elements included in systems-on-chip (SoCs). NoCs can meet high-bandwidth and scalability needs under tight performance constraints. However, they are usually characterized by a large number of architectural and implementation parameters, resulting in a vast design space, so finding a suitable NoC architecture for the needs of a specific platform is a challenging issue. Moreover, most main design decisions (e.g. topology, routing scheme, quality of service) are usually made at architectural level, during the first steps of the design flow, but measuring the effects of these decisions on the final implementation at such a high level of abstraction is complex. Static analysis (i.e. non-simulation-based methods) has emerged to fulfill this need for reliable performance and cost estimation methods available early in the design flow. As the level of abstraction of static analysis is high, it is unrealistic to expect an accurate estimation of the performance or cost of the chip; fidelity (i.e. characterization of the main tendencies of a metric) is thus the main objective rather than accuracy. This thesis proposes a modeling methodology for static cost analysis of NoC components, mainly oriented towards generality: no assumption is made either on the number of parameters of the components or on how the modeled metric depends on these parameters. We are then able to address components with vast configuration spaces (on the order of 1e+30 configuration possibilities) and to estimate, at architectural level, the cost of complex NoCs composed of a large number of such components. Components of this kind are difficult to model with hand-crafted analytical models because of the huge number of configuration possibilities. We thus propose a fully automated modeling flow that can be applied directly to any architecture and technology. The output of the flow is a NoC component cost predictor able to estimate a metric of interest for any configuration of the design space in a few seconds. The flow builds fine-grained analytical models on the basis of gate-level results and a machine-learning method, and is thus able to produce models with better fidelity than purely mathematical methods while preserving their main qualities (low complexity, early availability). Moreover, it also takes the effects of the technology on the performance into account. We propose to use an interpolation method based on Kriging theory. Kriging minimizes the number of implementation-flow runs required in the modeling process and models the main characteristics of the metrics both globally and locally in the parameter space. The method is applied to model the logic area of key NoC components. The inclusion of traffic is then addressed, and a NoC router leakage and average dynamic power model is designed on this basis.
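The Kriging step can be pictured with a Gaussian-process regressor (the machine-learning formulation of Kriging); scikit-learn stands in here for whatever implementation the thesis used, and the "gate-level" data are synthetic:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Synthetic stand-in for gate-level results: area = f(flit width, buffer depth).
rng = np.random.default_rng(0)
X = rng.uniform([16, 1], [128, 16], size=(40, 2))      # sampled configurations
y = 120 * X[:, 0] + 35 * X[:, 0] * np.log2(X[:, 1]) + rng.normal(0, 200, 40)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([10.0, 2.0]),
                              normalize_y=True).fit(X, y)

# The fitted predictor estimates any configuration in milliseconds; the returned
# standard deviation flags regions where more implementation-flow runs are needed.
mean, std = gp.predict(np.array([[64.0, 8.0]]), return_std=True)
print(mean, std)
```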
François, Michaël. „Génération de nombres pseudo-aléatoires basée sur des systèmes multi-physiques exotiques et chiffrement d'images“. Troyes, 2012. http://www.theses.fr/2012TROY0023.
The use of (pseudo-)random numbers has taken on an important dimension in recent decades. Many applications in telecommunications, cryptography, numerical simulation and gambling have contributed to the development and use of these numbers. The methods used to generate (pseudo-)random numbers are based on two types of processes: physical and algorithmic. In this PhD thesis, two classes of generators, based respectively on physical measurements and on mathematical processes, are presented, with two generators for each class. The first class exploits the response of a physical system that serves as a source for the generation of random sequences; it uses both simulation results and interferometric measurements to produce sequences of random numbers. The second class is based on two types of chaotic functions and uses the outputs of these functions as a permutation of the indices of an initial vector. This PhD thesis also focuses on encryption systems for data protection. Two encryption algorithms using chaotic functions are proposed; these algorithms apply a permutation-substitution process to the bits of the original image. A thorough analysis based on statistical tests confirms the relevance of the cryptosystems developed in this PhD thesis.
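The chaotic-permutation idea in the second class of generators can be sketched directly: iterate a chaotic map, then use the ordering of its outputs as a permutation of an initial vector. The logistic map and its parameters below are illustrative choices, not the thesis's exact functions:

```python
import numpy as np

def logistic_orbit(x0, r=3.99, n=256, burn_in=1000):
    """Iterate the chaotic logistic map x -> r*x*(1-x), discarding a transient."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    out = np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

seed_vector = np.arange(256, dtype=np.uint8)          # initial vector
perm = np.argsort(logistic_orbit(0.61803398875))      # chaotic outputs -> indices
print(seed_vector[perm][:16])                         # permuted output stream
```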
Tixier, Jérôme. „Méthodologie d'évaluation du niveau de risque d'un site industriel de type Seveso, basée sur la gravité des accidents majeurs et la vulnérabilité de l'environnement“. Aix-Marseille 1, 2002. http://www.theses.fr/2002AIX11060.
Gordaliza Pastor, Paula. „Fair learning : une approche basée sur le transport optimale“. Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30084.
The aim of this thesis is two-fold. On the one hand, optimal transportation methods are studied for statistical inference purposes. On the other hand, the recent problem of fair learning is addressed through the prism of optimal transport theory. The generalization of applications based on machine-learning models in everyday life and in the professional world has been accompanied by concerns about the ethical issues that may arise from the adoption of these technologies. In the first part of the thesis, we motivate the fairness problem by presenting some comprehensive results from the study of the statistical parity criterion through the analysis of the disparate impact index on the real, well-known Adult Income dataset. Importantly, we show that making machine-learning models fair may be a particularly challenging task, especially when the training observations contain bias. A review of the mathematics of fairness in machine learning is then given in a general setting, with some novel contributions to the analysis of the price of fairness in regression and classification. In the latter, we finish this first part by recasting the links between fairness and predictability in terms of probability metrics. We analyze repair methods based on mapping conditional distributions to the Wasserstein barycenter, and we propose a random repair that yields a trade-off between minimal information loss and a certain amount of fairness. The second part is devoted to the asymptotic theory of the empirical transportation cost. We provide a central limit theorem for the Monge-Kantorovich distance between two empirical distributions with different sizes n and m, Wp(Pn,Qm), p ≥ 1, for observations on R. In the case p > 1, our assumptions are sharp in terms of moments and smoothness. We prove results dealing with the choice of centering constants, and we provide a consistent estimate of the asymptotic variance which enables building two-sample tests and confidence intervals to certify the similarity between two distributions. These are then used to assess a new criterion of data-set fairness in classification. Additionally, we provide a moderate deviation principle for the empirical transportation cost in general dimension. Finally, Wasserstein barycenters and a variance-like criterion based on the Wasserstein distance are used in many problems to analyze the homogeneity of collections of distributions and structural relationships between observations. We propose to estimate the quantiles of the empirical process of Wasserstein's variation using a bootstrap procedure, and we use these results for statistical inference on a distribution registration model for general deformation functions. The tests are based on the variance of the distributions with respect to their Wasserstein barycenters, for which we prove central limit theorems, including bootstrap versions.
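For one-dimensional samples, the empirical transportation cost that these limit theorems concern is directly computable, which is what makes the two-sample fairness check practical. A sketch with SciPy on toy model scores (the bootstrap interval is a naive percentile version, for illustration only):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
scores_a = rng.normal(0.55, 0.15, 1000)    # model scores, protected group A
scores_b = rng.normal(0.60, 0.15, 1500)    # model scores, protected group B

# Empirical W1 distance between the two score distributions: values near 0
# support statistical parity; its asymptotic law yields tests and intervals.
print(wasserstein_distance(scores_a, scores_b))

boot = [wasserstein_distance(rng.choice(scores_a, scores_a.size),
                             rng.choice(scores_b, scores_b.size))
        for _ in range(500)]
print(np.percentile(boot, [2.5, 97.5]))    # rough 95% interval for W1
```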
Tugui, Catalin Adrian. „Mise en place d'une démarche de conception pour circuits hautes performances basée sur des méthodes d'optimisation automatique“. Phd thesis, Supélec, 2013. http://tel.archives-ouvertes.fr/tel-00789352.
Cunha, Guilherme. „Optimisation d'une méthodologie de simulation numérique pour l'aéroacoustique basée sur un couplage faible des méthodes d'aérodynamique instationnaire et de propagation acoustique“. Thesis, Toulouse, ISAE, 2012. http://www.theses.fr/2012ESAE0028/document.
The present work consisted in further improving, assessing and validating the CFD/CAA surface weak-coupling methodology with respect to its application to realistic aircraft-noise problems. In particular, it was shown how far such a hybrid methodology can (i) cope with the stringent constraints dictated by real-life applications and (ii) do so without being jeopardized by unavoidable side-effects (such as the signal degradation to which CFD data are subjected when processed and then acoustically exploited).
Kerbiriou, Corinne. „Développement d'une méthode d'étalonnage d'un radar transhorizon basée sur une analyse fine du fouillis de mer“. Rennes 1, 2002. http://www.theses.fr/2002REN1A003.
Mughal, Arshad Saleem. „Valorisation industrielle intégrée d'agro-ressources non alimentaires : contribution au développement d'une méthodologie d'analyse énergétique et environnementale basée sur le génie des procédés“. Toulouse, INPT, 1994. http://www.theses.fr/1994INPT042G.
Beraud, Benoit. „Méthodologie d'optimisation du contrôle/commande des usines de traitement des eaux résiduaires urbaines basée sur la modélisation et les algorithmes génétiques multi-objectifs“. Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2009. http://tel.archives-ouvertes.fr/tel-00457236.
Beraud, Benoît. „Méthodologie d’optimisation du contrôle/commande des usines de traitement des eaux résiduaires urbaines basée sur la modélisation et les algorithmes génétiques multi-objectifs“. Montpellier 2, 2009. http://www.theses.fr/2009MON20049.
The work presented in this thesis concerns the development of an optimization methodology for the control laws of wastewater treatment plants. It is based on the use of WWTP process models to simulate plant operation. These simulations are driven by a multi-objective genetic algorithm, NSGA-II, which searches for optimal solutions when multiple objectives are considered (e.g. effluent quality, energy consumption, etc.) and makes it possible to visualize the compromises arising between various control laws as well as their respective best domains of application. In the first part of this work, the optimization methodology is developed around four main axes: the design of a robust simulation procedure, the choice of input datasets for the simulations, the choice of objectives and constraints to consider, and the evaluation of the long-term performance and robustness of control laws. The methodology is then applied to the BSM1 literature case study. In the second part, it is applied to the real case study of the Cambrai wastewater treatment plant. This application includes new aspects such as the generation of dynamic input datasets from the plant's daily monitoring measurements, and the simulation of control laws based on oxidation-reduction potential measurements. It allowed analyzing the compromises between the control law currently tested on the plant and a newly envisaged one, so that the benefits of this modification could be clearly observed.
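What NSGA-II ultimately hands the decision-maker is a Pareto front of compromises, e.g. between effluent quality and energy consumption. The non-dominated filtering at its core fits in a few lines (objectives to minimize, toy data, not the thesis setup):

```python
import numpy as np

def pareto_mask(F):
    """Boolean mask of non-dominated rows of F (all objectives minimized)."""
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        if mask[i]:
            # i is dominated if some row is <= everywhere and < somewhere
            dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            if dominated.any():
                mask[i] = False
    return mask

rng = np.random.default_rng(2)
F = rng.random((200, 2))               # columns: effluent-quality index, energy
front = F[pareto_mask(F)]
print(front[np.argsort(front[:, 0])])  # the trade-off curve to be inspected
```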
Aymard, Emmanuel. „Détermination des efforts aérodynamiques s'exerçant sur une surface portante en rotation par une méthodologie basée sur la vélocimétrie laser : application aux pales d'un rotor d'hélicoptère en vol d'avancement“. Aix-Marseille 2, 1998. http://www.theses.fr/1998AIX22113.
Rasovska, Ivana. „Contribution à une méthodologie de capitalisation des connaissances basée sur le raisonnement à partir de cas : Application au diagnostic dans une plateforme d'e-maintenance“. Phd thesis, Université de Franche-Comté, 2006. http://tel.archives-ouvertes.fr/tel-00257893.
Rasovska, Ivana. „Contribution à une méthodologie de capitalisation des connaissance basée sur le raisonnement à partir de cas : application au diagnostic dans une plateforme d'e-maintenance“. Besançon, 2006. https://tel.archives-ouvertes.fr/tel-00257893.
Faced with technological developments, the increasing complexity of industrial plants and process dynamics, as well as organisational changes and staff mobility, maintenance managers want to formalise and capitalise the knowledge and know-how of maintenance operators and experts. To deal with these factors, our objective is to provide a maintenance-assistance service that uses and capitalises knowledge. Our work was part of the European project Proteus, whose goal was to develop a generic distributed e-maintenance platform integrating a set of different maintenance systems and applications. We specified four levels of maintenance applications, each associated with a set of decision-support systems: equipment analysis, diagnosis and expertise, resource management, and maintenance strategy management. These tools require expertise, which we propose to capitalise and preserve in a corporate memory. In order to create this memory and to develop our diagnosis and repair support system, we introduced a methodology combining knowledge capitalisation and knowledge-intensive case-based reasoning. The development of our system is based on knowledge modelling consisting of a representation model (a domain ontology) and a problem-solving model (case-based reasoning). The suggested models use emerging Semantic Web technologies, which make possible the evolution of the e-maintenance concept into a new concept of s-maintenance (semantic maintenance).
Lesueur, Chloé. „Relations entre les mesures de mouvements du sol et les observations macrosismiques en France : Etude basée sur les données accélérométriques du RAP et les données macrosismiques du BCSF“. Strasbourg, 2011. https://publication-theses.unistra.fr/public/theses_doctorat/2011/LESUEUR_Chloe_2011.pdf.
Comparison between accelerometric and macroseismic observations is made for three Mw ~4.5 earthquakes in eastern France between 2003 and 2005. Scalar and spectral instrumental parameters are processed from the accelerometric data recorded by nine accelerometric stations located between 29 km and 180 km from the epicentres. Macroseismic data are based on the French Internet reports. In addition to the individual macroseismic intensity, analysis of the internal correlation between the encoded answers highlights four predominant fields of questions, bearing different physical meanings: 1) "Vibratory Motions of Small Objects", 2) "Displacement and Fall of Objects", 3) "Acoustic Noise", and 4) "Personal Feelings". The best correlations between macroseismic and instrumental observations are obtained when the macroseismic parameters are averaged over 10 km-radius circles around each station. Macroseismic intensities predicted by published PGV-intensity relationships agree well with the observed intensities, contrary to those based on PGA. The correlations between the macroseismic and instrumental data, for intensities between II and V (EMS-98), show that PGV is the instrumental parameter that correlates best with all macroseismic parameters. The correlation with response spectra exhibits clear frequency dependence over a limited frequency range [0.5-33 Hz]. Horizontal and vertical components are significantly correlated with macroseismic parameters between 1 and 10 Hz, a range corresponding both to the natural frequencies of most buildings and to the high energy content of the seismic ground motion. Between 10 and 25 Hz, a clear lack of correlation between macroseismic and instrumental data is observed, while beyond 25 Hz the correlation coefficient increases, approaching the PGA correlation level.
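The two instrumental parameters at stake are straightforward to extract from an accelerogram: PGA is the peak of |a(t)| and PGV the peak of the time-integrated velocity trace. A sketch on a synthetic record (real processing would add baseline correction and filtering):

```python
import numpy as np

def pga_pgv(acc, dt):
    """Peak ground acceleration/velocity from an evenly sampled accelerogram."""
    vel = np.cumsum(acc) * dt                  # crude integration a(t) -> v(t)
    return np.max(np.abs(acc)), np.max(np.abs(vel))

dt = 0.01                                      # 100 Hz sampling
t = np.arange(0.0, 10.0, dt)
acc = 0.8 * np.exp(-0.5 * t) * np.sin(2 * np.pi * 5 * t)   # damped 5 Hz sine

pga, pgv = pga_pgv(acc, dt)
print(f"PGA = {pga:.3f} m/s^2, PGV = {pgv:.4f} m/s")
```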
Legendre, Anthony. „Ingénierie système et Sûreté de fonctionnement : Méthodologie de synchronisation des modèles d'architecture et d'analyse de risques“. Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLC083/document.
The classical organization of industry into disciplinary silos is reaching its limits for managing and controlling complexity. Problems are discovered too late, and the lack of communication between experts prevents the early emergence of solutions. This is why it is urgent to provide new collaborative approaches and ways to exchange model contents between the various engineering fields, early and all along the development cycle. In this context, we are particularly interested in an approach for synchronizing models between two engineering fields: system architecture design and dependability analysis. This work proposes a collaborative model-synchronization approach that takes into account the study contexts, the applied processes and methods, and the viewpoints produced by engineers. The contributions address issues at the levels of practices, concepts, implementation, and application of model synchronization.
Pecquois, Romain. „Etude et réalisation d’une source de rayonnement large bande de forte puissance basée sur un concept innovant de transformateur résonant impulsionnel“. Thesis, Pau, 2012. http://www.theses.fr/2012PAUU3047/document.
Nowadays, a broad range of modern defense applications requires compact pulsed-power generators to produce high-power electromagnetic waves. In a conventional design, such generators consist of a primary energy source and an antenna, separated by a power-amplification system, such as a Marx generator or a Tesla transformer, which forwards the energy from the source to the antenna. The present system, however, uses a novel and very compact high-voltage resonant pulsed transformer to drive a dipole antenna. The complete pulsed-power source, termed MOUNA (French acronym for "Module Oscillant Utilisant une Nouvelle Architecture"), is composed of a set of batteries, a dc/dc converter for charging four capacitors, four synchronized spark-gap switches, a resonant pulsed transformer that can generate 600 kV in 265 ns, an oil peaking switch, and a dipole antenna.
Leroy, Yann. „Développement d'une méthodologie de fiabilisation des prises de décisions environnementales dans le cadre d'analyses de cycle de vie basée sur l'analyse et la gestion des incertitudes sur les données d'inventaires“. Phd thesis, Paris, ENSAM, 2009. http://pastel.archives-ouvertes.fr/pastel-00005830.
Houhou, Noureddine. „Durabilité des interfaces collées béton/renforts composites : développement d'une méthodologie d'étude basée sur un dispositif de fluage innovant conçu pour être couplé à un vieillissement hygrothermique“. Phd thesis, Université Paris-Est, 2012. http://tel.archives-ouvertes.fr/tel-00765147.
Ramos, José. „Méthodologie basée sur la vélocimétrie laser pour l'étude de l'écoulement autour de surfaces portantes en rotation : application à la détermination des efforts locaux sur une pale de rotor en vol stationnaire“. Aix-Marseille 2, 1995. http://www.theses.fr/1995AIX22105.
Lucanu, Nicolae. „Contribution à l'étude de la diffraction d'une onde électromagnétique plane par des obstacles métalliques en utilisant la méthode itérative basée sur le concept d'onde“. Toulouse, INPT, 2001. http://www.theses.fr/2001INPT024H.
Duval, Jean-Baptiste. „Détection numérique de petites imperfections de conductivité en 2D et 3D par une méthode dynamique basée sur l'équation des ondes et le contrôle géométrique“. Phd thesis, Université de Picardie Jules Verne, 2009. http://tel.archives-ouvertes.fr/tel-00429530.
Godet, Sylvain. „Instrumentation de mesure sur puce pour systèmes autotestables : application à la mesure de bruit de phase basée sur des résonateurs BAW“. Toulouse 3, 2010. http://thesesups.ups-tlse.fr/987/.
This work deals with an integrated phase-noise test bench for BAW resonators. The technology used is the SiGe:C 0.25 µm BiCMOS7RF process from STMicroelectronics. A current trend is to integrate testing facilities next to more or less complex circuits. An integrated test bench for measuring phase noise relieves us of the constraints and high cost of external probing measurements. The simultaneous integration of the test circuit with the systems under test also makes it possible to fully exploit the component-matching possibilities available on the same substrate. On-chip measurement greatly simplifies the testing process, minimizing the use of bulky and expensive external measurement equipment. It also allows tracking variations of the system characteristics, over time or after various kinds of damage. Such measurement leads naturally to the design of self-testable, and therefore self-reconfigurable, ICs. The goal of this thesis was to define the component architectures and the design of the integrated phase-noise test bench as a function of the required measurement accuracy. We show that this high-performance instrumentation system can be integrated in a standard SiGe technology.