Doctoral dissertations on the topic "DCT TECHNIQUE"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles.

Consult the top 50 doctoral dissertations on the topic "DCT TECHNIQUE".

An "Add to bibliography" button appears next to each listed work. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever the relevant parameters are available in the metadata.

Browse doctoral dissertations from a wide range of disciplines and compile accurate bibliographies.

1

Guezzi, Messaoud Fadoua. "Analyse de l'apport des technologies d'intégration tri-dimensionnelles pour les imageurs CMOS : application aux imageurs à grande dynamique". Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1022/document.

Abstract:
With the increase in system complexity, integrating different technologies together has become a major challenge. Another traditional challenge is the limitation on throughput between different parts of a system imposed by the interconnections. While two-dimensional integration solutions such as System in Package (SiP) bring heterogeneous technologies together, limitations remain due to the restricted number and length of interconnections between system components. Three-dimensional (3D) stacking, by exploiting short vertical interconnections between circuits of mixed technologies, has the potential to overcome these limitations. Still, despite strong interest in 3D concepts, there is no advanced analysis of the benefits of 3D integration, especially in the field of imagers and smart image sensors. This thesis studies the potential benefits of 3D integration, with local processing and short feedback loops, for the realisation of a High Dynamic Range (HDR) image sensor. Dense vertical interconnections are used to locally adapt the integration time per group of pixels, called macro-pixels, while keeping a classic pixel architecture and hence a high fill factor. Stacking the pixel section and the circuit section enables a compact pixel and the integration of flexible, versatile functions. Since high dynamic range values produce a large quantity of data, data compression was implemented to reduce the circuit throughput. A first level of compression is obtained by coding each pixel value in a floating format with a common exponent shared across the macro-pixel. A second level of compression is based on a simplified version of the Discrete Cosine Transform (DCT). Using this two-level scheme, a compression of 93% can be obtained with a typical PSNR of 30 dB.
The architecture was validated through the development, fabrication and test of a prototype in a 2D, 180 nm CMOS technology. A few pixels of each macro-pixel had to be sacrificed to implement the high dynamic range control signals and emulate the 3D integration. The test results are very promising, demonstrating the benefits that 3D integration will bring in terms of power consumption and image quality compared to a classic 2D integration. Future realisations of this architecture in a true 3D technology, separating sensing and processing onto different circuits communicating through vertical interconnections, will not need to sacrifice any pixels to adjust the integration time, improving power consumption, image quality and latency.
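The shared-exponent "floating" coding described in this abstract is not detailed there; as a rough sketch of the idea (my own illustration with hypothetical bit widths, not the circuit from the thesis), each macro-pixel can be stored as one exponent plus short mantissas:

```python
def encode_macro_pixel(values, mantissa_bits=8):
    """Code one macro-pixel block as (shared_exponent, mantissas).

    All pixels share a single exponent, chosen so the brightest pixel
    still fits in `mantissa_bits` bits (hypothetical scheme, illustrating
    the mantissa-exponent coding idea from the abstract)."""
    exponent = 0
    max_mantissa = (1 << mantissa_bits) - 1
    peak = max(values)
    while (peak >> exponent) > max_mantissa:
        exponent += 1
    mantissas = [v >> exponent for v in values]
    return exponent, mantissas

def decode_macro_pixel(exponent, mantissas):
    """Approximate reconstruction: the low bits are lost."""
    return [m << exponent for m in mantissas]

# Four 16-bit HDR samples within one macro-pixel
block = [40000, 1023, 65535, 12000]
exp, mants = encode_macro_pixel(block)
approx = decode_macro_pixel(exp, mants)
```

The first compression level comes for free here: four 16-bit values become one small exponent plus four 8-bit mantissas, at the cost of quantization error in the dark pixels of a bright macro-pixel.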
2

Louis-sidney, Ludovic. "Modèles et outils de capitalisation des connaissances en conception : contribution au management et à l'ingénierie des connaissances chez Renault - DCT". Phd thesis, Ecole Centrale Paris, 2011. http://tel.archives-ouvertes.fr/tel-00659298.

Abstract:
The paradigm shift that places the intangible resource of knowledge ahead of material resources is under way in many industrial companies. Our research highlights the disciplines of knowledge management and knowledge engineering, which provide methodological and technical answers for handling this resource. We focus in particular on how knowledge is exploited through tangible objects (documents, information systems). In this context, we propose a conceptual model for structuring an organisation's knowledge-support tools. This model is aimed mainly at companies with a process view of their operations, in accordance with ISO 9000. It was evaluated along two axes. The first concerns its descriptive capacity. We show that the faceted-classification principle used is precise and complete enough to suit many applications. To this end, we apply the principle in the field of knowledge engineering and develop a first demonstrator enabling automated exchanges between parameterised files. The second axis concerns the ability of the proposed conceptual model to support the construction of an information system contributing to a knowledge management initiative. A demonstrator implementing the model was developed and gives a concrete view of the possibilities it offers.
3

Ekström, Alexander Gösta. "Developing dynamic combinatorial chemistry as a platform for drug discovery". Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31073.

Abstract:
Dynamic combinatorial chemistry (DCC) is a powerful tool for identifying new ligands for biological targets. In this technique, library synthesis and hit identification are neatly combined into a single step. A labile linkage between fragments allows the biological target to self-select binders from a dynamic combinatorial library (DCL) of interconverting building blocks. The scope of suitable reversible reactions that proceed under thermodynamic control in physiological conditions has gradually expanded over recent decades; however, DCC has thus far failed to gain traction as a technique for drug discovery in the pharmaceutical industry. The constraints placed on library size by validated analytical techniques, and the effort-intensive reality of this academically elegant concept, have not allowed DCC to develop into a broad-platform technique that can compete with the high-throughput screening campaigns favoured by medicinal chemists. This thesis seeks to develop DCL analysis techniques in an effort to increase library size and accelerate the analysis of DCC experiments. Using a 19F-labelled core scaffold, we constructed a DCL that could be monitored non-invasively by 19F NMR. Building on NMR techniques developed in fragment-screening and non-biological DCC campaigns, the method was designed to circumvent the undesired equilibrium-perturbing side effects of sample-consuming analytical methods. The N-acylhydrazone (NAH) DCL equilibrated rapidly at pH 6.2 using 4-amino-L-phenylalanine (4-APA) as a novel, physiologically benign, nucleophilic catalyst. The DCL was designed to target β-ketoacyl-ACP synthase III (FabH), an essential bacterial enzyme and antibiotic target. From the 5-membered DCL, a single combination was identified as a privileged structure by our 19F NMR method. The result correlated well with an in vitro assay, validating 19F NMR as a tool for DCL screening.
During the 19F NMR study we identified an established antimicrobial compound, 4,5-dichloro-1,2-dithiole-3-one (HR45), as a potential core scaffold from which to develop future DCLs targeting FabH. Despite the potentially tractable chemistry of HR45 for DCC, lack of knowledge of the compound's inhibitory mechanism prevented us from proceeding. Thus, we used mass spectrometry, NMR and molecular modelling to show that HR45 acts by forming a covalent adduct with S. aureus FabH. The 5-chloro substituent directs attack from the nucleophilic thiol side chain of the essential active-site cysteine-112 residue via a Michael-type addition-elimination mechanism. Although interesting, this mechanism disfavoured the use of HR45 as a core scaffold for NAH exchange in a DCC campaign. Electrospray ionisation mass spectrometry (ESI-MS) is a powerful technique that allows larger DCLs by eliminating the size limitations imposed by the need for spectral or chromatographic resolution of DCL members. We developed a 4-APA-catalysed NAH library targeting the pyridoxal 5'-phosphate (PLP) dependent enzyme 7,8-diaminopelargonic acid synthase (BioA), an essential enzyme in the biotin biosynthesis pathway. We exploited the aldehyde moiety of PLP to form an NAH DCL with a panel of hydrazides, and used the BioA isozymes from M. tuberculosis (Mtb) and E. coli to template the library. A combination of buffer exchange and denaturing ESI-MS allowed us to conduct a DCC experiment with a 29-member DCL. Hits from the DCC experiment correlated well with differential scanning fluorimetry (DSF) results. Of these hits, 5 compounds were selected for further study. Two compounds displayed in vivo activity against E. coli and the ESKAPE pathogen A. baumannii. The identification of compounds with antibacterial activity from a DCL further validates ESI-MS as a platform technology for drug discovery.
4

Aimer, Younes. "Étude des performances d'un système de communication sans fil à haut débit". Thesis, Poitiers, 2019. http://www.theses.fr/2019POIT2269.

Abstract:
User demands in terms of rate, coverage and quality of service are growing exponentially, together with an increasing demand for electrical energy to sustain network links. In this context, new waveforms based on OFDM modulation have become widely popular and are used intensively in recent radio-communication architectures. However, these signals are sensitive to power-amplifier nonlinearities because of their high envelope fluctuations, characterized by a high PAPR, which degrades the energy consumption and the transmitter efficiency. In this thesis, we first establish a state of the art of PAPR reduction techniques. This survey allowed us to propose a new method based on interleaving and coding techniques. The first contribution consists in using the interleaving technique with null subcarriers for transmitting the side information, while respecting the frequency specifications of the standard in use. The second is based on combining the Shaping technique with the Discrete Cosine Transform (DCT), with the aim of improving system performance. Simulation results show that using these two techniques yields a significant gain in PAPR reduction, which translates into improved system efficiency. Finally, we present an experimental study of the proposed techniques using an RF test bench with a commercial 20 W LDMOS power amplifier operating in class AB at 3.7 GHz. The results obtained for the IEEE 802.11 standards show that the proposed approaches guarantee transmission robustness and link quality while optimizing power consumption.
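The PAPR metric that these techniques attack is easy to reproduce; the sketch below (stdlib only, with random QPSK data and a naive IDFT; the standard's parameters, the amplifier, and the thesis's reduction schemes are not reproduced here) measures the peak-to-average power ratio of one OFDM symbol:

```python
import cmath
import math
import random

def ofdm_symbol(n, rng):
    """Time-domain OFDM symbol: naive IDFT of n random QPSK subcarriers."""
    qpsk = [rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) for _ in range(n)]
    return [sum(qpsk[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def papr_db(signal):
    """Peak-to-average power ratio in dB; high values push the power
    amplifier into its nonlinear region."""
    powers = [abs(s) ** 2 for s in signal]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

x = ofdm_symbol(64, random.Random(0))
# A constant-envelope signal has 0 dB PAPR; OFDM symbols sit well above that,
# which is exactly what precoding schemes such as DCT shaping try to reduce.
```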
5

Edirisuriya, Amila. "Digital Hardware Architectures for Exact and Approximate DCT Computation Using Number Theoretic Techniques". University of Akron / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=akron1363233037.

6

Muradagha, Rafea. "A modified DFT technique for linear phase measurement". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0005/MQ45336.pdf.

7

Gregušová, Michaela. "Modifikace techniky difúzních gelů (DGT) pro charakterizaci přírodních systémů". Doctoral thesis, Vysoké učení technické v Brně. Fakulta chemická, 2010. http://www.nusl.cz/ntk/nusl-233319.

Abstract:
The diffusive gradients in thin films (DGT) technique represents a relatively new approach to in situ determination of labile metal species in aquatic systems. The DGT device passively accumulates labile species from solution while deployed in situ, so the contamination problems associated with conventional collection and filtration procedures are eliminated. This study deals with a possible modification of the DGT technique. The key to using DGT for speciation analysis of metals is finding a suitable binding phase and diffusion layer. A new resin gel based on the Spheron Oxin (5-sulphophenyl-azo-8-hydroxyquinoline) ion exchanger, with higher selectivity for trace metals than Chelex 100, could potentially provide more information on metal speciation in aquatic systems. The performance of this new binding phase was tested for the determination of Cd, Cu, Ni, Pb and U under laboratory conditions. A hydrogel layer based on poly(2-hydroxyethyl methacrylate) was also synthesized and tested as a new diffusion gel for the DGT technique.
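The quantitative step behind a DGT deployment, not spelled out in the abstract, is the standard DGT relation C = M·Δg/(D·A·t) from the DGT literature; a minimal sketch (the example masses, thicknesses and diffusion coefficient are my own assumed values, not data from this thesis):

```python
def dgt_concentration(mass, delta_g, diff_coeff, area, time):
    """Labile-metal concentration from an in situ DGT deployment:
    C = M * dg / (D * A * t), where M is the mass accumulated on the
    binding gel, dg the diffusive-layer thickness, D the diffusion
    coefficient in the gel, A the exposure-window area and t the
    deployment time (standard DGT equation)."""
    return mass * delta_g / (diff_coeff * area * time)

# Hypothetical deployment: 50 ng of Cd accumulated over 24 h
c_cd = dgt_concentration(
    mass=50e-9,          # g accumulated on the binding gel
    delta_g=0.094,       # cm, assumed diffusive-layer thickness
    diff_coeff=5.0e-6,   # cm^2/s, assumed diffusion coefficient
    area=3.14,           # cm^2, exposure-window area
    time=24 * 3600,      # s
)                        # concentration in g/cm^3
```

The binding phase studied here (Spheron Oxin vs. Chelex 100) enters through M: a more selective resin changes which labile species are accumulated, not the equation itself.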
8

Rio, Jérémy. "Modélisation à l'échelle atomique de Cycloparaphénylènes avec les techniques ab initio". Thesis, Nantes, 2017. http://www.theses.fr/2017NANT4078/document.

Abstract:
The work in this thesis concerns the study at the atomic scale of cycloparaphenylene ([n]CPP) molecules and their complexes and derivatives, using ab initio modelling methods (DFT/LDA). I initially look at the stability of these molecular rings when functionalised by halogens, and the structural changes induced. The important notion of curvature energy is raised to find new synthesis routes. The encapsulation of C60 fullerene inside [10]CPPs is a very important part of this work, in particular the interaction between the azafullerene dimer (C59N)2 and two [10]CPPs. This allowed us to study the supramolecular interactions and the alignment of two [10]CPPs on this dimer, both theoretically and experimentally, through collaboration with research teams in Germany and Greece. The possibility of templated alignment of [10]CPPs leads to a study of the functionalisation of these molecules with the aim of connecting them together with various connectors, for example aromatic species, polymers or metals, to form a new family of pseudo-nanotubes composed of multiply inter-linked [10]CPPs. Depending on the connections used, the conduction properties of the pseudo-nanotubes vary from wide-gap semiconductors to metallic structures. I also show in this manuscript that [n]CPPs and carbon nanotubes can interact to form structures where the ring is encapsulated inside or wrapped around the carbon nanotube. In this context, the study of the rotation of cycloparaphenylene demonstrates a very low frictional force and thus predicts ultra-rapid CPP rotation.
9

Dale, Brian M. "Optimal Design of MR Image Acquisition Techniques". Case Western Reserve University School of Graduate Studies / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=case1081556784.

10

Alrasheed, Waleed. "Time and Space Efficient Techniques for Facial Recognition". Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/6238.

Abstract:
In recent years there has been increasing interest in face recognition, and many new facial recognition techniques have been introduced. Recent developments in the field have led to a growing number of commercial face recognition products. However, face recognition techniques are currently constrained by three main factors: recognition accuracy, computational complexity, and storage requirements. The problem is that most current techniques improve one or two of these factors at the expense of the others. In this dissertation, four novel face recognition techniques that improve the storage and computational requirements of face recognition systems are presented and analyzed. Three of the four techniques, namely Quantized/Truncated Transform Domain (QTD), Frequency Domain Thresholding and Quantization (FD-TQ), and Normalized Transform Domain (NTD), utilize the Two-Dimensional Discrete Cosine Transform (2D-DCT), which reduces the dimensionality of facial feature images and thereby the computational complexity. The fourth technique, Normalized Histogram Intensity (NHI), is based on the pixel-intensity histogram of pose subimages, which reduces both the computational complexity and the storage requirements. Various simulation experiments using MATLAB were conducted to test the proposed methods. To benchmark their performance, the experiments also included current state-of-the-art techniques, namely Two-Dimensional Principal Component Analysis (2DPCA), Two-Directional Two-Dimensional Principal Component Analysis ((2D)^2PCA), and Transform Domain Two-Dimensional Principal Component Analysis (TD2DPCA), applied to the ORL, Yale, and FERET databases.
The experimental results confirm that each of the four novel techniques examined in this study yields a significant reduction in computational complexity and storage requirements compared to the state-of-the-art techniques, without sacrificing recognition accuracy.
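The dimensionality reduction shared by QTD, FD-TQ and NTD rests on the energy-compaction property of the 2D-DCT: most of a face patch's energy lands in the low-frequency corner. A stdlib-only sketch (my own toy 4x4 patch, not the dissertation's pipeline) keeps only that corner as the feature vector:

```python
import math

def dct2(block):
    """Two-dimensional DCT-II of a square block (orthonormal form)."""
    n = len(block)
    def alpha(u):
        return math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

def low_frequency_features(block, k):
    """Keep only the k x k low-frequency corner as the feature vector."""
    coeffs = dct2(block)
    return [coeffs[u][v] for u in range(k) for v in range(k)]

# A smooth 4x4 'face' patch reduced to 4 features instead of 16
patch = [[10, 11, 12, 13],
         [11, 12, 13, 14],
         [12, 13, 14, 15],
         [13, 14, 15, 16]]
features = low_frequency_features(patch, 2)
```

Because the transform is orthonormal, truncating to the k x k corner discards only the (small) high-frequency energy, which is what makes the reduced feature vector usable for classification.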
Ph.D. (Doctorate), Electrical Engineering, Electrical Engineering and Computer Science, Engineering and Computer Science
11

Wagner, Philipp. "Modélisation du graphène avec les techniques ab initio". Nantes, 2013. http://www.theses.fr/2013NANT2011.

Abstract:
In this thesis graphene and related nanostructures were studied using density-functional ab initio modelling techniques. The influence of different edge terminations has been investigated for typical pristine graphene edges (armchair, zigzag and Klein) and several reconstructed edge configurations. For unterminated graphene edges a new stable folded-back edge has been identified, creating a nanotube along the graphene edge. A systematic study of hydrogenated edges was performed, and new favourable reconstructed Klein edge configurations were found. Furthermore, hydrogenated edges are expected to play an important role in graphene growth processes, and possible adapted growth models via carbon dimer addition are proposed. Next, more complex edge functionalisations such as hydroxylated (-OH) edges were studied, in particular by modelling thin, 4-25 Å wide armchair graphene nanoribbons. Notably, the influence on structural, electronic, chemical and mechanical properties has been investigated. This promises new routes towards the controlled design of specific nanoribbon properties. Finally, the in-plane Young's modulus of various nanosheets (including graphene, BN, MoS2, MoTe2, etc.) was calculated. In this context a new geometry-independent volume definition for nano-objects has been developed, based on the average electron density. This new approach offers a transferable underlying framework for calculating the Young's modulus, so that values extrapolate correctly, for example, between graphene, carbon nanotubes and bulk graphite. The concept was further extended to organic polymers.
12

Fleury, Christine Collet Anne-Christine. "Intégrer une thématique " Sciences et Société " dans une bibliothèque de lecture publique une approche globale pour la Médiathèque du Bachur /". [S.l.] : [s.n.], 2004. http://www.enssib.fr/bibliotheque/documents/dcb/fleury.pdf.

13

Abdallah, Abdallah Sabry. "Investigation of New Techniques for Face detection". Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/33191.

Abstract:
The task of detecting human faces within a still image or a video frame is one of the most popular object detection problems. For the last twenty years researchers have shown great interest in this problem because it is an essential pre-processing stage for computing systems that process human faces as input data. Example applications include face recognition systems, vision systems for autonomous robots, human-computer interaction (HCI) systems, surveillance systems, biometric authentication systems, video transmission and compression systems, and content-based image retrieval systems. In this thesis, non-traditional methods are investigated for detecting human faces within color images or video frames. The methods are chosen so that the required computing power and memory consumption are adequate for real-time hardware implementation. First, a standard color image database is introduced to enable fair evaluation and benchmarking of face detection and skin segmentation approaches. Next, a new pre-processing scheme based on skin segmentation is presented to prepare the input image for feature extraction; it requires relatively little computing power and memory. Then, several feature extraction techniques are evaluated. This thesis introduces feature extraction based on the Two-Dimensional Discrete Cosine Transform (2D-DCT), the Two-Dimensional Discrete Wavelet Transform (2D-DWT), geometrical moment invariants, and edge detection. It also constructs hybrid feature vectors by fusing 2D-DCT coefficients with edge information, and 2D-DWT coefficients with geometrical moments. A self-organizing map (SOM) based classifier is used in all experiments to distinguish between facial and non-facial samples. Two strategies are tried for making the final decision from the output of a single SOM or multiple SOMs.
Finally, an FPGA-based framework that implements the presented techniques is described, along with a partial implementation. Every presented technique has been evaluated consistently on the same dataset, and the experiments show very promising results. The highest detection rate, 89.2%, was obtained when fusing DCT coefficients with edge information to construct the feature vector. The second highest rate, 88.7%, was achieved by fusing DWT coefficients with geometrical moments, and the third highest, 85.2%, by calculating the moments of edges.
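The skin-segmentation pre-processing stage is a good example of a step cheap enough for real-time hardware. One classic per-pixel RGB rule from the face detection literature (a well-known heuristic, not necessarily the scheme used in this thesis) looks like this:

```python
def is_skin_rgb(r, g, b):
    """Classic RGB skin heuristic (Peer et al. style thresholds);
    a hypothetical stand-in for the thesis's segmentation scheme."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def skin_mask(image):
    """Binary mask marking candidate skin pixels in an RGB image
    (rows of (r, g, b) tuples)."""
    return [[1 if is_skin_rgb(*px) else 0 for px in row] for row in image]

row = [(220, 180, 160), (40, 60, 200), (200, 120, 90)]
mask = skin_mask([row])   # [[1, 0, 1]]
```

Rules of this form need only comparisons and subtractions per pixel, which is why such a mask is attractive as the first stage before the more expensive DCT/DWT feature extraction.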
Master of Science
14

Nguyen, Thi Minh Tam. "Approches basées sur DCA pour la programmation mathématique avec des contraintes d'équilibre". Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0113/document.

Abstract:
In this dissertation, we investigate approaches based on DC (Difference of Convex functions) programming and DCA (DC Algorithm) for mathematical programs with equilibrium constraints. Being a classical and challenging topic of nonconvex optimization, and because of its many important applications, mathematical programming with equilibrium constraints has attracted the attention of many researchers for many years. The dissertation consists of four main chapters. Chapter 2 studies a class of mathematical programs with linear complementarity constraints. By using four penalty functions, we reformulate the considered problem as standard DC programs, i.e. minimizing a DC function on a convex set. Appropriate DCA schemes are developed to solve these four DC programs. Two of them are reformulated again as general DC programs (i.e. minimizing a DC function under DC constraints) so that the convex subproblems in DCA become easier to solve. After designing DCA for the considered problem, we show how to develop these DCA schemes for solving the quadratic problem with linear complementarity constraints and the asymmetric eigenvalue complementarity problem. Chapter 3 addresses a class of mathematical programs with variational inequality constraints. We use a penalty technique to recast the considered problem as a DC program. A variant of DCA and its accelerated version are proposed to solve this DC program. As an application, we tackle the second-best toll pricing problem with fixed demands. Chapter 4 focuses on a class of bilevel optimization problems with binary upper-level variables. By using an exact penalty function, we express the bilevel problem as a standard DC program for which an efficient DCA scheme is developed. We apply the proposed algorithm to solve a maximum flow network interdiction problem. In chapter 5, we are interested in the continuous equilibrium network design problem. 
It was formulated as a Mathematical Program with Complementarity Constraints (MPCC). We reformulate this MPCC problem as a general DC program and then propose a suitable DCA scheme for the resulting problem
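The DCA approach used throughout this entry follows one standard iteration: split the objective as f = g − h with g and h convex, then repeatedly minimize the convex model obtained by linearizing h at the current iterate. A minimal one-dimensional sketch (the toy objective below is chosen for illustration and is not from the thesis, whose subproblems involve complementarity constraints):

```python
import math

def dca(x0, iters=50):
    """Difference-of-Convex Algorithm on f(x) = x**4 - 8*x**2.

    DC split: g(x) = x**4, h(x) = 8*x**2 (both convex).
    DCA step: x_{k+1} = argmin_x g(x) - h'(x_k)*x, i.e. solve the
    convex subproblem 4*x**3 = 16*x_k, so x_{k+1} = cbrt(4*x_k).
    """
    x = x0
    for _ in range(iters):
        c = 16.0 * x  # slope of the linearization of h at x_k
        x = math.copysign(abs(c / 4.0) ** (1.0 / 3.0), c)  # closed-form argmin
    return x

print(dca(1.0))   # converges to the critical point x = 2
print(dca(-0.5))  # converges to the critical point x = -2
```

The iterates converge to a critical point of f, which is all DCA guarantees in general; the quality of the limit point depends on the DC decomposition and the starting point.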
Style APA, Harvard, Vancouver, ISO itp.
15

Gauthier, Évelyne. "Les techniques de manipulation du roman populaire dit féminin". Limoges, 1986. http://www.theses.fr/1986LIMO0503.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
16

Gauthier, Évelyne. "Les Techniques de manipulation du roman populaire dit féminin". Lille 3 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb375985549.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
17

Bourguignat, Christelle Muller Joëlle. "La part des ouvrages scientifiques et techniques en bibliothèque municipale". [S.l.] : [s.n.], 2004. http://www.enssib.fr/bibliotheque/documents/dcb/bourguignat.pdf.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
18

Devillers, Delphine. "Fiabilisation de la quantification des éléments traces cationiques et anioniques par la technique d'échantillonnage DGT en milieu aquatique naturel". Thesis, Limoges, 2017. http://www.theses.fr/2017LIMO0058/document.

Pełny tekst źródła
Streszczenie:
The passive sampling DGT technique (Diffusive Gradients in Thin Films) has many benefits (time-weighted average concentrations, low limits of quantification) and would therefore be a useful tool in monitoring studies to quantify trace elements in natural water. However, there are still limitations and grey areas that hold back the development of the method for regulatory applications. The aim of this work is to identify potential biases and help improve the method's reliability. This study shows that a minimized uncertainty on results can be obtained only if elution factors are experimentally determined; however, standard values of 0.8 for Cr(III) and 0.85 for Al(III), Cd(II), Co(II), Cu(II), Ni(II), Pb(II) and Zn(II) are suggested to reduce handling while keeping the uncertainty reasonable (<10%). A study of the influence of fouling developed on DGT devices showed that the sorption of the cations Cd(II), Cu(II) and Pb(II) on fouled filters had, respectively, a slight, moderate and strong impact on their accumulation in DGT samplers and therefore on their quantification; deployments of less than one week are therefore recommended for these elements. In contrast, fouling had a negligible impact on Ni(II) and on the oxyanions As(V), Cr(VI), Sb(V) and Se(VI). Finally, a method was developed to simultaneously quantify the two Cr oxidation states occurring in natural waters, Cr(III), essential to life, and Cr(VI), toxic, in order to improve the assessment of water toxicity. Both forms are accumulated in a single DGT sampler before being selectively separated during an elution step. This method is robust over wide ranges of ionic strength and sulfate concentration, but over a narrower pH range (4 to 6) that does not cover all natural waters
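The elution factors discussed above feed into the standard DGT equation, C_DGT = M·Δg/(D·A·t), where the accumulated mass M is recovered from the measured eluted mass via the elution factor. A small sketch of that calculation (the numerical inputs below are illustrative assumptions, not values from the thesis):

```python
def dgt_concentration(mass_eluted_ng, elution_factor,
                      delta_g_cm, D_cm2_s, area_cm2, t_s):
    """Time-weighted average concentration from a DGT sampler:
        C_DGT = M * dg / (D * A * t)
    where M is the mass accumulated on the binding gel, recovered from
    the eluted mass via the elution factor; dg is the diffusive layer
    thickness, D the diffusion coefficient, A the exposure window area
    and t the deployment time. Result in ng/cm^3, i.e. ug/L.
    """
    M = mass_eluted_ng / elution_factor  # correct for incomplete elution
    return M * delta_g_cm / (D_cm2_s * area_cm2 * t_s)

# Example with the standard elution factor 0.85 suggested for Cd(II):
c = dgt_concentration(mass_eluted_ng=25.0, elution_factor=0.85,
                      delta_g_cm=0.094, D_cm2_s=5.0e-6,
                      area_cm2=3.14, t_s=3 * 24 * 3600)
print(round(c, 3))  # concentration in ug/L
```

This also makes the thesis's point concrete: any error in the elution factor propagates multiplicatively into C_DGT, which is why a standard value with a bounded uncertainty (<10%) is an acceptable trade-off.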
Style APA, Harvard, Vancouver, ISO itp.
19

Shuttleworth, Sarah M. "The application of gel-based sampling techniques (DET and DGT) to the measurement of sediment pore-water solutes at high (mm) spatial resolution". Thesis, Lancaster University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369497.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
20

Kane, David. "Evaluating phosphorus availability in soils receiving organic amendment application using the Diffusive Gradients in Thin-films (DGT) technique". Thesis, Cranfield University, 2013. http://dspace.lib.cranfield.ac.uk/handle/1826/8001.

Pełny tekst źródła
Streszczenie:
Phosphorus is a resource in finite supply. Use of organic amendments in agriculture can be a sustainable alternative to inorganic P, provided it can meet crop requirements. However, inconsistent knowledge of plant P availability following application of organic amendments limits their potential. Studies suggest chemical extraction procedures may not reflect plant-available P. The Diffusive Gradients in Thin-films (DGT) technique is based on natural diffusion of P via a hydrogel and sorption to a ferrihydrite binding layer, which should accurately represent soil P (CDGT) in a plant-available form. The aim of this research was to evaluate changes in soil P availability, following the addition of the organic amendments cattle farmyard manure (FYM), green waste compost (GW) and cattle slurry (SLRY), and of superphosphate (SP), using Olsen P and DGT. The research included incubation and glasshouse studies using ryegrass (Lolium perenne L.). Soils with a history of application of the aforementioned organic amendments were used (Gleadthorpe), as well as a soil deficient in P (Kincraigie). The hypotheses were as follows: H1, a build-up of P available by diffusive supply from historic treatment additions, and subsequent availability from fresh treatment additions, will be demonstrated by DGT. H2, historical treatment additions are more important in determining yield and P uptake than fresh additions. H3, DGT can detect changes in P available by diffusive supply following addition of different treatments, and subsequently following lysis of microbial cells, on a soil deficient in P. H4, DGT will provide a more accurate indication of plant P availability than organic amendments in a soil deficient in P. H5, P measurements using DGT will be lower from organic amendments than from superphosphate. H6, DIFS simulations of soil kinetic parameters will provide additional information about how treatments influence P resupply from the solid phase to solution following DGT deployment. Cont/d.
Style APA, Harvard, Vancouver, ISO itp.
21

Bouallagui, Sarra. "Techniques d'optimisation déterministe et stochastique pour la résolution de problèmes difficiles en cryptologie". Phd thesis, INSA de Rouen, 2010. http://tel.archives-ouvertes.fr/tel-00557912.

Pełny tekst źródła
Streszczenie:
This thesis revolves around Boolean functions related to cryptography, and the cryptanalysis of certain identification schemes. Boolean functions have algebraic properties frequently used in cryptography to build S-boxes (substitution tables). We are particularly interested in the construction of two types of functions: bent functions, and balanced functions with a high degree of nonlinearity. Regarding cryptanalysis, we focus on identification techniques based on the perceptron and permuted perceptron problems. We carry out a new attack on the scheme in order to assess its feasibility. We develop new methods combining the deterministic DCA (Difference of Convex functions Algorithm) approach with heuristics (simulated annealing, cross-entropy, genetic algorithms, ...). This hybrid approach, used throughout the thesis, is motivated by the promising results of DC programming.
Style APA, Harvard, Vancouver, ISO itp.
22

Lundell, Johan. "Efficiency Enhancement Techniques for a 0.13 µm CMOS DECT PA". Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-96235.

Pełny tekst źródła
Streszczenie:
Different efficiency enhancement techniques for a 1.9 GHz DECT power amplifier (PA) have been investigated. Generally, a higher efficiency can be achieved by varying the supply voltage and/or the bias of the PA, or by making topology and/or class changes. In this work, changes in bias and topology have been studied. Focus has been on enhancing efficiency at power back-off to increase talk-time for handset applications. The PA used in this study was a two-stage 0.13 μm CMOS PA for 2.5 V operation. In its original configuration, it delivered 28.3 dBm of maximum output power with a PAE of 43.5 % (simulated). At 10 dB power back-off the PAE was only 15.9 %. The largest improvement was obtained using a topology change with the amplifying transistor split into two parallel transistors (class A and B) with variable bias. The PA delivered 29.1 dBm to the load with a PAE of 45.1 %, and 18 % PAE at power back-off: a relative improvement of 13 % at this level. The new PA topology does not require any additional area.
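The PAE figures quoted above follow from the standard definition PAE = (Pout − Pin)/Pdc, with powers converted from dBm to watts. A small sketch (the input drive level and DC consumption below are assumed values for illustration, not figures from the thesis):

```python
def dbm_to_watt(p_dbm):
    """Convert power in dBm to watts: P[W] = 10**((P[dBm] - 30) / 10)."""
    return 10 ** ((p_dbm - 30.0) / 10.0)

def pae(p_out_dbm, p_in_dbm, p_dc_watt):
    """Power-added efficiency: PAE = (Pout - Pin) / Pdc."""
    return (dbm_to_watt(p_out_dbm) - dbm_to_watt(p_in_dbm)) / p_dc_watt

# Illustrative numbers only: 28.3 dBm out (as in the abstract), with an
# assumed 10 dBm drive and 1.5 W DC consumption.
print(round(100 * pae(p_out_dbm=28.3, p_in_dbm=10.0, p_dc_watt=1.5), 1))  # prints 44.4
```

The definition also explains why back-off hurts efficiency so badly in a fixed-bias class-A stage: Pout drops by orders of magnitude while Pdc stays roughly constant, which is exactly what the variable-bias split-transistor topology addresses.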
Style APA, Harvard, Vancouver, ISO itp.
23

Ho, Vinh Thanh. "Techniques avancées d'apprentissage automatique basées sur la programmation DC et DCA". Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0289/document.

Pełny tekst źródła
Streszczenie:
In this dissertation, we develop advanced machine learning techniques in the framework of online learning and reinforcement learning (RL). The backbone of our approaches is DC (Difference of Convex functions) programming and DCA (DC Algorithm), together with their online versions, which are known as powerful nonsmooth, nonconvex optimization tools. The dissertation is composed of two parts: the first part studies some online machine learning techniques and the second part concerns RL in both batch and online modes. The first part includes two chapters corresponding to online classification (Chapter 2) and prediction with expert advice (Chapter 3). These two chapters present a unified DC approximation approach to different online learning algorithms where the observed objective functions are 0-1 loss functions. We thoroughly study how to develop efficient online DCA algorithms in terms of theoretical and computational aspects. The second part consists of four chapters (Chapters 4, 5, 6, 7). After a brief introduction to RL and its related works in Chapter 4, Chapter 5 aims to provide effective RL techniques in batch mode based on DC programming and DCA. In particular, we first consider four different DC optimization formulations for which corresponding attractive DCA-based algorithms are developed, then carefully address the key issues of DCA, and finally show the computational efficiency of these algorithms through various experiments. Continuing this study, in Chapter 6 we develop DCA-based RL techniques in online mode and propose their alternating versions. As an application, we tackle the stochastic shortest path (SSP) problem in Chapter 7. In particular, a certain class of SSP problems can be reformulated in two directions, as a cardinality minimization formulation and as an RL formulation. Firstly, the cardinality formulation involves the zero-norm in the objective and binary variables. 
We propose a DCA-based algorithm by exploiting a DC approximation approach for the zero-norm and an exact penalty technique for the binary variables. Secondly, we make use of the aforementioned DCA-based batch RL algorithm. All proposed algorithms are tested on some artificial road networks
Style APA, Harvard, Vancouver, ISO itp.
24

Soupart, Adrien. "Nouveau regard sur les propriétés photophysiques et photochimiques du complexe tris(2,2'-bipyridine) ruthénium II : apport de la DFT". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30143.

Pełny tekst źródła
Streszczenie:
Ruthenium polypyridyl complexes are of great interest for photovoltaic applications, photocatalysis, sensing, photodynamic therapy (PDT) and photoactivated chemotherapy (PACT). But even for the archetype [Ru(bpy)3]2+, not all experimental features have been unravelled yet. It is therefore essential to map the topology of the excited-state potential energy surfaces and to characterize the associated processes with state-of-the-art theoretical methods. The first part of the manuscript describes the methods used to explore these surfaces and the rationalization of the photophysical properties of two complexes, [Ru(bpy)3]2+ and [Ru(tpy)2]2+: simulation of Vibrationally Resolved Electronic emission Spectra (VRES), study of the non-radiative decay process through the optimization of Minimum Energy Crossing Points (MECP), and calculation of energy barriers and minimum energy paths using the Nudged Elastic Band (NEB) method. The photoreactivity of [Ru(bpy)3]2+ has never been studied using theoretical methods. It involves 3MC dark states, poorly described by spectroscopic data; it therefore represents a great challenge for theoreticians. In a second part we describe a true 3MC basin, on whose constituent states a Natural Bond Orbital analysis was conducted. We compare our simulations of various absorption spectra (UV-Vis, XAS, IR) of all triplet excited states of [Ru(bpy)3]2+ with the few experimental data available, and their contradictory interpretations, in order to provide a guide for future experiments. Finally, we propose the first complete theoretical mechanism for a photosubstitution reaction using the model reaction [Ru(bpy)3]2+ + 2 MeCN → cis/trans-[Ru(bpy)2(MeCN)2]2+ + bpy, by exploring ground- and excited-state potential energy surfaces. This sequential, multi-step, two-photon mechanism allowed us to rationalize the preferential formation of the cis photoproduct
Style APA, Harvard, Vancouver, ISO itp.
25

Ouerhani, Yousri. "Contribution à la définition, à l'optimisation et à l'implantation d'IP de traitement du signal et des données en temps réel sur des cibles programmables". Phd thesis, Université de Bretagne occidentale - Brest, 2012. http://tel.archives-ouvertes.fr/tel-00840866.

Pełny tekst źródła
Streszczenie:
Despite the success that optical implementations of image-processing applications once enjoyed, optical information processing attracts less interest today than in the 1980s and 1990s. This is due to the bulk of optical setups, the quality of the processed images, and the cost of optical components. Moreover, optical implementations have struggled to compete with the advent of digital circuits. This thesis fits into that context: its objective is to propose a digital implementation of optical image-processing methods. For this implementation we chose FPGAs and GPUs, thanks to the good speed performance of these devices. In addition, to improve productivity, we focused on the reuse of pre-designed blocks, or IP ("Intellectual Property") cores. Although existing commercial IP cores are optimized, they are often costly and tied to the family of the board used. The first contribution is an optimized IP implementation of the Fourier transform (FFT) and of the DCT; the choice of these two transforms is justified by their massive use in pattern-recognition and compression algorithms, respectively. The second contribution is the validation of the proposed IP cores on a test and measurement bench. Finally, the third contribution is the design, on FPGA and GPU, of digital implementations of pattern-recognition and compression applications. One convincing result obtained in this thesis is that the proposed FFT IP is 3 times faster than the Xilinx FFT IP and can perform 4700 correlations per second.
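As a reference point for the transforms these IP cores implement, the type-II DCT can be written directly from its definition (naive O(N²) form in an unnormalized convention; the FPGA/GPU IPs naturally use fast factorizations instead):

```python
import math

def dct2(x):
    """Type-II DCT, the transform used in image/video compression:
        X_k = sum_{n=0}^{N-1} x_n * cos(pi/N * (n + 0.5) * k)
    Unnormalized convention; O(N^2) reference implementation.
    """
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

# A constant signal concentrates all its energy in the DC coefficient X_0,
# the energy-compaction property that makes the DCT useful for compression.
print(dct2([1.0, 1.0, 1.0, 1.0]))
```

A hardware or GPU IP would compute the same coefficients through a fast factorization (e.g. via an FFT), which is where the speed comparison against the Xilinx IP comes from.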
Style APA, Harvard, Vancouver, ISO itp.
26

Remersaro, Santiago. "On low power test and DFT techniques for test set compaction". Diss., University of Iowa, 2008. https://ir.uiowa.edu/etd/211.

Pełny tekst źródła
Streszczenie:
The objective of manufacturing test is to separate the faulty circuits from the good circuits after they have been manufactured. Three problems encompassed by this task will be mentioned here. First, the reduction of the power consumed during test. The behavior of the circuit during test is modified due to scan insertion and other testing techniques. Due to this, the power consumed during test can be abnormally large, up to several times the power consumed during functional mode. This can result in a good circuit to fail the test or to be damaged due to heating. Second, how to modify the design so that it is easily testable. Since not every possible digital circuit can be tested properly it is necessary to modify the design to alter its behavior during test. This modification should not alter the functional behavior of the circuit. An example of this is test point insertion, a technique aimed at reducing test time and decreasing the number of faulty circuits that pass the test. Third, the creation of a test set for a given design that will both properly accomplish the task and require the least amount of time possible to be applied. The precision in separation of faulty circuits from good circuits depends on the application for which the circuit is intended and, if possible, must be maximized. The test application time is should be as low as possible to reduce test cost. This dissertation contributes to the discipline of manufacturing test and will encompass advances in the afore mentioned areas. First, a method to reduce the power consumed during test is proposed. Second, in the design modification area, a new algorithm to compute test points is proposed. Third, in the test set creation area, a new algorithm to reduce test set application time is introduced. The three algorithms are scalable to current industrial design sizes. Experimental results for the three methods show their effectiveness.
Style APA, Harvard, Vancouver, ISO itp.
27

Venianaki, Maria. "Cancer tissue classification from DCE-MRI data using pattern recognition techniques". Thesis, IMT Alti Studi Lucca, 2019. http://e-theses.imtlucca.it/264/1/Venianaki_phdthesis.pdf.

Pełny tekst źródła
Streszczenie:
Cancer research has significantly advanced in recent years, mainly through developments in medical genomics and bioinformatics. It is expected that such approaches will result in more durable tumor control and fewer side effects compared with conventional treatments such as radiotherapy or chemotherapy. From the imaging standpoint, non-invasive imaging biomarkers (IBs) that assess angiogenic response and tumor environment at an early stage of therapy are of utmost importance, since they could provide useful insights into therapy planning. However, the extraction of IBs is still an open problem, since there are as yet no standardized imaging protocols or established methods for the robust extraction of IBs. DCE-MRI is amongst the most promising non-invasive functional imaging modalities, while compartmental pharmacokinetic (PK) modeling is the most common technique used for DCE-MRI data analysis. However, PK models suffer from a number of limitations, such as modeling complexity, which often leads to variability in the computed biomarkers. To address these problems, alternative DCE-MRI biomarker extraction strategies, coupled with a profound understanding of the physiological meaning of IBs, are a sine qua non condition. To this end, a more recent model-free approach has been suggested in the literature for the analysis of DCE-MRI data, which relies on the shape classification of the time-signal uptake curves of image pixels in a selected tumor region of interest. This thesis is centered on this new approach and the clinical question of whether model-free DCE-MRI data analysis has the potential to provide robust, clinically significant biomarkers using pattern recognition and image analysis techniques.
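The model-free approach described above classifies pixel time-signal uptake curves by shape. A minimal illustration of the idea (a generic three-point heuristic with illustrative thresholds, not the thesis's actual classifier):

```python
def classify_uptake(curve, early_idx=1, late_idx=-1, tol=0.1):
    """Shape-classify a DCE-MRI time-signal uptake curve, model-free:
    compare an early post-contrast sample with the final one and label
    the late-phase trend as persistent rise, plateau, or washout.
    Thresholds and sample indices are illustrative assumptions.
    """
    early, late = curve[early_idx], curve[late_idx]
    rel = (late - early) / early  # relative late-phase signal change
    if rel > tol:
        return "persistent"
    if rel < -tol:
        return "washout"
    return "plateau"

print(classify_uptake([0.0, 100.0, 90.0, 80.0, 60.0]))  # prints washout
```

Pattern-recognition pipelines typically go further, clustering whole curves rather than thresholding two samples, but the principle is the same: the curve shape, not a fitted PK model, carries the biomarker.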
Style APA, Harvard, Vancouver, ISO itp.
28

Polo, Montes Carlos A. "The effect of cementation technique on the retention of adhesively cemented prefabricated posts". Thesis, Birmingham, Ala. : University of Alabama at Birmingham, 2007. https://www.mhsl.uab.edu/dt/2007m/polomontes.pdf.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
29

Hussein, Ahmed. "Design Techniques for Low Spur Wide Tuning All-Digital Millimeter-Wave Frequency Synthesizers". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/801.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
30

Gier, Sylvie Caraco Alain. "Quelle place pour les automates de prêt et de retour dans les bibliothèques publiques françaises ? Analyse technique et stratégique". [S.l.] : [s.n.], 2004. http://www.enssib.fr/bibliotheque/documents/dcb/gier.pdf.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
31

Rojas, Daniel. "Revenue management techniques applied to the parking industry". [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001835.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
32

Poulos, Konstantinos. "NEW TECHNIQUES ON VLSI CIRCUIT TESTING & EFFICIENT IMPLEMENTATIONS OF ARITHMETIC OPERATIONS". OpenSIUC, 2020. https://opensiuc.lib.siu.edu/dissertations/1872.

Pełny tekst źródła
Streszczenie:
Testing is a necessary factor in guaranteeing that ICs operate according to specifications before being delivered to customers. Testing is a process used to identify ICs containing imperfections or manufacturing defects that may cause failures. Inaccuracies and imperfections can be introduced during the fabrication of the chips by the complex mechanical and chemical steps required during the manufacturing processes. The testing step applies test patterns to circuits and analyzes their responses. This work focuses on VLSI circuit testing with two implementations for DFT (Design for Testability): the first is an ATPG tool for sequential circuits and the second is a BIT (Built-In Test) circuit for high-frequency signal classification. There has been a massive increase in the number of transistors integrated in a chip, and circuit complexity is increasing along with it. This growth has become a bottleneck for test developers. The proposed ATPG tool was designed for testing sequential circuits. Scan chains in DFT have gained prominence due to the increased complexity of modern circuits. As test time increases along with the number of memory elements in the circuit, new and improved methods are needed. Even though scan-chain implementation effectively increases observability and controllability, a large portion of the time is wasted while shifting test patterns in and out through the scan chain. Additionally, with modern applications requiring operation at higher frequencies, there is a growing demand for test equipment capable of testing CMOS circuits used in high-frequency applications.
Two main problems are associated with using external test equipment to test high-frequency circuits: the effect of the resistance and capacitance of the probe on the performance of the circuit under test, which leads to a faulty evaluation, and the cost of a dedicated high-frequency tester. To solve these problems, innovative test techniques are needed, such as Built-In Test (BIT), where self-evaluation takes place with a small area overhead and reduced requirements for external equipment. In the proposed methodology, a BIT detection circuit provides an efficient way to transform the high-frequency response of the circuit under test into a DC signal. This work is focused on two major fields. The first is VLSI circuit testing with the two DFT implementations described above: an ATPG tool for sequential circuits and a BIT circuit for high-frequency signal classification. The second is the efficient implementation of arithmetic operations on arbitrarily long numbers, with emphasis on addition. Arbitrary-precision arithmetic refers to a set of data structures and algorithms that allows processing of numbers much greater than those representable by the standard data types. An application example where arbitrarily long numbers are widely used is cryptography, because longer numbers offer higher encryption security. Modern systems typically employ up to 64-bit registers, far less than what an arbitrary number requires, and conventional algorithms do not fully exploit hardware characteristics either. Mathematical models such as weather prediction and experimental mathematics require high-precision calculations that exceed the precision found in most Arithmetic Logic Units (ALUs). In this work, we propose a new scalable algorithm for adding arbitrarily long numbers. The algorithm performs bitwise logic operations rather than arithmetic on 64-bit registers.
We propose two approaches of the same algorithm that utilize the same basic function, constructed according to the rules of binary addition.
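The closing idea of this abstract, addition built purely from the rules of binary addition (XOR for the partial sum, AND shifted left for the carries), can be sketched as follows. This is a minimal illustration of the general principle, not the thesis's actual two-approach algorithm; the limb layout and function name are assumptions:

```python
def add_limbs(a, b):
    """Add two arbitrarily long numbers stored as lists of 64-bit limbs
    (least-significant limb first), using only bitwise logic per limb."""
    MASK = (1 << 64) - 1
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))  # pad the shorter operand with zero limbs
    b = b + [0] * (n - len(b))
    result = []
    carry = 0
    for i in range(n):
        x, y = a[i], b[i]
        # Fold the incoming carry into x with a bitwise half-adder loop:
        # XOR gives the partial sum, AND shifted left gives the new carries.
        while carry:
            x, carry = x ^ carry, (x & carry) << 1
        # Add y to x the same way, repeating until no carry remains.
        while y:
            x, y = x ^ y, (x & y) << 1
        result.append(x & MASK)
        carry = x >> 64  # carry into the next limb
    if carry:
        result.append(carry)
    return result
```

For example, adding `[2**64 - 1]` and `[1]` overflows the first limb and yields `[0, 1]`, i.e. 2^64.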
Style APA, Harvard, Vancouver, ISO itp.
33

Österlund, Helene. "Applications of the DGT technique for measurements of anions and cations in natural waters". Licentiate thesis, Luleå tekniska universitet, Geovetenskap och miljöteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-16785.

Pełny tekst źródła
Streszczenie:
Since the toxicity and mobility of trace metals are related to the metals' speciation, robust methods for trace metal speciation analysis are of great interest. During the last 15 years, hundreds of scientific articles have been published on the development and applications of the diffusive gradients in thin films (DGT) passive sampling technique. In this work, the commercially available DGT containing ferrihydrite adsorbent, used for determination of phosphate and inorganic arsenic, was characterised with respect to the determination of anionic molybdate, antimonate, vanadate and tungstate. Tests were performed in the laboratory as well as in the field. Diffusion coefficients were determined for the anions using two different methods, with good agreement. Simultaneous measurements of arsenate were conducted as quality control to facilitate comparison of the performance with previous work. The ferrihydrite-backed DGT was concluded to be useful over the pH range 4 to 10 for vanadate and tungstate, and 4 to <8 for molybdate and antimonate. At pH values ≥8, deteriorating adsorption was observed. The combination of a restricted pore (RP) version of DGT and the normal open pore (OP) DGT was used for speciation of copper and nickel at three brackish water stations with different salinities in the Baltic Sea. Time series and depth profiles were taken, and complementary membrane filtration (<0.22 μm) and ultrafiltration (<1 kDa) were conducted. Comparing DGT and ultrafiltration measurements indicated that copper and nickel were complexed. Because of the small differences in results between the OP and RP DGTs, it was suggested that the complexes were smaller than the pore size of the RP gel (~1 nm), so that both DGTs accumulated essentially the same fraction. Further, there seemed to be a trend in copper speciation indicating a higher degree of strong complexation with increasing salinity.
The low-salinity stations are more affected by fluvial inputs, which likely affect the nature and composition of the organic ligands present. Assuming that copper forms more stable complexes with ligands of marine rather than terrestrial origin would be sufficient to explain the observed trend.
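The DGT measurements referred to throughout this entry are conventionally converted from the mass accumulated on the binding layer to a time-averaged labile concentration via the standard DGT equation, C_DGT = M·Δg/(D·A·t). A minimal sketch; the numeric values below are hypothetical, not taken from this thesis:

```python
def dgt_concentration(mass_ng, delta_g_cm, diff_coeff_cm2_s, area_cm2, time_s):
    """Standard DGT equation: C_DGT = M * Δg / (D * A * t).

    mass_ng          -- metal mass accumulated on the binding layer (ng)
    delta_g_cm       -- diffusive layer thickness Δg (cm)
    diff_coeff_cm2_s -- diffusion coefficient D in the gel (cm²/s)
    area_cm2         -- exposure window area A (cm²)
    time_s           -- deployment time t (s)
    Returns the labile concentration in ng/cm³ (numerically equal to µg/L).
    """
    return mass_ng * delta_g_cm / (diff_coeff_cm2_s * area_cm2 * time_s)

# Hypothetical 24 h deployment: 50 ng Cu accumulated, Δg = 0.094 cm,
# D = 6.0e-6 cm²/s, A = 3.14 cm²
c = dgt_concentration(50.0, 0.094, 6.0e-6, 3.14, 24 * 3600)  # ≈ 2.9 µg/L
```

The equation also shows why the OP/RP comparison above works: with D determined separately for each diffusive layer, differences in computed concentration reflect differences in which species diffuse through the gel.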
Approved; 2010; 20100517 (helost); LICENTIATE SEMINAR Subject area: Applied Geology Examiner: Professor Johan Ingri, Luleå tekniska universitet Discussant: Docent Per Andersson, Naturhistoriska Riksmuseet Time: Friday, 18 June 2010, 10:00 Place: F341, Luleå tekniska universitet
Style APA, Harvard, Vancouver, ISO itp.
34

Bořek, Tomáš. "Toxické kovy ve vodě a sedimentech vodní nádrže Brno". Master's thesis, Vysoké učení technické v Brně. Fakulta chemická, 2009. http://www.nusl.cz/ntk/nusl-216543.

Pełny tekst źródła
Streszczenie:
This diploma thesis deals with the use of the diffusive gradients in thin films (DGT) technique for the determination of labile metal species in the surface water and sediments of the Brno water reservoir. Sediment and water samples were collected at selected sites of the Brno water reservoir in September and October 2008. The DGT technique was used for determination of depth profiles of Fe, Mn, Pb, Cd, Zn, Cu, Ni and Al. DGT probes with three different diffusive layer thicknesses were applied to the sediment samples. The obtained results gave information about the release of metals from the solid phase into the pore water of the sediment. The concentrations of Fe, Mn, Pb and Cd in sediments were determined by atomic absorption spectrometry after microwave decomposition. The DGT technique was also used for the determination of Fe, Mn, Pb and Cd in surface water from the Brno water reservoir.
Style APA, Harvard, Vancouver, ISO itp.
35

Meredith, Scott. "Extended techniques in Stanley Friedman's Solus for unaccompanied trumpet". Thesis, connect to online resource, 2008. http://digital.library.unt.edu/permalink/meta-dc-6075.

Pełny tekst źródła
Streszczenie:
Thesis (D.M.A.)--University of North Texas, 2008.
System requirements: Adobe Acrobat Reader. Accompanied by 4 recitals, recorded Apr. 12, 2004, June 3, 2004, June 14, 2005, and Mar. 10, 2008. Includes bibliographical references (p. 36-37).
Style APA, Harvard, Vancouver, ISO itp.
36

Du, Ke. "Novel nanoindentation-based techniques of MEMS and microfluidics applications". [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002778.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
37

Damodara, Eswar Keran C. "Clinical trial to determine the accuracy of prefabricated trays for making alginate impressions". Thesis, Birmingham, Ala. : University of Alabama at Birmingham, 2008. https://www.mhsl.uab.edu/dt/2009r/damodara.pdf.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
38

Arwood, Bryan Stuart. "The effectiveness of advanced oxidation techniques in degrading steroids in wastewater". Birmingham, Ala. : University of Alabama at Birmingham, 2010. https://www.mhsl.uab.edu/dt/2010m/arwood.pdf.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
39

Gabucci, Ilenia. "Dual energy computed tomography techniques applied to the characterization of cartilage tissue". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25617/.

Pełny tekst źródła
Streszczenie:
The study and visualization of articular cartilage are of great interest for different pathological conditions. Currently, the 3D tomographic methods considered for the study of cartilage tissue are Magnetic Resonance Imaging (MRI), which is considered the gold-standard technique, and Computed Tomography (CT). However, these have various limitations: MRI is characterized by long acquisition times and difficulties in the presence of metal implants and in the visualization of bones, while CT has difficulties in differentiating soft tissues, which are characterized by lower attenuation of X-rays. Therefore, this thesis work studies the properties of Dual Energy Computed Tomography (DECT), which with a single acquisition provides multiple material-specific information using virtual monochromatic and material-density reconstructions. In particular, we want to study whether DECT, without the use of contrast agents, allows a satisfactory visualization of cartilage. Subsequently, we want to investigate how the combined use of DECT and a new cationic contrast agent, called CA4+, can further increase image quality. For this purpose, three bovine tibiae supplied by the food chain were considered for pre-clinical testing. Initially, acquisitions without the use of contrast medium were performed. Afterwards, the tibiae were immersed in the contrast medium at three different concentrations (one for each tibia considered). It has been observed that DECT without contrast medium does not allow satisfactory visualization of cartilage, even exploiting the properties of material-density and virtual monochromatic reconstructions. On the other hand, the presence of the CA4+ contrast medium allows a direct visualization of cartilage and an improved distinction from surrounding tissues by exploiting monochromatic and material-density reconstructions.
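The material-density and virtual monochromatic reconstructions mentioned in this abstract rest on two-material basis decomposition: the attenuation measured at two energies is modeled as a linear combination of two basis materials, giving a 2×2 linear system per voxel. A minimal sketch with hypothetical attenuation values, not calibrated DECT data:

```python
import numpy as np

# Basis attenuation (cm^-1 per unit relative density) at the low- and
# high-energy acquisitions; columns: [water-like, iodine-like].
# These numbers are illustrative assumptions, not calibrated values.
M = np.array([[0.227, 4.9],    # low-kVp scan
              [0.184, 2.1]])   # high-kVp scan

mu_measured = np.array([0.276, 0.205])  # hypothetical voxel attenuation pair

# Solve the 2x2 system for the two material-density components.
densities = np.linalg.solve(M, mu_measured)

# A virtual monochromatic value at any energy E is the same linear
# combination using the basis attenuations at E (values again assumed):
mu_water_E, mu_iodine_E = 0.206, 3.0
mu_virtual = densities @ np.array([mu_water_E, mu_iodine_E])
```

In a scanner this system is solved per voxel (or in the projection domain), which is how one acquisition yields both material-density maps and monochromatic images.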
Style APA, Harvard, Vancouver, ISO itp.
40

Shiva, Amir Houshang. "Evaluating the Performance of DGT Technique for Selective Measurement of Trace Metals and Assessment of Environmental Health in Coastal Waters". Thesis, Griffith University, 2016. http://hdl.handle.net/10072/367257.

Pełny tekst źródła
Streszczenie:
The diffusive gradients in thin films (DGT) technique as a passive sampler for the measurement of trace metals was validated and evaluated. A systematic determination of diffusion coefficients was performed for a wide range of cationic (Al, Cd, Co, Cu, Mn, Ni, Pb, Zn) and oxyanionic (Al, As, Mo, Sb, V, W) metals in the open (ODL) and restricted (RDL) diffusive layers used by the DGT technique. The diffusion coefficients were determined at acidic and neutral pH using two independent methods, diffusion cells and time-series DGT techniques. The calculated values for many oxyanions were the first reported in the RDL. The diffusion coefficients measured in the ODL were retarded compared to the values reported in water, and further retarded in the RDL, for all elements with both methods. A DGT technique with a mixed binding layer (MBL), containing both Chelex-100 and Metsorb, was validated for the measurement of Al at pH 4.01 and pH 8.30, where the dominant species shifts from cationic to anionic, respectively. The performance of this DGT-MBL was then evaluated at various coastal sites over a wide range of pH for the simultaneous measurement of Al and other cationic and oxyanionic metals using both the ODL and RDL. The results were compared to the 0.45 µm-filterable concentrations and also to the measurements of the individual binding layers to investigate the selectivity of each DGT type for trace metals. All measured concentrations with all measurement types were compared to the water quality guidelines defined by the Australian and New Zealand Environment and Conservation Council (ANZECC) to assess the environmental health of the studied field sites. The in-situ application of the DGT-MBL confirmed the utility of this approach compared to the use of individual DGT-Chelex and DGT-Metsorb samplers, especially for metals like aluminium with complex speciation.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Griffith School of Environment
Science, Environment, Engineering and Technology
Full Text
Style APA, Harvard, Vancouver, ISO itp.
41

Baylis, Charles Passant. "Improved techniques for nonlinear electrothermal FET modeling and measurement validation". [Tampa, Fla.] : University of South Florida, 2007. http://purl.fcla.edu/usf/dc/et/SFE0001989.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
42

Aravamudhan, Shyam. "Development of micro/nanosensor elements and packaging techniques for oceanography". [Tampa, Fla.] : University of South Florida, 2007. http://purl.fcla.edu/usf/dc/et/SFE0002219.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
43

Ghai, Dhruva V. Mohanty Saraju. "Variability-aware low-power techniques for nanoscale mixed-signal circuits". [Denton, Tex.] : University of North Texas, 2009. http://digital.library.unt.edu/permalink/meta-dc-9850.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
44

Nicolas, Yann. "La réforme maorie de la Bibliothèque Nationale de Nouvelle-Zélande dimension stratégique et enjeux techniques (collections, catalogues, accès, conservation) /". [S.l.] : [s.n.], 2003. http://www.enssib.fr/bibliotheque/documents/dcb/nicolas.pdf.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
45

ALGHAMDI, HASAN A. "Dynamic Cone Penetrometer (DCP) Based Evaluation of Sustainable Low Volume Road Rehabilitation Techniques". Ohio University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1470661119.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
46

Roy, Soumyaroop. "A compiler-based leakage reduction technique by power-gating functional units in embedded microprocessors". [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001832.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
47

Schmied, Marten. "Business Intelligence in Healthcare - Data Mining Techniques as a Possible Hospital Management Tool in Austria". Master's thesis, Vysoká škola ekonomická v Praze, 2016. http://www.nusl.cz/ntk/nusl-264283.

Pełny tekst źródła
Streszczenie:
Public healthcare provision is under increasing economic restraints, making efficient and sustainable managerial planning a necessity in the hospital sector. Business Intelligence is the extraction of business-relevant knowledge in order to adjust and refine executive operations. Free-market industries have applied the corresponding methods, and software dedicated to the generation of Business Intelligence is offered by a variety of companies. Data Mining, furthermore, describes the use of algorithms to train programs to detect unseen patterns in huge amounts of data. Mining techniques are therefore suitable for adding to the business-relevant knowledge, particularly as they can produce more accurate predictions. The thesis examined the status of the Information Technology normally utilized in Austrian hospitals and simultaneously identified studies that apply Data Mining to a Hospital Information System to gain Business Intelligence. While Austrian Hospital Information Systems are generally well developed, common challenges lie in the separation between clinical and administrative systems and their interfaces. Of the Data Mining studies, a majority aims at medical improvements. Some applications were found to have good business-relevant prospects, but their feasible introduction into practice needs additional fostering.
Style APA, Harvard, Vancouver, ISO itp.
48

Popović, Slobodan. "Reliability testing of R-DAT tapes subjected to mechanical and environmental stress". Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=56757.

Pełny tekst źródła
Streszczenie:
This thesis is concerned with an examination of the reliability of R-DAT recording media with regard to professional and archival applications. Four brands of R-DAT tapes were subjected to mechanical stress, environmental stress, and a combination of both. Data generated from these tests were analyzed objectively, evaluated subjectively, and subsequently compared. Findings showed that in the majority of cases the subjective evaluation results corroborate the objective measurements. The study concludes that only one brand of tape exhibited no deterioration of data, while the other three brands failed at various points throughout the testing.
Style APA, Harvard, Vancouver, ISO itp.
49

Turgut, Canan. "Deposition and adsorption of organic matter in the sub-monolayer range studied by experimental and numerical techniques". Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0012/document.

Pełny tekst źródła
Streszczenie:
Plasma surface treatments present an efficient, economical and ecological tool for surface functionalization. For this technique, the deposition and adhesion of molecules and precursors in the sub-monolayer range are of utmost interest, since this layer defines the surface properties and the adhesion between deposit and substrate. The species in the plasma and their energy and angular distributions control the deposition process. To get insights into the latter, a multidisciplinary approach combining DFT calculations with experimental techniques is used for the preparation and characterisation of sub-monolayer deposits of PS and PMMA. The deposits are prepared by sputter deposition using an Ar beam and analysed by ToF-SIMS and XPS. The amount of deposited matter increases with deposition time, or fluence. ToF-SIMS analyses also showed that the proportion of large fragments on the collector surface increases with fluence, although the opposite was expected. This can only be explained by the recombination of smaller fragments to form larger ones. This hypothesis is supported by DFT calculations, which showed that the adsorption energy, and hence the adsorption probability, is higher for the small fragments than for the large ones. The DFT calculations were extended to Si, Pt and Al2O3 substrates, showing that the adsorption energies are highest on Si and Pt.
Style APA, Harvard, Vancouver, ISO itp.
50

Kasprzyk, Christina Ridley. "Practical applications of molecular dynamics techniques and time correlation function theories". [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001644.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
