Doctoral dissertations on the topic "Qualité graphique"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the 25 best doctoral dissertations for your research on the topic "Qualité graphique".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a ".pdf" file and read its abstract online, whenever these details are available in the metadata.
Browse doctoral dissertations from a wide range of disciplines and compile an appropriate bibliography.
Bothorel, Gwenael. "Algorithmes automatiques pour la fouille visuelle de données et la visualisation de règles d’association : application aux données aéronautiques". Phd thesis, Toulouse, INPT, 2014. http://oatao.univ-toulouse.fr/13783/1/bothorel.pdf.
Poulin-Latulippe, David. "L'influence de la qualité de la relation mère-enfant sur le moment d'apparition de la capacité de figuration graphique chez le garçon de trois ans et demi". Thèse, Université du Québec à Trois-Rivières, 2003. http://depot-e.uqtr.ca/4075/1/000102888.pdf.
Girod, Xavier. "Conception par objets : mecano : une Méthode et un Environnement de Construction d'ApplicatioNs par Objets". Phd thesis, Grenoble 1, 1991. http://tel.archives-ouvertes.fr/tel-00339536.
Ramousse, Florian. "Contributions à l’utilisation de la réalité virtuelle pour la thérapie des troubles du comportement alimentaire". Electronic Thesis or Diss., Ecully, Ecole centrale de Lyon, 2024. https://bibli.ec-lyon.fr/exl-doc/TH_2024ECDL0023.pdf.
The use of immersive technologies for therapeutic purposes has been practiced for several years. While these techniques were initially applied to phobic disorders, they have gradually expanded to other disorders such as anxiety, schizophrenia, and eating disorders. Existing research on the use of virtual reality (VR) for the treatment of eating disorders focuses on two issues: (1) correcting the distortion of the patient's self-representation, where VR helps correct this erroneous representation through embodiment or visualization of an avatar; and (2) using environments containing elements that trigger the pathology (e.g., food) to better characterize symptoms and conduct exposure therapy to these cues. The first objective of the thesis is to propose and evaluate an immersive environment that induces conditions of food craving (an irresistible urge to consume a product, associated with compulsive seeking) in individuals with bulimia nervosa or binge-eating disorder, compared to matched healthy subjects. The development of this environment is based on collaborative design work, in which the use of multi-modal stimuli is an innovative element. The characterization of the environment relies on the current reference measure of food craving in VR, a self-assessment using a simple verbal scale. We study its variations during exploration of the scenario, before and after each virtual exposure, as well as its association with the anxiety induced by the exploration at the same time points. Additionally, certain physiological parameters previously associated with cravings in addictive disorders are measured at different evaluation points (heart rate variability and electrodermal activity). Finally, we also use phenotyping methods based on self-assessment questionnaires to highlight behavioral and emotional dimensions that may contribute to triggering episodes.
Moreover, in studies on the desire to eat, visual quality emerges as a major parameter that must be controlled to offer environments suited to user-experience constraints and technical limitations. The second objective of the thesis is to study how the visual quality of food stimuli influences the desire to eat in a virtual reality environment. This evaluation is performed on non-pathological individuals, with food visuals of varying graphic quality, pre-classified according to a deep-learning-trained metric capable of delivering an average graphic quality score.
Issa, Subhi. "Linked data quality : completeness and conciseness". Electronic Thesis or Diss., Paris, CNAM, 2019. http://www.theses.fr/2019CNAM1274.
The wide spread of Semantic Web technologies such as the Resource Description Framework (RDF) enables individuals to build their databases on the Web, to write vocabularies, and to define rules to arrange and explain the relationships between data according to the Linked Data principles. As a consequence, a large amount of structured and interlinked data is generated daily. A close examination of the quality of this data can be critical, especially if important research and professional decisions depend on it. The quality of Linked Data is an important indicator of their fitness for use in applications. Several dimensions for assessing the quality of Linked Data have been identified, such as accuracy, completeness, provenance, and conciseness. This thesis focuses on assessing the completeness and enhancing the conciseness of Linked Data. In particular, we first propose a completeness calculation approach based on a generated schema. Indeed, as a reference schema is required to assess completeness, we propose a mining-based approach to derive a suitable schema (i.e., a set of properties) from the data. This approach distinguishes between essential properties and marginal ones to generate, for a given dataset, a conceptual schema that meets the user's expectations regarding data completeness constraints. We implemented a prototype called “LOD-CM” to illustrate the process of deriving a conceptual schema of a dataset based on the user's requirements. We further propose an approach to discover equivalent predicates to improve the conciseness of Linked Data. This approach is based, in addition to a statistical analysis, on a deep semantic analysis of data and on learning algorithms. We argue that studying the meaning of predicates can help to improve the accuracy of results. Finally, a set of experiments was conducted on real-world datasets to evaluate our proposed approaches.
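The schema-based completeness idea above lends itself to a small sketch. The snippet below is an illustrative toy, not the LOD-CM implementation: entities are plain property dicts, the mined "schema" is just the set of frequent properties, and the film data and support threshold are invented for the example.

```python
def mine_schema(entities, support=0.8):
    """Keep the properties present in at least `support` of the entities."""
    counts = {}
    for entity in entities:
        for prop in entity:
            counts[prop] = counts.get(prop, 0) + 1
    return {p for p, c in counts.items() if c / len(entities) >= support}

def completeness(entities, schema):
    """Average fraction of schema properties that each entity provides."""
    return sum(len(schema & set(e)) / len(schema) for e in entities) / len(entities)

films = [
    {"title": "A", "director": "X", "runtime": 90},
    {"title": "B", "director": "Y"},
    {"title": "C", "director": "Z", "runtime": 120},
]
schema = mine_schema(films, support=0.6)
print(sorted(schema), round(completeness(films, schema), 3))
```

Raising `support` shrinks the schema to only the essential properties, which in turn raises the completeness score; the trade-off mirrors the essential-versus-marginal distinction described above.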
Dufay, Arthur. "High quality adaptive rendering of complex photometry virtual environments". Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0692/document.
Image synthesis for movie production has never stopped evolving over the last decades, and it seems to have reached a level of realism that cannot be outperformed. However, the software tools available to visual effects (VFX) artists still need to progress: too much time is wasted waiting for the results of long computations, especially when previewing VFX. The delays and poor quality of previsualization software pose a real problem for artists. However, the evolution of graphics processing units (GPUs) in recent years suggests a potential improvement of these tools, in particular by implementing hybrid rasterization/ray-tracing algorithms that take advantage of the computing power of these processors and their massively parallel architecture. This thesis explores the different software bricks needed to set up a complex rendering pipeline on the GPU that enables better previsualization of VFX. Several contributions were made during this thesis. First, a hybrid rendering pipeline was developed (cf. Chapter 2). Subsequently, various implementation schemes of the path tracing algorithm were tested (cf. Chapter 3) in order to increase the performance of the rendering pipeline on the GPU. A spatial acceleration structure was implemented (cf. Chapter 4), and an improvement of the traversal algorithm of this structure on the GPU was proposed (cf. Section 4.3.2). Then, a new sample decorrelation method in the context of random number generation was proposed (cf. Section 5.4) and resulted in a publication [Dufay et al., 2016]. Finally, we combined the path tracing algorithm with the many-lights solution, always with the aim of improving the preview of global illumination. This thesis also led to the submission of three patents and allowed the development of two software tools presented in Appendix A.
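As a generic illustration of per-pixel sample decorrelation in GPU-style random number generation (not the specific method published in [Dufay et al., 2016]), one common approach is to hash each pixel index into a seed that scrambles a shared sample sequence, so that neighbouring pixels draw different sequences:

```python
def wang_hash(x):
    """32-bit integer hash (Wang hash), commonly used for GPU RNG seeding."""
    x = (x ^ 61) ^ (x >> 16)
    x = (x * 9) & 0xFFFFFFFF
    x = x ^ (x >> 4)
    x = (x * 0x27D4EB2D) & 0xFFFFFFFF
    x = x ^ (x >> 15)
    return x

def sample_1d(pixel_index, sample_index):
    """Decorrelated sample in [0, 1): hash the pixel, scramble the sample index."""
    seed = wang_hash(pixel_index)
    scrambled = (sample_index * 2654435761 ^ seed) & 0xFFFFFFFF
    return wang_hash(scrambled) / 2**32

# Two adjacent pixels get visibly different sequences.
print([round(sample_1d(0, i), 3) for i in range(4)])
print([round(sample_1d(1, i), 3) for i in range(4)])
```

Breaking the correlation between neighbouring pixels' sequences turns structured banding into high-frequency noise, which is much less objectionable in a progressive preview.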
Tsingos, Nicolas. "Simulation de champs sonores de haute qualité pour des applications graphiques interactives". Phd thesis, Université Joseph Fourier (Grenoble), 1998. http://tel.archives-ouvertes.fr/tel-00528829.
Guo, Jinjiang. "Contributions to objective and subjective visual quality assessment of 3d models". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI099.
In the computer graphics realm, three-dimensional graphical data, generally represented by triangular meshes, have become commonplace and are deployed in a variety of application processes (e.g., smoothing, compression, remeshing, simplification, rendering). However, these processes inevitably introduce artifacts, altering the visual quality of the rendered 3D data. Thus, in order to perceptually drive the processing algorithms, there is an increasing need for efficient and effective subjective and objective visual quality assessments to evaluate and predict the visual artifacts. In this thesis, we first present a comprehensive survey on the different sources of artifacts in digital graphics, and on current objective and subjective visual quality assessments of these artifacts. Then, we introduce a newly designed subjective quality study based on evaluations of the local visibility of geometric artifacts, in which observers were asked to mark areas of 3D meshes that contain noticeable distortions. The collected perceived distortion maps are used to illustrate several perceptual functionalities of the human visual system (HVS), and serve as ground truth to evaluate the performance of well-known geometric attributes and metrics for predicting the local visibility of distortions. Our second study aims to evaluate the visual quality of texture-mapped 3D models subjectively and objectively. To achieve these goals, we introduced 136 processed models with both geometric and texture distortions, conducted a paired-comparison subjective experiment, and invited 101 subjects to evaluate the visual qualities of the models under two rendering protocols. Driven by the collected subjective opinions, we propose two objective visual quality metrics for textured meshes, relying on optimal combinations of geometry and texture quality measures. These proposed perceptual metrics outperform their counterparts in terms of correlation with human judgment.
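The idea of an "optimal combination" of a geometry and a texture quality measure can be illustrated with a toy fit: choose the mixing weight that best correlates with subjective opinion scores. The data below are synthetic, and the simple grid search is a stand-in for whatever optimization the thesis actually used.

```python
import numpy as np

def best_combination(geom, tex, mos, steps=101):
    """Grid-search alpha in [0, 1] maximizing the absolute Pearson correlation
    of alpha*geom + (1-alpha)*tex against the subjective scores `mos`."""
    best_alpha, best_r = 0.0, -1.0
    for alpha in np.linspace(0.0, 1.0, steps):
        score = alpha * geom + (1 - alpha) * tex
        r = abs(np.corrcoef(score, mos)[0, 1])
        if r > best_r:
            best_alpha, best_r = alpha, r
    return best_alpha, best_r

# Synthetic example: subjective scores lean 70/30 towards the geometry measure.
rng = np.random.default_rng(0)
geom = rng.random(50)
tex = rng.random(50)
mos = 0.7 * geom + 0.3 * tex + 0.05 * rng.standard_normal(50)
alpha, r = best_combination(geom, tex, mos)
print(alpha, r)
```

In practice the combination would be fitted on one part of the subjective data and validated on a held-out part, to avoid overfitting the weight to a single experiment.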
Gomes Junior, Alcides. "Determinacao de selenio em agua subterranea utilizando a espectrometria de absorcao atomica com atomizacao eletrotermica em forno de grafita (GFAAS) e geracao de hidretos (HGAAS)". Repositório Institucional do IPEN, 2008. http://repositorio.ipen.br:8080/xmlui/handle/123456789/9378.
Dissertação (Mestrado), Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN/SP).
Gourmelin, Yves. "Optimisation de l'utilisation des systèmes de traitement des analyses biologiques". Paris 12, 1995. http://www.theses.fr/1995PA120012.
Nabil Mahrous Yacoub, Sandra. "Evaluation de la qualité de vidéos panoramiques synthétisées". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM067/document.
High-quality panoramic videos for immersive VR content are commonly created using a rig with multiple cameras covering a target scene. Unfortunately, this setup introduces both spatial and temporal artifacts due to the difference in optical centers as well as imperfect synchronization. Traditional image quality metrics cannot be used to assess the quality of such videos, due to their inability to capture geometric distortions. In this thesis, we propose methods for the objective assessment of panoramic videos based on optical flow and visual salience, and we validate this metric with a human-centered study that combines human error annotation and eye-tracking. An important challenge in measuring quality for panoramic videos is the lack of ground truth. We have investigated the use of the original videos as a reference for the output panorama. We note that this approach is not directly applicable, because each pixel in the final panorama can have one to N sources, corresponding to N input videos with overlapping regions. We show that this problem can be solved by calculating the standard deviation of the displacements of all source pixels from the displacement of the panorama as a measure of distortion. This makes it possible to compare the difference between motion in two given frames of the original videos and motion in the final panorama. Salience maps based on human perception are used to weight the distortion map for more accurate filtering. This method was validated with a human-centered empirical experiment, designed to investigate whether humans and the evaluation metric detect and measure the same errors, and to explore which errors are more salient to humans when watching a panoramic video. The methods described have been tested and validated, and they provide interesting findings regarding human perception for quality metrics.
They also open the way to new methods for optimizing video stitching, guided by those quality metrics.
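A minimal sketch of the distortion measure described above: for each panorama pixel, the standard deviation of the source-video displacements around the panorama's own displacement, weighted by a salience map. Array names and shapes are illustrative assumptions, not the thesis's actual data layout.

```python
import numpy as np

def distortion_map(source_flows, panorama_flow, salience):
    """
    source_flows : (N, H, W, 2) optical flow of each of the N input videos,
                   resampled into panorama coordinates
    panorama_flow: (H, W, 2) optical flow of the stitched panorama
    salience     : (H, W) weights in [0, 1] from a visual-attention model
    """
    # Deviation of each source's motion from the panorama's motion.
    dev = source_flows - panorama_flow[None, ...]        # (N, H, W, 2)
    per_source = np.linalg.norm(dev, axis=-1)            # (N, H, W)
    spread = per_source.std(axis=0)                      # (H, W)
    return salience * spread

# Toy example: one of three sources disagrees horizontally by one pixel.
N, H, W = 3, 4, 4
flows = np.zeros((N, H, W, 2))
flows[1, :, :, 0] = 1.0
pano = np.zeros((H, W, 2))
sal = np.ones((H, W))
print(distortion_map(flows, pano, sal).mean())
```

Zero spread means all sources agree with the stitched motion; large spread flags pixels where the overlapping inputs pull in different directions, exactly the seam regions where stitching artifacts appear.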
Tiano, Donato. "Learning models on healthcare data with quality indicators". Electronic Thesis or Diss., Lyon 1, 2022. http://www.theses.fr/2022LYO10182.
Time series are collections of data obtained through measurements over time. The purpose of this data is to provide material for event extraction and to represent events in an understandable pattern for later use. The whole process of discovering and extracting patterns from a dataset is carried out with several extraction techniques, including machine learning, statistics, and clustering. The domain is further divided by the number of sources used to monitor a phenomenon: a univariate time series has a single data source, while a multivariate time series has several. A time series is not a simple structure: each observation in the series has a strong relationship with the other observations. This interrelationship is the main characteristic of time series, and any extraction operation on time series has to deal with it. The solution adopted to manage the interrelationship is tied to the extraction operations, and the main problem with these techniques is that they do not apply any pre-processing to the time series. Raw time series have many undesirable properties, such as noisy points or the huge memory space required for long series. We propose new data mining techniques based on the most representative features of time series to obtain new models from the data. The adoption of features has a profound impact on the scalability of systems: extracting a feature from a time series reduces an entire series to a single value, which improves the management of time series and reduces the complexity of solutions in terms of time and space. FeatTS proposes a clustering method for univariate time series that extracts the most representative features of the series. FeatTS converts the features into graph networks to extract interrelationships between signals. A co-occurrence matrix merges all detected communities.
The intuition is that if two time series are similar, they often belong to the same community, and the co-occurrence matrix reveals this. With Time2Feat, we create a new multivariate time series clustering. Time2Feat offers two extraction types to improve the quality of the features. The first, Intra-Signal Feature Extraction, obtains features from each signal of the multivariate time series. The second, Inter-Signal Feature Extraction, obtains features from pairs of signals belonging to the same multivariate time series. Both methods provide interpretable features, which makes further analysis possible. The whole time series clustering process is lighter, which reduces the time needed to obtain the final clusters. Both solutions represent the state of the art in their field. With AnomalyFeat, we propose an algorithm to reveal anomalies in univariate time series. Its defining characteristic is the ability to work on online time series, i.e., each value of the series arrives in streaming. In continuity with the previous solutions, we adopt features for revealing anomalies in the series. With AnomalyFeat, we unify the two most popular families of algorithms for anomaly detection: clustering and recurrent neural networks. We seek to discover the density area of each new point obtained with clustering.
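The feature-based idea behind FeatTS and Time2Feat can be sketched in a few lines: reduce each series to a compact feature vector, then cluster the vectors instead of the raw series. The features and the tiny 2-means step below are deliberately simple stand-ins, not the published algorithms.

```python
import numpy as np

def features(ts):
    """Reduce one univariate series to a compact feature vector."""
    diffs = np.diff(ts)
    return np.array([ts.mean(), ts.std(), diffs.mean(), np.abs(diffs).mean()])

def cluster_two(X, iters=10):
    """Tiny 2-means on the feature vectors, seeded with the two farthest points."""
    d = ((X[:, None] - X[None]) ** 2).sum(-1)
    i, j = np.unravel_index(d.argmax(), d.shape)
    centers = X[[i, j]].astype(float)
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(0)
    return labels

t = np.linspace(0.0, 2.0 * np.pi, 200)
series = [np.sin(t), np.sin(t) + 0.1, 0.5 * t, 0.5 * t + 0.2]  # two sines, two ramps
X = np.stack([features(s) for s in series])
print(cluster_two(X))
```

The scalability argument is visible even here: clustering operates on 4-dimensional vectors rather than 200-point series, so distance computations no longer grow with the series length.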
Masson, Jean-Baptiste. "Sigfried 2 : modèles de mélange censurés et autres méthodes statistiques pour la contribution d'indicateurs spatialisables de la qualité de l'air intérieur dans les logements français". Compiègne, 2012. http://www.theses.fr/2012COMP2003.
This Ph.D. thesis is part of the CIRCE program (Cancer, regional and cantonal environmental inequalities), which aims at comparing maps of cancer occurrence with maps of environmental quality on the French territory. Our specific goal is to build geographic maps of the indoor air quality inside homes that are compatible with this approach. We adopted a two-step method: first, build typical profiles of indoor air pollution; then assess these classes' local frequencies in predefined zones. In the first step, we developed an extension of classical clustering methods, based on Gaussian mixture models and the EM algorithm, to the case of (deterministically) censored data. In the second step, we used non-parametric discrimination tools based on binary decision trees (CART). Unfortunately, the resulting trees have a very high prediction error rate. Even if known significant associations can be found between indoor pollution and some characteristics of the building and its occupants, it seems very hard to predict the profile, or even a single pollutant's concentration, from those characteristics. However, our methodology has numerous advantages: simplicity of use, distinct steps enabling steady control of the results, flexible choice of variables, and availability of a prediction error rate.
Vu, Hoang Hiep. "Large-scale and high-quality multi-view stereo". Phd thesis, Université Paris-Est, 2011. http://pastel.archives-ouvertes.fr/pastel-00779426.
Lelli Leitao, Valeria. "Testing and maintenance of graphical user interfaces". Thesis, Rennes, INSA, 2015. http://www.theses.fr/2015ISAR0022/document.
The software engineering community pays special attention to the quality and reliability of software systems. Software testing techniques have been developed to find errors in code, and software quality criteria and measurement techniques have been assessed to detect error-prone code. In this thesis, we argue that the same attention has to be paid to the quality and reliability of GUIs, from a software engineering point of view. We make two contributions on this topic. First, GUIs can be affected by errors stemming from development mistakes. The first contribution of this thesis is a fault model that identifies and classifies GUI faults. We show that GUI faults are diverse and imply different testing techniques to be detected. Second, like any code artifact, GUI code should be analyzed statically to detect implementation defects and design smells. As the second contribution, we focus on design smells that can specifically affect GUIs. We identify and characterize a new type of design smell, called Blob listener. It occurs when a GUI listener, which gathers events to treat and transform into commands, can produce more than one command. We propose a systematic static code analysis procedure that searches for Blob listeners, which we implement in a tool called InspectorGuidget. The experiments we conducted exhibit positive results regarding the ability of InspectorGuidget to detect Blob listeners. To counteract the use of Blob listeners, we propose good coding practices for the development of GUI listeners.
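The Blob listener smell lends itself to a toy illustration. InspectorGuidget performs real static analysis; the sketch below merely counts `getSource()` comparisons in one Java listener with a regex, which is enough to show what "produces more than one command" looks like.

```python
import re

# A Java listener that dispatches on the event source to run three commands.
JAVA_LISTENER = """
public void actionPerformed(ActionEvent e) {
    if (e.getSource() == openButton) { open(); }
    else if (e.getSource() == saveButton) { save(); }
    else if (e.getSource() == quitButton) { quit(); }
}
"""

def looks_like_blob_listener(method_source, threshold=2):
    """Flag listeners that compare the event source against several widgets."""
    branches = re.findall(r"getSource\(\)\s*==\s*\w+", method_source)
    return len(branches) >= threshold

print(looks_like_blob_listener(JAVA_LISTENER))
```

The recommended fix, per the good practices mentioned above, is one listener per widget (or per command), so that each listener body produces exactly one command.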
Courilleau, Nicolas. "Visualisation et traitements interactifs de grilles régulières 3D haute-résolution virtualisées sur GPU. Application aux données biomédicales pour la microscopie virtuelle en environnement HPC". Thesis, Reims, 2019. http://www.theses.fr/2019REIMS013.
Data visualisation is an essential aspect of scientific research in many fields. It helps to understand observed or even simulated phenomena and to extract information from them for purposes such as experimental validation or project review. The focus of this thesis is on the visualisation of volume data in medical and biomedical imaging. The acquisition devices used to acquire the data generate scalar or vector fields represented in the form of regular 3D grids. The increasing accuracy of the acquisition devices implies an increasing size of the volume data; therefore, the visualisation algorithms must be adapted to manage such volumes. Moreover, visualisation mostly relies on GPUs because they are well suited to such problems. However, GPUs possess a very limited amount of memory compared to the generated volume data. The question then arises of how to dissociate the calculation units, which allow visualisation, from those of storage. Algorithms based on the so-called "out-of-core" principle are the solution for managing large volume data sets. In this thesis, we propose a complete GPU-based pipeline allowing real-time visualisation and processing of volume data that are significantly larger than the CPU and GPU memory capacities. The pipeline's interest comes from its GPU-based approach to an out-of-core addressing structure, allowing data virtualisation, which is well suited to volume data management. We validate our approach with different real-time applications of visualisation and processing. First, we propose an interactive virtual microscope allowing 3D auto-stereoscopic visualisation of stacks of high-resolution images. Then, we verify the adaptability of our structure to all data types with a multimodal virtual microscope. Finally, we demonstrate the multi-role capabilities of our structure through a concurrent real-time visualisation and processing application.
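The out-of-core addressing idea can be sketched on the CPU side: voxel lookups go through a brick table, and a fixed-size cache loads bricks on demand, evicting the least recently used one when full. Brick size, capacity, and the fake storage backend are illustrative assumptions, not the thesis's GPU structure.

```python
from collections import OrderedDict

BRICK = 32          # voxels per brick edge
CAPACITY = 4        # bricks kept resident at once

class BrickCache:
    def __init__(self, fetch):
        self.fetch = fetch                  # loads one brick from storage
        self.resident = OrderedDict()       # brick id -> brick data (LRU order)
        self.misses = 0

    def voxel(self, x, y, z):
        bid = (x // BRICK, y // BRICK, z // BRICK)
        if bid not in self.resident:
            self.misses += 1
            if len(self.resident) >= CAPACITY:
                self.resident.popitem(last=False)   # evict least recently used
            self.resident[bid] = self.fetch(bid)
        self.resident.move_to_end(bid)
        brick = self.resident[bid]
        return brick[(x % BRICK, y % BRICK, z % BRICK)]

# Fake storage backend: every voxel of brick (i, j, k) has value i + j + k.
cache = BrickCache(lambda bid: {(x, y, z): sum(bid)
                                for x in range(BRICK)
                                for y in range(BRICK)
                                for z in range(BRICK)})
print(cache.voxel(0, 0, 0), cache.voxel(40, 0, 0), cache.misses)
```

On the GPU the same indirection is typically realised as a page table texture plus a brick pool, with misses reported back to the host, but the addressing logic is the one shown here.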
Dragan, Rodić. "Optimizacija procesa elektroerozivne obrade savremenih inženjerskih materijala". Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=110508&source=NDLTD&language=en.
The subject of this dissertation is the improvement, modeling, and optimization of the electrical discharge machining (EDM) of advanced engineering materials. First, two innovative methods are presented: EDM in a powder-mixed dielectric fluid and EDM with an assisted electrode, as well as their combination. Response surface methodology and artificial intelligence tools were applied to generate mathematical models. The optimization problems of determining the input parameters with single and multiple objective functions are solved by applying classical optimization methods. Finally, verification of the obtained models and of the optimal input parameters of electrical discharge machining was carried out.
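A response-surface step of the kind described can be sketched as a quadratic least-squares fit followed by a grid search for the optimum. The two factors and all the data below are synthetic, not measurements from the dissertation.

```python
import numpy as np

def quad_design(x1, x2):
    """Design matrix for a full quadratic response-surface model in two factors."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

rng = np.random.default_rng(1)
current = rng.uniform(-1, 1, 30)     # coded levels of factor 1 (e.g., discharge current)
pulse = rng.uniform(-1, 1, 30)       # coded levels of factor 2 (e.g., pulse duration)
# Synthetic response with a true minimum at (0.3, -0.2) plus measurement noise.
response = 1.0 + (current - 0.3) ** 2 + 0.5 * (pulse + 0.2) ** 2 \
           + 0.02 * rng.standard_normal(30)

beta, *_ = np.linalg.lstsq(quad_design(current, pulse), response, rcond=None)

# Grid-search the fitted surface for the minimising parameter combination.
g = np.linspace(-1, 1, 201)
G1, G2 = np.meshgrid(g, g)
pred = quad_design(G1.ravel(), G2.ravel()) @ beta
idx = pred.argmin()
best = (G1.ravel()[idx], G2.ravel()[idx])
print(best)
```

With multiple target functions, the single response above would be replaced by a weighted or Pareto combination of several fitted surfaces, which is where the classical multi-objective methods mentioned in the abstract come in.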
Alzahrani, Areej A. "Production of High-quality Few-layer Graphene Flakes by Intercalation and Exfoliation". Thesis, 2017. http://hdl.handle.net/10754/626356.
Fang, Ming-Dar (方明達). "Preparation of mesocarbon microbeads for manufacturing high quality carbon/graphite blocks and lithium ion battery anodes". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/sf929w.
National Kaohsiung University of Applied Sciences (國立高雄應用科技大學)
Department of Chemical and Materials Engineering, Master's and Doctoral Program (化學工程與材料工程系博碩士班)
Academic year: 103 (ROC calendar)
Mesocarbon microbeads (MCMBs), prepared from coal tar pitch, are excellent precursors for carbon materials manufacturing as a result of their liquid crystal properties, as well as their excellent compressibility: the β-resin contents residing on the surface of the MCMBs promote fluidity and the self-sintering ability of the clumpy mixtures. Cold isostatic pressing, carbonization, and graphitization of MCMBs can be used to prepare high-strength, high-density isotropic graphite, thereby enabling the direct modeling of objects while saving on expenditure due to a reduction in the number of milling/grinding steps needed. However, the techniques needed for the self-sintering of MCMBs are not yet fully developed, because the various β-resin components on the MCMBs vary with changes in the preparation procedures. The shapes and particle sizes of the MCMBs affect non-homogeneous sintering reactions; consequently, the factors affecting the fluidity of the clumpy mixtures and the production of high-quality graphite parts have not been fully clarified and need further investigation. In this work, using variously sized MCMBs mixed with a solid resin of high β-resin content, fixed compositions of β-resin in different graphite products were prepared to explore the effect of the sintering reaction on the resulting products and the relationship between the contacting pattern of the raw material mixtures and the β-resin contents. The results indicate that the self-sintering reactions of MCMBs have a significant relationship with the contacting pattern of the raw material mixtures. While MCMBs with higher β-resin contents were found to improve the bending strength of the carbon products, the maximum allowable β-resin content is still limited: excessive β-resin, in addition to giving a reduced product density, will cause the sample mold to burst during sintering.
To prepare higher-density graphite products, the optimum β-resin content of the MCMB mixtures is about 5.0 wt%. Sintered mesocarbon products have high densities and good friction characteristics, but their mechanical strength is still not acceptable. Higher β-resin contents can promote the fluidity of the raw materials and also affect neck formation between the MCMBs, altering the product's strength; therefore, the maximum allowable β-resin content needs to be increased. In this work, carbon black (CB) and glycidyl methacrylate (GMA) were used as a joint reaction promoter. The sintering behavior of the β-resin was successfully modified to permit an increase in the β-resin content to 14.4-15.2 wt%, which allowed carbonized carbon blocks with high bending strengths (142 MPa) and high densities (1.87 g cm-3) to be prepared. In this study, the organic additives bisphenol A, GMA, and CB were used as promoters to modify the sintering behavior of MCMBs, with additional heat treatment (2800°C), to prepare high-performance graphite blocks. The disadvantages of graphite containing inorganic additives typically manifest as a decline in the desired properties of the original graphite composite: high thermal and electrical conductivity, a low coefficient of friction, and excellent stability. Pure graphite blocks with high electrical conductivities (892 S cm-1), high bending strengths (62 MPa), and high densities (2.156 g cm-3) were prepared in this study. This study also showed an improvement in MCMBs with low β-resin contents when used as anode materials in high-C-rate lithium-ion batteries (LIBs). The results showed that the mesophase soft carbon, made from MCMBs at 1300°C, has a wider interlayer spacing compared to mesophase graphite, made from MCMBs at 2800°C, and is highly oriented in comparison with commercial hard carbon, giving it the best high-C-rate charge and discharge capacity.
These facts indicate that the mesophase soft carbon produced at lower temperatures can be an economical choice for high-C-rate LIB anode materials.
Rioux, Gabrielle. "Contrôle stratigraphique et qualité minéralurgique des gîtes de graphite des lacs Guéret et Guinecourt, Terrane de Gagnon, Province de Grenville". Mémoire, 2008. http://www.archipel.uqam.ca/1460/1/M10428.pdf.
Chen, Chiu-Yuan (陳秋元). "A Study On The Quality In Polyurethane/Graphite Pad Applied For Chemical Mechanical Polishing Oxide Film Of Wafer". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/fy5ay6.
National Taipei University of Technology (國立臺北科技大學)
Graduate Institute of Materials Science and Engineering (材料科學與工程研究所)
Academic year: 99 (ROC calendar)
In the chemical mechanical polishing (CMP) field, polyurethane is widely used for polishing pads because its pores retain slurry; however, glazing occurs easily because the pore sizes are not uniform. In this study, we added micron-scale (60 μm) graphite powder to the polyurethane polymer in two forms: natural graphite, and natural graphite hydrogenated by heat treatment. Both were added at levels of 0%, 8%, 16%, and 24%. The formed graphite pads (non-porous graphite pads) were used, together with a conventional diamond disk (DG329), to polish the wafer oxide layer, and the chemical mechanical polishing behaviour and mechanical properties are discussed. The experimental results showed that the removal rate of the hydrogenated graphite pads was not higher, but the uniformity of the wafer oxide surface after removal increased. We therefore suggest that an 8% addition of hydrogenated graphite achieves the best performance in oxide removal rate and uniformity.
Yeh, Chi-Chen (葉吉鎮). "The Relation between Reaction Chamber Design of In-mold Process and the quality of Furan Resin Mold Spheroidal Graphite Cast Iron Casting". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/66781533064891966167.
Feng Chia University (逢甲大學)
Graduate Institute of Mechanical Engineering (機械工程學所)
Academic year: 92 (ROC calendar)
The objective of this study is to investigate the effect of the reaction chamber design of the in-mold process (inlet to outlet cross-section area ratio, nodulant to reaction chamber height ratio, reaction chamber shape, addition of the reaction agent, etc.) on the quality (tensile properties, metallographic microstructure) of spheroidal graphite cast iron castings made in furan resin molds. The hardness distribution and the repeatability of casting quality, assessed by statistical methods, are also studied. The results indicate that: (1) when the reaction chamber inlet to outlet cross-section area ratio is 1:0.8, the casting quality of the spheroidal graphite cast iron is best: the nodularity is 91.5%, the nodule count is 315 counts/mm2, the tensile strength is 50.96 kg/mm2, and the elongation is 21.4%. (2) When the nodulant to reaction chamber height ratio is 2/3, the metallographic microstructure, nodularity, nodule count, and tensile strength of the casting are better than for the other two height ratios. (3) When the reaction chamber is designed as a cube, the nodularity and the mechanical properties are best; when the modulus is either increased (cylindrical) or decreased (rectangular), the metallographic microstructure and the mechanical properties worsen. (4) When the inoculant (used as the cover agent) and the nodulant are added at the same time to manufacture the spheroidal graphite cast iron casting in the furan mold, the results indicate that the nodularity is poor and the casting becomes flake graphite cast iron. (5) For the hardness distribution on the top plane of the casting, the hardest spot is at the end far from the riser and the softest spot is at the side near the riser. The sequence of the average hardness of the cross-section planes is: section I > section II > section III. From the viewpoint of hardness distribution, the trend of hardness matches the cooling sequence of the casting.
(6) The Wilcoxon sign test was used to analyze the repeatability of casting quality (tensile strength); the results support the assumption that the sample values (tensile strength) are close to the average values, i.e., the confidence level is high. It can therefore be concluded that the repeatability of casting quality is good.
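The Wilcoxon signed-rank statistic behind such a repeatability check can be sketched as follows: differences between each tensile-strength sample and the batch mean are ranked by magnitude, and the statistic sums the signed ranks. The strength values below are hypothetical, and ties among absolute differences are broken arbitrarily here rather than given averaged ranks.

```python
import numpy as np

def wilcoxon_w(samples, center):
    """Sum of signed ranks of the differences from `center` (zeros dropped)."""
    d = np.asarray(samples, dtype=float) - center
    d = d[d != 0.0]                              # ties with the center are dropped
    ranks = np.abs(d).argsort().argsort() + 1    # ranks of |d|, starting at 1
    return float(np.sum(np.sign(d) * ranks))

strengths = [50.8, 51.1, 50.9, 51.0, 50.7, 51.2]   # kg/mm^2, hypothetical
w = wilcoxon_w(strengths, np.mean(strengths))
print(w)
```

A statistic near zero (relative to its maximum, n(n+1)/2) means the samples scatter symmetrically about the average, which is the sense in which the tensile strengths are "close to the average values" above.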
Wang, Yuan. "Theoretical and experimental study of cutting mechanics of graphite fiber reinforced preimpregnated composite (prepreg), and the effect of tool geometry on cutting quality and fibre deflection". Thesis, 1995. http://spectrum.library.concordia.ca/6192/1/MM01359.pdf.