Click this link to see other types of publications on this topic: SUH METHOD.

Doctoral dissertations on the topic "SUH METHOD"

Create an accurate citation in APA, MLA, Chicago, Harvard, and many other styles


Browse the top 50 doctoral dissertations on the topic "SUH METHOD".

An "Add to bibliography" button appears next to each work in the list. Click it and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication in ".pdf" format and read its abstract online, whenever these are available in the work's metadata.

Browse doctoral dissertations from many disciplines and assemble the bibliography you need.

1

Chen, Feng. "Mixed quantum/classical dynamics of photodissociation of H2O and Ar-H2O". The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1095782886.

Full text available
APA, Harvard, Vancouver, ISO, and other citation styles
2

Niedoba, Pavel. "Bezsíťové metody ve výpočetní dynamice tekutin". Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-230330.

Abstract:
The thesis deals with meshfree methods, especially the SPH method. It focuses on the question of convergence near the boundary of the problem domain and its solution through so-called ghost particles used as a boundary condition. A suitable parameter setting for a 2-D shock tube problem, based on extensive testing and software modifications, is also presented.
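As a toy illustration of the ghost-particle boundary treatment mentioned in this abstract (not the thesis's code; the wall position, support radius, and particle positions are invented), particles within one kernel support radius of a wall can be mirrored across it:

```python
import numpy as np

def mirror_ghosts(x, wall=0.0, h=0.1):
    """Create ghost particles by mirroring fluid particles that lie
    within one kernel support radius h of a wall at x = wall."""
    near = x[np.abs(x - wall) < h]   # particles whose kernel overlaps the wall
    return 2.0 * wall - near         # reflected (ghost) positions

# hypothetical 1-D particle positions near a wall at x = 0
x = np.array([0.02, 0.05, 0.12, 0.30])
ghosts = mirror_ghosts(x)
```

The ghost positions fill the kernel support on the far side of the wall, so density and pressure sums near the boundary are no longer truncated.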
3

Aparicio, Jose Martin Lozano. "Ontology view: a new sub-ontology extraction method". Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/119251.

Abstract:
Nowadays many petroleum companies are adopting different knowledge-based systems aiming at better reservoir quality prediction. However, there are obstacles that do not allow geologists of different backgrounds to retrieve information without the help of an information technology expert. The main problem is the semantic heterogeneity of end users when posing queries in a visual query system (VQS). This can be worse when there is new terminology in the knowledge base affecting the user interaction, particularly for novice users. In this context, we present theoretical and practical contributions that exploit the synergism between ontology and human-computer interaction (HCI). On the theory side, we introduce the concept of an ontology view for a well-founded ontology and provide a formal definition and a characterization of its expressive power. We focus on extracting an ontology view from a well-founded and complete ontology based on ontological meta-properties, and propose a language-independent algorithm for sub-ontology extraction guided by those meta-properties. On the practical side, based on the principles of HCI and interaction design, we propose a new visual query system that uses the ontology view approach to guide the query process. Our design also includes data visualizations that help geologists make sense of the retrieved data. Furthermore, we evaluated our interaction design with five users through a questionnaire-based usability test in a controlled experiment. The evaluation was performed with geologists working in the area of petroleum geology. The proposed approach is evaluated on the petrography domain, taking the Diagenesis and MicroStructural communities and adopting the well-known criteria of precision and recall.
Experimental results show that the relevant terms obtained from the documents of a community vary from 30 to 66% precision and 4.6 to 36% recall, depending on the selected approach and parameter combination. Furthermore, for almost all parameter combinations, the recall and f-measure obtained from diagenesis articles using the sub-ontology generated for the diagenesis community are greater than those obtained using the sub-ontology generated for the microstructural community. Conversely, for all parameter combinations, the recall and f-measure obtained from microstructural articles using the sub-ontology generated for the microstructural community are greater than those obtained using the sub-ontology generated for the diagenesis community.
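The precision and recall figures reported above follow the standard set-based definitions, which can be sketched as follows (the petrography term sets are invented purely for illustration):

```python
def precision_recall(extracted, relevant):
    """Standard set-based precision and recall of extracted terms
    against a reference (gold) set of relevant terms."""
    extracted, relevant = set(extracted), set(relevant)
    tp = len(extracted & relevant)                     # true positives
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

# hypothetical term sets: 2 of 3 extracted terms are relevant,
# covering 2 of the 4 relevant terms
p, r = precision_recall({"dolomite", "cement", "porosity"},
                        {"dolomite", "porosity", "quartz", "clay"})
```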
4

Smith, Eric F. "Development of an Image Noise Estimation Method and a Sub-Imaging Based Wiener Method". ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/1052.

Abstract:
This research consists of three parts. The first part is an investigation of several popular image restoration techniques. The techniques are used to restore 2-D image data, f(x, y), that has been blurred by a known point spread function (PSF), b(x, y), and corrupted by an unknown amount of noise, n(x, y). Several sample images are restored using all of the techniques. Of the methods investigated, the one producing the best restorations, judged by restored-image quality and required restoration time, was the Wiener deconvolution method. The second part of this research is the development of a method for estimating the noise standard deviation σ_n. The method determines an estimate, σ_e, of σ_n based on the Morrison Noise Reduction Method (MNRM) and is therefore iterative. The results of the noise-estimating method (SIGEST) are rather good: the error between σ_n and σ_e, averaged across several images contaminated with a medium-width or wider PSF and various amounts of noise, is less than 10 percent. Knowledge of σ_n is important for the application of Wiener deconvolution. All noise in this research is assumed to be uncorrelated.
The third part of this research is the development of the Sub-Imaging Method (SIM). Here, h_2 and h_N of image data h are defined as follows: h_2 is the image data h processed by two iterations of the MNRM, and h_N = h − h_2. SIM divides an image's h_N into several rectangular parts, calculates σ_e of each part by the method described previously, computes the average of the σ_e values, and selects the part whose σ_e is closest to that average. This part is defined to be the average sub-image (asi). Two assertions concerning SIM are investigated: (1) the asi of an image can be used in place of the whole image to determine σ_e and to restore the whole image, so the noise in a piece of an image can represent the noise in the whole image (provided it is the asi of the image's h_N); and (2) SIM can be combined with the Wiener image restoration method to restore contaminated image data without σ_n being known in advance. Image and numerical results are provided which validate both claims. The Wiener method and SIM are combined to develop the Sub-Image Wiener Method (SIWM). Image and numerical results show that SIWM is an effective method of restoring blur- and noise-contaminated image data, and comparisons with the Matlab function wiener2 show that SIWM is faster and yields better results.
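The sub-image selection at the heart of SIM can be sketched as follows. This is a loose reconstruction: the per-tile sample standard deviation stands in for the thesis's iterative SIGEST estimator, and the image size, tiling, and noise level are invented:

```python
import numpy as np

def average_subimage(h_N, k=4):
    """Split the noise component h_N into k x k rectangular tiles,
    estimate each tile's noise standard deviation, and return the tile
    whose estimate is closest to the mean of all estimates (the asi)."""
    tiles = [t for rows in np.array_split(h_N, k, axis=0)
             for t in np.array_split(rows, k, axis=1)]
    sigmas = np.array([t.std() for t in tiles])     # stand-in for SIGEST
    idx = int(np.argmin(np.abs(sigmas - sigmas.mean())))
    return tiles[idx], sigmas[idx]

rng = np.random.default_rng(0)
h_N = rng.normal(0.0, 2.0, size=(64, 64))   # synthetic uncorrelated noise
asi, sigma_e = average_subimage(h_N)
```

Because the asi's estimate is the one closest to the average over all tiles, it is a representative patch whose noise statistics can stand in for the whole image's.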
5

Polwaththe, Gallage Hasitha Nayanajith. "Numerical modelling of deformation behaviour of red blood cells in microvessels using the coupled SPH-DEM method". Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/91719/1/Hasitha%20Nayanajith_Polwaththe%20Gallage_Thesis.pdf.

Abstract:
This thesis developed an advanced computational model to investigate the motion and deformation properties of red blood cells in capillaries. The novel model is based on the meshfree particle methods and is capable of modelling the large deformation of red blood cells moving through blood vessels. The developed model was employed to simulate the deformation behaviour of healthy and malaria infected red blood cells as well as the motion of red blood cells in stenosed capillaries.
6

Le Goff, Jean-Marie. "Navier-Stokes modelling of offshore wind turbines using the SPH method". Thesis, KTH, Skolan för industriell teknik och management (ITM), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-216161.

7

Green, Mashy David. "Sloshing simulations with the smoothed particle hydrodynamics (SPH) method". Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/45367.

Abstract:
The main aims of this work are to identify, verify, and validate a smoothed particle hydrodynamics (SPH) method for simulating long-duration transient and steady-state fluid sloshing in complex geometries. The validation is carried out by comparing the SPH simulations against experimental data provided by ESA/ESTEC for transient and steady-state sloshing in a rectangular tank with a low filling ratio, and for transient sloshing in a pill-shaped tank that exhibits transition from swaying to swirling waves. The experimental tests proved extremely challenging due to the low fill ratio of the rectangular tank and the long duration of both experiments. The main challenge is to devise an SPH scheme that balances spatial and temporal accuracy with an efficient computer implementation to produce accurate simulations at a reasonable computing cost. The investigation highlighted three issues of critical importance: the treatment of solid boundaries in order to limit the introduction of numerical errors into the system; the application of a correct numerical dissipation scheme to reduce existing numerical errors; and the need for a massively parallel implementation. Careful examination of the most suitable techniques led to the adoption of a corrected δ-SPH scheme that provides numerical dissipation to reduce spurious pressure oscillations, and a fixed ghost particle boundary condition to accurately impose wall boundary conditions. The proposed SPH methods were coded in the open-source parallel code DualSPHysics. The implementation showed significant improvements in energy conservation and solution accuracy compared to state-of-the-art SPH methods, and accurately reproduced known analytical solutions to linear sloshing.
The validation against the ESA/ESTEC experimental data showed excellent agreement between the SPH simulations and experiments, accurately reproducing the time history of wave heights and sloshing forces as well as capturing the full free-surface shapes. Only the careful selection of appropriate boundary conditions and artificial dissipation, together with a massively parallel GPU architecture, made it possible to simulate these experiments accurately.
8

KARKI, BHISHMA R. "SURFACE RESISTANCE OF HIGH TEMPERATURE SUPERCONDUCTOR BY THE RESONANT CAVITY METHOD". University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1084380437.

9

Steffen, Gerusa Pauli Kist. "Diversidade de minhocas e sua relação com ecossistemas naturais e alterados no estado do Rio Grande do Sul". Universidade Federal de Santa Maria, 2012. http://repositorio.ufsm.br/handle/1/3334.

Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Knowledge of earthworm diversity in Rio Grande do Sul (RS) State, as in Brazil generally, falls well short of the diversity estimated by taxonomists. The study aimed to: 1) evaluate the diversity of earthworms present in ecosystems of three regions of the RS State; 2) characterize the physical and chemical properties of the soil, the vegetation, and land use; and 3) determine the potential of a nontoxic solution for extracting earthworms from the soil, in order to reduce the environmental impact on the ecosystems assessed. A qualitative survey of earthworms was conducted by removing monoliths and hand sorting in 15 different ecosystems. Samples were collected in 29 municipalities of the northwestern, central, and southwestern regions of the RS State, comprising 77 sampling sites. Species identification was based on morphological and/or molecular parameters. Twenty-one earthworm species were found, belonging to the families Glossoscolecidae (10), Ocnerodrilidae (4), Megascolecidae (4), Acanthodrilidae (1), Lumbricidae (1), and Criodrilidae (1). Ten correspond to new records, belonging to the genera Glossoscolex (6), Fimoscolex (1), Kerriona (1), and Eukerria (1), plus a new species of the family Criodrilidae. The occurrence of earthworm species was correlated with the type of ecosystem, with the highest diversity observed in sites of native forest fragment and native grassland. Most native species (Urobenus brasiliensis, Fimoscolex n. sp. and Glossoscolex sp.) predominated in ecosystems little altered by human activities, while the exotic species (Amynthas gracilis, Amynthas rodericensis, Metaphire californica, Aporrectodea trapezoides) and the peregrine species (Pontoscolex corethrurus) predominated in sites with the highest degree of human disturbance. The degree of disturbance of the ecosystems and the land use influence the presence of earthworms, followed by the physical and chemical characteristics of the soil.
The nuclear gene 28S rDNA, as well as the mitochondrial genes 16S and cytochrome c oxidase subunit I, were important tools for the molecular characterization of the earthworms. Assessments of onion extract as an extraction solution for soil earthworms showed that a concentration of 175 g L-1 of extract has an extraction capacity comparable to the standard extraction solution (0.5% formaldehyde) in clayey and sandy soils. The results of this study indicate that Rio Grande do Sul State harbours greater earthworm diversity than currently known, justifying the importance of studying the diversity of these soil organisms.
10

NAKAMURA, FABIO ISSAO. "FLUID INTERACTIVE ANIMATION BASED ON PARTICLE SYSTEM USING SPH METHOD". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=10087@1.

Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
This work investigates the use of a particle-based system for fluid animation. Based on proposals presented by Müller et al., the goal of this dissertation is to investigate and fully understand the use of a Lagrangian method known as Smoothed Particle Hydrodynamics (SPH) for fluid simulation. A library has been implemented in order to validate the method for fluid animation at interactive rates. To demonstrate the method's effectiveness and efficiency, the resulting library allows the instantiation of different configurations, including the treatment of fluid-obstacle collisions, the interaction between two distinct fluids, and fluid-user interaction.
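The core of a Müller-style SPH animation step, a density summation followed by an equation-of-state pressure, can be sketched as below. This is a minimal illustration only: the particle positions, mass, kernel radius, rest density, and stiffness are made-up values, and neighbour search and force computation are omitted:

```python
import numpy as np

def poly6(r2, h):
    """Poly6 smoothing kernel popularised by Müller et al.
    (zero outside r <= h); takes squared distances r2."""
    coef = 315.0 / (64.0 * np.pi * h**9)
    return np.where(r2 <= h * h, coef * (h * h - r2) ** 3, 0.0)

def density_pressure(pos, m, h, rho0, k):
    """SPH density summation rho_i = sum_j m * W(|r_i - r_j|, h)
    and the equation-of-state pressure p_i = k * (rho_i - rho0)."""
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(axis=-1)
    rho = (m * poly6(d2, h)).sum(axis=1)
    return rho, k * (rho - rho0)

# two nearby 2-D particles with hypothetical parameters
pos = np.array([[0.0, 0.0], [0.05, 0.0]])
rho, p = density_pressure(pos, m=0.02, h=0.1, rho0=1000.0, k=3.0)
```

Each animation frame would recompute densities and pressures like this, then derive pressure and viscosity forces from them to integrate the particles forward in time.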
11

Ahlbert, Gabriella. "Method Evaluation of Global-Local Finite Element Analysis". Thesis, Linköpings universitet, Hållfasthetslära, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-78103.

Abstract:
When doing finite element analysis of the structure of Saab's aeroplanes, a coarse global model of mainly shell elements is used to determine the load distribution for sizing the structure. At some parts of the aeroplane it is however desirable to perform a more detailed analysis. These areas are usually modelled with solid elements; the problem of connecting the fine local solid elements to the coarse global model with shell elements then arises.
This master thesis is performed to investigate possible global-local methods to use for the structural analysis of Gripen. First a literature study of current methods on the market is made; thereafter a few methods are implemented on a generic test structure and later also tested on a real detail of Gripen VU. The methods tested in this thesis are mesh refinement in HyperWorks, RBE3 in HyperWorks, Glue in MSC Patran/Nastran, and DMIG in MSC Nastran. The software itself is not evaluated in this thesis, and a further investigation is recommended to find the most fitting software for this purpose. All analyses are performed under linear assumptions.
Mesh refinement is an integrated technique where the elements gradually decrease in size. By definition, this technique cannot handle gaps, but its results are almost identical to the fine reference model.
RBE3 is a type of rigid-body element with zero stiffness, used as an interface element. RBE3 can connect both shell-to-shell and shell-to-solid, and can handle offsets and gaps at the boundary between the global and local model.
Glue is a contact definition, also available in other software under other names. The global and local models are defined as contact bodies and a contact table is used to control the coupling. Glue works for both shell-to-shell and shell-to-solid couplings, but has problems dealing with offsets and gaps at the boundary between the global and local model.
DMIG is a superelement technique where the global model is divided into smaller sub-models which are mathematically connected. DMIG can only be used when the nodes on the boundary of the local model have the same positions as the nodes on the boundary of the global model. Thus, it is not possible to use DMIG alone as a global-local method, but it can advantageously be combined with other methods.
The results indicate that the preferable method for global-local analysis is RBE3. To decrease the size of the files and the demand for computational power, RBE3 can be combined with a superelement technique, for example DMIG.
Finally, it is important to consider the size of the local model. There will inevitably be boundary effects when performing a global-local analysis of the suggested type, and it is therefore important to make the local model big enough that the boundary effects have faded before reaching the area of interest.
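The translational behaviour of an RBE3-type interface element described above amounts to a weighted average of the master-node displacements. A minimal sketch follows; a real RBE3 also couples rotations through a least-squares fit, which is omitted here, and the displacements and weights are invented:

```python
import numpy as np

def rbe3_translation(master_disp, weights):
    """Translation of the dependent node as the weighted average of its
    independent (master) node displacements -- the translational part of
    an RBE3-style interpolation element (rotational coupling omitted)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalise the weights
    return w @ np.asarray(master_disp, dtype=float)

# two master nodes with unit displacements in x and y, equal weights
u = rbe3_translation([[1.0, 0.0], [0.0, 1.0]], [1.0, 1.0])
```

Because the element has zero stiffness of its own, it distributes motion (and, dually, loads) without artificially stiffening the boundary between the global and local models.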
12

Gregory, Victor Paul. "Monte Carlo computer simulation of sub-critical Lennard-Jones particles". Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-11242009-020125/.

13

MOMM, GUSTAVO GARCIA. "ASSESSMENT OF SLAMMING LOADS ON SUBSEA STRUCTURES USING THE SPH METHOD". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=29340@1.

Abstract:
PETRÓLEO BRASILEIRO S. A.
Subsea structures employed in offshore oil and gas production systems are commonly designed to lie on the seafloor for decades. For most of these structures the installation is a critical stage that may require costly resources and significant engineering effort. Lowering subsea structures through the wave zone is a complex operation, as it involves accelerations of these bodies induced by the vessel motion which, combined with the sea surface displacements, may lead to significant impact loads on the structures during water entry. The initial stage of impact during water entry has been the subject of much research over the past century, since the pioneering work of von Kármán and Wagner on the hydrodynamics of an alighting seaplane. The impact of the forebody of a ship on the sea surface has also been studied, as it may cause localized and eventually catastrophic damage to the hull. Different numerical methods have been applied to the analysis of this problem. The main contribution of this work is the use of the Smoothed Particle Hydrodynamics (SPH) method to estimate slamming loads on rigid bodies during water entry, considering both calm and wavy surfaces. A basic theoretical background on hydrodynamic impact loads is introduced first, followed by a description of the SPH method. Applications of SPH to simulate the water entry of rigid bodies in both free-fall and constant-velocity cases are presented, and the results are compared with experiments and numerical simulations from the literature. The presence of regular waves during constant-velocity water entry is also considered. The numerical results obtained here demonstrate the effectiveness of the proposed approach for estimating slamming loads on subsea structures during water entry.
14

Gui, Qinqin. "Improved incompressible SPH method for predicting wave impacts on coastal structures". Thesis, University of Dundee, 2014. https://discovery.dundee.ac.uk/en/studentTheses/efa775e0-fd40-4b96-b0d4-98ff9957e819.

Abstract:
Smoothed Particle Hydrodynamics (SPH) is a simple and attractive meshless Lagrangian particle method for simulating free surface flows and has been widely applied to predicting wave impacts on coastal structures. However, despite the superior theoretical basis, the performance of existing incompressible SPH (ISPH) models based on either a density-invariant or a velocity-divergence-free formulation is often no better than that of the recently improved weakly compressible SPH models. This could be largely caused by the particular formulations of the Pressure Poisson Equation (PPE) source term in the existing ISPH models, and a better formulation of this source term can be expected to significantly improve the accuracy of the ISPH models. This thesis presents an improved ISPH method for wave impact applications by combining the density-invariant and velocity-divergence-free formulations in a weighted-average manner to form a general source term. The model is then applied to two problems: (1) dam-breaking wave impact on a vertical wall and (2) solitary wave run-up and impact on a coastal structure. The computational results indicate that the new source term treatment predicts the wave impact pressure and force more accurately than using either the density-invariant or the velocity-divergence-free formulation alone. It was further found that, depending on the application case, the influence of the density-invariant and velocity-divergence-free parts can be quite different. A simple parameterisation that relates the weighting coefficient α in the mixed pressure source term to the ratio of the characteristic height and length scales of the flow system is proposed and evaluated.
To gain further insight into the effects of the source term formulations on the impact pressure prediction, three more benchmark fluid impact problems, including two dam-break flows and one solitary wave impact, are investigated using the three different ISPH numerical schemes. The computational results are validated against either experimental data or numerical data based on WCSPH. The in-depth numerical analysis reveals that the pure density-invariant formulation can lead to relatively large divergence errors, while the velocity-divergence-free formulation may cause relatively large density errors. Compared with these two approaches, the mixed source term formulation performs much better, having the minimum total error in all test cases. Finally, the SPH model was applied to study wave interaction with a porous structure, investigating the flow motion in and around the structure. In order to correctly describe the flow through the interface between the porous region and the pure fluid region within the SPH framework, a heuristic boundary treatment method was proposed. The SPH model was validated against theoretical data for a wave propagating over a porous bed and further investigated by comparing the predicted wave surface profile and velocity results with experimental data for a typical case of flow inside a submerged porous structure. Good agreement is obtained between the numerical results and the experimental data. All of this demonstrates that the improved ISPH model developed in this work is capable of modelling wave interaction with porous structures.
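The weighted-average PPE source term described in the abstract can be sketched as below. Sign conventions and scaling differ between ISPH variants, so this follows one common form, and the particle state values are invented:

```python
def mixed_ppe_source(rho, rho0, div_v, dt, alpha):
    """Weighted PPE source term blending the density-invariant part
    (rho0 - rho) / (rho0 * dt**2) with the velocity-divergence-free
    part -div_v / dt, via a weighting coefficient alpha in [0, 1].
    alpha = 1 recovers the pure density-invariant scheme and
    alpha = 0 the pure divergence-free scheme."""
    density_invariant = (rho0 - rho) / (rho0 * dt * dt)
    divergence_free = -div_v / dt
    return alpha * density_invariant + (1.0 - alpha) * divergence_free

# hypothetical particle state: slight compression, positive divergence
s = mixed_ppe_source(rho=998.0, rho0=1000.0, div_v=0.5, dt=0.01, alpha=0.5)
```

The blend lets a single solver trade the density drift suppressed by the first term against the divergence errors suppressed by the second.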
Style APA, Harvard, Vancouver, ISO itp.
15

Sekar, Sanjana. "Logic Encryption Methods for Hardware Security". University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1505124923353686.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
16

Bilokon, O. M., i N. K. Stratiienko. "Automation of the process of forming of an IT-project team based on competency model (using logistics network project development as an example)". Thesis, NTU "KhPI", 2017. http://repository.kpi.kharkov.ua/handle/KhPI-Press/38252.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
17

Harayama, Tomohiro. "A method of Weil sum in multivariate quadratic cryptosystem". Texas A&M University, 2003. http://hdl.handle.net/1969.1/5938.

Pełny tekst źródła
Streszczenie:
A new cryptanalytic application of the number-theoretic Weil sum is proposed for the birthday attack against multivariate quadratic trapdoor functions. This new customization of the birthday attack is developed by evaluating the explicit Weil sum of the underlying univariate polynomial and the exact number of solutions of the associated bivariate equation. I designed and implemented new algorithms for computing Weil sum values so that I could explicitly identify a class of weak Dembowski-Ostrom polynomials and their equivalent forms in the multivariate quadratic trapdoor function. This customized attack, which can also be regarded as an equation-solving algorithm for systems of certain special quadratic equations over finite fields, is fundamentally different from the Gröbner basis methods. Theoretical observations and experiments show that the required computational complexity of the attack on these weak polynomial instances can be asymptotically less than the square-root complexity of the common birthday attack by a factor as large as 2^(n/8), where n is the extension degree of F_(2^n). I also suggest a few open problems that any MQ-based short signature scheme must explicitly take into account in its basic design principles.
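To make the central object concrete, here is a brute-force evaluation of a Weil sum over a small binary field. This is a toy illustration, not the thesis's algorithms: the irreducible modulus polynomial, the field size, and the example monomial are arbitrary choices.

```python
# Weil sum S(f) = sum over x in GF(2^n) of (-1)^Tr(f(x)),
# evaluated by exhaustive enumeration of a small field.

def gf_mul(a, b, n, mod):
    """Multiply a*b in GF(2^n), reducing by `mod` (bitmask of an
    irreducible polynomial of degree n, top bit included)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> n) & 1:
            a ^= mod
    return r

def trace(x, n, mod):
    """Absolute trace Tr(x) = x + x^2 + ... + x^(2^(n-1)); lies in GF(2)."""
    t, acc = x, x
    for _ in range(n - 1):
        t = gf_mul(t, t, n, mod)
        acc ^= t
    return acc  # 0 or 1

def weil_sum(f, n, mod):
    """(# of x with Tr(f(x)) = 0) minus (# with Tr(f(x)) = 1)."""
    return sum(1 if trace(f(x), n, mod) == 0 else -1 for x in range(1 << n))

# Example over GF(2^4) with modulus x^4 + x + 1 (0b10011):
N, MOD = 4, 0b10011
cube = lambda x: gf_mul(x, gf_mul(x, x, N, MOD), N, MOD)  # x^(2+1), a Dembowski-Ostrom-type monomial
```

For the linear map f(x) = x the trace is balanced, so the sum vanishes; for Dembowski-Ostrom polynomials the thesis exploits the restricted set of values such sums can take.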
Style APA, Harvard, Vancouver, ISO itp.
18

SANTOS, ROBINSON A. dos. "Otimização da metodologia de preparação do cristal de brometo de tálio para sua aplicação como detector de radiação". reponame:Repositório Institucional do IPEN, 2012. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10085.

Pełny tekst źródła
Streszczenie:
Master's dissertation
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
Style APA, Harvard, Vancouver, ISO itp.
19

Li, Xue. "A Novel Accurate Approximation Method of Lognormal Sum Random Variables". Wright State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=wright1229358144.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
20

Srinivasan, Narayanan. "Pretreatment of Guayule Biomass Using Supercritical CO2-based Method for Use as Fermentation Feedstock". University of Akron / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=akron1289782016.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
21

Vickers, Samantha Grace. "2D PASS-CPMG: A New NMR Method for Quantifying Structure in Non-Crystalline Solids". The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1238098414.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
22

Moffat, Lucky. "Location of sub-fresnel scale mineral targets in the subsurface /". Internet access available to MUN users only, 2004. http://collections.mun.ca/u?/theses,62334.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
23

Stakhursky, Vadim L. "Vibronic structure and rotational spectra of radicals in degenerate electronic state. Case of CH3O and asymmetrically deuterated isotopomers (CHD2O and CH2DO)". The Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=osu1117634691.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
24

Poon, Clarice Marie Hiu-Sze. "Recovery guarantees for generalized and sub-Nyquist sampling methods". Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709301.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
25

Birge, Jonathan R. (Jonathan Richards). "Methods for engineering sub-two-cycle mode-locked lasers". Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53192.

Pełny tekst źródła
Streszczenie:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 159-166).
We begin by presenting a method to efficiently solve for the steady-state solution of a nonlinear cavity, suitable for simulating a solid-state femtosecond laser. The algorithm directly solves the periodic boundary value problem by using a preconditioned Krylov-Newton shooting solver. The method can be applied to the design and study of mode-locked lasers, as well as the modeling of field enhancement cavities, such as those used in high harmonic generation. In contrast to the standard approach of dynamic simulation, which converges linearly, our algorithm converges quadratically to the stable solution, typically converging two to three orders of magnitude faster than the standard approach. The second major theme is the control of dispersion in mode-locked lasers. The predominant way to design dispersion compensating optics in the past has been a consideration of the integrated net group delay dispersion (GDD). We propose and implement an alternative spectral quantity based on the energy contained in phase distortions, which we term the Phase Distortion Ratio (PDR). Dispersion compensating mirrors optimized with respect to PDR generally perform significantly better than those where GDD is optimized. We demonstrate this in the design of a dispersion compensating mirror pair capable of compressing single-cycle pulses. In the final section, we deal with the unique challenges inherent to measuring sub-two-cycle pulses reliably and accurately. We have recently developed a technique, Two-Dimensional spectral Shearing Interferometry (2DSI), based on spectral shearing, which requires no calibration and does not disperse the pulse being measured.
Our method intuitively encodes spectral group delay in a slowly changing fringe in a two-dimensional interferogram. This maximizes use of spectrometer resolution, allowing for complex phase spectra to be measured with high accuracy over extremely large bandwidths, potentially exceeding an octave. We believe that 2DSI is a uniquely cost effective and efficient method for accurately and reliably measuring few- and even single-cycle pulses. While the method is relatively recent, it is well tested and has been successfully demonstrated on several different lasers in two different groups, including one producing 4.9 fs pulses.
by Jonathan R. Birge.
Ph.D.
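The quadratic-versus-linear convergence contrast described in the abstract can be illustrated with a miniature shooting solver: Newton's method applied to F(u) = P(u) - u, where P is the round-trip map. Everything here is a stand-in: the toy round-trip map, the dense finite-difference Jacobian (the thesis uses a preconditioned Krylov-Newton solver on a real cavity model), and all parameter values.

```python
import numpy as np

def roundtrip(u):
    """Toy nonlinear round-trip map with a stable steady state (illustrative only)."""
    return 0.5 * u + 0.1 * np.tanh(u) + 0.2

def newton_steady_state(P, u0, tol=1e-12, eps=1e-7, max_iter=50):
    """Solve the periodic boundary value problem u = P(u) by Newton's
    method on F(u) = P(u) - u, using a finite-difference Jacobian."""
    u = u0.astype(float)
    n = u.size
    for _ in range(max_iter):
        F = P(u) - u
        if np.linalg.norm(F) < tol:
            break
        J = np.empty((n, n))
        for j in range(n):                  # column-by-column FD Jacobian of F
            du = np.zeros(n)
            du[j] = eps
            J[:, j] = (P(u + du) - (u + du) - F) / eps
        u = u - np.linalg.solve(J, F)       # Newton update: quadratic convergence
    return u
```

Iterating `u = roundtrip(u)` directly (the "dynamic simulation" analogue) contracts the error only by a constant factor per pass, whereas the Newton update roughly squares the error each step near the solution.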
Style APA, Harvard, Vancouver, ISO itp.
26

Koh, Boon Ping. "Enhancement of device and sub-cellular structure modelling in the FDTD method". Thesis, University of Bristol, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.288270.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
27

Wang, Guangyu. "An MD-SPH Coupled Method for the Simulation of Reactive Energetic Materials". University of Cincinnati / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1491559185266293.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
28

Goodloe, John Bennett. "STANDARDIZED SUB-SCALE DYNAMOMETER SCALING METHOD FOR TRANSIT AND FREIGHT TRAIN APPLICATIONS". OpenSIUC, 2016. https://opensiuc.lib.siu.edu/theses/1899.

Pełny tekst źródła
Streszczenie:
AN ABSTRACT OF THE THESIS OF John Goodloe, for the Master of Science degree in Mechanical Engineering, presented on April 13, 2016, at Southern Illinois University Carbondale. TITLE: STANDARDIZED SUB-SCALE DYNAMOMETER SCALING METHOD FOR TRANSIT AND FREIGHT TRAIN APPLICATIONS. MAJOR PROFESSOR: Dr. Peter Filip. Dynamometers are machines used in several industries for measuring the force, torque, or power of a mechanism. These devices are very useful in the friction material industry: friction materials are created and then tested on dynamometers to analyze physical properties such as the dynamic coefficient of friction, based upon the material's retarding force against the wheel or disc mounted to the dynamometer drive shaft. Dynamometer testing is expensive and often time consuming. Sub-scale dynamometers may be used to reduce cost, time, and material use while providing similar test results, provided a proper scaling method is implemented. There are several scaling methods, but this approach uses surface analysis and the energy dispersed per surface contact area strategy to verify the testing conditions of both sub-scale and full-scale testing. Since lab analysis is expensive, the project budget restricts analysis to a maximum of 1 full-scale disc and pad specimen and 2 sub-scale disc and pad sets. The test results are expected to show that when the surface conditions of the analyzed specimens agree with each other, the dynamometer test results will also agree. Due to restrictions on budget and time, the fastest and most effective way to test this hypothesis is to create the baseline on the full-scale dynamometer and then adjust the scaling on the sub-scale dynamometer until similar results are obtained. Once similar dynamometer test results are obtained, the material specimens can be analyzed in the lab.
Testing will continue as long as necessary, and if the expected results are not obtained, the results will still be analyzed and compared to the baseline. The results are expected to show that two separate machines can provide similar surface conditions for testing, as well as similar dynamometer test results, for any given friction material. However, if the expected results cannot be obtained, this may still show that without matching surface-layer conditions during testing, the dynamometers' recorded test results will not match either, which is in agreement with the hypothesis.
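The energy-per-contact-area scaling idea described above can be sketched as follows: choose the sub-scale test inertia so that the energy dissipated per unit pad contact area matches the full-scale test. The function names, units, and numbers are illustrative assumptions, not values from the thesis.

```python
def kinetic_energy(mass_kg, speed_m_s):
    """Kinetic energy of the equivalent inertia brought to rest in a stop test."""
    return 0.5 * mass_kg * speed_m_s ** 2

def subscale_inertia(full_energy_j, full_area_m2, sub_area_m2, sub_speed_m_s):
    """Equivalent sub-scale inertia (expressed as a mass) that dissipates
    the same energy per unit contact area as the full-scale test."""
    sub_energy = full_energy_j * sub_area_m2 / full_area_m2
    return 2.0 * sub_energy / sub_speed_m_s ** 2
```

For example, a full-scale stop dissipating 50 kJ over a 0.04 m^2 pad, scaled to a 0.004 m^2 sub-scale pad at the same rubbing speed, calls for one tenth of the energy and hence one tenth of the inertia.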
Style APA, Harvard, Vancouver, ISO itp.
29

Zhu, Wei. "Molecular dynamics simulation of electrolyte solution flow in nanochannels and Monte Carlo simulation of low density CH3Cl monolayer on graphite". Columbus, Ohio : Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1072284612.

Pełny tekst źródła
Streszczenie:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xiv, 90 p.; also includes graphics. Includes abstract and vita. Advisor: Sherwin J. Singer, Dept. of Chemistry. Includes bibliographical references (p. 86-90).
Style APA, Harvard, Vancouver, ISO itp.
30

Cayan, Fatma Nihan. "The Method Of Lines Solution Of Discrete Ordinates Method For Nongray Media". Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/3/12607401/index.pdf.

Pełny tekst źródła
Streszczenie:
A radiation code based on the method of lines (MOL) solution of the discrete ordinates method (DOM) for the prediction of radiative heat transfer in nongray absorbing-emitting media was developed by incorporating two different gas spectral radiative property models, namely the wide band correlated-k (WBCK) and spectral line-based weighted sum of gray gases (SLW) models. The predictive accuracy and computational efficiency of the developed code were assessed by applying it to the prediction of source term distributions and net wall radiative heat fluxes in several one- and two-dimensional test problems, including isothermal/non-isothermal and homogeneous/non-homogeneous media of water vapor, carbon dioxide, or a mixture of both, and benchmarking its steady-state predictions against line-by-line (LBL) solutions and measurements available in the literature. In order to demonstrate the improvements brought by these two spectral models over the gray gas approximation, predictions obtained with them were also compared with those of the gray gas model. Comparisons reveal that the MOL solution of DOM with the SLW model produces the most accurate results for radiative heat fluxes and source terms, at the expense of computation time, when compared with the MOL solution of DOM with the WBCK and gray gas models. To gain insight into the conditions under which source term predictions obtained with the gray gas model are acceptably accurate for engineering applications when compared with those of the gas spectral radiative property models, a parametric study was also performed. Comparisons reveal reasonable agreement for problems containing low concentrations of absorbing-emitting media at low temperatures.
Overall evaluation of the performance of the radiation code developed in this study points out that it provides accurate solutions with SLW model and can be used with confidence in conjunction with computational fluid dynamics (CFD) codes based on the same approach.
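As background to the gray-gas-versus-spectral comparison above, weighted-sum-of-gray-gases closures evaluate a total emissivity of the form eps = sum_i a_i * (1 - exp(-kappa_i * L)) over a few gray gases. The sketch below shows only this generic form; the coefficients are made up for illustration and are not the WBCK or SLW model parameters used in the thesis.

```python
import math

def wsgg_emissivity(weights, kappas, path_length_m):
    """Generic weighted-sum-of-gray-gases total emissivity:
    eps = sum_i a_i * (1 - exp(-kappa_i * L)).
    `weights` a_i and absorption coefficients `kappas` (1/m) are illustrative."""
    return sum(a * (1.0 - math.exp(-k * path_length_m))
               for a, k in zip(weights, kappas))
```

The emissivity is zero at zero path length and increases monotonically toward the sum of the weights as the path length grows, which is the qualitative behavior any such gray-gas mixture must reproduce.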
Style APA, Harvard, Vancouver, ISO itp.
31

Wanitschke, Matthias. "Methoden und Menschenbild des Ministeriums für Staatssicherheit der DDR /". Köln [u.a.] : Böhlau, 2001. http://www.gbv.de/dms/sub-hamburg/33430346X.pdf.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
32

Fiala, Dalibor. "Web mining methods for the detection of authoritative sources". Université Louis Pasteur (Strasbourg) (1971-2008), 2007. https://publication-theses.unistra.fr/public/theses_doctorat/2007/FIALA_Dalibor_2007.pdf.

Pełny tekst źródła
Streszczenie:
The innovative portion of this doctoral thesis deals with the definitions, explanations and testing of modifications of the standard PageRank formula adapted for bibliographic networks. The new versions of PageRank take into account not only the citation graph but also the co-authorship graph. We verify the viability of the new algorithms by applying them to data from the DBLP digital library and by comparing the resulting ranks of the winners of the ACM SIGMOD E. F. Codd Innovations Award. The rankings based on both citation and co-authorship information turn out to be better than the standard PageRank ranking. In another part of the dissertation, we present a methodology and two case studies for finding authoritative researchers by analyzing academic Web sites.
The development of the information society in recent decades has made it possible to collect, filter and store enormous amounts of data. To obtain valuable information and knowledge from them, these data must be processed further. The scientific field concerned with extracting information and knowledge from data is evolving rapidly to keep pace with the growth of information sources, whose number has increased geometrically since the emergence of the World Wide Web. All traditional approaches from information retrieval, knowledge discovery and data mining have had to adapt to the dynamic, heterogeneous and unstructured data of the Web. Web mining has become a fully-fledged scientific discipline. The Web has many special properties, the most notable being its structure of links between pages: the Web is a dynamic, interconnected network, and Web pages contain links to other pages with similar content or to interesting or otherwise related documents. It was realized very early that the Web's link structure is an enormous source of information and that it represents a broad field of applications for social network analysis and mathematical graph theory. Brin and Page subjected the Web's link structure to intensive research and in 1998 published the now famous article "The anatomy of a large-scale hypertextual Web search engine", introducing Google, a Web search engine for everyone. Google's success rests above all on its page ranking algorithm, PageRank, which uses the structure of the Web to recursively find popular, important, significant and authoritative sources. The published technical description of PageRank triggered a flood of further papers on link-based methods, which eventually gave rise to an entirely new family of algorithms: ranking algorithms. Each method has its own particularities and copes with certain problems. Although ranking algorithms were originally devised for the Web, they are applicable in any environment that can be modelled as a graph. 
The innovative part of this doctoral thesis deals with the definitions, explanation and testing of modifications of the standard PageRank formula adapted for bibliographic networks. The resulting new versions of PageRank take into account not only the citation graph but also the co-authorship graph. We verify the applicability of the new algorithms by applying them to data from the DBLP digital library, and compare the resulting rankings of significant authors with the recipients of the ACM SIGMOD E. F. Codd Innovations Award. We show that rankings based on both citations and collaborations give better results than standard PageRank. In another part of the dissertation we present a methodology and two case studies of finding authoritative researchers by analysing university Web sites. The first study focuses on the Web pages of Czech computer science departments: we examine the links between individual departments, identify the most important ones using several common ranking methods, then analyse the content of the research publications found on those pages and determine the most prominent Czech authors. In the second case study we carry out the same procedure on French university Web sites to find the most significant French computer science researchers. We also discuss the weaknesses of our approach and propose several future improvements. To the best of our knowledge, these studies are the only published attempt so far to find authoritative researchers from either country by direct mining of Web data.
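The citation-plus-co-authorship ranking idea can be sketched as standard PageRank run on a convex combination of the two adjacency matrices. This is an illustrative reading of the abstract, not the thesis's exact formulas; the mixing weight `alpha` and the damping factor `d` are assumptions.

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-12, max_iter=1000):
    """Standard PageRank by power iteration.
    adj[i, j] is the weight of the link from node j to node i."""
    n = adj.shape[0]
    out = adj.sum(axis=0)                       # total out-weight of each node
    safe = np.where(out == 0, 1.0, out)
    M = np.where(out > 0, adj / safe, 1.0 / n)  # dangling nodes link uniformly everywhere
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - d) / n + d * M @ r
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r_new

def combined_rank(citations, coauthorship, alpha=0.7):
    """PageRank on a convex combination of the citation and co-authorship graphs."""
    return pagerank(alpha * citations + (1 - alpha) * coauthorship)
```

On a small author graph, an author who is both cited and well connected by collaborations accumulates rank from both channels, which is the effect the thesis evaluates against the Codd Award winners.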
Style APA, Harvard, Vancouver, ISO itp.
33

Dobson, Andrew. "Seismic modelling for the sub-basalt imaging problem including an analysis and development of the boundary element method". Thesis, University of Edinburgh, 2005. http://hdl.handle.net/1842/765.

Pełny tekst źródła
Streszczenie:
The north-east Atlantic margin (NEAM) is important for hydrocarbon exploration because of the growing evidence of hydrocarbon reserves in the region. However, seismic exploration of the sub-surface is hampered by large deposits of flood basalts, which cover possible hydrocarbon-bearing reservoirs underneath. There are several hypotheses as to why imaging beneath basalt is a problem. These include: the high impedance contrast between the basalt and the layers above; the thin-layering of the basalt due to the many flows which make up a basalt succession; and the rough interfaces on the top-basalt interface caused by weathering and emplacement mechanisms. I perform forward modelling to assess the relative importance of these factors for imaging of sub-basalt reflections. The boundary element method (BEM) is used for the rough-interface modelling. The method was selected because only the interfaces between layers need to be discretized, in contrast to grid methods such as finite difference for which the whole model needs to be discretized, and so should lead to fast generation of shot gathers for models which have only a few homogeneous layers. I have had to develop criteria for accurate modelling with the boundary element method and have considered the following: source near an interface, two interfaces close together, removal of model edge effects and precise modelling of a transparent interface. I have improved efficiency of my code by: resampling the model so that fewer discretization elements are required at low frequencies, and suppressing wrap-around so that the time window length can be reduced. I introduce a new scheme which combines domain decomposition and a far-field approximation to improve the efficiency of the boundary element code further. I compare performance with a standard finite difference code. 
I show that the BEM is well suited to seismic modelling in an exploration environment when there are only a few layers in the model and when a seismic profile containing many shot gathers for one model is required. For many other cases the finite difference code is still the best option. The input models for the forward modelling are based on real seismic data which were acquired in the Faeroe-Shetland Channel in 2001. The modelling shows that roughness on the surface of the basalt has little effect on the imaging in this particular area of the NEAM. The thin layers in the basalt act as a low-pass filter on the seismic wave. For the real-data acquisition, even the top-basalt reflection is a low frequency event. This is most likely due to high attenuation in the layers above the basalt. I show that sea-surface multiple energy is considerable and that it could mask possible sub-basalt events on a seismic shot gather, but any shallow sub-basalt events should still be visible even in the presence of multiple energy. This leaves the possibility that there is only one major stratigraphic unit between the base of the basalt and the crystalline basement. The implication of the forward modelling and real data analysis for acquisition is that the acquisition parameters must emphasize the low frequencies, since the high frequencies are attenuated before they even reach the top-basalt interface. The implication for processing is that multiple removal is of prime importance.
Style APA, Harvard, Vancouver, ISO itp.
34

Karunasena, H. C. P. "Numerical simulation of micro-scale morphological changes of plant food materials during drying: A meshfree approach". Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/76526/1/H.C.P.%20Karunasena%20Thesis.pdf.

Pełny tekst źródła
Streszczenie:
This thesis developed a high-performing alternative numerical technique to investigate microscale morphological changes of plant food materials during drying. The technique is based on a novel meshfree method and is more capable of modeling large deformations of multiphase problem domains than conventional grid-based numerical modeling techniques. The developed cellular model can effectively replicate dried-tissue morphological changes such as shrinkage and cell wall wrinkling, as influenced by moisture reduction and turgor loss.
Style APA, Harvard, Vancouver, ISO itp.
35

Freitas, Paula Renata de Morais Gomes. "Uma apresentação dos métodos de Pontos Interiores na radioterapia e sua comparação com o método Simplex". Universidade Federal de São Carlos, 2017. https://repositorio.ufscar.br/handle/ufscar/9252.

Pełny tekst źródła
Streszczenie:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
This work presents the Interior Point Methods and compares them with the Simplex Method when applied to problems concerning the optimal concentration of radiation in cancer treatment by radiotherapy. The optimal concentration corresponds to the highest radiation intensity associated with the least damage to vital organs. This dissertation was based on works on radiotherapy treatment, with the aim of comparing two methods widely used to find an optimal concentration.
CAPES: 5564161
Style APA, Harvard, Vancouver, ISO itp.
36

Piché, Steffanie. "Numerical Modeling of Tsunami Bore Attenuation and Extreme Hydrodynamic Impact Forces Using the SPH Method". Thèse, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/30456.

Pełny tekst źródła
Streszczenie:
Understanding the impact of coastal forests on the propagation of rapidly advancing onshore tsunami bores is difficult due to the complexity of this phenomenon and the large number of parameters which must be considered. The research presented in this thesis focuses on understanding the protective effect of the coastal forest on the forces generated by the tsunami and its ability to reduce the propagation and velocity of the incoming tsunami bore. Interest in this method of protecting the coast from tsunamis rests on the effectiveness of the forest and its ability to withstand the impact forces caused by both the bore and the debris carried along by it. The devastation caused by tsunamis has been investigated in recent examples such as the 2011 Tohoku Tsunami in Japan and the 2004 Indian Ocean Tsunami. This research examines the reduction of the spatial extent of the tsunami bore inundation and runup due to the presence of the coastal forest, and attempts to quantify the impact forces induced by tsunami bores and debris impact on structures. This research work was performed using a numerical model based on the Smoothed Particle Hydrodynamics (SPH) method, which is a single-phase three-dimensional model. The simulations performed in this study were separated into three sections. The first section focused on the reduction of the extent of the tsunami inundation and the magnitude of the bore velocity by the coastal forest. This section included the analysis of the hydrodynamic forces acting on the individual trees. The second section involved the numerical modeling of some of the physical laboratory experiments performed by researchers at the University of Ottawa, in cooperation with colleagues from the Ocean, Coastal and River Engineering Lab at the National Research Council, Ottawa, in an attempt to validate the movement and impact forces of floating driftwood on a column.
The final section modeled the movement and impact of floating debris traveling through a large-scale model of a coastal forest.
Style APA, Harvard, Vancouver, ISO itp.
37

Holden, Seamus J. "Improved methods for sub-diffraction-limit single-molecule fluorescence measurements". Thesis, University of Oxford, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.543548.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
38

LACOME, JEAN-LUC. "Analyse de la methode particulaire sph applications a la detonique". Toulouse, INSA, 1998. http://www.theses.fr/1998ISAT0006.

Pełny tekst źródła
Streszczenie:
We propose an analysis of the SPH (Smoothed Particle Hydrodynamics) particle method. The absence of a grid leads to some attractive features, such as the ability to handle large deformations in a purely Lagrangian setting. We first present consistency results for the different scatter and gather approximations. We then propose a new conservative particle approximation model. Applications to the Euler equations and to continuum mechanics are presented. Since certain continuum mechanics problems exhibit strong anisotropies, we develop a tensorial smoothing length. Numerical tests complete the theoretical analysis of the method.
Style APA, Harvard, Vancouver, ISO itp.
39

Yang, Ya-Mei. "Statistical Methods for Integrating Multiple CO2 Leak Detection Techniques at Geologic Sequestration Sites". Research Showcase @ CMU, 2011. http://repository.cmu.edu/dissertations/25.

Pełny tekst źródła
Streszczenie:
Near-surface monitoring is an essential component of leak detection at geologic CO2 sequestration sites. With different strengths and weaknesses for every monitoring technique, an integrated system of leak detection monitoring methods is needed to combine the information provided by different techniques deployed at a site, and no current methodology exists that allows one to quantitatively combine the results from different monitoring technologies and optimize their design. More importantly, an evaluation that is able to provide the assessment of possible size of a leak based on the multiple monitoring results further helps the managers and decision makers to know whether the unexpected leakage event is smaller than the required annual seepage rate for effective long-term storage. The proposed methodology for this application is the development and use of a Bayesian belief network (BBN) for combining measurements from multiple leak detection technologies at a site. The Bayesian Belief Network for CO2 leak detection is built through an integrated application of a subsurface model for CO2 migration under different site conditions; field-generated background information on several monitoring techniques; and statistical methods for processing the field background data to infer the leak detection threshold for each monitoring technique and the conditional probability values used in the BBN. 
Several statistical methods are applied to estimate the detection thresholds and the conditional probabilities, including (1) Bayesian methods for characterizing the natural background (pre-injection) conditions of the techniques for leak detection, (2) the combination of the characterization of the background monitoring results and the simulated CO2 migration for estimating the probability of leak detection for each monitoring technique given the size of leak, (3) a probabilistic design of CO2 leak detection for estimating the detection probability of a monitoring technique under different site conditions and monitoring densities, (4) a Bayesian belief network for combining measurements from multiple leak detection technologies at an actual test site, with the site conditions and the probability distributions of leak detection and leakage rate estimated for the site. The BBN model is built for the Zero Emissions Research and Technology (ZERT) test site in Montana. The monitoring techniques considered in this dissertation include soil CO2 flux measurement and PFC tracer monitoring. The possible near-surface CO2 and PFC tracer flux rates as a function of distance from a leakage point are simulated by TOUGH2, given different leakage rates and permeabilities. The natural near-surface CO2 flux and background PFC tracer concentration measured at the ZERT site are used to determine critical values for leak inference and to calculate the probabilities of leak detection given a monitoring network. A BBN of leak detection is established by combining the TOUGH2 simulations and the background characterization of near-surface CO2 flux and PFC tracer at the sequestration site. The BBN model can be used as an integrated leak detection tool at a geologic sequestration site, improving prediction precision and inferring the possible leak distribution by combining the information from multiple leak detection techniques.
Moreover, the BBN model can also be used to evaluate each monitoring technique deployed at a site and to determine the performance of a proposed monitoring network design using a single technique or multiple techniques.
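The way a BBN fuses evidence from several detection techniques can be sketched with a minimal Bayes-rule update over leak states; all probabilities below are hypothetical placeholders for illustration, not values from the ZERT site:

```python
def posterior_leak(prior, likelihoods, observations):
    """
    Combine monitoring results assumed conditionally independent given the
    leak state, via Bayes' rule.
    prior: {state: P(state)}
    likelihoods: {technique: {state: P(detect | state)}}
    observations: {technique: True/False detection result}
    """
    post = {}
    for state, p in prior.items():
        like = 1.0
        for tech, detected in observations.items():
            p_det = likelihoods[tech][state]
            like *= p_det if detected else (1.0 - p_det)
        post[state] = p * like
    z = sum(post.values())
    return {s: v / z for s, v in post.items()}

# Hypothetical numbers for illustration only (not from the dissertation)
prior = {"no_leak": 0.95, "small_leak": 0.04, "large_leak": 0.01}
likelihoods = {
    "co2_flux":   {"no_leak": 0.05, "small_leak": 0.40, "large_leak": 0.90},
    "pfc_tracer": {"no_leak": 0.02, "small_leak": 0.60, "large_leak": 0.95},
}
post = posterior_leak(prior, likelihoods, {"co2_flux": True, "pfc_tracer": True})
print(post["no_leak"] < 0.5)  # True: two detections make a leak the likely explanation
```

A full BBN additionally conditions on site parameters (permeability, monitoring density) and a leakage-rate node, but the evidence-combination step has this same shape.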
Style APA, Harvard, Vancouver, ISO itp.
40

Ichino, Y., R. Honda, M. Miura, M. Itoh, Y. Yoshida, Y. Takai, K. Matsumoto, M. Mukaida i A. Ichinose. "Microstructure and field angle dependence of critical current densities in REBa2Cu3Oy thin films prepared by PLD method". IEEE, 2005. http://hdl.handle.net/2237/6777.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
41

Aras, Raghav. "Mathematical programming methods for decentralized POMDPs". Thesis, Nancy 1, 2008. http://www.theses.fr/2008NAN10092/document.

Pełny tekst źródła
Streszczenie:
In this thesis, we study the problem of the optimal decentralized control of a partially observed Markov process over a finite horizon. The mathematical model corresponding to the problem is a decentralized POMDP (DEC-POMDP). Many practical problems from the domains of artificial intelligence and operations research can be modeled as DEC-POMDPs. However, solving a DEC-POMDP exactly is intractable (NEXP-hard). The development of exact algorithms is necessary in order to guide the development of approximate algorithms that can scale to practical-sized problems. Existing algorithms are mainly inspired by POMDP research (dynamic programming and forward search) and require an inordinate amount of time for even very small DEC-POMDPs. In this thesis, we develop a new mathematical programming based approach for exactly solving a finite-horizon DEC-POMDP. We use the sequence form of a control policy in this approach. Using the sequence form, we show how the problem can be formulated as a mathematical program with a nonlinear objective and linear constraints. We then show how this nonlinear program can be linearized to a 0-1 mixed integer linear program (MIP). We present two different 0-1 MIPs based on two different properties of a DEC-POMDP. Computational experience with the mathematical programs presented in the thesis on four benchmark problems (MABC, MA-Tiger, Grid Meeting, Fire Fighting) shows that the time taken to find an optimal joint policy is one to two orders of magnitude less than with existing exact algorithms. In the problems tested, the time taken drops from several hours to a few seconds or minutes.
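A standard device behind such 0-1 MIP linearizations (shown generically here, not the thesis's actual sequence-form construction) replaces a bilinear product y = x·z of a binary x and a bounded continuous z ∈ [0, U] with four linear constraints, which can be checked exhaustively:

```python
def product_linearization_feasible(x, z, y, U, tol=1e-9):
    """Feasibility of the standard linear constraints that replace y = x*z
    for binary x in {0,1} and continuous z in [0, U]:
    y <= U*x,  y <= z,  y >= z - U*(1 - x),  y >= 0."""
    return (y <= U * x + tol and y <= z + tol
            and y >= z - U * (1 - x) - tol and y >= -tol)

U = 10.0
ok = True
for x in (0, 1):
    for z in (0.0, 3.7, 10.0):
        # The value y = x*z satisfies the constraints...
        ok &= product_linearization_feasible(x, z, x * z, U)
        # ...and any other candidate value in [0, U] violates them.
        for bad in (x * z + 0.5, x * z - 0.5):
            if 0.0 <= bad <= U:
                ok &= not product_linearization_feasible(x, z, bad, U)
print(ok)  # True: the linear constraints encode the bilinear product exactly
```

When x = 1 the constraints pinch y to z; when x = 0 they pinch y to 0, so no integrality gap is introduced by this step, only by the binary variables themselves.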
Style APA, Harvard, Vancouver, ISO itp.
42

Rockstuhl, Carsten. "Properties of light fields near sub-micro and nano-scale structures". Allensbach UFO, Atelier für Gestaltung und Verl, 2004. http://doc.rero.ch/lm.php?url=1000,40,4,20050413151549-MZ/1t̲heseR̲ockstuhlC.pdf.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
43

Thiam, Abdoulaye. "A rigorous numerical method for the proof of Galaktionov-Svirshchevskii's conjecture". Master's thesis, Université Laval, 2016. http://hdl.handle.net/20.500.11794/26595.

Pełny tekst źródła
Streszczenie:
The theory of dynamical systems studies phenomena which evolve in time. More precisely, a dynamical system is given by the following data: a phase space whose points correspond to the possible states of the system under consideration, and an evolution law describing the infinitesimal (for continuous time) or one-step (for discrete time) change in the state of the system. The goal of the theory is to understand the long-term evolution of the system. In this work, we introduce a new method for solving piecewise linear systems with computer-assisted proofs in the context of realistic linear models. After introducing some properties of the theory of ordinary differential equations, we provide a rigorous computational method for finding the periodic solution of Galaktionov-Svirshchevskii's conjecture. We reformulate the problem as an initial value problem, compute the periodic solution in the positive domain, and deduce the other solution by symmetry. Our result settles one part of Conjecture 3.2 in the book by Victor A. Galaktionov & Sergey R. Svirshchevskii: Exact Solutions and Invariant Subspaces of Nonlinear Partial Differential Equations in Mechanics and Physics [Chapman & Hall/CRC, applied mathematics and nonlinear science series, (2007)]. Key words: Galaktionov-Svirshchevskii's conjecture, interval analysis, contraction mapping theorem, radii polynomials.
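The contraction-mapping machinery behind such computer-assisted proofs can be illustrated on a toy problem (my own example, unrelated to the conjecture itself): verify that g(x) = cos(x) has a unique fixed point in an interval by checking that g maps the interval into itself and that a derivative bound makes it a contraction, then invoking the Banach contraction mapping theorem:

```python
import math

def verify_contraction_fixed_point(a, b, pad=1e-12):
    """
    Toy verification in the spirit of interval-arithmetic proofs:
    g(x) = cos(x) has a unique fixed point in [a, b] (0 < a < b < pi/2) if
    (1) g([a, b]) = [cos b, cos a] is contained in [a, b] (cos is decreasing),
    (2) sup |g'| = sin(b) < 1 on [a, b] (sin is increasing there).
    `pad` crudely accounts for floating-point rounding; a real proof would
    use outward-rounded interval arithmetic throughout.
    """
    lo, hi = math.cos(b) - pad, math.cos(a) + pad
    maps_into = (a <= lo) and (hi <= b)
    contracting = math.sin(b) + pad < 1.0
    return maps_into and contracting

print(verify_contraction_fixed_point(0.73, 0.75))  # True: the fixed point of cos lies in [0.73, 0.75]
```

The radii-polynomial approach mentioned in the key words systematizes exactly this pattern: it packages the self-mapping and contraction estimates into polynomial inequalities whose positivity certifies a ball on which the fixed-point operator contracts.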
Style APA, Harvard, Vancouver, ISO itp.
44

Nascimento, Ernani Carvalho do. "O percurso epistemológico de René Descartes em sua busca pela verdade". Universidade Federal do Espírito Santo, 2013. http://repositorio.ufes.br/handle/10/6271.

Pełny tekst źródła
Streszczenie:
This work aimed to investigate the intellectual route of the modern philosopher René Descartes in grounding a new theory of knowledge in the face of the emergence of modernity. Descartes sought to formulate secure philosophical bases for the understanding and practice of the experimental science of the seventeenth century. To accomplish the aim of this research, we analyzed Cartesian thought in its historical context. The philosopher establishes the need to find, in the structure of the thinking subject (res cogitans), and later in the proposition of a truthful God, the rational capacity to guarantee truth in the scientific method. Our conclusion is that Cartesian philosophy has, through its rational and analytical method, the function of making clear what is confused. This research proceeds from the assumptions contained in the Metaphysical Meditations, taken as secure bases and sufficient philosophical foundations for the knowledge and practice of modern science
Style APA, Harvard, Vancouver, ISO itp.
45

Grobotek, Daniel [Verfasser]. "Lacktrocknungssimulation mit Kalibrationsmethoden : Modifikation von Düsensichtfaktoren mittels SPH-Methode / Daniel Grobotek". Aachen : Shaker, 2015. http://d-nb.info/1069046817/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
46

Messelink, W. A. C. M. "Numerical methods for the manufacture of optics using sub-aperture tools". Thesis, University College London (University of London), 2015. http://discovery.ucl.ac.uk/1471480/.

Pełny tekst źródła
Streszczenie:
Moore's law, predicting a doubling of transistor count per microprocessor every two years, remains valid, demonstrating exponential growth of computing power. This thesis examines the application of numerical methods to aid optical manufacturing for a number of case-studies related to the use of sub-aperture tools. One class of sub-aperture tools consists of rigid tools which are well suited to smooth surfaces. Their rigidity leads to mismatch between the surfaces of tool and aspheric workpieces. A novel numerical method is introduced to analyse the mismatch qualitatively and quantitatively, with the advantage that it can readily be applied to aspheric or free-form surfaces for which an analytical approach is difficult or impossible. Furthermore, rigid tools exhibit an edge-effect due to the change in pressure between tool and workpiece when the tool hangs over the edge. An FEA model is introduced that simulates the tool and workpiece as separate entities and models the contact between them, in contrast to the non-contact, single-entity model reported in the literature. This model is compared to experimental results. Another class of sub-aperture processes does not use physical tools to press abrasives onto the surface. A numerical analysis of one such process, Fluid Jet Polishing, is presented; this work was carried out in collaboration with Chubu University. Numerical design of surfaces, required for generating tool-paths, is investigated, along with validation techniques for two test-cases: E-ELT mirror segments and IXO mirror segment slumping moulds. Conformal tools are not well suited to correcting surface errors with dimensions smaller than the contact area between tool and workpiece. A method with considerable potential is developed to analyse spatial-frequency error-content, and used to change the size of the contact area during a process run, as opposed to the constant-sized contact area that is state-of-the-art.
These numerical methods reduce dependence on empirical data and operator experience, constituting important steps towards the ultimate and ambitious goal of fully-integrated process-automation.
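Sub-aperture removal is commonly modeled as a convolution of dwell time with a tool influence function (Preston's law in discrete form). The following hypothetical 1-D sketch is not the thesis's FEA model, but it illustrates why a rigid tool's truncated footprint at the workpiece edge produces the edge-effect discussed above:

```python
def removal_map(dwell, influence):
    """Material removal as the discrete convolution of dwell time with a
    tool influence function (the standard sub-aperture polishing model)."""
    n, m = len(dwell), len(influence)
    half = m // 2
    out = [0.0] * n
    for i, t in enumerate(dwell):
        for k, w in enumerate(influence):
            j = i + k - half
            if 0 <= j < n:       # contributions beyond the part are lost
                out[j] += t * w
    return out

# Hypothetical 1-D example: a bell-shaped influence spot, uniform dwell
influence = [0.1, 0.2, 0.4, 0.2, 0.1]   # removal-rate profile of the tool spot
dwell = [1.0] * 9                        # time spent at each raster position
removal = removal_map(dwell, influence)
print(abs(removal[4] - 1.0) < 1e-9)  # True: interior point sees the full spot
print(removal[0] < removal[4])       # True: edge sees a truncated spot (edge effect)
```

In practice the influence function near the edge also changes shape, not just support, because the contact pressure redistributes when the tool overhangs, which is what the contact FEA model above captures.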
Style APA, Harvard, Vancouver, ISO itp.
47

Fougeron, Gabriel. "Contribution to the improvement of meshless methods applied to continuum mechanics". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC068/document.

Pełny tekst źródła
Streszczenie:
This thesis introduces a general framework for the study of nodal meshless discretization schemes. Its fundamental objects are the discrete operators defined on a point cloud: volume and boundary integration, discrete gradient and reconstruction operator. These definitions endow the point cloud with a weaker structure than that defined by a mesh, but share several fundamental concepts with it, the most important of them being integration-differentiation compatibility. Along with linear consistency of the discrete gradient, this discrete analogue of Stokes's formula is a necessary condition for the linear consistency of weakly discretized elliptic operators. Its satisfaction, at least in an approximate fashion, yields optimally convergent discretizations. However, building compatible discrete operators is so difficult that we conjecture – without managing to prove it – that it either requires solving a global linear system or building a mesh. We dub this conjecture the "meshless curse". Three main approaches for the construction of discrete meshless operators are studied. Firstly, we propose a correction method seeking the closest compatible gradient – in the least squares sense – that does not hurt linear consistency. In the special case of MLS gradients, we show that the corrected gradient is globally optimal. Secondly, we adapt the SFEM approach to our meshless framework and notice that it defines first-order consistent compatible operators. We propose a discrete integration method exploiting the topological relation between cells and faces of a mesh that preserves these characteristics. Thirdly, we show that it is possible to generate each of the meshless operators from a nodal discrete volume integration formula. This is made possible by exploiting the functional dependency of nodal volume weights with respect to node positions, the continuous underlying space and the total number of nodes.
Consistency of the operators is characterized in terms of the initial volume weights, effectively constituting guidelines for the design of proper integration formulae. In this framework, we re-interpret the classical stabilization methods of the SPH community as actually seeking to cancel the error on the discrete version of Stokes's formula. The example of SFEM operators has a volume-based equivalent, and so does its discrete mesh-based integration. Actually computing it requires a very precise description of the geometry of cells of the mesh, in particular in the case where its faces are not planar. We thus fully characterize the shape of such cells, only as a function of the nodes of the mesh and topological relations between cells, allowing unambiguous definition of their volumes and centroids. Finally, we describe meshless discretization schemes for elliptic partial differential equations. We propose several alternatives for the treatment of boundary conditions, with the concern of imposing as few constraints on the nodes of the point cloud as possible. We give numerical results confirming the crucial importance of verifying the compatibility conditions, at least in an approximate fashion. This simple guideline systematically allows the recovery of optimal convergence rates for the studied discretizations
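The integration-differentiation compatibility condition has a simple 1-D analogue: with nodal volume weights w_i and a discrete gradient D, the discrete Stokes formula requires sum_i w_i (Df)_i = f(b) - f(a). A minimal sketch (my own illustration, not the thesis's operators) using trapezoidal weights and finite-difference gradients:

```python
def discrete_gradient(xs, fs):
    """Central differences inside the domain, one-sided at the two ends."""
    n = len(xs)
    g = [0.0] * n
    g[0] = (fs[1] - fs[0]) / (xs[1] - xs[0])
    g[-1] = (fs[-1] - fs[-2]) / (xs[-1] - xs[-2])
    for i in range(1, n - 1):
        g[i] = (fs[i + 1] - fs[i - 1]) / (xs[i + 1] - xs[i - 1])
    return g

def stokes_residual(xs, f):
    """Error in the 1-D discrete Stokes formula,
    sum_i w_i (Df)_i - (f(b) - f(a)), with trapezoidal volume weights."""
    n = len(xs)
    w = [0.0] * n
    for i in range(n - 1):
        h = xs[i + 1] - xs[i]
        w[i] += h / 2.0
        w[i + 1] += h / 2.0
    fs = [f(x) for x in xs]
    g = discrete_gradient(xs, fs)
    return sum(wi * gi for wi, gi in zip(w, g)) - (fs[-1] - fs[0])

xs = [i / 20.0 for i in range(21)]  # uniform nodes on [0, 1]
print(abs(stokes_residual(xs, lambda x: 2.0 * x + 1.0)) < 1e-9)  # exact for linear fields
print(abs(stokes_residual(xs, lambda x: x ** 3)) < 1e-2)         # small for smooth fields
```

On scattered point clouds no such mesh-derived weights are available, and making this residual vanish (or stay small) for general node layouts is precisely the compatibility problem the thesis studies.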
Style APA, Harvard, Vancouver, ISO itp.
48

Reddi, Sashank Jakkam. "New Optimization Methods for Modern Machine Learning". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1116.

Pełny tekst źródła
Streszczenie:
Modern machine learning systems pose several new statistical, scalability, privacy and ethical challenges. With the advent of massive datasets and increasingly complex tasks, scalability has especially become a critical issue in these systems. In this thesis, we focus on fundamental challenges related to scalability, such as computational and communication efficiency, in modern machine learning applications. The underlying central message of this thesis is that classical statistical thinking leads to highly effective optimization methods for modern big data applications. The first part of the thesis investigates optimization methods for solving large-scale nonconvex Empirical Risk Minimization (ERM) problems. Such problems have surged into prominence, notably through deep learning, and have led to exciting progress. However, our understanding of optimization methods suitable for these problems is still very limited. We develop and analyze a new line of optimization methods for nonconvex ERM problems, based on the principle of variance reduction. We show that our methods exhibit fast convergence to stationary points and improve the state-of-the-art in several nonconvex ERM settings, including nonsmooth and constrained ERM. Using similar principles, we also develop novel optimization methods that provably converge to second-order stationary points. Finally, we show that the key principles behind our methods can be generalized to overcome challenges in other important problems such as Bayesian inference. The second part of the thesis studies two critical aspects of modern distributed machine learning systems — asynchronicity and communication efficiency of optimization methods. We study various asynchronous stochastic algorithms with fast convergence for convex ERM problems and show that these methods achieve near-linear speedups in sparse settings common to machine learning. 
Another key factor governing the overall performance of a distributed system is its communication efficiency. Traditional optimization algorithms used in machine learning are often ill-suited for distributed environments with high communication cost. To address this issue, we discuss two different paradigms to achieve communication efficiency of algorithms in distributed environments and explore new algorithms with better communication complexity.
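As a concrete instance of the variance-reduction principle underlying this line of work, the canonical SVRG update (a standard method, sketched here on a toy convex problem rather than the nonconvex ERM settings the thesis targets) can be written as:

```python
import random

def svrg(grad_i, w0, n, lr=0.1, epochs=20, inner=None, seed=0):
    """
    SVRG (stochastic variance-reduced gradient): each inner step uses
    grad_i(w) - grad_i(w_snap) + full_grad(w_snap), an unbiased gradient
    estimate whose variance vanishes as w approaches the optimum.
    """
    rng = random.Random(seed)
    inner = inner or n
    w = w0
    for _ in range(epochs):
        w_snap = w
        full = sum(grad_i(w_snap, i) for i in range(n)) / n  # full gradient at snapshot
        for _ in range(inner):
            i = rng.randrange(n)
            g = grad_i(w, i) - grad_i(w_snap, i) + full
            w = w - lr * g
    return w

# Toy 1-D least squares: f(w) = (1/n) sum_i 0.5 * (w - a_i)^2, minimizer = mean(a)
a = [1.0, 2.0, 3.0, 4.0]
grad_i = lambda w, i: w - a[i]
w_star = svrg(grad_i, w0=0.0, n=len(a))
print(abs(w_star - 2.5) < 1e-3)  # True: converges to the mean
```

Unlike plain SGD, this scheme allows a constant step size, which is the property the nonconvex analyses in the thesis extend to reach stationary points of ERM objectives faster.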
Style APA, Harvard, Vancouver, ISO itp.
49

Lindvall, Mattias. "Studies towards a general method for attachment of a nuclear import signal. Stabilization of the m3G-Cap". Thesis, Mälardalen University, School of Sustainable Development of Society and Technology, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-9728.

Pełny tekst źródła
Streszczenie:

A synthetic pathway towards the cap structure of 2,2,7-trimethylguanosine containing a methylene-modified triphosphate bridge has been investigated. The modification of the triphosphate bridge is hoped to slow down cap degradation and give the connected oligonucleotide an increased lifetime. This could result in a better understanding of the nuclear transport of oligonucleotides and could thereby help to develop new treatments for different diseases. The synthesis relies on a coupling reaction between the 2,2,7-trimethylguanosine 5’-phosphate and 2’-O-methyladenosine with a 5’-pyrophosphate in which the central oxygen has been replaced by a methylene group. The reaction pathway consists of 9 steps, of which 8 have been successfully performed. The last step, which includes a coupling reaction, was attempted but without successful identification and isolation of the cap structure, and will need further attention. The reactions have been performed on a milligram scale with varying yields.


Presentation performed
Style APA, Harvard, Vancouver, ISO itp.
50

Ball, Sabine. "Évaluation d'un nouveau procédé appliqué à la recherche d'agglutinines irrégulières : la filtration sur microbilles ; comparaison avec la filtration en gel". Strasbourg 1, 1992. http://www.theses.fr/1992STR15069.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.