Dissertations / Theses on the topic 'Lattice naturale'


Consult the top 37 dissertations / theses for your research on the topic 'Lattice naturale.'


1

MASCIA, CLAUDIA. "Studio di componenti del lattice di Euphorbia characias." Doctoral thesis, Università degli Studi di Cagliari, 2015. http://hdl.handle.net/11584/266809.

Full text
Abstract:
The aim of the present study was to contribute to the knowledge of the components of the latex, the milky fluid exuded by the Mediterranean shrub Euphorbia characias. The antioxidant and acetylcholinesterase-inhibitory activities of the latex were explored. The latex was extracted using several conventional methods and a new procedure based on trichloroacetic acid. The results of this research indicate that E. characias latex exhibits antioxidant activity, is rich in polyphenols and flavonoids, and substantially inhibits acetylcholinesterase. A natural rubber was identified and characterized for the first time in the latex of E. characias. Four different extraction methods, i.e. acetone, acetic acid, trichloroacetic acid and Triton® X-100, each followed by successive treatments with cyclohexane/ethanol, were employed to isolate the natural rubber. The highest yield (14%) was achieved after extraction with acetic acid. E. characias rubber showed a molecular weight of 93,000 with an Mw/Mn of 2.9. 1H NMR, 13C NMR and FTIR analyses revealed the cis-1,4-polyisoprene structure typical of natural rubber. The enzyme catalyzing rubber chain elongation is a cis-prenyltransferase called "rubber transferase". We cloned the cDNA encoding the E. characias rubber transferase by RT-PCR. The cDNA contains an open reading frame of 810 bp encoding a protein of 270 amino acids (GenBank JX564541). The cDNA nucleotide sequence and the deduced amino acid sequence are highly homologous to the sequences of several plant cis-prenyltransferases. The results provide novel insight into latex components and will ultimately benefit the broader understanding of E. characias latex composition.
APA, Harvard, Vancouver, ISO, and other styles
2

TAGLIARO, IRENE. "NOVEL COLLOIDAL APPROACH TO PREPARE HIGHLY-LOADED SILICA-BASED ELASTOMERIC NANOCOMPOSITES." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2019. http://hdl.handle.net/10281/241175.

Abstract:
Sustainability has become a field of great interest in world industry. For the scientific community the challenge lies in identifying green synthetic approaches and new alternatives to petroleum-based materials. In the case of the tyre industry, the challenge is to identify design strategies that reduce the environmental impact throughout the life cycle of tyres, by means of both environmentally friendly materials and innovative products with reduced energy consumption and CO2 emissions. In this context, this PhD thesis focuses on the preparation of eco-friendly silica-based nanocomposites using a colloidal approach to increase the dispersion of hydrophilic fillers, in line with the new sustainability requirements of EU policies. The colloidal approach aims at compounding nanocomposites with hydrophilic fillers, whose efficient dispersion through traditional mixing remains a challenging issue due to their poor compatibility with the organic matrix. This technique increases filler dispersion without any expensive surface modification and eliminates the volatile components released during mixing, with significant benefits for the environment and workers. Two different colloidal approaches were applied: i) the latex compounding technique (LCT) and ii) in situ emulsion polymerization, to prepare highly loaded nanocomposite rubber materials containing silica-based fillers, silica and sepiolite (Sep) clay, the latter considered a promising filler candidate for polymer reinforcement owing to its fibrous structure and high particle aspect ratio (AR). The concentration, charge and shape of the silica-based nanofillers were studied as the parameters governing stabilization and destabilization of natural and synthetic polyisoprene latexes.
An effective LCT procedure was established to produce eco-friendly composites, namely masterbatches (MBs), by incorporating silica or Sep into natural rubber latex (i.e. an aqueous emulsion of cis-1,4-polyisoprene) through flocculation (i.e. aggregation resulting from the bridging of polymer particles) of the mixed silica-based nanofiller/rubber aqueous system. LCT was shown to favour a homogeneous dispersion of hydrophilic Sep fibers in the rubber matrix. The main physicochemical parameters controlling aggregation in the aqueous medium, i.e. pH, ζ-potential and concentration, as well as the morphological features of the final Sep-natural rubber (NR) MBs, were comprehensively investigated, helping to elucidate the Sep-NR interactions and to propose a flocculation mechanism based on electrostatic and depletion attraction forces, connected both to the high filler content (50 wt.%) and to the peculiar anisotropy of the Sep fibers. Furthermore, the MBs with high filler loadings were used to produce environmentally friendly composites by combining LCT and melt mixing. This combined approach takes advantage of the good filler distribution and prevents dust from floating in the air during processing. In situ Pickering polymerization was considered as an alternative colloidal approach to produce eco-friendly nanocomposites. Polyisoprene/silica-based structured particles were synthesized on the basis of the stabilizing effect of inorganic fillers, which act like surfactants, lowering the interfacial tension and stabilizing the emulsion. On the basis of our results, we suggested a possible mechanism for emulsion polymerizations stabilized by solid particles. In conclusion, the colloidal approach, based on both LCT and in situ Pickering emulsion polymerization, can be considered a green, simple and effective method suitable for high-performance technological applications.
The outcomes indicate the suitability of the adopted strategies as a sustainable procedure for the production of highly loaded silica-based rubber nanocomposites.
3

Wegener, Claudia B. "Natural dualities for varieties generated by lattice-structured algebras." Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302397.

4

Hashemi, Mohammad. "Lattice Boltzmann Simulation of Natural Convection During Dendritic Growth." University of Akron / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=akron1459444594.

5

Hardter, Eric. "Modifying the Natural State of Nucleic Acids: Three-Dimensional DNA Lattices and Extended Deoxyribonucleosides." Thesis, Boston College, 2014. http://hdl.handle.net/2345/bc-ir:103610.

Abstract:
Thesis advisor: Larry W. McLaughlin
By virtue of encoding and transferring hereditary information, nucleic acids effectively represent the blueprint for life as we know it. Given the biological relevance of this class of polymers, it comes as no surprise that scientists are constantly striving to reach a greater understanding of the innumerable genetic corridors contained within the human genome. This has led to the rational design and synthesis of numerous nucleoside analogues in an attempt to alter, and subsequently control, native nucleic acid structure and function. The first attempts at harnessing the latent abilities of DNA are described in Chapter 2. Multiple tetrahedral branching "hubs" were designed, synthesized and characterized, at which point single-stranded DNA could be elongated from each of the four points of origin. Ensuing hybridization studies were performed so that the binding traits of these elongated tetrahedral lattices could be monitored; fully formed lattices could potentially serve as a means of drug encapsulation or molecular tethering. Chapter 3 describes direct alteration of the standard DNA backbone. Successive synthetic efforts towards creating a 6'-extended deoxyadenosine molecule are detailed, and its effects on the stability of duplexed DNA (along with sister molecules 6'-deoxythymidine and an elongated 3'-deoxythymidine) are also defined. Upon insertion into DNA, this class of extended nucleosides could ultimately lead to a new duplex structure, as well as novel binding properties.
Thesis (PhD) — Boston College, 2014
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: Chemistry
6

Craig, Andrew Philip Knott. "Canonical extensions of bounded lattices and natural duality for default bilattices." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:c7137abe-b076-42ad-9691-bddfca36967e.

Abstract:
This thesis presents results concerning canonical extensions of bounded lattices and natural dualities for quasivarieties of default bilattices. Part I is dedicated to canonical extensions, while Part II focuses on natural duality for default bilattices. A canonical extension of a lattice-based algebra consists of a completion of the underlying lattice and extensions of the additional operations to the completion. Canonical extensions find rich application in providing an algebraic method for obtaining relational semantics for non-classical logics. Part I gives a new construction of the canonical extension of a bounded lattice. The construction is done via successive applications of functors and thus provides an elegant exposition of the fact that the canonical extension is functorial. Many existing constructions are described via representation and duality theorems. We demonstrate precisely how our new formulation relates to existing constructions as well as proving new results about complete lattices constructed from graphs. Part I ends with an analysis of the untopologised structures used in two methods of construction of canonical extensions of bounded lattices: the untopologised graphs used in our new construction, and the so-called 'intermediate structure'. We provide sufficient conditions for the intermediate structure to be a lattice and, for the case of finite lattices, identify when the dual graph is not a minimal representation of the lattice. Part II applies techniques from natural duality theory to obtain dualities for quasivarieties of bilattices used in default logic. Bilattices are doubly-ordered algebraic structures which find application in reasoning about inconsistent and incomplete information. This account is the first attempt to provide dualities or representations when there is little interaction required between the two orders.
Our investigations begin by using computer programs to calculate dualities for specific examples, before using purely theoretical techniques to obtain dualities for more general cases. The results obtained are extremely revealing, demonstrating how one of the lattice orders from the original algebra is encoded in the dual structure. We conclude Part II by describing a new class of default bilattices. These provide an alternative way of interpreting contradictory information. We obtain dualities for two newly-described quasivarieties and provide insights into how these dual structures relate to previously described classes of dual structures for bilattices.
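As a concrete aside for readers unfamiliar with the terminology (this example is generic, not drawn from the thesis): the divisors of 12 ordered by divisibility form a bounded lattice, with gcd as meet, lcm as join, and bounds 1 and 12. A minimal Python check:

```python
from math import gcd

def lcm(a, b):
    return a*b//gcd(a, b)

divisors = [d for d in range(1, 13) if 12 % d == 0]   # 1, 2, 3, 4, 6, 12

# Meet = gcd, join = lcm; the bounds are 1 (bottom) and 12 (top)
closed = all(gcd(a, b) in divisors and lcm(a, b) in divisors
             for a in divisors for b in divisors)
# Absorption law a ∧ (a ∨ b) = a, one of the defining lattice identities
absorbing = all(gcd(a, lcm(a, b)) == a for a in divisors for b in divisors)
```

Both checks pass, confirming the divisor poset is a lattice in the order-theoretic sense used throughout this entry.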
7

BOCANEGRA, CIFUENTES JOHAN AUGUSTO. "Lattice Boltzmann Method: applications to thermal fluid dynamics and energy systems." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1060259.

Abstract:
In many energy systems fluids play a fundamental role, and computational simulations are a valuable tool for studying their complex dynamics. The Lattice Boltzmann Method (LBM) is a relatively new numerical method for computational fluid dynamics, but its applications extend to physical phenomena beyond fluid flows. This thesis presents applications of the LBM to thermal fluid dynamics and energy systems. The specific applications considered are: nuclear reactor engineering problems; the thermal fluid dynamic behavior of a natural circulation loop; gravitational sedimentation of nanoparticles; and acoustical problems. The main original contributions of this work are: first, a systematic description of the current status of LBM applications to nuclear reactor problems, including test cases and benchmark simulations; second, the development and validation of an LBM model for a single-phase natural circulation loop; third, the development and validation of an LBM model for gravitational sedimentation of nanoparticles; and fourth, a systematic description of the current status of LBM applications to acoustics, including simulations of test cases. The work was not limited to simulations: experimental studies of parallel-connected natural circulation loops of small inner diameter were conducted, showing the wide applicability of the one-dimensional theoretical models used to validate the LBM results. Additional contributions: 1. the applicability of the method to the study of neutron transport and nuclear waste disposal using porous materials was shown; 2. changes in the thermophysical performance of the natural circulation loop, when the loop reached a non-laminar (transition) regime, were found at a Reynolds number lower than the typical range; 3. variable diffusion and sedimentation parameters proved effective in modelling the experimental sedimentation curves.
In conclusion, this work shows that the LBM is a versatile and powerful computational tool that can be used beyond common computational fluid dynamics applications.
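For readers new to the method, a minimal two-dimensional (D2Q9) lattice Boltzmann sketch with BGK collision is given below. It is a generic textbook scheme, not the author's code, simulating the viscous decay of a shear wave on a periodic grid:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and quadrature weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    # Second-order expansion of the Maxwell-Boltzmann distribution
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

def step(f, tau):
    """One collide-and-stream update: BGK collision, periodic streaming."""
    rho = f.sum(axis=0)
    ux = (f*c[:, 0, None, None]).sum(axis=0)/rho
    uy = (f*c[:, 1, None, None]).sum(axis=0)/rho
    f -= (f - equilibrium(rho, ux, uy))/tau          # relax toward equilibrium
    for i, (cx, cy) in enumerate(c):                 # stream along each velocity
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f, rho, uy

# Decaying shear wave: uy varies sinusoidally along x
nx = ny = 32
x = np.arange(nx)[:, None]*2*np.pi/nx
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)),
                0.05*np.sin(x)*np.ones((1, ny)))
for _ in range(200):
    f, rho, uy = step(f, tau=0.8)   # kinematic viscosity nu = (tau - 0.5)/3
```

Mass is conserved exactly by collision and streaming, while the shear-wave amplitude decays at the viscous rate set by tau, which is the basic consistency check performed in LBM validation studies like those surveyed in this thesis.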
8

Bray, Christina L., Robert G. Bryant, M. J. Cox, Gianni Ferrante, Y. Goddard, Sandip Sur, and Joseph P. Hornack. "The proton Nuclear Magnetic Resonance spin-lattice relaxation rate of some hydrated synthetic and natural sands." Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-192008.

Abstract:
The proton nuclear magnetic resonance (NMR) spin-lattice relaxation rate (R1) of hydrated sands is often used to determine porosity characteristics of near-surface aquifers using magnetic resonance sounding. Large variations in R1 have been reported in laboratory measurements on hydrated sands. To understand these variations, the R1 values of several fully hydrated sands were studied as a function of grain diameter (d) and magnetic field strength (B0). We conclude the variations are a consequence of trace paramagnetic metals in the sand grains. R1 values from magnetic resonance sounding data should not be used to predict void size in aquifers unless the exact chemical composition of the grains is known.
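The grain-size dependence described above is commonly rationalized with the fast-exchange surface-relaxation model (a standard model in pore NMR, not taken from this paper): R1 = R1,bulk + rho1·(S/V), with S/V ≈ 6/d for spherical grains. A sketch with illustrative parameter values:

```python
def r1_fast_exchange(d, rho1=1.0e-6, r1_bulk=0.3):
    """Spin-lattice rate R1 (1/s) of water in a pack of spherical grains of
    diameter d (m), fast-exchange limit: R1 = R1_bulk + rho1*(S/V), with
    S/V ~ 6/d for spheres. The surface relaxivity rho1 (m/s) and bulk rate
    r1_bulk are illustrative assumed values; rho1 grows with the
    paramagnetic impurity content of the grain surface."""
    return r1_bulk + rho1*6.0/d

fine, coarse = r1_fast_exchange(100e-6), r1_fast_exchange(1e-3)
```

Finer sand relaxes faster (here ≈0.36 s⁻¹ vs ≈0.31 s⁻¹), and since rho1 depends on paramagnetic content, two sands of equal grain size can still show very different R1 — the ambiguity the abstract warns about.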
9

Bray, Christina L., Robert G. Bryant, M. J. Cox, Gianni Ferrante, Y. Goddard, Sandip Sur, and Joseph P. Hornack. "The proton Nuclear Magnetic Resonance spin-lattice relaxation rate of some hydrated synthetic and natural sands." Diffusion fundamentals 10 (2009) 8, S. 1-3, 2009. https://ul.qucosa.de/id/qucosa%3A14098.

Abstract:
The proton nuclear magnetic resonance (NMR) spin-lattice relaxation rate (R1) of hydrated sands is often used to determine porosity characteristics of near-surface aquifers using magnetic resonance sounding. Large variations in R1 have been reported in laboratory measurements on hydrated sands. To understand these variations, the R1 values of several fully hydrated sands were studied as a function of grain diameter (d) and magnetic field strength (B0). We conclude the variations are a consequence of trace paramagnetic metals in the sand grains. R1 values from magnetic resonance sounding data should not be used to predict void size in aquifers unless the exact chemical composition of the grains is known.
10

Rydén, Gabriel. "Ab initio lattice dynamics and Anharmonic effects in refractory Rock-salt structure TaN ceramic." Thesis, Linköpings universitet, Teoretisk Fysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-174208.

Abstract:
Transition metal nitrides (TMN) are of considerable importance to industry and have gathered a great deal of interest in the scientific community, mostly due to their unique physical and mechanical properties. Increasing the understanding of what gives them such extraordinary properties requires the study of lattice dynamics and phonon dispersion. In this thesis, the transition metal nitride TaN is studied extensively, along with preliminary results for NbN. The primary tools are computational: ab initio molecular dynamics (AIMD) and the Temperature Dependent Effective Potential (TDEP) method are used to generate phonon spectra and to compute the lattice thermal conductivity. The results indicate that the TaN crystal structure stabilizes dynamically at much lower temperatures than previously established with other methods. The average linear thermal expansion coefficient of TaN is α = 9.0 × 10⁻⁶ K⁻¹, which is consistent with other TMN. The phonon-phonon lattice thermal conductivity of TaN follows a behaviour similar to that of other TMN. Preliminary results for NbN suggest low-temperature behaviour similar to that observed for TaN. However, further investigations are required to pinpoint the TaN and NbN transition temperatures more exactly and to include effects such as electron-phonon scattering and isotope effects for a better estimate of the lattice thermal conductivity.
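The quoted expansion coefficient translates into a lattice-parameter change via the definition ΔL/L0 = α·ΔT. The sketch below uses the abstract's α together with a room-temperature lattice parameter a0 ≈ 4.33 Å for rock-salt TaN, which is a literature value assumed here for illustration, not stated in the abstract:

```python
def thermal_strain(alpha, dT):
    # Linear thermal strain: dL/L0 = alpha*dT for a mean coefficient alpha
    return alpha*dT

# alpha = 9.0e-6 1/K is the value reported in the abstract; a0 ~ 4.33 angstrom
# for rock-salt delta-TaN is an assumed literature value
a0 = 4.33
da = a0*thermal_strain(9.0e-6, 1000.0)   # change in a over 1000 K (~0.9%)
```

Heating by 1000 K thus changes the lattice parameter by only about 0.04 Å, illustrating why anharmonic effects must be resolved from simulation rather than from the modest expansion itself.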
11

Xue, Boyu. "3D Printed Lattice Structure for Driveline Applications." Thesis, KTH, Materialvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299270.

Abstract:
Lattice structures have received a lot of attention as cellular materials in recent years because of their outstanding properties, such as a high strength-to-weight ratio, heat transfer, energy absorption, and the capability of improving noise, vibration and harshness (NVH) behavior. This type of structure has received a boost from additive manufacturing (AM) technology, which can fabricate geometries of practically any shape. Due to economic and environmental requirements, lightweight design is increasingly used in automobile and construction equipment applications. NVH behavior is a crucial issue for construction equipment. However, the NVH behavior of conventional structures is mainly decided by mass, so silence often requires heavy systems, leading to more energy consumption and emissions. Environmental trends and the resulting economic competition have therefore ruled out traditional (heavy) solutions for improving NVH behavior and made lightweight design more difficult. Novel solutions are needed to ease the difficulty of combining NVH and lightweight requirements. In this research, topology optimization was applied to a New Articulated Hauler Transmission (NAHT) component to balance lightweight and NVH behavior. The topology-optimized 3D model was filled with a non-homogeneous lattice structure whose density was tuned by size optimization; this procedure is referred to as lattice structure optimization, a form of topology optimization. Fabricating the resulting complicated lattice structure requires additive manufacturing (3D printing). The new models were analyzed using the finite element method (FEM), and the results were compared with those of the original models. The comparison gave positive results, demonstrating that topology and lattice optimization can be applied in the design of construction equipment components.
According to the results, lattice structure optimization can deliver a reliable lightweight design with good NVH behavior. Furthermore, the organization and layout of the lattice structure have a significant impact on the overall performance.
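The trade-off between lattice density and stiffness discussed above is often estimated with the Gibson-Ashby scaling law for cellular solids. This is a standard textbook relation, sketched here for orientation; it is not the FEM workflow used in the thesis:

```python
def relative_modulus(rel_density, n=2.0, C=1.0):
    """Gibson-Ashby scaling for cellular solids: E*/Es = C*(rho*/rho_s)**n.
    n ~ 1 for stretch-dominated lattices (e.g. the octet truss), n ~ 2 for
    bending-dominated ones; C is a geometry constant of order one."""
    return C*rel_density**n

# At 20% relative density a stretch-dominated lattice retains ~20% of the
# solid stiffness, a bending-dominated one only ~4%
stretch = relative_modulus(0.2, n=1.0)
bending = relative_modulus(0.2, n=2.0)
```

The exponent n depends on the cell topology, which is one reason the organization and layout of the lattice matter so much for the stiffness (and hence NVH) of the final part.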
12

Shafei, Babak. "Reactive transport in natural porous media: contaminant sorption and pore-scale heterogeneity." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45785.

Abstract:
Reactive transport models (RTMs) provide quantitative tools to analyze the interaction between transport and biogeochemical processes in subsurface environments such as aquatic sediments and groundwater flow. A large body of research has shown the role and impact of the scaling behavior of reactive systems, which stems from geologic heterogeneity. Depending on the kinetics of the reactions, different formulations have been proposed to describe reactions in RTMs. We introduce a novel quantitative criterion for the range of validity of the local equilibrium assumption (LEA) in aquatic sediments with irreversible heterogeneous sorption reactions. We then present a one-dimensional (1-D) early diagenetic module, MATSEDLAB, developed in MATLAB. The module provides templates for representing the reaction network, boundary conditions and transport regime, which the user can modify to fit the particular early diagenetic model configuration of interest. We describe the theoretical background of the model and introduce the MATLAB pdepe solver, followed by calibration and validation of the model in a number of theoretical and empirical applications. Finally, we introduce a new pore-scale model using the lattice Boltzmann (LB) approach. It uses an iterative scheme for the chemical transport-reaction part and recent advances in the development of optimal advection-diffusion solvers within the lattice Boltzmann framework. We present results for the dissolution and precipitation of a porous medium under different dynamical conditions, varying the reaction rates and the ratio of advective to diffusive transport (the Péclet number, Pe) for linear reactions. The final set of calculations considers sorption reactions on a heterogeneous porous medium. We use our model to investigate the effect of heterogeneity on the pore-scale distribution of sorption sites and the competition between three different sorption reactions.
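A transport-reaction balance of the kind these models solve can be sketched with a simple explicit finite-difference scheme for dC/dt = D·C_xx − v·C_x − k·C, with the Péclet number Pe = vL/D controlling the advection/diffusion balance. All parameter values below are illustrative; this is not the module's actual code:

```python
import numpy as np

def adr_step(C, dx, dt, D, v, k):
    """One explicit step of dC/dt = D*C_xx - v*C_x - k*C
    (central diffusion, upwind advection, fixed-value boundaries)."""
    Cn = C.copy()
    Cn[1:-1] = (C[1:-1]
                + dt*D*(C[2:] - 2*C[1:-1] + C[:-2])/dx**2
                - dt*v*(C[1:-1] - C[:-2])/dx
                - dt*k*C[1:-1])
    return Cn

# Solute profile in a 1 cm sediment column (assumed illustrative values)
L, n = 0.01, 101
dx = L/(n - 1)
D, v, k = 1e-9, 1e-7, 1e-4          # diffusivity m^2/s, burial m/s, decay 1/s
C = np.zeros(n)
C[0] = 1.0                          # fixed concentration at the interface
dt = 0.2*dx**2/D                    # respect the explicit diffusive limit
for _ in range(20000):
    C = adr_step(C, dx, dt, D, v, k)
Pe = v*L/D                          # column-scale Peclet number
```

The profile relaxes toward a monotonically decreasing steady state whose penetration depth is set by the diffusion-reaction length sqrt(D/k); MATLAB's pdepe solves the same class of 1-D parabolic problems implicitly.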
13

Ivanov, Angelov Mitko. "Sound Scattering by Lattices of Heated Wires." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/63275.

Abstract:
[EN] The aim of this work is to demonstrate theoretically and experimentally how acoustic wave propagation can be controlled by temperature gradients. Starting with the simplest case of two hot wires in air, the study extends to periodic structures known as Sonic Crystals (SCs). The Finite Element Method (FEM) has been employed to perform numerical simulations demonstrating the collimation and focusing of acoustic waves in two-dimensional (2D) SCs whose filling fraction is adjusted by temperature gradients. As part of the research, Bragg reflection and Fabry-Perot-type acoustic effects are investigated for the proposed type of SC. As an example, an SC with a desired transmittance can be tailored. Gradient-index (GRIN) 2D sonic lenses are also studied. Using parallel rows of heated wires whose temperatures vary according to a prescribed gradient-index law, a GRIN lens can be designed with a given performance. Moreover, by changing the temperature of the wires, the filling fraction inside the GRIN SC can be changed; the local refractive index, which is directly related to the filling fraction, changes too, and an index gradient is obtained inside the GRIN SC. This GRIN SC is a direct analogue of the gradient media observed in nature. Like their optical counterparts, the investigated 2D GRIN SC lenses have flat surfaces and are easier to fabricate than curved SC lenses. The bending of sound waves obtained with GRIN acoustic structures can be used to focus and collimate acoustic waves. Another aspect of this work concerns tuning SC properties such as the effective refractive index and the effective mass density in order to obtain an SC with prescribed properties. Although active tuning of the phononic band gaps is certainly desirable for future applications with enhanced functionalities, only a few attempts have been made so far to develop tunable SCs.
By controlling the incident angle or operating frequency, a GRIN SC can dynamically adjust the curved trajectory of acoustic wave propagation inside the SC structure. In recent studies of tunable SCs, the filling fractions were tuned either by direct physical deformation of the structure or by external stimuli. The former is impractical for most applications, and the latter often requires very strong stimuli to produce only modest adjustments. In this work another way to tune the SC properties is proposed. Hot and cold media differ in density, speed of sound, refractive index, etc. from the same medium at normal conditions, so introducing temperature gradients into the medium can be used to tune the SC properties within certain limits. The proposed way to obtain temperature gradients inside the SC is with Nichrome wires heated by electrical currents. This method has some important advantages. First, by changing the intensity of the electrical current through the wires, the SC properties can be changed dynamically. Second, it is considerably easier to change the filling fraction simply by adjusting the current intensity than by physically changing the structure or applying strong electric or magnetic fields. In conclusion, the method proposed in this thesis allows us, in principle, to obtain materials and structures with dynamically adjustable acoustic properties, within certain limits, using temperature control through the electric current in the wires. It thus becomes easy to carry out macroscopic wave-propagation experiments analogous to phenomena that occur in microscopic structures for the propagation of high-frequency electromagnetic waves (microwaves and light).
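The physical basis of this tuning can be illustrated with a back-of-the-envelope sketch (not code from the thesis; function names and reference values are illustrative): treating air as an ideal gas, the sound speed scales as the square root of temperature, so heating a wire lowers the local acoustic refractive index n = c_ref / c(T), which is what the current through the wires ultimately controls.

```python
import math

def sound_speed(T, gamma=1.4, R=8.314, M=0.029):
    """Speed of sound in air (m/s) from the ideal-gas relation c = sqrt(gamma*R*T/M)."""
    return math.sqrt(gamma * R * T / M)

def acoustic_index(T, T_ref=293.15):
    """Local acoustic refractive index n = c_ref / c(T): hotter air -> lower index."""
    return sound_speed(T_ref) / sound_speed(T)

# Heating a wire region from room temperature to 600 K lowers the local index,
# i.e. adjusting the electrical current changes the effective filling fraction.
for T in (293.15, 400.0, 600.0):
    print(f"T = {T:6.1f} K   c = {sound_speed(T):6.1f} m/s   n = {acoustic_index(T):.3f}")
```

At room temperature this recovers the familiar c of roughly 343 m/s, and the index falls below 1 as the wire heats, which is the sign of the gradient exploited by the GRIN lenses.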
Ivanov Angelov, M. (2016). Sound Scattering by Lattices of Heated Wires [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/63275
APA, Harvard, Vancouver, ISO, and other styles
14

Beneitez, Miguel, and Johan Sundin. "Turbulent flow control via nature inspired surface modifications." Thesis, KTH, Mekanik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-206189.

Full text
Abstract:
Many of the flows in nature are turbulent. To modify turbulent flows, nature makes use of different types of coatings: sharks have riblet-like structures on their skin, fish secrete slime containing polymers, and the surface of the lotus flower is superhydrophobic. These naturally occurring coatings often serve other purposes as well, but after millions of years of adaptation there are nevertheless many reasons to take inspiration from them. The present work is an investigation of nature-inspired coatings with the aim of passive flow manipulation. The goal of the investigation has not been to achieve drag reduction, but to achieve a better understanding of the effect of these coatings on turbulent flows. Simulations have been performed in a channel flow configuration, where the boundary condition on one wall has been modified. A macroscopic description has been used to simulate superhydrophobic and porous-like surfaces, and a microscopic description has been used to simulate suspended fibers, both rigid and flexible, attached to the channel wall. For the macroscopic description a pseudo-spectral method was used, and for the microscopic description a lattice Boltzmann method. The superhydrophobic modification was implemented using a general slip tensor formulation. In agreement with earlier results, slip in the streamwise direction produced drag reduction, while slip in the spanwise direction resulted in a drag increase. Non-zero off-diagonal terms in the slip tensor resulted in a slight drag increase, but with rather similar flow behaviour. Transpiration, imitating a porous medium, gave rise to a drag increase and severely modified the turbulent structures, forming two-dimensional structures elongated in the spanwise direction. For the short fibers, neither the rigid nor the flexible fibers modified the velocity field to a large extent. The fibers gave rise to recirculation regions, and these were seen to be stronger below high-speed streaks.
Flexible fibers showed similarities to porous media through a coupling of wall-normal velocity and pressure fluctuations, which was not seen for the rigid fibers. The fiber deflections were seen to correlate well with the pressure fluctuations.
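The general slip tensor formulation mentioned above can be sketched in a few lines. This is a hypothetical illustration of a Navier-type slip relation, u_slip = Λ · (∂u/∂n) evaluated at the wall, with a 2x2 slip-length tensor over the (streamwise, spanwise) directions; all names and numerical values are made up for the example and are not taken from the thesis.

```python
import numpy as np

def wall_slip_velocity(slip_tensor, wall_shear):
    """Slip velocity (u_s, w_s) at the wall: slip_tensor @ (du/dy, dw/dy)."""
    return slip_tensor @ wall_shear

shear = np.array([1.0, 0.1])            # (du/dy, dw/dy) at the wall, arbitrary units

streamwise_only = np.diag([0.02, 0.0])  # slip length only in x: the drag-reducing case
isotropic = np.diag([0.02, 0.02])       # adds spanwise slip: drag-increasing
coupled = np.array([[0.02, 0.005],
                    [0.005, 0.02]])     # non-zero off-diagonal terms: slight drag increase

for name, L in [("streamwise", streamwise_only),
                ("isotropic", isotropic),
                ("coupled", coupled)]:
    print(f"{name:10s} slip velocity = {wall_slip_velocity(L, shear)}")
```

The diagonal entries play the role of slip lengths in each direction, and the off-diagonal entries couple the two, which is the configuration the study found to behave much like the purely diagonal one.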
APA, Harvard, Vancouver, ISO, and other styles
15

Ruzzi, Anna. "Stabilizzazione di succo di carota tramite trattamenti non termici e utilizzo di antimicrobici naturali." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19384/.

Full text
Abstract:
In my thesis, the potential of combining several non-thermal treatments was evaluated: high-pressure homogenization (HPH) at 150 MPa for 3 cycles, fermentation by the nisin-producing strain Lactococcus lactis subsp. lactis LBG2 for 7 h at 30°C, and the addition of a natural antimicrobial, thyme essential oil, at a concentration of 60 ppm, in order to improve the shelf-life and microbiological safety of carrot juice. The results on the microbiological safety of carrot juice inoculated with pathogenic microorganisms indicated that fermentation by strain LBG2 rapidly inactivated the pathogens present. Combination with the HPH treatment accelerated these inactivation kinetics, whereas the addition of thyme essential oil produced no additive antimicrobial effect. The shelf-life trials indicate that lactic fermentation, combined with an HPH pre-treatment to reduce the initial microflora, is an excellent alternative to thermal treatment. Indeed, the results showed that the carrot juice maintained good microbiological characteristics for more than 9 days at 4°C and for up to 7 days under thermal-abuse conditions at 10°C. The addition of thyme essential oil did not prove an effective strategy and did not extend the shelf-life of the product. Although this study established that the combination of an HPH treatment followed by fermentation with strain LBG2 increases the shelf-life and safety of carrot juice, further tests would be needed to establish consumer acceptability of this fermented product, and it would be desirable to scale up the fermentation process developed here, in order to validate the laboratory data at larger scale and under conditions closer to industrial reality.
APA, Harvard, Vancouver, ISO, and other styles
16

Rehhali, Khaoula. "Simulations de la convection naturelle couplée au rayonnement surfacique par la méthode de Boltzmann sur réseau : cas des chauffages variable et discret." Electronic Thesis or Diss., Amiens, 2019. http://www.theses.fr/2019AMIE0001.

Full text
Abstract:
In this thesis, a numerical study is carried out on the coupling between natural convection and surface radiation in square cavities whose walls are subjected to discrete or non-uniform heating. The first study addresses a convection-radiation coupling problem in an inclined, air-filled square cavity with one wall heated at a constant temperature and the opposite wall heated linearly; the remaining walls are considered adiabatic. In the second study, the cavity has partially heated vertical walls (symmetrically and asymmetrically), a cooled upper wall and an adiabatic bottom wall. The objective of these numerical studies is to analyze the effect of surface radiation and of the different governing parameters (heating mode, Rayleigh number, angle of inclination, temperature difference) on the flow structure and the heat transfer. The second objective of this thesis is to test the performance of the multiple-relaxation-time (MRT) scheme of the lattice Boltzmann method (LBM) in the presence of convection-radiation coupling. The results of this study revealed that the considered governing parameters have a significant effect on the flow structure and heat transfer through the cavity.
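As background for readers unfamiliar with the method, a minimal single-relaxation-time (BGK) D2Q9 lattice Boltzmann step can be sketched as follows. This is the baseline scheme against which MRT collision operators such as the one used in the thesis are an improvement; grid size, relaxation time and all names are illustrative, not taken from the thesis.

```python
import numpy as np

# D2Q9 lattice: weights and discrete velocities.
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
C = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium distribution on D2Q9."""
    cu = np.einsum("qd,xyd->xyq", C, u)
    usq = np.einsum("xyd,xyd->xy", u, u)
    return rho[..., None] * W * (1 + 3*cu + 4.5*cu**2 - 1.5*usq[..., None])

def bgk_step(f, tau=0.8):
    """One BGK collision + streaming step with periodic boundaries."""
    rho = f.sum(axis=-1)                                   # density moment
    u = np.einsum("xyq,qd->xyd", f, C) / rho[..., None]    # velocity moment
    f = f + (equilibrium(rho, u) - f) / tau                # collision (relax toward feq)
    for q in range(9):                                     # streaming along each velocity
        f[..., q] = np.roll(f[..., q], C[q], axis=(0, 1))
    return f

# Sanity check: a uniform fluid at rest is a fixed point and mass is conserved.
f = equilibrium(np.ones((8, 8)), np.zeros((8, 8, 2)))
f2 = bgk_step(f)
print("mass drift:", abs(f2.sum() - f.sum()))
```

An MRT scheme replaces the single 1/tau relaxation with a relaxation matrix acting in moment space, which improves stability at the Rayleigh numbers studied here; the collision line above is the only part that changes.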
APA, Harvard, Vancouver, ISO, and other styles
17

Bartl, Eduard. "Mathematical foundations of graded knowledge spaces." Diss., Online access via UMI, 2009.

Find full text
Abstract:
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2009.
Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
18

Maquignon, Nicolas. "Vers un modèle multiphases et multicomposants (MPMC) de type Lattice Boltzmann Method (LBM) pour la simulation dynamique d'un fluide cryogénique dans l'eau." Thesis, Littoral, 2015. http://www.theses.fr/2015DUNK0426/document.

Full text
Abstract:
In this thesis, an LBM MPMC model with heat exchange is developed. Data assimilation tests and optical-flow measurements are made in order to validate the model. The application context of this thesis is the mixing of a cryogenic fluid with water. In the first part, a bibliographical review of the Boltzmann equation, its various assumptions and simplifications, and the algorithmic aspects of the LBM is presented. A comparison between the SRT and MRT collision operators is performed, and turbulent phenomena at different Reynolds numbers are simulated, in particular with the von Kármán instability benchmark. In the second part, the MPMC model of Shan & Chen is recalled and extended to the case of inter-component heat exchange. Quantitative validations are made, in particular against the two-phase and two-component Couette flow benchmarks; consistency is also tested against Laplace's law and against a benchmark involving heat conduction. Qualitative tests of condensation in a multi-component medium are proposed to validate the heat exchange between components in the presence of a phase transition. In the third part of this thesis, a validation method based on data assimilation is introduced, using the ensemble Kalman filter. A state-estimation test of a two-phase fluid is carried out, and the compatibility of the ensemble Kalman filter with the LBM MPMC model is assessed. To validate the behaviour of the model in the two-component case, a non-cryogenic substitution fluid for LNG, butane, was selected to permit observations under accessible experimental conditions. An experimental platform for the injection of liquid butane into a pressurised water column is then presented. Shadowgraph images from experiments on liquid butane rising in water are shown, and an optical-flow algorithm is applied to these images. A qualitative assessment of the velocity fields obtained with this algorithm is performed.
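The ensemble Kalman filter used for the state-estimation test can be sketched in its standard perturbed-observation form; this is a generic textbook formulation on a toy scalar state, not the thesis's coupling to the LBM MPMC model, and all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(E, y, H, r_var):
    """One perturbed-observation EnKF analysis step.
    E: (n, N) state ensemble; y: (m,) observation;
    H: (m, n) observation operator; r_var: observation noise variance."""
    n, N = E.shape
    m = len(y)
    A = E - E.mean(axis=1, keepdims=True)            # state anomalies
    HE = H @ E
    S = HE - HE.mean(axis=1, keepdims=True)          # predicted-observation anomalies
    Cxy = A @ S.T / (N - 1)                          # state-observation covariance
    Cyy = S @ S.T / (N - 1) + r_var * np.eye(m)      # innovation covariance
    K = Cxy @ np.linalg.inv(Cyy)                     # ensemble Kalman gain
    D = y[:, None] + np.sqrt(r_var) * rng.standard_normal((m, N))  # perturbed obs
    return E + K @ (D - HE)

# Toy example: a 100-member scalar ensemble with prior mean near 0 is pulled
# toward a direct, low-noise observation of the true value 2.0.
E_prior = rng.standard_normal((1, 100))
E_post = enkf_update(E_prior, np.array([2.0]), np.eye(1), r_var=0.01)
print(f"prior mean {E_prior.mean():+.3f} -> posterior mean {E_post.mean():+.3f}")
```

In the thesis's setting the state vector would hold the LBM distribution fields and the observations would come from the shadowgraph/optical-flow measurements; the analysis step itself is unchanged.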
APA, Harvard, Vancouver, ISO, and other styles
19

Angleby, Linda. "Structural and electronic properties of bare and organosilane-functionalized ZnO nanoparticles." Thesis, Linköping University, Linköping University, Linköping University, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-58691.

Full text
Abstract:

A systematic study of trends in band gaps and lattice energies for bare zinc oxide nanoparticles was performed by means of quantum-chemical density functional theory (DFT) calculations and density of states (DOS) calculations. The geometries of the optimized structures and the appearance of their frontier orbitals were also studied. The particles studied ranged in size from (ZnO)6 up to (ZnO)192. The functionalization of bare and hydroxylated ZnO surfaces with MPTMS was studied, with emphasis on the adsorption energies for adsorption to different surfaces and the effects of such adsorption on the band gap.

APA, Harvard, Vancouver, ISO, and other styles
20

Tari, Kévin. "Automorphismes des variétés de Kummer généralisées." Thesis, Poitiers, 2015. http://www.theses.fr/2015POIT2301/document.

Full text
Abstract:
In this work, we classify non-symplectic automorphisms of varieties deformation-equivalent to 4-dimensional generalized Kummer varieties, having a prime-order action on the Beauville-Bogomolov lattice. First, we give the fixed loci of natural automorphisms of this kind. Thereafter, we develop tools on lattices in order to apply them to our varieties. A lattice-theoretic study of 2-dimensional complex tori allows a better understanding of natural automorphisms of Kummer-type varieties. Finally, we classify all the automorphisms described above on those varieties. As an application of our results on lattices, we also complete the classification of prime-order automorphisms on varieties deformation-equivalent to Hilbert schemes of 2 points on K3 surfaces, solving the case of order 5, which was still open.
APA, Harvard, Vancouver, ISO, and other styles
21

Pitou, Cynthia. "Extraction d'informations textuelles au sein de documents numérisés : cas des factures." Thesis, La Réunion, 2017. http://www.theses.fr/2017LARE0015.

Full text
Abstract:
Document processing is the transformation of a human understandable data in a computer system understandable format. Document analysis and understanding are the two phases of document processing. Considering a document containing lines, words and graphical objects such as logos, the analysis of such a document consists in extracting and isolating the words, lines and objects and then grouping them into blocks. The subsystem of document understanding builds relationships (to the right, left, above, below) between the blocks. A document processing system must be able to: locate textual information, identify if that information is relevant comparatively to other information contained in the document, extract that information in a computer system understandable format. For the realization of such a system, major difficulties arise from the variability of the documents characteristics, such as: the type (invoice, form, quotation, report, etc.), the layout (font, style, disposition), the language, the typography and the quality of scanning.This work is concerned with scanned documents, also known as document images. We are particularly interested in locating textual information in invoice images. Invoices are largely used and well regulated documents, but not unified. They contain mandatory information (invoice number, unique identifier of the issuing company, VAT amount, net amount, etc.) which, depending on the issuer, can take various locations in the document. The present work is in the framework of region-based textual information localization and extraction.First, we present a region-based method guided by quadtree decomposition. The principle of the method is to decompose the images of documents in four equals regions and each regions in four new regions and so on. Then, with a free optical character recognition (OCR) engine, we try to extract precise textual information in each region. 
A region containing a number of expected pieces of textual information is not decomposed further. Our method allows us to determine accurately, in document images, the regions containing the textual information that one wants to locate and retrieve, quickly and efficiently. In another approach, we propose a textual information extraction model consisting of a set of prototype regions along with pathways for browsing through these prototype regions. The life cycle of the model comprises five steps:- Produce synthetic invoice data from real-world invoice images containing the textual information of interest, along with their spatial positions.- Partition the produced data.- Derive the prototype regions from the obtained partition clusters.- Derive pathways for browsing through the prototype regions, from the concept lattice of a suitably defined formal context.- Update incrementally the set of prototype regions and the set of pathways when additional data has to be added.
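The recursive quadtree localization described above can be sketched in a few lines. This is a hedged sketch: `contains_target` below is a hypothetical stand-in for running the OCR engine on a region and checking for the expected information; it is not the authors' actual pipeline.

```python
# Sketch of OCR-guided quadtree localization (hypothetical helper names).
# `contains_target(region)` stands in for "run the OCR engine on this region
# and check whether the expected textual information appears".

def quadtree_search(region, contains_target, min_size=32):
    """Recursively split `region` = (x, y, w, h) into four quadrants,
    keeping only those in which the target text is still found."""
    x, y, w, h = region
    if not contains_target(region):
        return []                      # no target text here: prune this branch
    if w <= min_size or h <= min_size:
        return [region]                # small enough: report as located
    hw, hh = w // 2, h // 2
    quadrants = [(x, y, hw, hh), (x + hw, y, w - hw, hh),
                 (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]
    found = []
    for q in quadrants:
        found.extend(quadtree_search(q, contains_target, min_size))
    # If the target straddles quadrant borders, no child reports it alone;
    # fall back to the current region in that case.
    return found or [region]

# Toy usage: the "invoice number" sits at pixel (200, 40) of a 256x256 image.
hit = lambda r: r[0] <= 200 < r[0] + r[2] and r[1] <= 40 < r[1] + r[3]
print(quadtree_search((0, 0, 256, 256), hit))   # [(192, 32, 32, 32)]
```

Pruning regions where the OCR check fails is what keeps the search fast: only the branch containing the target is ever fully decomposed.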
APA, Harvard, Vancouver, ISO, and other styles
22

Guerra, João Carlos Carneiro. "Magneto-Optical Response of 2- Dimensional Lattices." Master's dissertation, 2018. https://hdl.handle.net/10216/118781.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Silva, Daniel José da. "Lattice location of transition metals in silicon by means of emission channeling." Doctoral thesis, 2014. https://repositorio-aberto.up.pt/handle/10216/100891.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Guerra, João Carlos Carneiro. "Magneto-Optical Response of 2- Dimensional Lattices." Master's thesis, 2018. https://hdl.handle.net/10216/118781.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Gomes, Mariana Melo Nogueira Rosa. "Polar properties, phase sequence and lattice dynamics of K0.5Na0.5NbO3 ceramics prepared through different sintering methods." Master's dissertation, 2019. https://hdl.handle.net/10216/123594.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Silva, Daniel José da. "Lattice location of transition metals in silicon by means of emission channeling." Doctoral thesis, 2014. https://repositorio-aberto.up.pt/handle/10216/100891.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Gomes, Mariana Melo Nogueira Rosa. "Polar properties, phase sequence and lattice dynamics of K0.5Na0.5NbO3 ceramics prepared through different sintering methods." Master's thesis, 2019. https://hdl.handle.net/10216/123594.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Barbosa, Marcelo Baptista. "Electronic structure, lattice location and stability of dopants in wide band gap semiconductors." Doctoral thesis, 2019. https://hdl.handle.net/10216/119210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Barbosa, Marcelo Baptista. "Electronic structure, lattice location and stability of dopants in wide band gap semiconductors." Doctoral thesis, 2019. https://hdl.handle.net/10216/119210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Tsai, Yi-yuan, and 蔡益元. "Code Lattices and Natural Code Evolution Paths." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/3aqf2u.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Information and Computer Engineering
92
For students of computer science, programming is an indispensable body of knowledge and technical ability. However, during the course of learning programming, learners are often faced with hesitations, setbacks, and frustrations. Having written a piece of code after spending a lot of time and effort, a learner may only find that there are many mistakes (bugs) in the code. Though there may be a strong desire on the part of the learner to try to correct the code, he/she may only "wander around", not knowing where to start and what to do. The result is often that the learner acts like a "headless fly", trying everything everywhere. Perhaps, with good luck, the learner can finally find a way of correctly revising the code, but only after trying many "wrong ways". An instructor, on the other hand, often has to face many different pieces of "wrong" code, each with some kind of strange bug in it. Though the instructor knows what is wrong with any particular piece of code, there is often great difficulty in trying to "guide" the student (the author of the code) in correctly revising the code and learning the "correct" way of programming. This problem is further complicated by the fact that the instructor often cannot figure out why the student wrote his/her particular piece of code this way, not to mention how the instructor may be able to "correct" the student's way of thinking. This is a further hindrance to providing appropriate assistance so that the learner can "think correctly" and write "correct code". Because of this, we constructed a computer-assisted learning system (a CAL system, for short), which we call CSD (for Code Schema Development). CSD may be said to be constructivism-based.
The main idea is to control the problem-solving environment and provide scaffolding for code construction in such a way that the learner can (1) develop the intended code schemas and (2) correctly apply the developed code schemas in solving programming problems. A secondary goal of CSD is to raise the learner's confidence and interest in programming. CSD tracks the learner's actions and answers (including the intermediate answers) and records everything in a database. From the learners' records, we seek to analyze the various "tracks of thinking", with the goal of identifying the various "paths" of code evolution, showing how the learners progress from the initial erroneous code to the final correct code. In related previous research, the main focus was on the identification of error patterns in the learners' code, and researchers were not concerned with how the learners made revisions in order to obtain the "right" code. In this research, we propose a way of describing how a learner "evolves" from one error pattern to another in producing his/her final answer (the correct code). In doing so, we also try to analyze what the learner may have been thinking when he/she wrote the (erroneous) code.
APA, Harvard, Vancouver, ISO, and other styles
31

Chun-Kai Chang and 張駿愷. "Lattice Boltzmann simulation of nanofluid natural/mixed convection heat transfer." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/28913572604744787599.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Mechanical Engineering (Master's and Doctoral Program)
100
In the present study, mathematical modeling is performed to simulate two-dimensional incompressible natural and mixed convection of nanofluids in a vertical square enclosure and channel using the lattice Boltzmann method (LBM). We consider the effects of different nanofluid concentrations (0%, 2%, 4%), the Rayleigh number, and the Richardson number on the averaged Nusselt number, as well as the effect of placing a rectangular hole with different height-to-length ratios in the vertical channel on the Nusselt number, the averaged Nusselt number, and the temperature field. The inlet velocity is chosen appropriately so as to ensure a reasonable adaptation of the flow field and to avoid unphysical compressibility effects. Numerical results show that the averaged Nusselt number of nanofluids can be higher than that of pure water. Increasing the nanoparticle volume fraction enhances the averaged Nusselt number. The averaged Nusselt number also increases with the temperature change caused by an increase of the Rayleigh number and with the characteristic-velocity change caused by a reduction of the Richardson number. The height-to-length ratio of the rectangular hole significantly affects the velocity field and vortex shape, and the averaged Nusselt number decreases as the rectangular hole is enlarged.
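Nanofluid LBM studies of this kind typically convert the particle volume fraction into effective fluid properties before feeding them into the solver. As a hedged illustration (it is an assumption that this thesis uses exactly these correlations), the classical Maxwell and Brinkman models can be coded as:

```python
# Effective nanofluid properties from the particle volume fraction `phi`,
# using the classical Maxwell (conductivity) and Brinkman (viscosity) models.
# Whether this thesis uses exactly these correlations is an assumption.

def maxwell_conductivity(k_f, k_p, phi):
    """Effective thermal conductivity of a dilute suspension (Maxwell model)."""
    return k_f * (k_p + 2 * k_f - 2 * phi * (k_f - k_p)) / \
                 (k_p + 2 * k_f + phi * (k_f - k_p))

def brinkman_viscosity(mu_f, phi):
    """Effective dynamic viscosity of a dilute suspension (Brinkman model)."""
    return mu_f / (1.0 - phi) ** 2.5

# Illustrative base-fluid/particle values (water + Cu nanoparticles) at the
# volume fractions studied in the abstract: 0%, 2%, 4%.
k_f, k_p, mu_f = 0.613, 400.0, 1.002e-3   # W/m/K, W/m/K, Pa*s
for phi in (0.0, 0.02, 0.04):
    print(phi, maxwell_conductivity(k_f, k_p, phi), brinkman_viscosity(mu_f, phi))
```

Raising `phi` increases both effective conductivity and viscosity, which is why the averaged Nusselt number trends reported in the abstract depend on the volume fraction.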
APA, Harvard, Vancouver, ISO, and other styles
32

Rodrigues, Pedro Miguel da Rocha. "Functional lattice instabilities in naturally layered perovskites: from local probe studies to macroscopic cross-coupling effects." Doctoral thesis, 2021. https://hdl.handle.net/10216/139585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Huang, Nai-Zhu, and 黃迺筑. "The Natural Measure of Symbolic Dynamical Systems in the Two-Dimensional Lattice Model." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/hmmzbw.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Applied Mathematics
107
This thesis investigates the natural measure of a symbolic dynamical system in the two-dimensional lattice model. For a two-dimensional shift of finite type with an irreducible ordering matrix H_2, we provide a method for computing the natural measure of the two-dimensional lattice model, and we extend the one-dimensional results to two dimensions. Finally, we apply this method to obtain the exact value of the natural measure of a totally symmetric system in two dimensions.
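The one-dimensional computation that the thesis extends can be illustrated with the measure of maximal entropy (Parry measure) of a shift of finite type, obtained from the Perron eigendata of its transition matrix. A minimal sketch, assuming the matrix is irreducible; the thesis's two-dimensional construction via the ordering matrix H_2 is not reproduced here.

```python
import numpy as np

# Parry (maximal-entropy) measure of a 1D shift of finite type from its
# 0/1 transition matrix A, assumed irreducible.

def perron(A):
    """Perron eigenvalue and (positive) eigenvector of a nonnegative matrix."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    return vals[k].real, np.abs(vecs[:, k].real)

def parry_measure(A):
    """Return (stationary distribution over symbols, transition matrix)."""
    A = np.asarray(A, dtype=float)
    lam, v = perron(A)        # right Perron eigenvector
    _, u = perron(A.T)        # left Perron eigenvector
    mu = u * v / (u @ v)      # measure of the 1-cylinders
    P = A * v[None, :] / (lam * v[:, None])   # Markov transition probabilities
    return mu, P

# Golden-mean shift (no two consecutive 1s): mu(0) = (5 + sqrt(5)) / 10.
mu, P = parry_measure([[1, 1], [1, 0]])
print(mu)
```

Measures of longer cylinder sets follow by multiplying the entries of `P` along the word, starting from `mu`.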
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Yi-Hsin, and 陳逸昕. "Lattice Boltzmann method for simulating the macroscopic and mesoscopic natural convection flows inside a rectangular cavity." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/53906405335715367280.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Engineering Science (Master's and Doctoral Program)
95
Natural convection inside a closed cavity is of interest in many scientific and industrial applications. Many numerical methods have been applied to analyze this problem, including the lattice Boltzmann method (LBM), which has emerged as one of the most powerful computational fluid dynamics (CFD) methods in recent years. Using a simple LB model with the Boussinesq approximation, this study investigates the 2D natural convection problem inside a rectangular cavity at different reference Rayleigh numbers, Knudsen numbers, and aspect ratios of the cavity, with the Prandtl number fixed, in both the macroscopic and the mesoscopic scale regimes. The flow structures, including instability phenomena, in the macroscopic and mesoscopic scales are compared and analyzed. In simulating natural convection problems by LBM, a model for choosing the appropriate value of the velocity scale is significantly important. The current work proposes a model to determine the value of the characteristic velocity (V) based on kinetic theory. A spectrum analysis is performed to identify the unsteady periodic or quasi-periodic oscillatory flow structures. The relationship between the Nusselt number and the reference Rayleigh number is also exhibited. The simulation results show that the onset of flow instability depends on the Rayleigh number, the Knudsen number, and the aspect ratio of the cavity. Meanwhile, the Knudsen number and the aspect ratio play significant roles in shaping the oscillatory flow structures.
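The characteristic velocity mentioned above is commonly taken as the buoyancy scale V = sqrt(g·β·ΔT·H), and the lattice viscosity is then fixed by the target Rayleigh and Prandtl numbers. A hedged sketch of that bookkeeping follows; it is an assumption that the thesis's kinetic-theory model reduces to exactly this scale.

```python
import math

# Buoyancy velocity scale and Rayleigh-number bookkeeping for an LBM
# natural-convection run. It is an assumption that the thesis's
# kinetic-theory model reduces to exactly this characteristic velocity.

def characteristic_velocity(g_beta_dT, H):
    """V = sqrt(g*beta*dT*H): balance of buoyancy and inertia."""
    return math.sqrt(g_beta_dT * H)

def rayleigh(g_beta_dT, H, nu, alpha):
    """Ra = g*beta*dT*H**3 / (nu*alpha)."""
    return g_beta_dT * H ** 3 / (nu * alpha)

# Pick lattice units: cavity height H, target Ra and Pr, and a small
# characteristic velocity V (so the flow stays nearly incompressible),
# then solve for the lattice viscosity and thermal diffusivity.
H, Ra, Pr, V = 128, 1e5, 0.71, 0.1
nu = V * H * math.sqrt(Pr / Ra)   # from Ra = V^2 H^2 / (nu*alpha), alpha = nu/Pr
alpha = nu / Pr
print(nu, alpha, rayleigh(V ** 2 / H, H, nu, alpha))  # recovers Ra
```

Keeping V small keeps the lattice Mach number small, which is exactly the "avoid unphysical compressibility" concern raised in the neighbouring abstract.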
APA, Harvard, Vancouver, ISO, and other styles
35

Hsu, Hao, and 徐豪. "A study on the topological nature of Landau levels in a 2D muffin-tin potential lattice." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/84108410755641157467.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Electrophysics
103
In this work we consider the topological features of a square lattice on a 2DEG under the action of a normal magnetic field. Specifically, the magnetic flux per unit cell is fixed at one half of a flux quantum. Exact numerical diagonalization of the full Hamiltonian is performed to obtain the energy band |N,k> of the N-th Landau level (LL). This calculation is facilitated by TKNN-type (PRL 49, 405 (1982)) basis wavefunctions |n,k>, in which the magnetic translation symmetry is built in and the n-th eigenstate of a simple harmonic oscillator is used. As the lattice potential U is turned on, our focus is upon the gap closing between the bands |N,k>, the k at which the gap closing occurs, the dispersion relation of the two gap-closing LLs, the Berry curvatures, and the change in the Chern number. The change in the Chern number for each gap-closing point in k space is one (two) when the closing bands have linear (quadratic) energy dispersion. Further analysis shows that the closing bands typically have two dominating components |n1,k> and |n2,k> such that |n1-n2| equals 1 or 3 (2) for the linear (quadratic) energy dispersion case. For the quadratic case, the Berry curvature takes on a volcano-type protrusion encircling the high-symmetry k point (which is also the gap-closing point). This results from our finding that at this high-symmetry point the lattice potential couples basis wavefunctions only under the condition |n1-n2|=4. We use the k·p method to obtain the effective Hamiltonian. To our surprise, for the quadratic energy dispersion case, using the two eigenstates of the closing bands as a basis is not enough, no matter how small the gap between them is, since these two basis states are not coupled by the effective Hamiltonian. The smallest dimension of the effective Hamiltonian is three.
We further use Löwdin perturbation theory to include the coupling with a third basis state and obtain an appropriate two-by-two effective Hamiltonian that matches the energy dispersion and the Berry curvature of the full Hamiltonian well.
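Chern-number changes like those tracked in the abstract are usually computed on a discretized Brillouin zone with the Fukui-Hatsugai link-variable method. Below is a hedged, self-contained illustration; the two-band Qi-Wu-Zhang model stands in for the thesis's Landau-level Hamiltonian, which is not reproduced here.

```python
import numpy as np

# Fukui-Hatsugai lattice computation of a band's Chern number, illustrated
# on the two-band Qi-Wu-Zhang model -- a stand-in for the thesis's
# Landau-level Hamiltonian, which is not reproduced here.

def qwz_lower_band(kx, ky, m):
    """Lower-band eigenvector of H(k) = sin(kx) sx + sin(ky) sy + hz sz."""
    hz = m + np.cos(kx) + np.cos(ky)
    H = np.array([[hz, np.sin(kx) - 1j * np.sin(ky)],
                  [np.sin(kx) + 1j * np.sin(ky), -hz]])
    _, vecs = np.linalg.eigh(H)      # eigenvalues in ascending order
    return vecs[:, 0]

def chern_number(m, N=24):
    """Sum gauge-invariant plaquette fluxes over an N x N Brillouin-zone mesh."""
    ks = 2 * np.pi * np.arange(N) / N
    u = np.array([[qwz_lower_band(kx, ky, m) for ky in ks] for kx in ks])
    total = 0.0
    for i in range(N):
        for j in range(N):
            i2, j2 = (i + 1) % N, (j + 1) % N
            loop = (np.vdot(u[i, j], u[i2, j]) * np.vdot(u[i2, j], u[i2, j2])
                    * np.vdot(u[i2, j2], u[i, j2]) * np.vdot(u[i, j2], u[i, j]))
            total += np.angle(loop)   # Berry flux through this plaquette
    return int(round(total / (2 * np.pi)))

print(chern_number(1.0), chern_number(3.0))   # topological vs trivial phase
```

The plaquette link product is gauge invariant, so the arbitrary eigenvector phases returned by `eigh` drop out, which is what makes this discretization robust.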
APA, Harvard, Vancouver, ISO, and other styles
36

Monteiro, Rafael Torrado. "Post-quantum cryptography: lattice-based cryptography and analysis of NTRU public-key cryptosystem." Master's thesis, 2016. http://hdl.handle.net/10451/28303.

Full text
Abstract:
Master's thesis in Mathematics, presented to the University of Lisbon through the Faculty of Sciences, 2016
In 1994, Peter Shor developed a quantum algorithm for integer factorization and for the discrete logarithm problem, making both RSA and ECC (Elliptic Curve Cryptography) easily breakable by quantum computers. Since quantum computers are believed to physically exist within the next few years, almost all encrypted information will no longer be secure, and the impact on people's everyday lives will be devastating, to say the least. Aspects such as communication between people, companies and governments, banking operations, electronic commerce, passwords, among many others, will be compromised. Public-key cryptography is today indispensable for individuals or entities to communicate securely, so the existence of quantum computers constitutes an enormous threat to our modern cryptography. If quantum computers do come to exist, replacing the cryptosystems currently in use will be necessary, since the quantum-resistant cryptosystems in use are very far from being efficient enough to be put into practice. It is therefore imperative to study candidate cryptosystems to replace the current ones, in terms of their efficiency, practicality and security against both classical and quantum computers. In 2016, NIST, the National Institute of Standards and Technology, an institute of the U.S. Department of Commerce, announced its search for such quantum-resistant cryptographic constructions. These must be efficient and good enough to be adopted for a large number of years. This institute predicts the arrival of quantum computers for 2030, which shows that there is little time to study the eventual proposals for replacing the current cryptosystems. Moreover, this replacement will certainly take time, so the search for quantum-resistant cryptographic constructions must begin now.
Developed by Jeffrey Hoffstein, Jill Pipher and Joseph Silverman (the same Silverman famous for his work on elliptic curves) in the 90's, presented at the international cryptology conference CRYPTO '96 and published in 1998, the NTRU cryptosystem is an example of a quantum-resistant cryptosystem with the potential to replace the classical cryptosystems currently in use. Although it is considered a lattice-based cryptosystem, its original description does not use lattices, even though it can be described entirely in terms of them. Even so, the paper in which it was presented contains lattice-based attacks that exploit certain aspects intrinsic to it. The theory behind the NTRU cryptosystem is quite simple, as it is based on polynomial algebra and reduction modulo two coprime integers. The NTRU cryptosystem is therefore very easy to understand, implement and put into practice: both the public and the private key are quickly computable and have length O(N), in clear contrast with the O(N^2)-length keys of other so-called fast cryptosystems, such as the McEliece and GGH cryptosystems, and encrypting and decrypting with NTRU takes O(N^2) operations, which makes it considerably faster than the RSA cryptosystem and elliptic-curve cryptography. Moreover, the NTRU cryptosystem is considered a probabilistic cryptosystem, since it uses a random element to encrypt a message; a message can therefore have multiple possible encryptions. Since the publication of the NTRU cryptosystem, several technical reports have been issued, some consisting of algorithms that allowed NTRU to gain efficiency, others describing improvements of existing attacks as well as new attacks and possible solutions to counter them.
The security analysis carried out to date, including by renowned cryptographers such as Don Coppersmith, Johan Hastad, Andrew Odlyzko and Adi Shamir, has given the NTRU cryptosystem a status of legitimacy and contributed to further research, as NTRU has proved interesting and promising. However, the analysis carried out so far has not required profound changes to the structure of the NTRU cryptosystem, although larger parameters have been recommended over time to reach the desired security levels. Despite not being accompanied by a security proof, the NTRU cryptosystem has proved to be secure, since no attack, classical or quantum, with a significant impact on the cryptosystem has been found, which shows that it is a good alternative to the classical cryptosystems in use. Our study of the NTRU cryptosystem began by exploring the web page of the company Security Innovation, where one can find a brief introduction to post-quantum cryptography and a video explaining why it is studied. This page contains a large number of resources, including tutorials, surveys, technical reports, abstracts and papers, as well as an extensive list of papers scrutinizing the NTRU cryptosystem. In order to deepen the study of the NTRU cryptosystem, we explored and investigated a large part of the documentation on this page. There is, as one can see, a tremendous amount of documentation to analyze, and due to time limitations we chose to study the NTRUEncrypt cryptosystem, starting from its roots. In 1997, Don Coppersmith and Adi Shamir carried out the first security analysis of the NTRU cryptosystem, in which they found that the most effective method of attack against it consisted of lattice reduction algorithms aimed at finding very short vectors in a particular class of lattices. To date, this remains the most effective method of attack on the NTRU cryptosystem.
The security of the NTRU cryptosystem is based on empirical evidence showing that finding very short vectors in a lattice is a hard problem, especially when the dimension of the lattice is very large. The study of lattices, called the Geometry of Numbers by Minkowski, predates its cryptographic applications and is an area that has contributed to the development of many others, such as physics, analysis, algebra and geometry. It is a much-studied topic, mainly because of the applications of lattice reduction algorithms. Since the security of NTRU depends mainly on lattice-based attacks, we devoted an entire chapter to lattices and lattice reduction algorithms. An important advance in this area was the LLL algorithm (sometimes called the L3 algorithm) of Lenstra, Lenstra and Lovász in 1982, developed before lattices were considered relevant in cryptography. This algorithm makes it possible to find a moderately short vector in a lattice in polynomial time. Over time, this algorithm was improved, giving rise to the BKZ (Blockwise-Korkine-Zolotarev) algorithm, which is to date, in practice, the best lattice reduction algorithm for finding very short vectors in lattices. This algorithm is due to Schnorr (1987); however, it does not run in polynomial time. Therefore, finding very short vectors in a lattice of large dimension remains an open problem. In 2011, Yuanmi Chen and Phong Q. Nguyen created an algorithm that simulates the BKZ algorithm, called the BKZ simulation algorithm, which can approximately predict the output and the running time of the BKZ algorithm. This work was divided into three parts. The first is a short introduction to quantum computing, mainly to the basics of quantum mechanics and the Fourier Transform, since some readers may not be familiar with these topics.
Moreover, some notions of quantum computing are mentioned in this work, so it makes sense to include a brief reference to them. The second part deals with lattices in general, presents the LLL and BKZ lattice reduction algorithms (together with the BKZ simulation algorithm) and also some examples of public-key cryptosystems based on lattices. The third part presents an analysis of the NTRU cryptosystem, focusing mainly on lattice-based attacks, which are by far the ones that best apply to the lattices associated with the NTRU cryptosystem. As a perspective for future work, an idea of a quantum algorithm for lattice reduction is presented.
In 1994, Peter Shor developed a quantum algorithm for integer factorization and the discrete logarithm problem, known as Shor's algorithm. This was a great finding in the quantum field, given that it makes essentially all currently used cryptographic constructions (such as RSA and ECC) easily breakable by a quantum computer. Since quantum computers are believed to physically exist in the next few years, these encryption schemes will no longer be reliable, leaving the encrypted data compromised. It is thus of crucial importance to build and analyze new quantum-resistant cryptographic schemes. These new cryptographic constructions should be efficient and secure enough to be used in practice and standardized for a large number of years, replacing the current ones if needed. In the last year, NIST (National Institute of Standards and Technology, from the U.S. Department of Commerce) has announced its quest to find such quantum-resistant cryptographic constructions. The NTRU encryption scheme, developed by Jeffrey Hoffstein, Jill Pipher and Joseph Silverman (the same Silverman famous for his work on elliptic curves) in the 90's, presented in 1996 and published in 1998, is an example of a quantum-resistant proposal. Although it is not supported by a theoretical security proof, the scrutiny done since its presentation reveals that NTRU is secure and a good candidate to replace the constructions currently in use. It is a fact that there already exist some classical cryptosystems that are quantum-resistant. The McEliece cryptosystem (1978) is an example of such a quantum-resistant construction, but it is not well suited for use in practice.
The theory behind the NTRU cryptosystem is very simple, therefore easy to understand, and NTRU can be very easily implemented and put into practice: both private and public keys are easily and quickly computable, having O(N) length, in clear contrast with the O(N^2) key length of other cryptosystems considered fast, and encrypting and decrypting with NTRU takes O(N^2) operations, which makes it considerably faster than RSA and ECC. To date, NTRU remains secure against both classical and quantum computers. The most effective attacks on NTRU are based on lattices, namely using lattice reduction algorithms, whose goal is to find very short vectors in a lattice. One can apply the LLL algorithm (1982) of Arjen Lenstra, Hendrik Lenstra and László Lovász, which is recognized as a very important achievement in many areas of mathematics. It runs in polynomial time, but has an exponential approximation factor. This algorithm was improved over time. The most important improvement is due to Schnorr (1987), who introduced the Blockwise-Korkine-Zolotarev algorithm, also known as the BKZ algorithm, which is to date the best lattice reduction algorithm to be put in practice. In 2011, Yuanmi Chen and Phong Q. Nguyen improved the BKZ algorithm, revising lattice security estimates, and created the BKZ simulation algorithm, making it possible to predict both the output and the running time of the BKZ algorithm without running BKZ, since in high dimension, with blocksize ≥ 45, the running time of BKZ is considerably long. There is a lot of investigation in this area, mainly because of its cryptographic applications.
This work can be divided into three parts: first, we present a short introduction to quantum computing, namely the basics of quantum mechanics and the Fourier Transform; second, we provide sufficient groundwork on lattices and present the LLL and BKZ lattice reduction algorithms, as well as the BKZ simulation algorithm, and give some examples of public-key encryption schemes; third, we give an analysis of the NTRU cryptosystem, focusing mainly on lattice-based attacks. Finally, as future work, we present an idea of a quantum algorithm for lattice reduction.
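The lattice-reduction idea behind LLL and BKZ is easiest to see in rank 2, where Lagrange-Gauss reduction already finds a shortest vector exactly. A minimal sketch of that two-dimensional special case follows; the general-dimension LLL/BKZ algorithms discussed above are substantially more involved.

```python
from fractions import Fraction

# Lagrange-Gauss reduction of a rank-2 lattice basis: the two-dimensional
# special case that the LLL and BKZ algorithms generalise.

def gauss_reduce(u, v):
    """Return a reduced basis (shortest vector first) of the lattice Zu + Zv.
    `u`, `v` are linearly independent integer 2-vectors."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u
    while True:
        # Size reduction: subtract the integer multiple of u closest to v
        # (exact rational arithmetic via Fraction avoids rounding drift).
        m = round(Fraction(dot(u, v), dot(u, u)))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v               # u is now a shortest lattice vector
        u, v = v, u                   # v became shorter: swap and repeat

# A skewed basis of the integer lattice Z^2 reduces to the standard one.
print(gauss_reduce((1, 0), (100, 1)))   # ((1, 0), (0, 1))
```

In dimension 2 this loop provably terminates with a shortest vector; LLL relaxes the swap condition so that the same size-reduce/swap pattern runs in polynomial time in any dimension, at the cost of only approximating the shortest vector.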
APA, Harvard, Vancouver, ISO, and other styles
37

Abreu, Maria Zita Fiqueli de. "Esquemas de assinatura digital Lattice-based e experimentação de certificados híbridos com criptografia pós-quântica." Master's thesis, 2020. http://hdl.handle.net/1822/73679.

Full text
Abstract:
Master's dissertation in Mathematics and Computation
Post-quantum cryptographic algorithms take as their security premise the difficulty of solving mathematical problems conjectured to be hard for quantum computation. Interest in deploying these algorithms has been growing so that information will be protected against quantum attacks in the future. The National Institute of Standards and Technology (NIST) currently has an open competition for the selection of post-quantum cryptographic algorithms [1]. This dissertation studies two of these algorithms, more precisely the digital signature schemes qTESLA and Crystals-Dilithium, using the SageMath software as the main tool for their non-optimized implementation. Although the NIST competition is an important step, it is important that there be a transition from the current protocols to a new model, integrating hybrid solutions. In this sense, and with a view to a better transition from the classical algorithms, the adaptation of certificates to post-quantum cryptography is analyzed and hybrid certificates are experimented with, using the signature schemes already mentioned. This work was developed in partnership with the University of Minho and Multicert.
The post-quantum cryptographic algorithms have as security premise the difficulty in solving mathematical problems that are conjectured difficult in quantum computing. The interest in implementing these algorithms has been growing so that the information is protected against quantum attacks in the future. The National Institute of Standards and Technology (NIST) has currently opened a call for selection of post-quantum cryptographic algorithms [1]. In this dissertation, two of these algorithms are studied, more precisely, the digital signature schemes qTESLA and Crystals-Dilithium, having as its main tool, regarding their non-optimized implementation, the SageMath software. Although the NIST contest is an important step, it is important that there is a transition from the current protocols to a new model, integrating hybrid solutions. In this sense, and with a view to a better transition from classical algorithms, the adaptation of certificates to post-quantum cryptography is analyzed and hybrid certificates are experimented with the signature schemes already mentioned. This work was developed in partnership with the University of Minho and Multicert.
APA, Harvard, Vancouver, ISO, and other styles
