Dissertations / Theses on the topic 'Large'

To see the other types of publications on this topic, follow the link: Large.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Large.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Horsley, David James. "Large-diameter large-ratio hot tap tees." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0009/MQ31321.pdf.

2

Laflamme, Simon. "Control of large-scale structures with large uncertainties." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66852.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 279-300).
Performance-based design is a design approach that satisfies motion constraints as its primary goal and then verifies strength. The approach is traditionally executed by appropriately sizing stiffnesses, but recently, passive energy dissipation systems have gained popularity. Semi-active and active energy dissipation systems have been shown to outperform purely passive systems, but they are not yet widely accepted in the construction and structural engineering fields. Several factors are impeding the application of semi-active and active damping systems, such as the large modeling uncertainties inherent to large-scale structures, limited state measurements, a lack of mechanically reliable control devices, large power requirements, and the need for robust controllers. In order to enhance the acceptability of feedback control systems for civil structures, an integrated control strategy designed for large-scale structures with large parametric uncertainties is proposed. The control strategy comprises a novel controller, as well as a new semi-active mechanical damping device. Specifically, the controller is an adaptive black-box representation that creates and optimizes control laws sequentially during an excitation, with no prior training. Its novel feature is its online organization of the input space. The representation requires only limited observations for constructing an efficient representation, which allows control of unknown systems with limited state measurements. The semi-active mechanical device consists of a friction device inspired by vehicle drum brakes, with a viscous and a stiffness element installed in parallel. Its unique characteristic is a theoretical damping force on the order of 100 kN, achieved using a friction mechanism powered by a single 12-volt battery. It is conceived using mechanically reliable technologies, which addresses the large power requirements and the need for mechanical robustness.
The integrated control system is simulated on an existing structure located in Boston, MA, as a replacement for the existing viscous damping system. Simulation results show that the integrated control system can mitigate wind vibrations as well as the current damping strategy does, while using only one third of the devices. In addition, the system created effective control rules for several types of earthquake excitations with no prior training, performing similarly to an optimal controller with full parametric and state knowledge.
by Simon Laflamme.
Ph.D.
3

Pechenik, Oliver. "Large Cardinals." Oberlin College Honors Theses / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin1279129907.

4

Broomfield, Susannah Elizabeth. "Large deflection, nonlinear loads analysis, with application to large winglets." Thesis, University of Bristol, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492476.

Abstract:
The inclusion of static aeroelastic effects is essential to the accurate calculation of the aerodynamic properties of a wing, the resulting wing loads, and ultimately the mass of the wing. Within an industrial aircraft design cycle, the computational time required for structurally coupled nonlinear flow solvers is impractical for the many different solutions required, even with current developments in computing power. The process currently used by most civilian aircraft manufacturers therefore makes use of time-efficient linear panel methods for calculating the aerodynamics, and modal data for calculating structural movements.
5

Goldschmidt, Christina Anna. "Large random hypergraphs." Thesis, University of Cambridge, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.615618.

6

Cuervo, Maria Cristina. "Datives at large." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/7991.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Linguistics and Philosophy, 2003.
Includes bibliographical references (p. 206-211).
This dissertation is a study of the syntactic and semantic properties of dative arguments. The main source of data is Spanish, where dative arguments can appear with all types of verbs, and can have a wide range of meanings: goal, possessor, source, experiencer, affected object, causee, location, benefactive, malefactive, ethical dative. The challenge for a theory of dative arguments, which form a natural class morphologically, is to explain both what they have in common and how they differ syntactically and semantically. I argue that dative arguments have structural meanings, i.e., the meaning of a dative DP can be derived directly from the position in which it is licensed. To be able to predict the possible meanings of dative arguments, it is crucial to take into account the details of the syntactic configuration, which include the properties of the head that licenses the dative DP and of the functional heads that construct the event structure. Dative arguments are not direct arguments of the verb; they are, like subjects, licensed syntactically and semantically by a specialized head. This argument introducing head, the Applicative, licenses the dative DP as its specifier and relates this DP to the structure it takes as a complement. The range of possible meanings of a dative DP is predicted from the range of possible complements an applicative head can take (i.e. a DP or a vP), and from the range of heads that the applicative phrase can be a complement of. Applicative heads are also sensitive to the type of event expressed by the vP (e.g., dynamic or stative, activity or causative). The theory provides a set of positions into which an applicative head can merge and license an argument DP, as well as the set of interpretations the argument can get in each position.
The set of positions is universal, but languages can differ with respect to the positions into which an applicative head is allowed to merge. These predictions generalize to applied arguments in languages in which they are not marked by dative case (e.g., English and Bantu languages).
by María Cristina Cuervo.
Ph.D.
7

Tew, David Peter. "Large amplitude vibration." Thesis, University of Cambridge, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.619693.

8

Wu, Junyang. "Large current rectifiers." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.611994.

9

Башлак, Ірина Анатоліївна, Ирина Анатольевна Башлак, Iryna Anatoliivna Bashlak, and T. Nikolaenko. "Large hadron collider." Thesis, Вид-во СумДУ, 2009. http://essuir.sumdu.edu.ua/handle/123456789/16770.

10

Дядечко, Алла Миколаївна, Алла Николаевна Дядечко, Alla Mykolaivna Diadechko, and D. A. Dedik. "Large hadron collider." Thesis, Видавництво СумДУ, 2010. http://essuir.sumdu.edu.ua/handle/123456789/18312.

11

Sonnenwald, Diane H., Paul Soloman, Noriko Hara, Reto Bolliger, and Tom Cox. "Collaboration in the large: Using video conferencing to facilitate large group interaction." Idea Publishing Co, 2002. http://hdl.handle.net/10150/106015.

Abstract:
This chapter discusses the social, organizational and technical challenges and solutions that emerged when facilitating collaboration through videoconferencing for a large, geographically dispersed research and development (R&D) organization. Collaboration is an integral component of many R&D organizations. Awareness of activities and potential contributions of others is fundamental to initiating and maintaining collaboration, yet this awareness is often difficult to sustain, especially when the organization is geographically dispersed. To address these challenges, we applied an action research approach, working with members of a large, geographically distributed R&D center to implement videoconferencing to facilitate collaboration and large group interaction within the center. We found that social, organizational and technical infrastructures needed to be adapted to compensate for limitations in videoconferencing technology. New social and organizational infrastructure included: explicit facilitation of videoconference meetings; the adaptation of visual aids; and new participant etiquette practices. New technical infrastructure included: upgrades to video conference equipment; the use of separate networks for broadcasting camera views, presentation slides and audio; and implementation of new technical operations practices to support dynamic interaction among participants at each location. Lessons learned from this case study may help others plan and implement videoconferencing to support interaction and collaboration among large groups.
12

O'Mahony, Kevin. "Large scale plasmid production /." [S.l.] : [s.n.], 2005. http://library.epfl.ch/theses/?nr=3320.

13

Matt, Urs von. "Large constrained quadratic problems /." Zürich, 1993. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=9979.

14

Batlle, Subirós Elisabet. "Large-Scale Surface registration." Doctoral thesis, Universitat de Girona, 2008. http://hdl.handle.net/10803/7606.

Abstract:
The first part of this work presents an accurate analysis of the most relevant 3D registration techniques, including initial pose estimation, pairwise registration and multiview registration strategies. A new classification has been proposed, based on both the applications and the approach of the methods that have been discussed.
The main contribution of this thesis is the proposal of a new 3D multiview registration strategy. The proposed approach detects revisited regions, obtaining cycles of views that are used to reduce the inaccuracies that may exist in the final model due to error propagation. The method takes advantage of both global and local information of the registration process, using graph theory techniques in order to correlate multiple views and minimize the propagated error by registering the views in an optimal way. The proposed method has been tested using both synthetic and real data, in order to show and study its behavior and demonstrate its reliability.
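The cycle-based error reduction described in the abstract can be caricatured in a few lines. The sketch below is not the thesis's algorithm: the names are invented, and 1-D translations stand in for rigid 3D transforms; it only shows how a cycle's closure error can be distributed over its edges.

```python
def distribute_cycle_error(edge_offsets):
    """edge_offsets: translation t[i] taking view i to view i+1 around a closed
    cycle of n views (the last edge returns to view 0). For a perfect
    registration the offsets sum to zero; the residual is spread evenly."""
    closure_error = sum(edge_offsets)
    correction = closure_error / len(edge_offsets)
    return [t - correction for t in edge_offsets]

# A noisy loop of four views whose true offsets sum to zero:
offsets = [1.02, 0.98, -0.97, -1.06]
corrected = distribute_cycle_error(offsets)   # closure error is now ~0
```

In a real pipeline the same idea applies to composed rigid transforms rather than scalars, with the correction interpolated along the cycle.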
15

Das, Sarma Atish. "Algorithms for large graphs." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34709.

16

Borsa, Christopher. "Large Area Graphene Synthesis." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-93758.

Abstract:
Herein we attempt to synthesize graphene by annealing epitaxial SiC thin films as a carbon source. Three principal methods of synthesis are attempted: 1) straight annealing of SiC, 2) annealing of Cu/SiC/Si, 3) annealing of Ni/SiC/Si. In the vacuum annealing of SiC/Si structures, an attempt was made to sublimate silicon from the surface of the SiC, where the remaining carbon reforms into graphene. Subsequent characterization by Raman spectroscopy was inconclusive in identifying any graphitic formation. The second experiment utilizes evaporated nickel thin films: upon annealing, carbon from the silicon carbide migrates into the overlying nickel film and, with subsequent cooling, the dissolved carbon segregates onto the nickel surface and forms a graphitic layer. Annealed Ni/SiC/Si structures were characterized with Raman spectroscopy and showed the presence of nanocrystalline graphite as described by Ferrari et al. [1]. In the final experiment, using copper films, carbon from the SiC forms graphitic layers at the copper/SiC interface upon annealing. Raman characterization of these structures shows evidence of graphitic formation only in the thinnest copper films used in the experimental matrix.
17

Banon, Navarro Alejandro. "Gyrokinetic large Eddy simulations." Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209592.

Abstract:
Anomalous transport due to plasma micro-turbulence is known to play an important role in confinement properties of magnetically confined fusion plasma devices such as ITER. Indeed, plasma turbulence is strongly connected to the energy confinement time, a key issue in thermonuclear fusion research. Plasma turbulence is described by the gyrokinetic equations, a set of nonlinear partial differential equations. Due to the various scales characterizing the turbulent fluctuations in realistic experimental conditions, Direct Numerical Simulations (DNS) of gyrokinetic turbulence remain close to the computational limit of current supercomputers, so that any alternative is welcome to decrease the numerical effort. In particular, Large-Eddy Simulations (LES) are a good candidate for such a decrease. LES techniques have been devised for simulating turbulent fluids at high Reynolds number. In these simulations, the large scales are computed explicitly while the influence of the smallest scales is modeled.

In this thesis, we present for the first time the development of the LES for gyrokinetics (GyroLES). The modeling of the smallest scales is based on free energy diagnostics. Indeed, free energy plays an important role in gyrokinetic theory, since it is known to be a nonlinear invariant. It is shown that its dynamics share many properties with the energy transfer in fluid turbulence. In particular, one finds a (strongly) local, forward (from large to small scales) cascade of free energy in the plane perpendicular to the background magnetic field.

The GyroLES technique is implemented in the gyrokinetic code GENE and successfully tested for the ion temperature gradient (ITG) instability, since ITG modes are suspected to play a crucial role in gyrokinetic micro-turbulence. Employing GyroLES, the heat flux spectra obtained from highly resolved direct numerical simulations are recovered at one twentieth of the computational cost. For this reason, gyrokinetic large-eddy simulations can be considered a serious candidate to reduce the numerical cost of gyrokinetic simulations.
Doctorat en Sciences

18

Woynicz, Richard A. "Large data network survivability." Master's thesis, This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-01202010-020136/.

19

Akkerman, Hylke Broer. "Large-area molecular junctions." [S.l. : [Groningen : s.n.] ; University Library Groningen] [Host], 2008. http://irs.ub.rug.nl/ppn/.

20

Hoover, Douglas Allan. "Supersymmetric large extra dimensions." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=32520.

Abstract:
In this thesis we examine the viability of a recent proposal, known as Supersymmetric Large Extra Dimensions (SLED), for solving both the cosmological constant and the hierarchy problems. Central to this proposal is the requirement of two large extra dimensions of size r_c ~ 10 micrometres together with a low value for the higher-dimensional scale of gravity, M_* ~ 10 TeV. In order not to run into immediate conflict with experiment, it is presumed that all fields of the Standard Model are confined to a four-dimensional domain wall (brane). A realization of the SLED idea is achieved by relying on the 6D supergravity of Nishino and Sezgin (NS), which is known to have 4D-flat compactifications. When work on this thesis first began, there were many open questions which are now answered either partially or completely. In particular, we expand on the known solutions of NS supergravity, which now include: warped compactifications having either 4D de Sitter or 4D anti-de Sitter symmetry, static solutions with broken 4D Lorentz invariance, and time-dependent "scaling" solutions. We elucidate the connection between brane properties and the asymptotic form of bulk fields as they approach the brane. Marginal stability of the 4D-flat solutions is demonstrated for a broad range of boundary conditions. Given that the warped solutions of NS supergravity which we consider are singular at the brane locations, we present an explicit regularization procedure for dealing with these singularities. Finally, we derive general formulae for the one-loop quantum corrections for both massless and massive fields in arbitrary dimensions, with an eye towards applying these results to NS supergravity.
21

Webster, Ali Matthew. "Quantifying large-scale structure." Thesis, University of Cambridge, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.624308.

22

Xu, Changliang. "Large deviation for martingale." Thesis, University of Oxford, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.547460.

23

Taylor, Amelia May. "Substructures in large graphs." Thesis, University of Birmingham, 2017. http://etheses.bham.ac.uk//id/eprint/7241/.

Abstract:
The first problem we address concerns Hamilton cycles. Suppose G is a large digraph in which every vertex has in- and outdegree at least |G|/2. We show that G contains every orientation of a Hamilton cycle except, possibly, the antidirected one. The antidirected case was settled by DeBiasio and Molla. Our result is best possible and improves on an approximate result by Häggkvist and Thomason. We then investigate the random greedy F-free process which was initially studied by Erdős, Suen and Winkler and by Spencer. This process greedily adds edges without creating a copy of F, terminating in a maximal F-free graph. We provide an upper bound on the number of hyperedges at the end of this process for a large class of hypergraphs. The remainder of this thesis focuses on F-decompositions, i.e., whether the edge set of a graph can be partitioned into copies of F. We obtain the best known bounds on the minimum degree which ensures a K\(_r\)-decomposition of an r-partite graph, with applications to Latin squares. Lastly, we find exact bounds on the minimum degree for a large graph to have a C\(_2\)\(_k\)-decomposition where k≠3. In both cases, we assume necessary divisibility conditions are satisfied.
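The minimum semi-degree condition quoted in this abstract is easy to check directly. The helper below is a hypothetical illustration (names are invented, and it is not code from the thesis): it verifies that every vertex of a digraph has in- and out-degree at least |G|/2, the hypothesis under which the abstract's Hamilton-cycle result applies.

```python
def satisfies_semidegree_condition(n, edges):
    """n: number of vertices 0..n-1; edges: iterable of directed pairs (u, v).
    Returns True iff every vertex has in- and out-degree at least n/2."""
    outdeg = [0] * n
    indeg = [0] * n
    for u, v in edges:
        outdeg[u] += 1
        indeg[v] += 1
    return all(outdeg[v] >= n / 2 and indeg[v] >= n / 2 for v in range(n))

# The complete digraph on 4 vertices trivially satisfies the condition:
K4 = {(u, v) for u in range(4) for v in range(4) if u != v}
print(satisfies_semidegree_condition(4, K4))   # → True
```

A directed 3-cycle, by contrast, fails the check, since each vertex has in- and out-degree 1 < 3/2.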
24

Schmid, Patrick R. (Patrick Raphael). "Large scale disease prediction." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43068.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (leaves 69-73).
The objective of this thesis is to present the foundation of an automated large-scale disease prediction system. Unlike previous work that has typically focused on a small self-contained dataset, we explore the possibility of combining a large amount of heterogeneous data to perform gene selection and phenotype classification. First, a subset of publicly available microarray datasets was downloaded from the NCBI Gene Expression Omnibus (GEO) [18, 5]. This data was then automatically tagged with Unified Medical Language System (UMLS) concepts [7]. Using the UMLS tags, datasets related to several phenotypes were obtained and gene selection was performed on the expression values of this tagged microarray data. Using the tagged datasets and the list of genes selected in the previous step, classifiers that can predict whether or not a new sample is also associated with a given UMLS concept based solely on the expression data were created. The results from this work show that it is possible to combine a large heterogeneous set of microarray datasets for both gene selection and phenotype classification, and thus lays the foundation for the possibility of automatic classification of disease types based on gene expression data in a clinical setting.
by Patrick R. Schmid.
S.M.
25

Pontzen, Andrew Peter. "Cosmology : small and large." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608426.

26

Furman, Yoel Avraham. "Forecasting with large datasets." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:69f2833b-cc53-457a-8426-37c06df85bc2.

Abstract:
This thesis analyzes estimation methods and testing procedures for handling large data series. The first chapter introduces the use of the adaptive elastic net, and the penalized regression methods nested within it, for estimating sparse vector autoregressions. That chapter shows that under suitable conditions on the data generating process this estimation method satisfies an oracle property. Furthermore, it is shown that the bootstrap can be used to accurately conduct inference on the estimated parameters. These properties are used to show that structural VAR analysis can also be validly conducted, allowing for accurate measures of policy response. The strength of these estimation methods is demonstrated in a numerical study and on U.S. macroeconomic data. The second chapter continues in a similar vein, using the elastic net to estimate sparse vector autoregressions of realized variances to construct volatility forecasts. It is shown that the use of volatility spillovers estimated by the elastic net delivers substantial improvements in forecast ability, and can be used to indicate systemic risk among a group of assets. The model is estimated on realized variances of equities of U.S. financial institutions, where it is shown that the estimated parameters translate into two novel indicators of systemic risk. The third chapter discusses the use of the bootstrap as an alternative to asymptotic Wald-type tests. It is shown that the bootstrap is particularly useful in situations with many restrictions, such as tests of equal conditional predictive ability that make use of many orthogonal variables, or `test functions'. The testing procedure is analyzed in a Monte Carlo study and is used to test the relevance of real variables in forecasting U.S. inflation.
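As a toy sketch of the sparse-VAR idea in this abstract (the lag-1 specification, penalty values, simulated data, and coordinate-descent solver are all illustrative assumptions, not the thesis's estimator), one equation of a VAR can be fit by elastic-net coordinate descent:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator used in lasso/elastic-net updates."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def elastic_net(X, y, lam=0.05, alpha=0.9, n_iter=200):
    """Cyclic coordinate descent for
    (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2)."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            partial = y - X @ b + X[:, j] * b[j]   # residual excluding feature j
            rho = X[:, j] @ partial / n
            b[j] = soft_threshold(rho, lam * alpha) / (col_sq[j] + lam * (1 - alpha))
    return b

# Simulate a lag-1 VAR whose first equation depends only on series 0 and 3.
rng = np.random.default_rng(0)
T, k = 200, 5
A_true = np.zeros((k, k))
A_true[0, 0], A_true[0, 3] = 0.5, 0.4
Y = rng.standard_normal((T, k))
for t in range(1, T):
    Y[t] += A_true @ Y[t - 1]
X, y = Y[:-1], Y[1:, 0]            # regress series 0 on all five lagged series
b_hat = elastic_net(X, y)
print(b_hat.round(2))              # weight should concentrate on series 0 and 3
```

A full sparse VAR repeats this per-equation fit for each series; the zero pattern of the stacked coefficient matrix is what yields the spillover and systemic-risk readings described above.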
27

Río, Pareja José Manuel. "Ornamentum : for large orchestra /." May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

28

Vorapanya, Anek. "Large-scale distributed services." [Florida] : State University System of Florida, 2000. http://etd.fcla.edu/etd/uf/2000/ana6855/dissertation.pdf.

Abstract:
Thesis (Ph. D.)--University of Florida, 2000.
Title from first page of PDF file. Document formatted into pages; contains xi, 112 p.; also contains graphics. Vita. Includes bibliographical references (p. 108-111).
29

Hellsten, Alex. "Diamonds on large cardinals." Helsinki : University of Helsinki, 2003. http://ethesis.helsinki.fi/julkaisut/mat/matem/vk/hellsten/.

30

Kermarrec, Anne-Marie. "Diffusion fiable large-échelle." [S.l.] : [s.n.], 2002. http://www.irisa.fr/centredoc/publis/HDR/2002/irisapublication.2005-08-03.2412138638.

31

Mazumdar, Suvodeep. "Visualising large semantic datasets." Thesis, University of Sheffield, 2013. http://etheses.whiterose.ac.uk/5932/.

Abstract:
This thesis aims at addressing a major issue in Semantic Web and organisational Knowledge Management: consuming large scale semantic data in a generic, scalable and pleasing manner. It proposes two solutions by de-constructing the issue into two sub problems: how can large semantic result sets be presented to users; and how can large semantic datasets be explored and queried. The first proposed solution is a dashboard-based multi-visualisation approach to present simultaneous views over different facets of the data. Challenges imposed by existing technology infrastructure resulted in the development of a set of design guidelines. These guidelines and lessons learnt from the development of the approach is the first contribution of this thesis. The next stage of research initiated with the formulation of design principles from aesthetic design, Visual Analytics and Semantic Web principles derived from the literature. These principles provide guidelines to developers for building generic visualisation solutions for large scale semantic data and constitute the next contribution of the thesis. The second proposed solution is an interactive node-link visualisation approach that presents semantic concepts and their relations enriched with statistics of the underlying data. This solution was developed with an explicit attention to the proposed design principles. The two solutions exploit basic rules and templates to translate low level user interactions into high level intents, and subsequently into formal queries in a generic manner. These translation rules and templates that enable generic exploration of large scale semantic data constitute the third contribution of the thesis. An iterative User-Centered Design methodology, with the active participation of nearly a hundred users including knowledge workers, managers, engineers, researchers and students over the duration of the research was employed to develop both solutions. 
The fourth contribution of this thesis is an argument for the continued active participation and involvement of all user communities to ensure the development of a highly effective, intuitive and appreciated solution.
APA, Harvard, Vancouver, ISO, and other styles
32

Hill, Simon John. "Large amplitude fish swimming." Thesis, University of Leeds, 1998. http://etheses.whiterose.ac.uk/12760/.

Full text
Abstract:
A fish swims by stimulating its muscles and causing its body to "wiggle", which in turn generates the thrust required for propulsion. The relationship between the forces generated by the fish muscles and the observed pattern of movement is governed by the mechanics of the internal structure of the fish, and the fluid mechanics of the surrounding water. The mathematical modelling of how fish swim involves coupling the external "biofluiddynamics" to the body's internal solid mechanics. The best-known theory for the hydrodynamics of fish swimming is Lighthill's elongated body theory (Lighthill, 1975). In Lighthill's theory the curvature of the fish is assumed small and the effect on the fish of the vortex wake is neglected. Cheng et al. (1991) did not make these simplifications in developing their vortex lattice panel method, but the fish was assumed to be infinitely thin and its undulations of small amplitude. Lighthill's "recoil correction" is the addition of a solid-body motion to ensure that an imposed "swimming description" satisfies the conservation of momentum and angular momentum. A real fish is expected to minimize such sideways translation and rotation to avoid wasteful vortex shedding. Cheng and Blickhan (1994) found that the panel method model required a smaller recoil than did Lighthill's model. Our approach is to extend Cheng's model to large amplitude. Thus we include the effect of the wake on the fish, and the self-induced deformation of the wake itself. In studying the internal mechanics of the body we model the fish as an active bending beam. Using the equations of motion of cross-sectional slices of the body we can form a set of coupled differential equations for the bending moment distribution. At large amplitude the bending moment equations involve the tangential forces acting on the body (which may be neglected in the small amplitude version).
Consequently we include the boundary layer along the fish in order to estimate the viscous drag directly. The panel method has been used successfully for the fluid mechanical calculations associated with large-amplitude fish swimming. We are able to use its results as input to calculate the bending moment distribution. The boundary layer calculations are based on a crude model; solutions to the large amplitude bending moment equations should also be considered in this light.
APA, Harvard, Vancouver, ISO, and other styles
33

WAN, DER-SHEN. "OPTICS FOR LARGE TELESCOPE." Diss., The University of Arizona, 1987. http://hdl.handle.net/10150/184226.

Full text
Abstract:
There are two topics in this dissertation: the first is to develop new phase reduction algorithms for test interferograms, especially of large optics, and the second is to find a more accurate analytical expression for the surface deflection due to gravity when a mirror is supported in the axial direction. Two new algorithms for generating phase maps from interferograms are developed. Both methods are sensitive to small-scale as well as large-scale surface errors. The first method is designed to generate phase from an interferogram that is sampled and digitized only along fringe centers, as in the case of manual digitization. A new interpolation algorithm uses the digitized data more efficiently than the fitting of Zernike polynomials, so the new method can detect small-scale surface errors better than Zernike polynomial fitting. The second algorithm developed here is an automatic phase reduction process which works on test interferograms recorded by a CCD camera and transferred digitally to a personal computer through a frame grabber. The interferogram results from interference of the test wavefront with a tilted reference wavefront. Phase is generated by assuming it to be proportional to the intensity of the interferogram, apart from changes of sign and offset occurring every half fringe so as to make the phase increase monotonically. The error of the new algorithm is less than 1/20 waves in the wavefront, which can be reduced further by averaging several phase maps generated from interferograms with random phase shifts. The new algorithm is quick and involves no smoothing, so it can detect surface errors on large mirrors on a scale of several centimeters. A new model is developed to calculate analytically the surface deflection of a mirror supported axially on multiple points. It is based on thin plate theory, but considerations of the thickness variation of a curved mirror, lightweight honeycomb structure and shear are included.
These additions improve the accuracy of the calculated surface deflection, giving results close to those obtained from the accurate but computer intensive finite element model.
APA, Harvard, Vancouver, ISO, and other styles
34

Le, Tien Nam. "Patterns in Large Graphs." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEN079/document.

Full text
Abstract:
Un graphe est un ensemble de noeuds, ensemble de liens reliant des paires de noeuds. Avec la quantité accumulée de données collectées, il existe un intérêt croissant pour la compréhension des structures et du comportement de très grands graphes. Néanmoins, l’augmentation rapide de la taille des grands graphes rend l’étude de tous les graphes de moins en moins efficace. Ainsi, il existe une demande impérieuse pour des méthodes plus efficaces pour étudier de grands graphes sans nécessiter la connaissance de tous les graphes. Une méthode prometteuse pour comprendre le comportement de grands graphes consiste à exploiter des propriétés spécifiques de structures locales, telles que la taille des grappes ou la présence locale d’un motif spécifique, c’est-à-dire un graphe donné (généralement petit). Un exemple classique de la théorie des graphes (cas avérés de la conjecture d'Erdos-Hajnal) est que, si un graphe de grande taille ne contient pas de motif spécifique, il doit alors avoir un ensemble de noeuds liés par paires ou non liés, de taille exponentiellement plus grande que prévue. Cette thèse abordera certains aspects de deux questions fondamentales de la théorie des graphes concernant la présence, en abondance ou à peine, d’un motif donné dans un grand graphe : - Le grand graphe peut-il être partitionné en copies du motif ? - Le grand graphe contient-il une copie du motif ? Nous discuterons de certaines des conjectures les plus connues de la théorie des graphes sur ce sujet: les conjectures de Tutte sur les flots dans les graphes et la conjecture d'Erdos-Hajnal mentionnée ci-dessus, et présenterons des preuves pour plusieurs conjectures connexes - y compris la conjecture de Barát-Thomassen, une conjecture de Haggkvist et Krissell, un cas particulier de la conjecture de Jaeger-Linial-Payan-Tarsi, une conjecture de Berger et al, et une autre d'Albouker et al
A graph is a set of nodes, together with links connecting pairs of nodes. With the accumulating amount of data collected, there is a growing interest in understanding the structures and behavior of very large graphs. Nevertheless, the rapid increase in the size of large graphs makes studying entire graphs less and less efficient. Thus, there is a compelling demand for more effective methods to study large graphs without requiring knowledge of the graphs as a whole. One promising method to understand the behavior of large graphs is to exploit specific properties of local structures, such as the size of clusters or the local presence of some specific pattern, i.e. a given (usually small) graph. A classical example from Graph Theory (proven cases of the Erdos-Hajnal conjecture) is that if a large graph does not contain some specific pattern, then it must have a set of nodes, pairwise linked or pairwise unlinked, of size exponentially larger than expected. This thesis will address some aspects of two fundamental questions in Graph Theory about the presence, abundantly or scarcely, of a given pattern in some large graph: Can the large graph be partitioned into copies of the pattern? Does the large graph contain any copy of the pattern? We will discuss some of the most well-known conjectures in Graph Theory on this topic: Tutte's flow conjectures on flows in graphs and the Erdos-Hajnal conjecture mentioned above, and present proofs for several related conjectures, including the Barát-Thomassen conjecture, a conjecture of Haggkvist and Krissell, a special case of the Jaeger-Linial-Payan-Tarsi conjecture, a conjecture of Berger et al., and another by Albouker et al.
APA, Harvard, Vancouver, ISO, and other styles
35

Clough, Eric C. "Large-displacement Lightweight Armor." DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/1122.

Full text
Abstract:
Randomly entangled fibers forming loosely bound nonwoven structures are evaluated for use in lightweight armor applications. These materials sacrifice volumetric efficiency in order to realize a reduction in mass versus traditional armor materials, while maintaining equivalent ballistic performance. The primary material characterized, polyester fiberfill, is shown to have improved ballistic performance over control samples of monolithic polyester as well as 1095 steel sheets. The response of fiberfill is investigated at a variety of strain rates, from quasistatic to ballistic, under compression, tension, and shear deformation to elucidate the mechanisms at work during ballistic defeat. Fiberfill's primary mechanisms during loading are fiber reorientation, fiber unfurling, and frictional sliding. Frictional sliding, coupled with high macroscopic strain to failure, is thought to be the source of the high specific ballistic performance in fiberfill materials. The proposed armor is tested for penetration resistance against spherical and cylindrical 7.62 mm projectiles fired from a gas gun. A constitutive model incorporating the relevant deformation mechanisms of texture evolution and progressive damage is developed and implemented in Abaqus/Explicit in order to expedite further research on ballistic nonwoven fabrics.
APA, Harvard, Vancouver, ISO, and other styles
36

Moberg, Désirée. "Transmission alternatives for grid connection of large offshore wind farms at large distance." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-65804.

Full text
Abstract:
With the great potential for offshore wind power in the world's seas, offshore wind power is becoming an important source of energy. The growing size of wind turbines and the growing distance to land make the choice of transmission alternative a more important factor. The profitability of the transmission solution is affected by many parameters, such as investment cost and power losses, but also by parameters such as operation & maintenance and the lead time of the system. The study is based on a planned wind farm with a rated power of 1 200 MW at a distance of 125 km from the connection point. Four models have been made for the transmission network, using HVAC, HVDC and a hybrid of the two. The simulation program used is EeFarm II, which has an interface in Matlab and Simulink. The four solutions have been compared technically, with difficulties and advantages pointed out, and also economically, with the help of LCOE, NPV and IRR. Costs, power losses and availability of the wind turbines and intra-array network are not included in the study. The results of the simulations imply that the HVAC solution is the most profitable, with the lowest Levelized Cost of Energy and the highest Net Present Value and Internal Rate of Return. The values are 25.11 €/MWh, 387.60 M€ and 15.32 % respectively. An HVDC model with just one offshore converter station has an LCOE close to the HVAC solution, but with a more noticeable difference in NPV and IRR (25.71 €/MWh, 300.76 M€ and 14.84 % respectively). A sensitivity analysis has been done, in which seven different parameters were varied to analyse their impact on the economic result. The largest impact came from changes in investment cost and lead times.
The results imply that, with the structure of the transmission network as in the models and with similar input data, the break point at which an HVDC solution becomes more profitable than an HVAC solution is not yet passed at a distance of 125 km from the connection point. With evolving technology in the field of HVDC, a shorter lead time and lower investment cost could mean that an HVDC solution would be more profitable at this distance. Difficulties for an HVAC solution arising from the extra cable required, such as greater land usage and cable manufacturing as a bottleneck, could nevertheless be an important factor when making a decision.
Med den stora potentialen hos världens hav, börjar havsbaserad vindkraft bli en betydande energikälla. Den ökande storleken på vindkraftsturbinerna tillsammans med de ökade avstånden mellan vindkraftsparkerna och land, gör att transmissionslösningen blir en mer betydelsefull komponent. Flera olika parametrar kan vara avgörande för transmissionslösningens lönsamhet, som investeringskostnad och effektförluster, men också saker som drift & underhåll och projektets ledtid. Studien är baserad på en planerad vindkraftspark med en märkeffekt på 1 200 MW och på ett avstånd på 125 km till anslutningspunkten. Fyra modeller av transmissionssnätet har gjorts, där tekniken har bestått av HVAC, HVDC samt en blandning av dessa. Simuleringarna har gjort i EeFarm II, ett program baserat på Matlab och Simulink. De fyra modellerna har jämförts tekniskt, med för- och nackdelar poängterade, och även ekonomiskt med hjälp av LCOE, NPV och IRR. Kostnader, effektförluster och tillgängligheten för vindkraftsturbinerna och internnätet i vindkraftsparken är inte inkluderade i studien. Resultaten av simuleringarna visar på att HVAC-lösningen är den mest lönsamma, med lägst Levelized Cost of Energy och högst Net Price Value och Internal Rate of Return. Värdena för dessa är 25,11 €/MWh, 387,60 M€ respektive 15,32 %. En HVDC-lösning med enbart en DC-plattform och likriktarstation för hela märkeffekten, har en LCOE inte långt ifrån HVAC-lösningen, men med en lite större skillnad i NPV och IRR (25,71 €/MWh, 300,76 M€ respektive 14,84 %). För att analysera påverkan av olika parametrar på de ekonomiska mätvärdena, har en osäkerhetsanalys gjort. Den största påverkan på resultatet syntes av förändringar av investeringskostnader och ledtider. Ovanstående resultat tyder på, med transmissionslösningar enligt modellerna i detta arbete, att brytpunkten där en HVDC-lösning är mer lönsam än en HVAC-lösning inte än är passerad vid ett avstånd på 125 km till anslutningspunkten. 
Med en fortfarande väldigt ung teknik för HVDC, kan den ständigt utvecklande tekniken i framtiden betyda kortare ledtider och en lägre investeringskostnad för en HVDC-lösning och möjligheten att vara en mer lönsam lösning. Komplikationer med en HVAC-lösning pga den extra landkabeln, som större landanvändning och med kabeltillverkningen som en flaskhals, kan ändå göra en HVDC-lösning mer praktisk.
APA, Harvard, Vancouver, ISO, and other styles
37

Jacobson, A. P. "Large carnivores under threat : investigating human impacts on large carnivores in East Africa." Thesis, University College London (University of London), 2017. http://discovery.ucl.ac.uk/1559725/.

Full text
Abstract:
Large carnivores are a polarizing group of species that play an outsized role relative to their numbers. They structure ecosystems and feature prominently in human culture. Yet their place in a rapidly changing world is uncertain. The large carnivore guild in the five countries of East Africa (Burundi, Kenya, Rwanda, Tanzania, and Uganda) is largely intact; however, expanding human populations pose a substantial threat. Interventions are necessary to promote coexistence. To accomplish this, more accurate identification of threats and improved understanding of species' responses are needed. Primary threats to large carnivores in the region include habitat loss and human-wildlife conflict (HWC). Problematically, identification of human-impacted areas from earth observation data can be difficult in heterogeneous savannah habitat, which covers much of East Africa. I create a tool that enables land cover classification using Google Earth's high-resolution imagery. With this tool I develop a data set of human-impacted areas for East Africa. To ascertain carnivore response to human-dominated lands, I use correlative species distribution modeling (SDM). Yet there is no clear consensus on proper methods for generating pseudo-absence (PsA) data in these models. I review some existing methods in the context of their ecological meaning, and propose new PsA selection strategies. I then apply two novel and one existing PsA strategy to assess four carnivores' (cheetah, wild dog, leopard, and lion) responses to human land cover and human population densities. Results suggest these carnivores are more susceptible to human land cover than to human populations. Finally, I consider existing approaches to using SDM with HWC records to generate spatial risk maps with the goal of alleviating conflict. I draw on the SDM literature to highlight and demonstrate how two commonly overlooked issues in spatial risk modeling can hamper the generation of useful conclusions.
In sum, these efforts represent attempts at improving methods commonly used to study wildlife distribution and threats, and they can be widely applied to other species and systems.
APA, Harvard, Vancouver, ISO, and other styles
38

Jerhov, Carolina. "IN LARGE SCALE : the art of knitting a small shell in large scale." Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-26582.

Full text
Abstract:
This work places itself in the field of knitted textile design and the context of body and interior. The primary motive is to investigate the tactile and visual properties of oysters and pearls, inspired by Botticelli's painting Venus. The aim is to explore free-flowing form and texture through knitted three-dimensional textile surfaces. Material and colour choices have been made based on the source of inspiration, the oyster, and investigated on industrial circle knit and flat knit machines. The circle knit's expression has been explored from a hand knitting perspective, using the manual elements to push the machine's technique to design new expressions. The result of the project is a collection of four suggestions for a knitted, three-dimensional surface, each inspired and developed from one specific part of the oyster: the shell, the nacre, the flesh, and the pearl. This work investigates the potential of using circle knit machines, commonly used in fast fashion for bulk production, as a tool for handicraft and higher art forms. The final collection pushes the conversation regarding the future uses of knitting machines and investigates how rigid objects can be expressed through the flexible structure.
APA, Harvard, Vancouver, ISO, and other styles
39

Maged, Shireen. "The pedagogy of large classes : challenging the "large class equals gutter education" myth." Master's thesis, University of Cape Town, 1997. http://hdl.handle.net/11427/16133.

Full text
Abstract:
Includes bibliography.
The study takes the work of three teachers to examine whether the popular belief that "small is better" is substantiated in the practice of these teachers. The study observes and analyses the classroom instruction of each of these teachers in a small class as well as in a large class. The observation is done with the use of an observation schedule, and the analysis of data is done within a Vygotskian framework. The study shows that the pedagogy and the teaching style of the three teachers do not change when they teach differently sized classes. In other words, their classroom practice is the same for both the small and the large classes. The study further shows that the pedagogy of the teacher determines the effectiveness or quality of instruction, and that class size does not impact the effectiveness or quality of instruction, either positively (in the case of the small class) or negatively (in the case of the large class).
APA, Harvard, Vancouver, ISO, and other styles
40

Chu, Wen-Hwa Martin. "Microfabricated tweezers with a large gripping force and a large range of motion." Case Western Reserve University School of Graduate Studies / OhioLINK, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=case1057869514.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Hipp, Guy L. "A strategy for team building with a large staff in a large church." Theological Research Exchange Network (TREN), 1993. http://www.tren.com.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Feng, Shui. "Large deviations of particle systems." Dissertation, Department of Mathematics, Carleton University, Ottawa, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
43

Colomés, Gené Oriol. "Large scale finite element solvers for the large eddy simulation of incompressible turbulent flows." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/392718.

Full text
Abstract:
In this thesis we have developed a path towards large-scale Finite Element simulations of turbulent incompressible flows. We have assessed the performance of residual-based variational multiscale (VMS) methods for the large eddy simulation (LES) of turbulent incompressible flows. We consider VMS models obtained by different subgrid scale approximations, which include either static or dynamic subscales, linear or nonlinear multiscale splitting, and different choices of the subscale space. We show that VMS, viewed as an implicit LES model, can be an alternative to the widely used physics-based models. This method is traditionally combined with equal-order velocity-pressure pairs, since it provides pressure stabilization. In this work, we also consider a different approach, based on inf-sup stable elements and convection-only stabilization. In order to do so, we define a symmetric projection stabilization of the convective term using an orthogonal subscale decomposition. The accuracy and efficiency of this method, compared with residual-based algebraic subgrid scale and orthogonal subscale methods for equal-order interpolation, are also assessed in this thesis. Furthermore, we propose Runge-Kutta time integration schemes for the incompressible Navier-Stokes equations with two salient properties. First, velocity and pressure computations are segregated at the time integration level, without the need to perform additional fractional step techniques that spoil high orders of accuracy. Second, the proposed methods keep the same order of accuracy for both velocities and pressures. Precisely, the symmetric projection stabilization approach is suitable for segregated Runge-Kutta time integration schemes. This combination, together with the use of block-preconditioning techniques, leads to elasticity-type and Laplacian-type problems that can be optimally preconditioned using balancing domain decomposition by constraints preconditioners.
The weak scalability of this formulation has been demonstrated in this document. Additionally, we also contemplate the weak imposition of Dirichlet boundary conditions for wall-bounded turbulent flows. Four well-known problems have been considered for the numerical experiments: the decay of homogeneous isotropic turbulence, the Taylor-Green vortex problem, the turbulent flow in a channel and the turbulent flow around an airfoil.
En aquesta tesi s'han desenvolupat diferents algoritmes per la simulació a gran escala de fluxos turbulents incompressibles mitjançant el mètode dels Elements Finits. En primer lloc s'ha avaluat el comportament dels mètodes de multiescala variacional (VMS) basats en el residu, per la simulació de grans vòrtexs (LES) de fluxos turbulents. S'han considerat diferents models VMS tenint en compte diferents aproximacions de les subescales, que inclouen tant subescales estàtiques o dinàmiques, una definicó lineal o nolineal, i diferents seleccions de l'espai de les subescales. S'ha demostrat que els mètodes VMS pensats com a models LES poden ser una alternativa als models basats en la física del problema. Aquest tipus de mètode normalment es combina amb l'ús de parelles de velocitat i pressió amb igual ordre d'interpolació. En aquest treball, també s'ha considerat un enfocament diferent, basat en l'ús d'elements inf-sup estables conjuntament amb estabilització del terme convectiu. Amb aquest objectiu, s'ha definit un mètode d'estabilització amb projecció simètrica del terme convectiu mitjançant una descomposició ortogonal de les subescales. En aquesta tesi també s'ha valorat la precisió i eficiència d'aquest mètode comparat amb mètodes basats en el residu fent servir interpolacions amb igual ordre per velocitats i pressions. A més, s'ha proposat un esquema d'integració en temps basat en els mètodes de Runge-Kutta que té dues propietats destacables. En primer lloc, el càlcul de la velocitat i la pressió es segrega al nivell de la integració temporal, sense la necessitat d'introduir tècniques de fraccionament del pas de temps. En segon lloc, els esquemes segregats de Runge-Kutta proposats, mantenen el mateix ordre de precisió tant per les velocitats com per les pressions. Precisament, els mètodes d'estabilització amb projecció simètrica són adequats per ser integrats en temps mitjançant esquemes segregats de Runge-Kutta. 
Aquesta combinació, juntament amb l'ús de tècniques de precondicionament en blocs, dóna lloc a problemes tipus elasticitat i Laplacià que poden ser òptimament precondicionats fent servir els anomenats \textit{balancing domain decomposition by constraints preconditioners}. La escalabilitat dèbil d'aquesta formulació s'ha demostrat en aquest document. Adicionalment, també s'ha contemplat la imposició de forma dèbil de les condicions de contorn de Dirichlet en problemes de fluxos turbulents delimitats per parets. En aquesta tesi principalment s'han considerat quatre problemes ben coneguts per fer els experiments numèrics: el decaïment de turbulència isotròpica i homogènia, el problema del vòrtex de Taylor-Green, el flux turbulent en un canal i el flux turbulent al voltant d'una ala.
APA, Harvard, Vancouver, ISO, and other styles
44

Wang, Dun. "Determination of Rupture Propagation for Large Earthquakes from Back-Projection Analyses Using Large Arrays." 京都大学 (Kyoto University), 2013. http://hdl.handle.net/2433/175126.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Zhang, Xu. "Comportement au temps large de l'équation de Prandtl et des systèmes de magnétohydrodynamique." Rouen, 2016. http://www.theses.fr/2016ROUES015/document.

Full text
Abstract:
Cette thèse se compose de deux parties : 1) existence de solutions au temps large pour l'équation de Prandtl sous l'hypothèse de monotonicité ; 2) l'étude des solutions globales des systèmes magnétohydrodynamiques non homogènes avec des densités positives et bornées inférieurement. Récemment, sous l'hypothèse de monotonicité, en utilisant la méthode d'énergie, Alexandre-Wang-Xu-Yang et Masmoudi-Wong ont obtenu l'existence locale de solutions dans l'espace de Sobolev pour l'équation de couches limites de Prandtl, mais le temps de vie de leurs solutions est très petit. Par ailleurs, Xin-Zhang ont montré l'existence de solutions globales faibles par la transformation de Crocco, sous l'hypothèse de monotonicité et de pression favorable. Le comportement au temps large de l'équation de Prandtl est très important pour étudier la théorie de couches limites de Prandtl. Avec cette motivation, la première partie de cette thèse est consacrée à l'étude du problème de Cauchy associé à l'équation de couches limites de Prandtl sur le demi-plan pour des temps larges. Plus précisément, pour une classe de données initiales qui sont des perturbations autour d'un profil monotone, nous avons établi l'existence, l'unicité et la stabilité de solutions dans l'espace de Sobolev avec des poids. De plus, nous avons montré que le temps de vie des solutions est arbitrairement long si la perturbation initiale est assez petite. L'approche qu'on a adoptée pour prouver l'existence de solutions est basée sur la méthode d'énergie et la régularisation parabolique. La propriété d'annulation non-linéaire de l'équation de Prandtl sous l'hypothèse de monotonicité est le point clé pour établir une nouvelle estimation d'énergie. La deuxième partie de cette thèse est dédiée au problème d'existence globale pour les systèmes magnétohydrodynamiques (MHD) non homogènes.
Récemment, Danchin-Mucha ont obtenu le caractère bien posé pour le problème de Cauchy associé à l'équation de Navier-Stokes non homogène avec une densité discontinue en utilisant la transformation de Lagrange ou la dérivée matérielle. Dans cette partie, nous montrons que le problème de Cauchy associé au système de MHD non homogène est globalement bien posé lorsque la densité admet une borne inférieure positive et le champ magnétique initial contient de grandes oscillations. Nous prouvons d'abord l'estimation a priori dans les coordonnées d'Euler puis nous établissons l'existence locale pour le système MHD non homogène dans les coordonnées de Lagrange. En outre, nous démontrons que ces solutions locales deviennent globales si les normes H1 de la vitesse et L2 ∩ L4 du champ magnétique sont suffisamment petites. Ici, les hypothèses de petitesse sont différentes sur les vitesses et les champs magnétiques initiaux. En outre, on n'a pas demandé la petitesse sur le gradient du champ magnétique. Donc les données initiales des champs magnétiques peuvent avoir de grandes oscillations.
This thesis is made up of two parts. One concerns the long-time well-posedness of the Prandtl equations under a monotonicity assumption. The other is the study of global solutions for the inhomogeneous magnetohydrodynamics system with a bounded, positive density. Recently, under the monotonicity assumption and using the energy method, Alexandre-Wang-Xu-Yang and Masmoudi-Wong obtained the local-in-time existence of smooth solutions in Sobolev space for the Prandtl boundary layer equation, but the life span of their solutions is very small. In the meantime, Xin-Zhang proved global-in-time existence of weak solutions via the Crocco transformation, under monotonicity and favorable-pressure assumptions. The long-time behavior of the Prandtl equations is important for making progress towards the inviscid limit of the Navier-Stokes equations. With this motivation, in the first part of this thesis, we study the long-time well-posedness of the nonlinear Prandtl boundary layer equation on the half plane. We consider a class of initial data that are perturbations around a monotonic shear profile and prove the existence, uniqueness and stability of solutions in weighted Sobolev spaces, whose life span can be arbitrarily long provided the initial perturbations are small enough. We use the energy method to prove the existence of solutions via a parabolic regularizing approximation. The nonlinear cancellation properties of the Prandtl equations under the monotonicity assumption are the main ingredient in establishing a new energy estimate. The second part of this thesis concerns the global well-posedness of the inhomogeneous magnetohydrodynamics (MHD) system. Recently, Danchin-Mucha obtained well-posedness of the inhomogeneous Navier-Stokes equation, with possibly discontinuous density, by using the Lagrangian transformation, or the material derivative. We prove the global well-posedness of the inhomogeneous MHD system when the density merely has a positive lower bound and the initial magnetic field contains large oscillations.
We first obtain the a priori estimate in Eulerian coordinates and then prove the local-in-time well-posedness of the inhomogeneous MHD system in Lagrangian coordinates. Moreover, local solutions become global if the usual H1 norm of the velocity and the L2 ∩ L4 norm of the magnetic field are small enough. Here, the smallness assumptions on the initial velocity and the initial magnetic field are different. Moreover, we do not need the gradient of the magnetic field to be small, as we do for the velocity. So the initial magnetic field can contain large oscillations.
APA, Harvard, Vancouver, ISO, and other styles
46

U, Leong-Hou. "Matching problems in large databases." Click to view the E-thesis via HKUTO, 2010. http://sunzi.lib.hku.hk/hkuto/record/B43910488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Jonge, Dave de. "Negotiations over large agreement spaces." Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/295709.

Full text
Abstract:
In this thesis we investigate negotiation algorithms for domains with non-linear utility functions, where the space of possible agreements is so large that exhaustive search is infeasible. Furthermore, we explore the relationship between the fields of Automated Negotiation, Game Theory, Electronic Institutions, and Constraint Optimization. We present three case studies of increasing complexity. First, we introduce an automated negotiator based on Genetic Algorithms, applied to a domain where the set of possible agreements is explicitly given as a vector space and, although the utility functions are non-linear, the utility of any given deal can be calculated quickly by solving a linear equation. Second, we introduce a general-purpose negotiation algorithm called NB3, based on Branch & Bound. We apply it to a new negotiation test case in which the value of any given deal can only be determined by solving an NP-hard problem. Our third case involves the game of Diplomacy, which is harder still, because a deal usually does not entirely fix an agent's possible actions. The utility obtained by an agent thus also depends on the actions it performs after making the deal. Moreover, its utility also depends on the actions chosen by the other agents, so game-theoretic considerations must be taken into account. We argue that in this game-theoretic model there no longer exists a satisfactory definition of a reservation value, unlike in the models commonly used in classical bargaining theory. Furthermore, we argue that negotiations require a mechanism, known as an Electronic Institution, to ensure that agreements are obeyed. One framework for the development of Electronic Institutions is EIDE, and we introduce a new extension to EIDE that provides a user interface so that humans can interact within such Electronic Institutions.
Moreover, we argue that in the future it should be possible for humans and agents to negotiate which protocols to follow in an Electronic Institution. This could be especially useful for the development of a new kind of social network in which users can set the rules for their own private communities. Finally, we argue that the EIDE framework is too complicated to be used by average people who do not have the technical skills of a computer scientist. We therefore introduce a new language for the definition of protocols, which is very similar to natural language and can therefore be used and understood by anyone.
APA, Harvard, Vancouver, ISO, and other styles
48

Tsaprounis, Konstantinos. "Large cardinals and resurrection axioms." Doctoral thesis, Universitat de Barcelona, 2012. http://hdl.handle.net/10803/97038.

Full text
Abstract:
In the current dissertation we work in set theory, studying both various large cardinal hierarchies and issues related to forcing axioms and generic absoluteness. The necessary preliminaries may be found, as should be anticipated, in the first chapter. In Chapter 2, we study several C(n)-cardinals as introduced by J. Bagaria (cf. [1]). In the context of an elementary embedding associated with some fixed C(n)-cardinal, and under adequate assumptions, we derive consistency (upper) bounds for the large cardinal notion at hand; in particular, we deal with the C(n)-versions of tallness, superstrongness, strongness, supercompactness, and extendibility. As for the latter two notions, we further study their connection, giving an equivalent formulation of extendibility as well. We also consider the cases of C(n)-Woodin and C(n)-strongly compact cardinals, which were not studied in [1], and obtain characterizations of them in terms of their ordinary counterparts. In Chapter 3, we briefly discuss the interaction of C(n)-cardinals with the forcing machinery, presenting some applications of ordinary techniques. In Chapter 4, we turn our attention to extendible cardinals; by a combination of methods and results from Chapter 2, we establish the existence of apt Laver functions for them. Although the latter was already known (cf. [2]), it is proved from a fresh viewpoint, one which ties in nicely with the material of Chapter 5. We also argue that in the case of extendible cardinals one cannot use such Laver functions to attain indestructibility results. Along the way, we give an additional characterization of extendibility and show, moreover, that the global GCH can be forced while preserving such cardinals. In Chapter 5, we focus on the resurrection axioms as introduced by J. D. Hamkins and T. Johnstone (cf. [3]).
Initially, we consider the class of stationary-preserving posets and, assuming the (consistency of the) existence of an extendible cardinal, we obtain a model in which the resurrection axiom for this class holds. By analysing the proof of this result, we are led to much stronger forms of resurrection, for which we introduce a family of axioms under the general name "Unbounded Resurrection". We then prove that the consistency of these axioms follows from that of (the existence of) an extendible cardinal and that, for the appropriate classes of posets, they are strengthenings of the forcing axioms PFA and MM. We furthermore consider several implications of the unbounded resurrection axioms (e.g., their effect on the continuum for the classes of c.c.c. and of σ-closed posets), together with their connection with the corresponding axioms of [3]. Finally, we also establish some consistency lower bounds for such axioms, mainly by deriving failures of (weak versions of) squares. We conclude our current mathematical quest with a few final remarks and a small list of open questions, followed by an Appendix on extenders and (some of) their applications. References: [1] Bagaria, J., C(n)-cardinals. Arch. Math. Logic, Vol. 51 (3-4), pp. 213-240, 2012. [2] Corazza, P., Laver sequences for extendible and super-almost-huge cardinals. J. Symbolic Logic, Vol. 64 (3), pp. 963-983, 1999. [3] Johnstone, T., Notes to "The Resurrection Axioms". Unpublished notes (2009).
APA, Harvard, Vancouver, ISO, and other styles
49

Loeliger, Teddy. "Large-area photosensing in CMOS /." Zürich, 2001. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=14038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Petraglio, Gabriele Carlo Luigi. "Large scale motions in macromolecules /." Zürich : ETH, 2006. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=16786.

Full text
APA, Harvard, Vancouver, ISO, and other styles
