Dissertations / Theses on the topic 'Large Scale'

Consult the top 50 dissertations / theses for your research on the topic 'Large Scale.'

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

O'Mahony, Kevin. "Large scale plasmid production /." [S.l.] : [s.n.], 2005. http://library.epfl.ch/theses/?nr=3320.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Batlle, Subirós Elisabet. "Large-Scale Surface registration." Doctoral thesis, Universitat de Girona, 2008. http://hdl.handle.net/10803/7606.

Full text
Abstract:
The first part of this work presents an accurate analysis of the most relevant 3D registration techniques, including initial pose estimation, pairwise registration and multiview registration strategies. A new classification has been proposed, based on both the applications and the approach of the methods that have been discussed.
The main contribution of this thesis is the proposal of a new 3D multiview registration strategy. The proposed approach detects revisited regions, obtaining cycles of views that are used to reduce the inaccuracies that may exist in the final model due to error propagation. The method takes advantage of both global and local information of the registration process, using graph theory techniques in order to correlate multiple views and minimize the propagated error by registering the views in an optimal way. The proposed method has been tested using both synthetic and real data, in order to show and study its behavior and demonstrate its reliability.
APA, Harvard, Vancouver, ISO, and other styles
3

Webster, Ali Matthew. "Quantifying large-scale structure." Thesis, University of Cambridge, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.624308.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Schmid, Patrick R. (Patrick Raphael). "Large scale disease prediction." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43068.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (leaves 69-73).
The objective of this thesis is to present the foundation of an automated large-scale disease prediction system. Unlike previous work that has typically focused on a small self-contained dataset, we explore the possibility of combining a large amount of heterogeneous data to perform gene selection and phenotype classification. First, a subset of publicly available microarray datasets was downloaded from the NCBI Gene Expression Omnibus (GEO) [18, 5]. This data was then automatically tagged with Unified Medical Language System (UMLS) concepts [7]. Using the UMLS tags, datasets related to several phenotypes were obtained and gene selection was performed on the expression values of this tagged microarray data. Using the tagged datasets and the list of genes selected in the previous step, classifiers that can predict whether or not a new sample is also associated with a given UMLS concept based solely on the expression data were created. The results from this work show that it is possible to combine a large heterogeneous set of microarray datasets for both gene selection and phenotype classification, and thus lay the foundation for the possibility of automatic classification of disease types based on gene expression data in a clinical setting.
by Patrick R. Schmid.
S.M.
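As a rough illustration of the pipeline sketched in the abstract above (pooling UMLS-tagged expression datasets, selecting informative genes, then training a phenotype classifier), the following Python snippet uses scikit-learn on random placeholder data; the array shapes, the univariate selection step and the logistic-regression classifier are illustrative assumptions, not the thesis's actual method.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for expression values pooled from several GEO series tagged with the
# same UMLS concept: rows are samples, columns are genes/probes (placeholder data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))
y = rng.integers(0, 2, size=200)           # 1 = sample tagged with the concept

clf = make_pipeline(
    StandardScaler(),                      # normalise each gene across the pooled samples
    SelectKBest(f_classif, k=100),         # simple univariate gene selection
    LogisticRegression(max_iter=1000),     # phenotype classifier
)
print(cross_val_score(clf, X, y, cv=5).mean())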
APA, Harvard, Vancouver, ISO, and other styles
5

Vorapanya, Anek. "Large-scale distributed services." [Florida] : State University System of Florida, 2000. http://etd.fcla.edu/etd/uf/2000/ana6855/dissertation.pdf.

Full text
Abstract:
Thesis (Ph. D.)--University of Florida, 2000.
Title from first page of PDF file. Document formatted into pages; contains xi, 112 p.; also contains graphics. Vita. Includes bibliographical references (p. 108-111).
APA, Harvard, Vancouver, ISO, and other styles
6

Jerhov, Carolina. "IN LARGE SCALE : the art of knitting a small shell in large scale." Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-26582.

Full text
Abstract:
This work places itself in the field of knitted textile design and the context of body and interior. The primary motive is to investigate the tactile and visual properties of oysters and pearls, inspired by Botticelli’s painting Venus. The aim is to explore free-flowing and texture through knitted three-dimensional textile surfaces. Material and colour choices have been made based on the source of inspiration, the oyster, and investigated on industrial circle knit and flat knit machines. The circle knit’s expression has been explored from a hand knitting perspective, using the manual elements to push the machine’s technique to design new expressions. The result of the project is a collection that has four suggestions for a knitted, three-dimensional surface, each inspired and developed from one specific part of the oyster; the shell, the nacre, the flesh, and the pearl. This work investigates the potential of using circle knit machines, commonly used in fast fashion for bulk production, as a tool for handicraft and higher art forms. The final collection pushes the conversation regarding the future uses of the knitting machines and investigates how rigid objects can be expressed through the flexible structure.
APA, Harvard, Vancouver, ISO, and other styles
7

Laflamme, Simon Ph D. Massachusetts Institute of Technology. "Control of large-scale structures with large uncertainties." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66852.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 279-300).
Performance-based design is a design approach that satisfies motion constraints as its primary goal, and then verifies for strength. The approach is traditionally executed by appropriately sizing stiffnesses, but recently, passive energy dissipation systems have gained popularity. Semi-active and active energy dissipation systems have been shown to outperform purely passive systems, but they are not yet widely accepted in the construction and structural engineering fields. Several factors are impeding the application of semi-active and active damping systems, such as large modeling uncertainties that are inherent to large-scale structures, limited state measurements, lack of mechanically reliable control devices, large power requirements, and the need for robust controllers. In order to enhance the acceptability of feedback control systems for civil structures, an integrated control strategy designed for large-scale structures with large parametric uncertainties is proposed. The control strategy comprises a novel controller, as well as a new semi-active mechanical damping device. Specifically, the controller is an adaptive black-box representation that creates and optimizes control laws sequentially during an excitation, with no prior training. The novel feature is its online organization of the input space. The representation only requires limited observations for constructing an efficient representation, which allows control of unknown systems with limited state measurements. The semi-active mechanical device consists of a friction device inspired by vehicle drum brakes, with a viscous and a stiffness element installed in parallel. Its unique characteristic is its theoretical damping force reaching the order of 100 kN, using a friction mechanism powered with a single 12-volt battery. It is conceived using mechanically reliable technologies, which addresses both the large power requirements and the need for mechanical robustness. The integrated control system is simulated on an existing structure located in Boston, MA, as a replacement for the existing viscous damping system. Simulation results show that the integrated control system can mitigate wind vibrations as well as the current damping strategy while utilizing only one third of the devices. In addition, the system created effective control rules for several types of earthquake excitations with no prior training, performing similarly to an optimal controller using full parametric and state knowledge.
by Simon Laflamme.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
8

Petraglio, Gabriele Carlo Luigi. "Large scale motions in macromolecules /." Zürich : ETH, 2006. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=16786.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Shim, Sangho. "Large scale group network optimization." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31737.

Full text
Abstract:
Thesis (Ph.D)--Industrial and Systems Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Ellis L. Johnson; Committee Member: Brady Hunsaker; Committee Member: George Nemhauser; Committee Member: Jozef Siran; Committee Member: Shabbir Ahmed; Committee Member: William Cook. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
10

Chatzou, Maria 1985. "Large-scale comparative bioinformatics analyses." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/587086.

Full text
Abstract:
One of the main and most recent challenges of modern biology is to keep up with the growing amount of biological data coming from next generation sequencing technologies. Keeping up with the growing volumes of experiments will be the only way to make sense of the data and extract actionable biological insights. Large-scale comparative bioinformatics analyses are an integral part of this procedure. When doing comparative bioinformatics, multiple sequence alignments (MSAs) are by far the most widely used models, as they provide a unique insight into the accurate measure of sequence similarities and are therefore instrumental to revealing genetic and/or functional relationships among evolutionarily related species. Unfortunately, the well-established limitation of MSA methods when dealing with very large datasets potentially compromises all downstream analysis. In this thesis I expose the current relevance of multiple sequence aligners, show how their current scaling up is leading to serious numerical stability issues, and show how these issues impact phylogenetic tree reconstruction. For this purpose, I have developed two new methods: MEGA-Coffee, a large-scale aligner, and Shootstrap, a novel bootstrapping measure incorporating MSA instability into branch support estimates when computing trees. The large amount of computation required by these two projects was carried out using Nextflow, a new computational framework that I have developed to improve the computational efficiency and reproducibility of large-scale analyses like the ones carried out in the context of these studies.
APA, Harvard, Vancouver, ISO, and other styles
11

Hofer, Heiko. "Large-Scale Gradual Change Detection." Neubiberg Universitätsbibliothek der Universität der Bundeswehr, 2010. http://d-nb.info/1001920856/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Nandy, Sagnik. "Large scale autonomous computing systems." Diss., Connected to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2005. http://wwwlib.umi.com/cr/ucsd/fullcit?p3190006.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2005.
Title from first page of PDF file (viewed March 7, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 120-128).
APA, Harvard, Vancouver, ISO, and other styles
13

Lever, Greg. "Large scale quantum mechanical enzymology." Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/246261.

Full text
Abstract:
There exists a concerted and continual effort to simulate systems of genuine biological interest to greater accuracy with methods of increasing transferability. More accurate descriptions of these systems at a truly atomistic and electronic level are irrevocably changing our understanding of biochemical processes. Broadly, classical techniques do not employ enough rigour, while conventional quantum mechanical approaches are too computationally expensive for systems of the requisite size. Linear-scaling density-functional theory (DFT) is an accurate method that can apply the predictive power of quantum mechanics to the system sizes required to study problems in enzymology. This dissertation presents methodological developments and protocols, including best practice, for accurate preparation and optimisation, combined with proof-of-principle calculations demonstrating reliable results for a range of small molecule and large biomolecular systems. Previous authors have shown that DFT calculations yield an unphysical, negligible energy gap between the highest occupied and lowest unoccupied molecular orbitals for proteins and large water clusters, a characteristic reproduced in this dissertation. However, whilst others use this phenomenon to question the applicability of Kohn-Sham DFT to large systems, it is shown within this dissertation that the vanishing gap is, in fact, an electrostatic artefact of the method used to prepare the system. Furthermore, practical solutions are demonstrated for ensuring a physical gap is maintained upon increasing system size. Harnessing these advances, the first application using linear-scaling DFT to optimise stationary points in the reaction pathway for the Bacillus subtilis chorismate mutase (CM) enzyme is made. Averaged energies of activation and reaction are presented for the rearrangement of chorismate to prephenate in CM and in water, for system sizes comprising up to 2000 atoms. Compared to the uncatalysed reaction, the calculated activation barrier is lowered by 10.5 kcal mol-1 in the presence of CM, in good agreement with experiment. In addition, a detailed analysis of the interactions between individual active-site residues and the bound substrate is performed, predicting the significance of individual enzyme sidechains in CM catalysis. These proof-of-principle applications of powerful large-scale DFT methods to enzyme catalysis will provide new insight into enzymatic principles from an atomistic and electronic perspective.
APA, Harvard, Vancouver, ISO, and other styles
14

Khellah, Fakhry Mahmoud. "Large-scale 2D dynamic estimation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ60545.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Roberts, James Hirsch. "Large-scale structures on Mars." Diss., Connect to online resource, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3207737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Morris, A. W. "Large scale instabilities in tokamaks." Thesis, University of Oxford, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.379925.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Fox, S. W. "Large scale dynamics in turbulence." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.599156.

Full text
Abstract:
In this study we present the results of an attempt to experimentally measure the energy decay in wind-tunnel grid turbulence. We show that it is possible to unambiguously determine the energy decay exponent, but departures of the turbulence from the ideas of homogeneity and isotropy mean it is not possible to compare our results to the theoretical predictions. Integral invariants have also recently been predicted to exist in two-dimensional turbulence. There are three canonical cases: E(k → 0) ~ Jk⁻¹, E(k → 0) ~ Lk and E(k → 0) ~ Ik³. We perform direct numerical simulations in large domains and demonstrate that, in line with the theoretical predictions, J and L are invariants whilst I is strongly time-dependent. In addition, we show that the large scales of E ~ Jk⁻¹ and E ~ Ik³ turbulence evolve in an almost self-similar manner, whilst the evolution of E ~ Lk turbulence cannot be self-similar due to the strong inverse energy cascade. We also extend the analysis to the case of quasi-geostrophic shallow-water turbulence, where we show that there are two further canonical cases, E ~ Mk⁵ and E ~ Nk⁷. In this system I and M are invariants, whilst N is time-dependent. We confirm these predictions with numerical simulations and show that, unlike strictly two-dimensional turbulence, there are no long-range triple correlations in QGSW turbulence.
APA, Harvard, Vancouver, ISO, and other styles
18

Greenhalgh, Chris. "Large scale collaborative virtual environments." Thesis, University of Nottingham, 1997. http://eprints.nottingham.ac.uk/12415/.

Full text
Abstract:
[N.B. Pagination of eThesis differs from printed thesis. The content is identical.] This thesis is concerned with the theory, design, realisation and evaluation of large-scale collaborative virtual environments. These are 3D audio-graphical computer generated environments which actively support collaboration between potentially large numbers of distributed users. The approach taken in this thesis reflects both the sociology of interpersonal communication and the management of communication in distributed systems. The first part of this thesis presents and evaluates MASSIVE-1, a virtual reality tele-conferencing system which implements the spatial model of interaction of Benford and Fahlén. The evaluation of MASSIVE-1 has two components: a user-oriented evaluation of the system’s facilities and the underlying awareness model; and a network-oriented evaluation and modelling of the communication requirements of the system with varying numbers of users. This thesis proposes the “third party object” concept as an extension to the spatial model of interaction. Third party objects can be used to represent the influence of context or environment on interaction and awareness, for example, the effects of boundaries, rooms and crowds. Third party objects can also be used to introduce and manage dynamic aggregates or abstractions within the environments (for example abstract overviews of distant crowds of participants). The third party object concept is prototyped in a second system, MASSIVE-2. MASSIVE-2 is also evaluated in two stages. The first is a user-oriented reflection on the capabilities and effectiveness of the third party concept as realised in the system. The second stage of the evaluation develops a predictive model of total and per-participant network bandwidth requirements for systems of this kind. This is used to analyse a number of design decisions relating to this type of system, including the use of multicasting and the form of communication management adopted.
APA, Harvard, Vancouver, ISO, and other styles
19

Zhou, Y. "Analysing large-scale surveillance video." Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3024330/.

Full text
Abstract:
Analysing large-scale surveillance video has drawn significant attention because drone technology and high-resolution sensors are rapidly improving. The mobility of drones makes it possible to monitor a broad range of the environment, but it introduces a more difficult problem of identifying the objects of interest. This thesis aims to detect the moving objects (mostly vehicles) using the idea of background subtraction. Building a decent background is the key to success during the process. We consider two categories of surveillance videos in this thesis: when the scene is flat and when pronounced parallax exists. After reviewing several global motion estimation approaches, we propose a novel cost function, the log-likelihood of the Student t-distribution, to estimate the background motion between two frames. The proposed idea enables the estimation process to be efficient and robust with auto-generated parameters. Since the particle filter is useful in various subjects, it is investigated in this thesis. An improvement to particle filters, combining near-optimal proposal and Rao-Blackwellisation, is discussed to increase the efficiency when dealing with non-linear problems. Such improvement is used to solve visual simultaneous localisation and mapping (SLAM) problems and we call it RB2-PF. Its superiority is evident in both simulations of 2D SLAM and real datasets of visual odometry problems. Finally, RB2-PF based visual odometry is the key component to detect moving objects from surveillance videos with pronounced parallax. The idea is to consider multiple planes in the scene to improve the background motion estimation. Experiments have shown that false alarms are significantly reduced. With the landmark information, a ground plane can be worked out. A near-constant velocity model can be applied after mapping the detections on the ground plane regardless of the position and orientation of the camera. All the detection results are finally processed by a multi-target tracker, the Gaussian mixture probabilistic hypothesis density (GM-PHD) filter, to generate tracks.
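As a hedged sketch of the kind of robust cost the abstract refers to, the snippet below evaluates the negative log-likelihood of frame-difference residuals under a Student t-distribution (constant terms dropped); the degrees of freedom, scale and residual model are assumptions for illustration, not the parameters or implementation used in the thesis.

import numpy as np

def student_t_cost(residuals, nu=3.0, sigma=10.0):
    # Negative log-likelihood of residuals under a Student t-distribution, up to
    # additive constants; it grows slowly for outliers, unlike a squared-error cost.
    r2 = (np.asarray(residuals) / sigma) ** 2
    return 0.5 * (nu + 1.0) * np.log1p(r2 / nu).sum()

# Toy comparison: a few inlier residuals, then the same set plus one gross outlier.
inliers = np.array([1.0, -2.0, 0.5, 3.0])
with_outlier = np.concatenate([inliers, [80.0]])
print(student_t_cost(inliers), student_t_cost(with_outlier))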
APA, Harvard, Vancouver, ISO, and other styles
20

Eckhoff, Maren. "Superprocesses and Large-Scale Networks." Thesis, University of Bath, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.675692.

Full text
Abstract:
The main theme of this thesis is the use of the branching property in the analysis of random structures. The thesis consists of two self-contained parts. In the first part, we study the long-term behaviour of supercritical superdiffusions and prove the strong law of large numbers. The key tools are spine and skeleton decompositions, and the analysis of the corresponding diffusions and branching particle diffusions. In the second part, we consider preferential attachment networks and quantify their vulnerability to targeted attacks. Despite the very involved global topology, locally the network can be approximated by a multitype branching random walk with two killing boundaries. Our arguments exploit this connection.
APA, Harvard, Vancouver, ISO, and other styles
21

Yan, Tom M. Eng Massachusetts Institute of Technology. "Large scale video action understanding." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/119541.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 37-39).
The goal of the project is to build a large scale video dataset called Moments, and train existing/novel models for action recognition. To aid automation of video collection and annotation selection, I trained Convolutional Neural Network models to estimate the likelihood of a desired action appearing in video clips. Selecting clips, which are highly probable to contain the wanted action, for annotation leads to a more efficient process overall with higher yield. Once a sizable dataset had been amassed, I investigated new multi-modal models that make use of different (spatial, temporal, auditory) signals in the video. I also conducted preliminary experiments into several promising directions that Moments opens up, including multi-label training. Lastly, I trained baseline models on Moments to calibrate the performance of existing techniques. Post-training, I diagnosed the shortcomings of the models and visualized videos that were found to be particularly difficult. I discovered that the difficulty largely arises due to the great variety in quality/perspective/subjects found in Moments videos. This highlights the challenging nature of the dataset and its value to the research community.
by Tom Yan.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
22

Li, Andrew A. (Andrew Andi). "Algorithms for large-scale personalization." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119351.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 191-205).
The term personalization typically refers to the activity of online recommender systems, and while product and content personalization is now ubiquitous in e-commerce, systems today remain relatively primitive: they are built on a small fraction of available data, run with heuristic algorithms, and restricted to e-commerce applications. This thesis addresses key challenges and new applications for modern, large-scale personalization. In particular, this thesis is outlined as follows: First, we formulate a generic, flexible framework for learning from matrix-valued data, including the kinds of data commonly collected in e-commerce. Underlying this framework is a classic de-noising problem called tensor recovery, for which we provide an efficient algorithm, called Slice Learning, that is practical for massive datasets. Further, we establish near-optimal recovery guarantees that represent an order improvement over the best available results for this problem. Experimental results from a music recommendation platform are shown. Second, we apply this de-noising framework to new applications in precision medicine where data are routinely complex and in high dimensions. We describe a simple, accurate proteomic blood test (a 'liquid biopsy') for cancer detection that relies on de-noising via the Slice Learning algorithm. Experiments on plasma from healthy patients that were later diagnosed with cancer demonstrate that our test achieves diagnostically significant sensitivities and specificities for many types of cancers in their earliest stages. Third, we present an efficient, principled approach to operationalizing recommendations, i.e. the decision of exactly what items to recommend. Motivated by settings such as online advertising where the space of items is massive and recommendations must be made in milliseconds, we propose an algorithm that simultaneously achieves two important properties: (1) sublinear runtime and (2) a constant-factor guarantee under a wide class of choice models. Our algorithm relies on a new sublinear time sampling scheme, which we develop to solve a class of problems that subsumes the classic nearest neighbor problem. Results from a massive online content recommendation firm are given. Fourth, we address the problem of cost-effectively executing a broad class of computations on commercial cloud computing platforms, including the computations typically done in personalization. We formulate this as a resource allocation problem and introduce a new approach to modeling uncertainty - the Data-Driven Prophet Model - that treads the line between stochastic and adversarial modeling, and is amenable to the common situation where stochastic modeling is challenging, despite the availability of copious historical data. We propose a simple, scalable algorithm that is shown to be order-optimal in this setting. Results from experiments on a commercial cloud platform are shown.
by Andrew A. Li.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
23

Cabezas, Randi. "Large-scale probabilistic aerial reconstruction." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/117832.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 153-167).
While much emphasis has been placed on large-scale 3D scene reconstruction from a single data source such as images or distance sensors, models that jointly utilize multiple data types remain largely unexplored. In this work, we will present a Bayesian formulation of scene reconstruction from multi-modal data as well as two critical components that enable large-scale reconstructions with adaptive resolution and high-level scene understanding with meaningful prior-probability distributions. Our first contribution is to formulate the 3D reconstruction problem within the Bayesian framework. We develop an integrated probabilistic model that allows us to naturally represent uncertainty and to fuse complementary information provided by different sensor modalities (imagery and LiDAR). Maximum-a-Posteriori inference within this model leverages GPGPUs for efficient likelihood evaluations. Our dense reconstructions (triangular mesh with texture information) are feasible with fewer observations of a given modality by relying on others without sacrificing quality. Secondly, to enable large-scale reconstructions, our formulation supports adaptive resolutions in both appearance and geometry. This change is motivated by the need for a representation that can adjust to a wide variability in data quality and availability. By coupling edge transformations within a reversible-jump MCMC framework, we allow changes in the number of triangles and mesh connectivity. We demonstrate that these data-driven updates lead to more accurate representations while reducing modeling assumptions and utilizing fewer triangles. Lastly, to enable high-level scene understanding, we include a categorization of reconstruction elements in our formulation. This scene-specific classification of triangles is estimated from semantic annotations (which are noisy and incomplete) and other scene features (e.g., geometry and appearance). The categorization provides a class-specific prior-probability distribution, thus helping to obtain more accurate and interpretable representations by regularizing the reconstruction. Collectively, these models enable complex reasoning about urban scenes by fusing all available data across modalities, a crucial necessity for future autonomous agents and large-scale augmented-reality applications.
by Randi Cabezas.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
24

Arandjelovic, Relja. "Advancing large scale object retrieval." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:619dc397-b645-494b-a014-8e9f51f6884f.

Full text
Abstract:
The objective of this work is object retrieval in large scale image datasets, where the object is specified by an image query and retrieval should be immediate at run time. Such a system has a wide variety of applications including object or location recognition, video search, near duplicate detection and 3D reconstruction. The task is very challenging because of large variations in the imaged object appearance due to changes in lighting conditions, scale and viewpoint, as well as partial occlusions. A starting point of established systems which tackle the same task is detection of viewpoint invariant features, which are then quantized into visual words and efficient retrieval is performed using an inverted index. We make the following three improvements to the standard framework: (i) a new method to compare SIFT descriptors (RootSIFT) which yields superior performance without increasing processing or storage requirements; (ii) a novel discriminative method for query expansion; (iii) a new feature augmentation method. Scaling up to searching millions of images involves either distributing storage and computation across many computers, or employing very compact image representations on a single computer combined with memory-efficient approximate nearest neighbour search (ANN). We take the latter approach and improve VLAD, a popular compact image descriptor, using: (i) a new normalization method to alleviate the burstiness effect; (ii) vocabulary adaptation to reduce influence of using a bad visual vocabulary; (iii) extraction of multiple VLADs for retrieval and localization of small objects. We also propose a method, SCT, for extremely low bit-rate compression of descriptor sets in order to reduce the memory footprint of ANN. The problem of finding images of an object in an unannotated image corpus starting from a textual query is also considered. Our approach is to first obtain multiple images of the queried object using textual Google image search, and then use these images to visually query the target database. We show that issuing multiple queries significantly improves recall and enables the system to find quite challenging occurrences of the queried object. Current retrieval techniques work only for objects which have a light coating of texture, while failing completely for smooth (fairly textureless) objects best described by shape. We present a scalable approach to smooth object retrieval and illustrate it on sculptures. A smooth object is represented by its imaged shape using a set of quantized semi-local boundary descriptors (a bag-of-boundaries); the representation is suited to the standard visual word based object retrieval. Furthermore, we describe a method for automatically determining the title and sculptor of an imaged sculpture using the proposed smooth object retrieval system.
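The RootSIFT mapping mentioned in the abstract is publicly documented as taking the element-wise square root of an L1-normalised SIFT descriptor, so that Euclidean distance between mapped vectors corresponds to the Hellinger kernel on the originals; the minimal sketch below (with random stand-in descriptors) illustrates why it adds no processing or storage overhead.

import numpy as np

def root_sift(descriptors, eps=1e-12):
    # L1-normalise each descriptor, then take element-wise square roots.
    d = descriptors / (np.abs(descriptors).sum(axis=1, keepdims=True) + eps)
    return np.sqrt(d)

sift = np.random.rand(10, 128)     # ten 128-D SIFT descriptors (placeholder values)
print(root_sift(sift).shape)       # same shape: no extra storage or processing cost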
APA, Harvard, Vancouver, ISO, and other styles
25

Yang, Xintian. "Towards large-scale network analytics." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1343680930.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Zhang, Yang. "Visually Analyzing Large Scale Graphs." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1439951819.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Seshadri, Sangeetha. "Enhancing availability in large scale." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29715.

Full text
Abstract:
Thesis (Ph.D)--Computing, Georgia Institute of Technology, 2009.
Committee Chair: Ling Liu; Committee Member: Brian Cooper; Committee Member: Calton Pu; Committee Member: Douglas Blough; Committee Member: Karsten Schwan. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
28

Li, Fuqin. "Large scale estimation of evapotranspiration." PhD thesis, Murdoch University, 1999. https://researchrepository.murdoch.edu.au/id/eprint/51652/.

Full text
Abstract:
Evapotranspiration is an essential component of the energy and water budget, but its estimation depends on available data sources and the environment of an area. Remote sensing techniques, combined with routine meteorological data, are used to estimate evapotranspiration over central Australia through the development and application of a number of models, ranging from physically based instantaneous models to a daily simulation model. The proposed models are evaluated using aircraft observations over two distinct vegetation regimes in south-western Australia. Among the three physically based instantaneous models, single-source models using an empirically determined excess resistance term performed better than a two-source model which does not require such a parameterization. The mean absolute difference between measured and estimated values of the sensible heat flux is below 17 Wm-2 in comparison to approximately 40 Wm-2 for evapotranspiration. Estimates of evapotranspiration depend on the closure of the surface energy balance and incorporate all residual errors in this closure. All models perform better over the agricultural vegetation than over the native vegetation. As these physically based models only provide instantaneous estimates of evapotranspiration at satellite overpass, a coupled one dimensional soil-canopy-atmosphere model and a simple budget water balance model have been used to simulate the daily evapotranspiration. Comparison of these results with the aircraft observations shows that the coupled model provides a good estimate of surface heat fluxes over the agricultural area, with mean absolute differences between measured and estimated values being approximately 25 Wm-2 for both sensible heat flux and evapotranspiration. Over the native vegetation, the mean absolute difference between measured and observed fluxes increased to 49 and 47 Wm-2, respectively, for the sensible heat and evapotranspiration. This increase results from the inability of a simple water balance model to incorporate the effects of the underlying aquifer on deep rooted native vegetation, particularly during the dry summer season. It also highlights the sensitivity of the one dimensional soil-canopy-atmosphere model to the specification of soil moisture. Since the model simulation of surface temperature is also very sensitive to the soil moisture, a comparison between model simulation of surface temperature and satellite derived surface temperature was used to adjust parameters of a water balance model, resulting in better estimates of soil moisture and consequently improved predictions of evapotranspiration. These models have been applied to estimating evapotranspiration in central Australia, using limited routine meteorological data and the NOAA-14 AVHRR overpass. Minimizing the difference between model predicted surface temperature and satellite derived temperature to adjust the estimated soil moisture, both the instantaneous physically based model and the simulation yielded consistent results for 8 representative clear sky days during 1996-1997. These results highlight the sensitivity of surface temperature to soil moisture and suggest that radiometric surface temperature can be used to adjust simple water balance estimates of soil moisture, providing a simple and effective means of estimating large scale evapotranspiration in remote arid regions.
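The closure of the surface energy balance that the abstract refers to is conventionally written as the partitioning of net radiation into soil, sensible and latent heat fluxes; the standard relation below is given for orientation only and is not taken from the thesis.

% Standard surface energy-balance closure (general form, not the thesis's own equations):
\[
  \lambda E = R_n - G - H ,
\]
% where \lambda E is the latent heat flux associated with evapotranspiration, R_n the net
% radiation, G the soil heat flux and H the sensible heat flux, all in W\,m^{-2}; any
% residual error in closing this balance accumulates in the evapotranspiration estimate.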
APA, Harvard, Vancouver, ISO, and other styles
29

Schatz, Bruce R., and Hsinchun Chen. "Building Large-Scale Digital Libraries." IEEE, 1996. http://hdl.handle.net/10150/106127.

Full text
Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona
In this era of the Internet and the World Wide Web, the long-time topic of digital libraries has suddenly become white hot. As the Internet expands, particularly the WWW, more people are recognizing the need to search indexed collections. Digital library research projects thus have a common theme of bringing search to the Net. This is why the US government made digital libraries the flagship research effort for the National Information Infrastructure (NII), which seeks to bring the highways of knowledge to every American. As a result, the four-year, multiagency DLI was funded with roughly $1 million per year for each project (see the "Agency perspectives" sidebar). Six projects (chosen from 73 proposals) are involved in the DLI, which is sponsored by the National Science Foundation, Advanced Research Projects Agency, and the National Aeronautics and Space Administration. This issue of Computer includes project reports from these six university sites: Carnegie Mellon University, University of California at Berkeley, University of California at Santa Barbara, University of Illinois at Urbana-Champaign, University of Michigan, and Stanford University.
APA, Harvard, Vancouver, ISO, and other styles
30

Weirich, Sebastian. "Kontexteffekte in Large-Scale Assessments." Doctoral thesis, Humboldt-Universität zu Berlin, Lebenswissenschaftliche Fakultät, 2015. http://dx.doi.org/10.18452/17283.

Full text
Abstract:
The present doctoral thesis evaluates various methods and models of item response theory to parametrize context effects in large-scale assessments. Such effects may occur in quantitative educational assessments and may cause biased item and person parameter estimates. To decide whether context effects occur in individual cases and lead to biased parameters, specific IRT models have to be developed which parametrize context effects in addition to item and person effects. The present doctoral thesis consists of three self-contained contributions. In the first contribution, a model for the estimation of context effects in an IRT framework is introduced. Item position effects are examined as an example of context effects in the framework of generalized linear mixed models. Using simulation studies, the statistical properties of the model are investigated, which emphasizes the relevance of an appropriate test design. A balanced incomplete test design is necessary not only to obtain valid item parameters in the Rasch model, but to guarantee unbiased estimation of position effects in more complex IRT models. The third contribution deals with the problem of missing background data in large-scale assessments. The effect which predicts the probability of a missing value on a certain variable is considered as a context effect. Statistical methods of multiple imputation were applied to the problem of missing background data in large-scale assessments. In contrast to other approaches used so far in practice (dummy coding of missing values), unbiased population and subpopulation estimates were obtained in a simulation study for most conditions.
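As a generic sketch of multiple imputation for missing background variables (as opposed to dummy-coding the missing entries), the Python snippet below draws several completed datasets with scikit-learn's IterativeImputer; the variable set, missingness rate and number of imputations are illustrative assumptions, not the dissertation's procedure.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
background = rng.normal(size=(500, 4))                     # toy background variables
background[rng.random(background.shape) < 0.2] = np.nan    # 20% of entries missing

# Draw m = 5 completed datasets; sample_posterior=True makes each draw stochastic.
imputations = [
    IterativeImputer(sample_posterior=True, random_state=m).fit_transform(background)
    for m in range(5)
]
# Downstream, the measurement model would be estimated on each completed dataset and
# the results pooled (e.g. via Rubin's rules) rather than relying on a single imputation.
print(len(imputations), imputations[0].shape)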
APA, Harvard, Vancouver, ISO, and other styles
31

Tsytsarau, Mikalai. "Large Scale Aggregated Sentiment Analytics." Doctoral thesis, Università degli studi di Trento, 2013. https://hdl.handle.net/11572/368936.

Full text
Abstract:
In the past years we have witnessed Sentiment Analytics becoming an increasingly popular topic in Information Retrieval, one that has established itself as a promising direction of research. With the rapid growth of the user-generated content represented in blogs, forums, social networks and micro-blogs, it became a useful tool for social studies, market analysis and reputation management, since it made it possible to capture sentiments and opinions at a large scale and with ever-growing precision. Sentiment Analytics came a long way from product review mining to full-fledged multi-dimensional analysis of social sentiment, exposing people's attitudes towards any topic, aggregated along different dimensions such as time and demographics. The novelty of our work is that it approaches Sentiment Analytics from the perspective of Data Mining, addressing some important problems which fall outside the scope of Opinion Mining. We develop a framework for Large Scale Aggregated Sentiment Analytics, which allows us to capture and quantify important changes in aggregated sentiments or in their dynamics, evaluate demographical aspects of these changes, and explain the underlying events and mechanisms which drive them. The first component of our framework is Contradiction Analysis, which studies diverse opinions and their interaction, and allows tracking the quality of aggregated sentiment or detecting interesting sentiment differences. Targeting large scale applications, we develop a sentiment contradiction measure based on the statistical properties of sentiment and allowing efficient computation from aggregated sentiments. Another important component of our framework addresses the problem of monitoring and explaining temporal sentiment variations. Along this direction, we propose novel time series correlation methods tailored specifically for large scale analysis of sentiments aggregated over user demographics. Our methods help to identify interesting correlation patterns between demographic groups and thus better understand the demographical aspect of sentiment dynamics. We bring another interesting dimension to the problem of sentiment evolution by studying the joint dynamics of sentiments and news, uncovering the importance of news events and assessing their impact on sentiments. We propose a novel and universal way of modeling different media and their dynamics, which aims to describe the information propagation in news and social media. Finally, we propose and evaluate an updateable method of sentiment aggregation and retrieval, which preserves important properties of aggregated sentiments and also supports the scalability and performance requirements of our applications.
APA, Harvard, Vancouver, ISO, and other styles
32

Tsytsarau, Mikalai. "Large Scale Aggregated Sentiment Analytics." Doctoral thesis, University of Trento, 2013. http://eprints-phd.biblio.unitn.it/1023/1/Tsytsarau-Thesis.pdf.

Full text
Abstract:
In the past years we have witnessed Sentiment Analytics becoming an increasingly popular topic in Information Retrieval, one that has established itself as a promising direction of research. With the rapid growth of the user-generated content represented in blogs, forums, social networks and micro-blogs, it became a useful tool for social studies, market analysis and reputation management, since it made it possible to capture sentiments and opinions at a large scale and with ever-growing precision. Sentiment Analytics came a long way from product review mining to full-fledged multi-dimensional analysis of social sentiment, exposing people's attitudes towards any topic, aggregated along different dimensions such as time and demographics. The novelty of our work is that it approaches Sentiment Analytics from the perspective of Data Mining, addressing some important problems which fall outside the scope of Opinion Mining. We develop a framework for Large Scale Aggregated Sentiment Analytics, which allows us to capture and quantify important changes in aggregated sentiments or in their dynamics, evaluate demographical aspects of these changes, and explain the underlying events and mechanisms which drive them. The first component of our framework is Contradiction Analysis, which studies diverse opinions and their interaction, and allows tracking the quality of aggregated sentiment or detecting interesting sentiment differences. Targeting large scale applications, we develop a sentiment contradiction measure based on the statistical properties of sentiment and allowing efficient computation from aggregated sentiments. Another important component of our framework addresses the problem of monitoring and explaining temporal sentiment variations. Along this direction, we propose novel time series correlation methods tailored specifically for large scale analysis of sentiments aggregated over user demographics. Our methods help to identify interesting correlation patterns between demographic groups and thus better understand the demographical aspect of sentiment dynamics. We bring another interesting dimension to the problem of sentiment evolution by studying the joint dynamics of sentiments and news, uncovering the importance of news events and assessing their impact on sentiments. We propose a novel and universal way of modeling different media and their dynamics, which aims to describe the information propagation in news and social media. Finally, we propose and evaluate an updateable method of sentiment aggregation and retrieval, which preserves important properties of aggregated sentiments and also supports the scalability and performance requirements of our applications.
APA, Harvard, Vancouver, ISO, and other styles
33

Suzuki, Eri. "Small-Scale Statistics and Large-Scale Coherence in Convective Turbulence." 京都大学 (Kyoto University), 1997. http://hdl.handle.net/2433/202424.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Amanullah, Ashraf. "Scale down models of mixing performance in large scale bioreactors." Thesis, University of Birmingham, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.633068.

Full text
Abstract:
Scale down models have been successfully developed and applied to the investigation of the effects of both dissolved oxygen and pH gradients, consequent of large scales of operation, on the biological performance of a culture of Bacillus subtilis. The strain used produces acetoin and butanediol as metabolites, and has been used as a model culture for mixing studies due to the unusual sensitivity of its product distribution to oxygen supply. It is a useful biological indicator of bioreactor performance. In addition, the sensitivity of metabolite production rates to pH has been exploited. Experiments using two different scale down models (two inter-connected stirred tanks and a stirred tank connected to a plug flow reactor) have been performed with the aim of simulating incomplete mixing with respect to oxygen supply. The effects of mean circulation time and the relative volumes of the compartments containing high and low dissolved oxygen concentrations, both in the ranges realistic of those found at large scales of operation, have been studied. For a given configuration, the biological response of the culture was consistent with the mixing conditions imposed. Similar trends (although significantly different in magnitude) in the biological performance of the culture in the two scale down models were found. Differences in performance between the two configurations have been explained in terms of the flow characteristics and oxygen availability in each system. The results presented also highlight the importance of the choice of the scale down model when studying the impact of large scale inhomogeneities on micro-organisms. The study shows that significant changes in biological performance are likely to occur upon scale up of this fermentation due to circulation of cells through oxygen deprived regions. These scale down experiments also indicate that both decreasing the mean circulation time and increasing the size of the well mixed impeller region should improve performance at the large scale. pH inhomogeneities can also occur in large scale fermenters near the addition point of acid or base for pH control as a consequence of poor bulk mixing. Frequent exposure of cells to such regions may affect microbial metabolism. Scale down experiments, under identical nonlimiting conditions of oxygen supply, have been used to simulate this phenomenon. It is shown that the effects of localised pH deviations from the bulk value on the biological performance of micro-organisms cannot be ignored for mixing times in bioreactors exceeding 60 seconds. Such effects of pH do not affect the growth of the culture. However, significant changes in product formation can be measured. Such scale down analysis may result in a better understanding of the effects of the physical environment on the biological performance of micro-organisms at different scales of operation, and help to produce a more rational approach to the design of bioreactors.
APA, Harvard, Vancouver, ISO, and other styles
35

Cabré, Albós Anna. "Large scale structure and dark energy." Doctoral thesis, Universitat de Barcelona, 2008. http://hdl.handle.net/10803/750.

Full text
Abstract:
The cosmic expansion history tests the dynamics of the global evolution of the universe and its energy density contents, while the cosmic growth history tests the evolution of the inhomogeneous part of the energy density. By comparing both histories, we can distinguish the nature of the physics responsible for the acceleration of the universe: dark energy or modified gravity. Most of the observational evidence for the accelerating universe comes from geometrical tests that measure directly H(z) = ȧ/a, the expansion rate of the universe, such as measurements of the luminosity distance using standard candles (Sn Ia) or measurements of the angular distance using standard rulers such as baryonic acoustic oscillations. Observations of the cosmic expansion history alone cannot distinguish dark energy from modified gravity, since the expansion history H(z) can be reproduced by any modified gravity model by changing the energy equation of state w. The additional observational input that is required is the growth function δ(z) = (δρ/ρ)(z) of the linear matter density contrast as a function of redshift (usually used as the normalized growth function D(z) = δ(z)/δ(0)).

In the first part of the thesis, we study the Integrated Sachs-Wolfe effect (ISW), through the cross-correlation between large scale clustering, traced by galaxies (in our case from the catalog SDSS), and primordial temperature fluctuations from the CMB (using the catalog WMAP). Photons that come from the last scattering surface can be redshifted or blueshifted by the time evolution of fluctuations in the gravitational potentials created by large scale structures, which are traced by the large scale galaxy distribution. The ISW effect gives us information about dark energy (DE), because DE modifies the evolution of the dark matter gravitational potential. In principle, the ISW effect can probe dark energy independently from other observations, such as Supernovae Ia.

The correlation between galaxies in redshift space can also be used to study the evolution of the dark matter gravitational potential in a way that is complementary to the cross-correlation of galaxies with CMB photons. In the second part of the thesis, we study this effect in the luminous red galaxies of the SDSS. These galaxies trace very large volumes, which is important to obtain more signal, and they have a known evolution, which makes them easy to work with.

KEY WORDS: cosmology, large scale structure, growth of perturbations, ISW effect, redshift distortions, LRG.
CATALAN SUMMARY:

In the first part of the thesis we study the integrated Sachs-Wolfe (ISW) effect through the cross-correlation between large-scale structure, traced by galaxies (in our case we use the SDSS catalogue), and the primordial temperature fluctuations of the cosmic microwave background (with the WMAP catalogue). Photons coming from the last scattering surface can be blue- or redshifted by the time evolution of the fluctuations of the gravitational potentials created by large-scale structure. The ISW effect gives us information about dark energy, because dark energy modifies the evolution of the dark matter gravitational potentials. In principle, the ISW effect can probe dark energy independently of other observations, such as Type Ia supernovae.
The correlation between galaxies in redshift space can also be used to study the evolution of the gravitational potentials in a way complementary to that obtained with the ISW effect. In the second part of the thesis we study redshift-space distortions in the luminous red galaxies of the SDSS catalogue. These galaxies trace very large volumes, which is essential to obtain a good signal, and they have a known evolution, which makes them easy to study.

SPANISH SUMMARY:

In the first part of the thesis we study the integrated Sachs-Wolfe (ISW) effect through the cross-correlation between the large-scale structure traced by galaxies (here we use the SDSS catalogue) and the primordial temperature fluctuations of the cosmic microwave background (WMAP catalogue). Photons coming from the last scattering surface can be blue- or redshifted by the time evolution of the fluctuations in the gravitational potentials created by large-scale structure. The ISW effect gives information about dark energy, because dark energy modifies the evolution of the dark matter gravitational potentials. In principle, the ISW effect can probe dark energy independently of other observations, such as Type Ia supernovae.
The correlation between galaxies in redshift space can also be used to study the evolution of the gravitational potentials in a way complementary to that obtained with the ISW effect. In the second part of the thesis we study redshift-space distortions for the luminous red galaxies of the SDSS catalogue. These galaxies trace large volumes, which is essential for obtaining a good signal, and they also have a known evolution, which makes them easy to study.
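For orientation, the expansion-versus-growth comparison invoked in the abstract can be summarised with the standard linear-theory relations below; these are textbook expressions, quoted here as a reminder rather than taken from the thesis.

```latex
% Expansion history vs. growth history (standard linear-theory relations)
\begin{align}
  H(z) &= \frac{\dot a}{a}, \qquad 1+z = \frac{1}{a},\\
  \delta(z) &= \frac{\delta\rho}{\rho}(z), \qquad D(z) = \frac{\delta(z)}{\delta(0)},\\
  % the same H(z) can correspond to different growth histories D(z)
  % in dark-energy and modified-gravity models
  \ddot\delta + 2H\dot\delta - 4\pi G\,\rho_m\,\delta &= 0 .
\end{align}
```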
APA, Harvard, Vancouver, ISO, and other styles
36

Sales, Pardo Marta. "Large Scale Excitations in Disordered Systems." Doctoral thesis, Universitat de Barcelona, 2002. http://hdl.handle.net/10803/1786.

Full text
Abstract:
Disorder is present in many systems in nature and in many different forms: for instance, the dislocations of a crystal lattice, or the randomness of the interaction between magnetic moments. One of the most studied examples is that of spin glasses, because they are simple to model but keep most of the very complex features that many disordered systems have. The frustration of the ground-state configuration is responsible for the existence of a gapless spectrum of excitations and a rugged, complex free-energy landscape, which bring about a very slow relaxation towards the equilibrium state. The main concern of the thesis has been to study the properties of typical excitations, i.e. those excitations that are large and contribute dominantly to the physics in the frozen phase.
The existence of these large excitations brings about large fluctuations of the order parameter, and we have shown in this thesis that this feature can be exploited to study the transition of any spin glass model. Moreover, we have shown that the information about these excitations can be extracted from the statistics of the lowest-lying excitations. This is because, due to the random nature of spin glasses, the physics obtained from averaging over the whole spectrum of excitations of an infinite sample is equivalent to averaging over many finite systems where only the ground state and the first excitation are considered. The novelty of this approach is that we do not need to make any assumption about what typical excitations are like, because we can compute them exactly using numerical methods. Finally, we have investigated the dynamics, and more specifically the link between the problem of chaos and the rejuvenation phenomena observed experimentally. Rejuvenation means that when the temperature is lowered the aging process restarts from scratch. This is potentially linked with the chaos assumption, which states that equilibrium configurations at two different temperatures are not correlated. Chaos is a large-scale phenomenon, possible if entropy fluctuations are large. However, in this thesis we have shown that the response to temperature changes can be large in the absence of chaos close to a localization transition, where the Boltzmann weight condenses into a few states. This has been observed in simulations of the Sinai model, in which this localization is realized dynamically. In this model, since at low temperatures the system gets trapped in very deep states, the dynamics is only local, so that only small excitations contribute to the rejuvenation signal that we have been able to observe. Thus, in agreement with the hierarchical picture, rejuvenation is possible even in the absence of chaos and reflects the start of the aging process at small length scales.
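As a toy illustration of the "statistics of the lowest-lying excitations" mentioned above, the sketch below exhaustively enumerates a very small Edwards-Anderson-type sample to obtain the ground state and the first excitation gap; the lattice size, couplings and brute-force method are illustrative assumptions, not the procedure used in the thesis.

```python
import itertools, random

# Toy 2D Edwards-Anderson spin glass on an L x L grid with Gaussian couplings.
# Exhaustively enumerate all 2^(L*L) configurations to find the ground-state
# energy and the energy of the lowest-lying excitation (first excited level).
L = 4
random.seed(0)
N = L * L

# nearest-neighbour bonds with periodic boundaries
bonds = []
for x in range(L):
    for y in range(L):
        i = x * L + y
        bonds.append((i, ((x + 1) % L) * L + y, random.gauss(0, 1)))
        bonds.append((i, x * L + (y + 1) % L, random.gauss(0, 1)))

def energy(spins):
    return -sum(J * spins[i] * spins[j] for i, j, J in bonds)

levels = sorted({round(energy(s), 9)
                 for s in itertools.product((-1, 1), repeat=N)})
e0, e1 = levels[0], levels[1]
print("ground-state energy per spin:", e0 / N)
print("lowest excitation gap:", e1 - e0)
```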
APA, Harvard, Vancouver, ISO, and other styles
37

Ochs, Fabian. "Modelling large-scale thermal energy stores." Aachen Shaker, 2009. http://d-nb.info/1000359158/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Arriaga, García Jaime Alonso. "Dynamics of large-scale shoreline perturbations." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/620734.

Full text
Abstract:
Shorelines around the world are rarely smooth and they can present undulations and cuspate shapes. On the one hand, human actions can cause shoreline perturbations via beach nourishments, which in turn perturb the wave field that drives the morphological changes. On the other hand, there can be natural perturbations in the coastal system due to positive feedbacks between the wave forcing and the evolving bathymetric contours. In this thesis, the dynamics of mega-nourishments and shoreline sand waves are investigated. A morphodynamic model based on the wave-driven alongshore sediment transport, including cross-shore transport in a simplified way and neglecting tides, is improved and applied to the Zandmotor mega-nourishment on the Dutch Delfland coast. The model is calibrated with the bathymetric data measured from January 2012 to March 2013 using measured offshore wave forcing. The calibrated model reproduces the evolution of the shoreline and depth contours until March 2015. The modelled coastline diffusivity during the 3-yr period is 0.0021 m^2/s, close to the observed value of 0.0022 m^2/s. In contrast, the coefficient of the classical one-line diffusion equation is 0.0052 m^2/s. Thus, the lifetime is predicted to be 90 yr instead of 35 yr. This difference is attributed to the role played by the 60% of oblique waves in that climate. The dynamics of mega-nourishments are further investigated by designing analytic mega-nourishments with different asymmetry, shape and volume. It is found that narrow initial shapes are less diffusive than wider shapes and that smaller nourishments are more diffusive than bigger ones. Also, it is found that the initial asymmetry can influence the asymmetry in feeding capacity to adjacent beaches over 50 years. The mega-nourishment is also forced with wave climates of different obliquity percentages. Its diffusivity decays linearly with increasing obliquity, and for very oblique wave climates (more than 80%) hotspot areas are formed at the sides (due to high-angle wave instability). The growth rate of the erosion hotspots is especially high for unimodal wave climates, which also makes mega-nourishments migrate alongshore at rates of 40 m/yr. Kilometric-scale shoreline sand waves have been observed on the northern flank of the Dungeness Cuspate Foreland (southeastern coast of the U.K.). They consist of two bumps separated by embayments with a 350-450 m spacing. We have analyzed 36 shoreline surveys of 2 km length using the Discrete Fourier Transform (DFT), from 2005 to 2016, and seven topographic surveys encompassing the intertidal zone, from 2010 to 2016. The data set shows two clear formation events, which are correlated with moments when the wave energy of high-angle waves is dominant over that of low-angle waves. Also, a linear stability model based on the one-line approximation is applied to the site. It accurately predicts the formation moments, with positive growth rates of the correct order of magnitude for wavelengths similar to the observed ones. All these results confirm that the shoreline undulations in Dungeness are self-organized and that the underlying formation mechanism is the high-angle wave instability. The two detected formation events thus provide a unique opportunity to validate the existing morphodynamic models that include such instability.
Shorelines around the world are rarely smooth and can present undulations and cuspate shapes. On the one hand, human actions can cause shoreline perturbations through beach nourishments, which in turn perturb the wave field that drives the morphological changes. On the other hand, there can be natural perturbations in the coastal system due to positive feedbacks between the wave forcing and the evolving bathymetric contours. In this thesis, the dynamics of mega-nourishments and large-scale shoreline sand waves are investigated. A morphodynamic model based on alongshore transport, which includes cross-shore transport in a simplified way (ignoring the effect of tides), is first improved and then applied to the Zandmotor mega-nourishment. The model is calibrated with the bathymetric data measured from January 2012 to March 2013, using wave data from a buoy located at 40 m depth. The calibrated model reproduces the evolution of the shoreline and of the bathymetric contours until March 2015. The modelled coastline diffusivity during the 3-yr period is 0.0021 m^2/s, close to the observed value of 0.0022 m^2/s. In contrast, the coefficient of the classical one-line diffusion equation is 0.0052 m^2/s. The lifetime is therefore predicted to be 90 yr instead of 35 yr. This difference is attributed to the role played by the 60% of oblique waves in that climate. The dynamics of mega-nourishments are investigated further by designing analytic mega-nourishments with different asymmetries, shapes and volumes. It is found that narrow initial shapes are less diffusive than wide shapes and that smaller nourishments are more diffusive than larger ones. It is also found that the initial asymmetry can influence the asymmetry in the feeding capacity to adjacent beaches over 50 years. Simulations are also run with wave climates of different obliquity percentages. The diffusivity decreases linearly with increasing obliquity, and for very oblique climates (more than 80%) erosion areas form at the sides (due to high-angle wave instability). The growth rate of the erosion hotspots is especially high for unimodal wave climates, which also causes the mega-nourishments to migrate alongshore at rates of 40 m/yr. Kilometric-scale shoreline sand waves have been observed on the northern flank of Dungeness (southeastern coast of the U.K.). They consist of two crests separated by a spacing of 350-450 m. We have analysed 36 measured shorelines of 2 km length using the Discrete Fourier Transform (DFT), from 2005 to 2016, and seven topographic surveys covering the intertidal zone, from 2010 to 2016. The data set shows two clear formation events, which are correlated with the moments when the energy of high-angle waves dominates over that of low-angle waves. In addition, a linear stability model based on the one-line approximation is applied to the site. It accurately predicts the formation moments, with positive growth rates of the correct order of magnitude for wavelengths similar to those observed.
All these results confirm that the shoreline undulations at Dungeness are self-organised and that the underlying formation mechanism is the high-angle wave instability. The two detected formation events thus provide a unique opportunity to validate the existing morphodynamic models that include this instability.
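The diffusivity and lifetime figures quoted above can be read through the classical one-line model; the fragment below restates that standard diffusion approximation and the implied lifetime scaling (textbook relations, not equations reproduced from the thesis).

```latex
% One-line (coastline) diffusion approximation and lifetime scaling
\begin{align}
  \frac{\partial y_s}{\partial t} &= \varepsilon\,\frac{\partial^2 y_s}{\partial x^2},
  \qquad y_s(x,t) = \text{cross-shore shoreline position},\\
  T_{\text{life}} &\sim \frac{L^2}{\varepsilon}
  \;\Rightarrow\;
  \frac{T(\varepsilon = 0.0021\,\mathrm{m^2/s})}{T(\varepsilon = 0.0052\,\mathrm{m^2/s})}
  \approx \frac{0.0052}{0.0021} \approx 2.5 ,
\end{align}
consistent with the predicted lifetime of roughly 90 instead of 35 years.
```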
APA, Harvard, Vancouver, ISO, and other styles
39

Krause, Antje. "Large scale clustering of protein sequences." [S.l. : s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=965776190.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Tran, Van-Hoai. "Solving large scale crew pairing problems." [S.l. : s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=975292714.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Worm, Stefan. "Monitoring of large-scale Cluster Computers." Master's thesis, Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200700032.

Full text
Abstract:
The constant monitoring of a computer is essential to staying up to date about its state. This may seem trivial if one is sitting right in front of it, but monitoring a computer from a distance is not as simple. It gets even more difficult if a large number of computers need to be monitored. Because the process of monitoring always places some load on the network and on the monitored computer itself, it is important to keep these influences as low as possible. Especially for a high-performance cluster built from many computers, the monitoring approach must work as efficiently as possible and must not interfere with the actual operation of the supercomputer. The main goals of this work were therefore analyses to ensure the scalability of the monitoring solution for a large computer cluster, as well as proof of its functionality in practice. To achieve this, monitoring activities were first classified in terms of the overall operation of a large computer system. Thereafter, methods and solutions were presented which are suitable, in a general scenario, for executing the monitoring process as efficiently and scalably as possible. During the course of this work, conclusions were drawn from the operation of an existing cluster for the operation of a new, more powerful system, to ensure its functionality as well as possible. Consequently, a selection was made from an existing pool of solutions to find the application most suitable for monitoring the new cluster. The selection took into account the special situation of the system, such as the use of InfiniBand as the network interconnect. Furthermore, additional software was developed which can read and process the various status information of the InfiniBand ports, independent of the hardware vendor. This functionality, which had so far not been available in free monitoring applications, was implemented exemplarily for the chosen monitoring software. Finally, the influence of monitoring activities on the actual tasks of the cluster was of interest. To examine the influence on the CPU and the network, the self-developed plugin as well as a selection of typical monitoring values were used. It could be shown that no impact on the productive application is to be expected for typical monitoring intervals, and that only for atypically short intervals could a minor influence be determined.
The constant monitoring of a computer is one of the essential things to do in order to stay up to date about the machine's current state. This is trivial when sitting directly in front of it, but it is no longer so simple when a computer has to be observed from a distance. It becomes more difficult still when a large number of computers has to be monitored. Since the process of monitoring always causes some network load and some load on the monitored computer itself, it is important to keep these influences as low as possible. Especially when many computers have been combined into a powerful cluster, it is necessary that the monitoring solution works as efficiently as possible and does not disturb the actual work of the supercomputer. The main goals of this work are therefore analyses to ensure the scalability of the monitoring solution for a large computer cluster, as well as the practical proof of its functionality. To this end, monitoring was first placed in the context of the overall operation of a large computer system. Then methods and solutions were presented which are suitable, in a general scenario, for carrying out the complete monitoring process as efficiently and scalably as possible. Subsequently, it was discussed which lessons can be drawn from the operation of an existing cluster for the operation of a new, more powerful system, in order to guarantee its functionality as well as possible. Building on this, a selection was made of which application, out of a set of existing solutions, is particularly suitable for monitoring the new cluster. This was done taking the special situation into account, for example the use of InfiniBand as the interconnect network. In the course of this, an additional piece of software was developed which can read out and process the various status information of the InfiniBand ports, independently of the hardware vendor. This functionality, which had not previously been available in free monitoring applications, was implemented exemplarily for the chosen monitoring software. Finally, the influence of the monitoring activities on the actual applications of the cluster was of interest. For this purpose, the self-developed plugin as well as a selection of typical monitoring values were used to examine the influence on the CPU and the network. It was shown that no restrictions of the actual application are to be expected for typical monitoring intervals, and that only for atypically short intervals could a minor influence be determined.
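As a minimal sketch of the kind of check involved, the following Nagios-style plugin (exit codes 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN) evaluates an InfiniBand error counter read from a hypothetical counter file; the file path and thresholds are assumptions, and the plugin developed in the thesis obtains the port counters through InfiniBand management interfaces rather than from such a file.

```python
#!/usr/bin/env python3
"""Nagios-style check sketch for an InfiniBand port error counter.

Exit codes follow the usual plugin convention: 0 OK, 1 WARNING, 2 CRITICAL,
3 UNKNOWN. The counter source (a plain text file) and the thresholds are
illustrative assumptions only.
"""
import sys

COUNTER_FILE = "/tmp/ib_symbol_error_counter"   # hypothetical counter source
WARN, CRIT = 10, 100                            # assumed thresholds

def main():
    try:
        with open(COUNTER_FILE) as fh:
            errors = int(fh.read().strip())
    except (OSError, ValueError) as exc:
        print(f"IB SYMBOL ERRORS UNKNOWN - cannot read counter ({exc})")
        return 3

    # Performance data after the pipe lets the monitoring system graph the value.
    perfdata = f"symbol_errors={errors};{WARN};{CRIT}"
    if errors >= CRIT:
        print(f"IB SYMBOL ERRORS CRITICAL - {errors} | {perfdata}")
        return 2
    if errors >= WARN:
        print(f"IB SYMBOL ERRORS WARNING - {errors} | {perfdata}")
        return 1
    print(f"IB SYMBOL ERRORS OK - {errors} | {perfdata}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```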
APA, Harvard, Vancouver, ISO, and other styles
42

Benson, Kirk C. "Adaptive Control of Large-Scale Simulations." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5002.

Full text
Abstract:
This thesis develops adaptive simulation control techniques that differentiate between competing system configurations. Here, a system is a real world environment under analysis. In this context, proposed modifications to a system denoted by different configurations are evaluated using large-scale hybrid simulation. Adaptive control techniques, using ranking and selection methods, compare the relative worth of competing configurations and use these comparisons to control the number of required simulation observations. Adaptive techniques necessitate embedded statistical computations suitable for the variety of data found in detailed simulations, including hybrid and agent-based simulations. These embedded statistical computations apply efficient sampling methods to collect data from simulations running on a network of workstations. The National Airspace System provides a test case for the application of these techniques to the analysis and design of complex systems, implemented here in the Reconfigurable Flight Simulator, a large-scale hybrid simulation. Implications of these techniques for the use of simulation as a design activity are also presented.
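A rough sketch of the adaptive, ranking-and-selection style control described above: two competing configurations are sampled sequentially until an approximate confidence interval on the difference of their mean outputs excludes zero or a budget is reached. The toy "simulation" outputs and thresholds are stand-ins, not the procedure or models used in the thesis.

```python
import random, statistics, math

random.seed(1)

def simulate(config):
    """Stand-in for one observation of a large-scale simulation's output."""
    mean = {"A": 10.0, "B": 10.5}[config]          # assumed true performance
    return random.gauss(mean, 1.0)

def compare(z=1.96, n0=10, n_max=500):
    """Sequentially sample both configurations until the (approximate)
    confidence interval on the mean difference excludes zero."""
    a = [simulate("A") for _ in range(n0)]
    b = [simulate("B") for _ in range(n0)]
    while len(a) < n_max:
        diff = statistics.mean(a) - statistics.mean(b)
        se = math.sqrt(statistics.variance(a) / len(a) +
                       statistics.variance(b) / len(b))
        if abs(diff) > z * se:                     # difference is resolved
            # report the configuration with the larger sample mean
            return ("A" if diff > 0 else "B"), len(a) + len(b)
        a.append(simulate("A"))                    # otherwise take more observations
        b.append(simulate("B"))
    return "undecided", len(a) + len(b)

winner, n_obs = compare()
print(f"selected configuration: {winner} after {n_obs} observations")
```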
APA, Harvard, Vancouver, ISO, and other styles
43

Eversole, Dolan. "Large-Scale Beach Change: Kaanapali, Hawai'i." Thesis, University of Hawaii at Manoa, 2002. http://hdl.handle.net/10125/6946.

Full text
Abstract:
Using monthly beach profile surveys and historical aerial photographs, the seasonal and long-term (48 year) beach morphology for Kaanapali Beach, Maui is described. By identifying the shoreline position in historical aerial photographs it is determined that the Kaanapali area is subject to long periods of mild erosion and accretion punctuated by severe erosional events related to short-period Kona storms and hurricane waves. Increased Central Pacific tropical cyclone activity of the late 1950's and early 1960's and Hurricane Iniki in 1992 are identified as contributing factors to the observed volume change during these periods. Between these erosional periods the Kaanapali shoreline is relatively stable, characterized by light erosion to moderate accretion, suggesting the recovery time may be on the order of roughly 20 years. Over the 48-year period 1949 to 1997, the Kaanapali and Honokowai cells have experienced net sediment volume losses of 43,000 ±730 m3 and 30,700 ±630 m3 respectively, for a total net volume loss of 73,700 ±990 m3. The Kona storms and hurricanes of the early 1960's and 1992 collectively account for 136,000 m3 of sediment lost, or approximately 62% of the gross volume change for the entire period, revealing the significant erosional effect of these storms. Recovery after each of these storms accounts for 73,900 m3, or approximately 33% of the gross volume change. A residual loss of 10,600 m3, representing 5% of the gross volume change, is inferred as chronic erosion and may be a product of relative sea-level rise (RSLR). An increase in short-period southwesterly wave energy during these erosional periods is well documented and may have transported beach sediment further offshore than normal (beyond the reef); it is identified as a possible mechanism for long-term erosion in this area. The spatial distribution of historical shoreline movement suggests the majority of sediment transport occurs in the central portion of Kaanapali near Kekaa and Hanakoo Point and is driven by longshore rather than cross-shore transport. Surveyed beach profiles reveal a strong seasonal variability, with net erosion in the summer and accretion in the winter and an alongshore-alternating pattern of erosion and accretion. 65% of the net volume change occurs south of Kekaa Point, confirming the more dynamic nature of the southern (Kaanapali) cell. Net beach profile volume change from the mean suggests that June and January are the most dynamic months, each with approximately 14% of the total volume change. We attribute the significant and rapid erosion and accretion events to wave-induced longshore transport of sediment. Field observations of monthly beach sediment impoundment in the Kaanapali cell are examined and compared to three models that predict longshore sediment transport (LST). Beach profile results indicate sediment impoundment occurs seasonally, with a nearly balanced longshore sediment transport system between profiles 5 and 9. Longshore transport rates are derived from seasonal cumulative net volume change in the middle of Kaanapali Beach at profile 7. Cumulative net sediment transport rates are 29,379 m3/yr ±15% to the north and 22,358 m3/yr ±6% to the south for summer and winter respectively, a net annual rate of 7,021 m3/yr ±10% to the north and a gross annual rate of 51,736 m3/yr ±2%.
Predictive transport formulas such as CERC (1984), CERC (1991) and Kamphius (1991) predict net annual transport rates at 3 x 10^3 percent, 77 percent and 6 x 10^3 percent of the observed transport rates respectively. The presence of fringing reef significantly affects the ability of the LST models to accurately predict sediment transport. When applying the CERC (1984, 1991) and Kamphius (1991) formulas, the functional beach profile area available for sediment transport is assumed to be much larger than actually exists at Kaanapali, because the fringing reef truncates a portion of the sandy profile area. The CERC (1984, 1991) and Kamphius (1991) formulas do not account for the presence of a reef system, which may contribute to the models' overestimate of longshore sediment transport, as they assume the entire profile is mobile sediment. However, the fact that the CERC (1991) model underestimates the observed transport implies that additional environmental parameters (such as wave height, direction and period) play a more substantial role than the influence of the reef in the model results. The CERC (1991) Genesis model is found to be superior in fitting the observed longshore transport at Kaanapali Beach. The success of the Genesis model is partly attributed to its ability to account for short-term changes in near-shore parameters such as wave shoaling, refraction, bathymetry, antecedent conditions and several other shoreface parameters not accounted for in the CERC (1984) or Kamphius (1991) formulas. The use of the CERC (1984) formula is prone to practical errors in its application, particularly in the use of the recommended "K" coefficient and wave averaging, which may significantly overestimate the LST. A better fit to the observed LST is achieved with the CERC (1984) formula if the K value is decreased by an order of magnitude, from 0.77 to 0.07. The Kamphius (1991) formula is especially sensitive to extremes in wave period: it tends to deviate from observed transport estimates for unusually high wave periods (this study) and approximates observations nicely in areas with low wave periods (Ping Wang et al., 1998). Many of the studied predictive LST formulas are prone to overestimate transport, and thus their use requires a comprehensive understanding of the complexities and errors associated with employing them. Great care must be used when applying LST models in areas with significant hard bottom or shallow reefs that alter the beach profile shape. Due to these errors, the CERC (1984) and Kamphius (1991) formulas are better suited as qualitative interpretative tools of transport direction rather than magnitude.
ix, 62 leaves
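For reference, below is a compact sketch of the CERC (1984) energy-flux form of the longshore transport estimate discussed above, in its common textbook statement; the wave inputs, breaker index and sediment properties are placeholders, and the profile-area and reef effects discussed in the abstract are not represented.

```python
import math

# CERC (1984)-type longshore sediment transport estimate (energy-flux method),
# in its common textbook form. All numerical inputs are illustrative only.
RHO, RHO_S = 1025.0, 2650.0   # kg/m^3, seawater and quartz sand densities
G, POROSITY = 9.81, 0.4
GAMMA_B = 0.78                # breaker index H_b / d_b (assumed)
K = 0.77                      # empirical coefficient (the value discussed above)

def cerc_transport(H_b, alpha_b_deg):
    """Volumetric longshore transport rate (m^3/s) for breaking wave height
    H_b (m) and breaker angle alpha_b (degrees)."""
    alpha = math.radians(alpha_b_deg)
    E_b = RHO * G * H_b ** 2 / 8.0                # wave energy density at breaking
    C_gb = math.sqrt(G * H_b / GAMMA_B)           # shallow-water group celerity
    P_ls = E_b * C_gb * math.sin(alpha) * math.cos(alpha)   # longshore energy flux
    I_l = K * P_ls                                # immersed-weight transport rate
    return I_l / ((RHO_S - RHO) * G * (1.0 - POROSITY))

q = cerc_transport(H_b=1.0, alpha_b_deg=10.0)
print(f"Q = {q:.4f} m^3/s  (~{q * 3600 * 24 * 365:,.0f} m^3/yr)")
```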
APA, Harvard, Vancouver, ISO, and other styles
44

Jenelius, Erik. "Large-Scale Road Network Vulnerability Analysis." Doctoral thesis, KTH, Transport och lokaliseringsanalys, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-24952.

Full text
Abstract:
Disruptions in the transport system can have severe impacts for affected individuals, businesses and society as a whole. In this research, vulnerability is seen as the risk of unplanned system disruptions, with a focus on large, rare events. Vulnerability analysis aims to provide decision support regarding preventive and restorative actions, ideally as an integrated part of the planning process. The thesis specifically develops the methodology for vulnerability analysis of road networks and considers the effects of suddenly increased travel times and cancelled trips following road link closures. The major part consists of model-based studies of different aspects of vulnerability, in particular the dichotomy of system efficiency and user equity, applied to the Swedish road network. We introduce the concepts of link importance as the overall impact of closing a particular link, and regional exposure as the impact for individuals in a particular region of, e.g., a worst-case or an average-case scenario (Paper I). By construction, a link is important if the normal flow across it is high and/or the alternatives to this link are considerably worse, while a traveller is exposed if a link closure along her normal route is likely and/or the best alternative is considerably worse. Using regression analysis we show that these relationships can be generalized to municipalities and counties, so that geographical variations in vulnerability can be explained by variations in network density and travel patterns (Paper II). The relationship between overall impacts and user disparities is also analyzed for single link closures and is found to be negative, i.e., the most important links also have the most equal distribution of impacts among individuals (Paper III). In addition to links' roles for transport efficiency, the thesis considers their importance as rerouting alternatives when other links are disrupted (Paper IV). Such redundancy-important roads, often found running parallel to highways with heavy traffic, may be warranted a higher standard than their typical use would suggest. We also study the vulnerability of the road network under area-covering disruptions, representing for example flooding, heavy snowfall or forest fires (Paper V). In contrast to single link failures, the impacts of this kind of event are largely determined by the population concentration, more precisely the travel demand within, in and out of the disrupted area itself, while the density of the road network has little influence. Finally, the thesis approaches the issue of how to value the delays incurred by network disruptions and, using an activity-based modelling approach, illustrates that these delay costs may be considerably higher than the ordinary value of time, in particular during the first few days after the event when travel conditions are uncertain (Paper VI).
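One simplified reading of the link importance measure described above, sketched with networkx: the importance of a link is taken as the increase in total shortest-path travel time (plus a penalty for trips that become infeasible) when the link is closed. The toy network, weights and penalty are assumptions; the thesis works with the Swedish road network and observed travel demand.

```python
import itertools
import networkx as nx

# Toy undirected road network: edges carry travel times (minutes).
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 10), ("B", "C", 15), ("A", "C", 30),
    ("C", "D", 5),  ("B", "D", 25),
], weight="time")

UNSATISFIED_PENALTY = 120.0   # assumed cost (min) per trip that becomes infeasible

def total_cost(graph):
    """Sum of shortest-path travel times over all origin-destination pairs."""
    cost = 0.0
    for o, d in itertools.permutations(graph.nodes, 2):
        try:
            cost += nx.shortest_path_length(graph, o, d, weight="time")
        except nx.NetworkXNoPath:
            cost += UNSATISFIED_PENALTY
    return cost

base = total_cost(G)
for u, v in sorted(G.edges()):
    H = G.copy()
    H.remove_edge(u, v)
    # Importance = extra system-wide travel time caused by closing this link.
    print(f"importance of {u}-{v}: {total_cost(H) - base:.1f} min")
```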
APA, Harvard, Vancouver, ISO, and other styles
45

Bilal, Muhammad Shahid. "Large Scale Modelling of Striatal Network." Thesis, KTH, Matematik (Inst.), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-103502.

Full text
Abstract:
Numerical simulations play an important role in uncovering dynamic behaviour at the cellular and network levels and accelerate work in the field of neuroscience. Modern computational technologies have made it possible to simulate huge networks of neurons, which was possible only in theory two decades ago. Simulations of networks of thousands of neurons are carried out on a Cray XE6 parallel machine, based on 12-core AMD Opteron processors, which shows good scaling properties. Such models can be beneficial for generating global behaviour that could not be produced with fewer cells. For example, the effect of inhibition in a striatal network of MSNs is only seen if the number of cells and synapses is increased sufficiently. The simulated responses of cells are greatly influenced by the numerical scheme. This has been demonstrated using gap junctions between striatal fast-spiking interneurons. Implicit numerical schemes need to be used in order to obtain stable and accurate results. The simulations are carried out using serial and parallel implementations on the GENESIS and PGENESIS simulators, respectively. The limitations of the simulators have been highlighted by performing several simulation experiments. After addressing the shortcomings presented in this work, it should be possible to use these insights to investigate biologically relevant questions.
Numerical simulations are of great importance when one wants to investigate and understand dynamic phenomena at the cell and network level. This is very important for the whole field of neuroscience. Today's computing technologies have made it possible to simulate large networks of neurons, which was hardly realistic 10-20 years ago. Simulations of networks consisting of thousands of neurons can be performed locally at KTH on a parallel computer called Cray XE6, based on the AMD Opteron 12-core, with good scaling properties. Such network simulations are necessary in order to investigate the global behaviour of the network, which cannot be produced with a smaller number of cells. An example of a network effect that can only be seen in a large-scale model is how the inhibition between so-called medium spiny neurons (MSNs) in the striatal network works. Since the inhibition between each pair of cells is very weak, input from many cells is needed for the network to be affected. The simulation results are significantly influenced by which numerical method is used. This is demonstrated with a striatal network containing so-called gap junctions (electrical synapses) between striatal fast-spiking interneurons (FS). Implicit numerical methods become necessary in order to obtain stable and correct results. Simulations are performed both serially and in parallel with the help of the GENESIS simulator (PGENESIS for parallel implementations), which is a standard simulator for biophysically detailed neuron models. In order to evaluate the GENESIS simulator and its weaknesses, several simulation experiments have been carried out. Insights from these simulations, which are discussed in this work, can help lay a foundation for future use of GENESIS for large-scale simulations.
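A bare-bones illustration of why the numerical scheme matters for gap-junction coupling, as stressed above: two passive compartments coupled by a deliberately large gap-junction conductance are integrated with forward versus backward Euler. The parameter values are arbitrary, and the thesis itself works with detailed GENESIS/PGENESIS models rather than this two-variable system.

```python
import numpy as np

# Two passive compartments coupled by a strong gap junction:
#   C dV/dt = -g_L (V - E_L) + g_gap (V_other - V)
# With a large g_gap the explicit (forward Euler) update becomes unstable for
# step sizes where the implicit (backward Euler) update is still well behaved.
C, G_L, E_L, G_GAP = 1.0, 0.1, -65.0, 50.0      # arbitrary illustrative values
DT, STEPS = 0.05, 200
A = np.array([[-(G_L + G_GAP), G_GAP],
              [G_GAP, -(G_L + G_GAP)]]) / C
b = np.array([G_L * E_L, G_L * E_L]) / C

def forward_euler(v):
    return v + DT * (A @ v + b)

def backward_euler(v):
    # Solve (I - dt*A) v_new = v + dt*b
    return np.linalg.solve(np.eye(2) - DT * A, v + DT * b)

v_exp = np.array([-80.0, -50.0])
v_imp = v_exp.copy()
for _ in range(STEPS):
    v_exp = forward_euler(v_exp)
    v_imp = backward_euler(v_imp)

print("forward Euler :", v_exp)    # grows without bound at this step size
print("backward Euler:", v_imp)    # relaxes to E_L = -65 mV in both cells
```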
APA, Harvard, Vancouver, ISO, and other styles
46

Sutor, S. R. (Stephan R. ). "Large-scale high-performance video surveillance." Doctoral thesis, Oulun yliopisto, 2014. http://urn.fi/urn:isbn:9789526205618.

Full text
Abstract:
The last decade was marked by a set of harmful events ranging from economic crises to organized crime, acts of terror and natural catastrophes. This has led to a paradigm shift concerning security. Millions of surveillance cameras have been deployed, which led to new challenges, as the systems and operations behind those cameras could not cope with the rapid growth in the number of video cameras and systems. In today's control rooms, often hundreds or even thousands of cameras are displayed, overloading security officers with irrelevant information. The purpose of this research was the creation of a novel video surveillance system with automated analysis mechanisms which enable security authorities and their operators to cope with this information flood. By automating the process, video surveillance was transformed into a proactive information system. The progress in technology as well as the ever increasing demand for security have proven to be an enormous driver for security technology research, such as this study. This work shall contribute to the protection of our personal freedom, our lives, our property and our society by aiding the prevention of crime and terrorist attacks that diminish that freedom. In this study, design science research methodology was utilized in order to ensure scientific rigor while constructing and evaluating artifacts. The requirements for this research were sought in close cooperation with high-level security authorities, and prior research was studied in detail. The created construct, the "Intelligent Video Surveillance System", is a distributed, highly scalable software framework that can serve as a basis for any kind of high-performance video surveillance system, from installations focusing on high availability to flexible cloud-based installations that scale across multiple locations and tens of thousands of cameras. First, in order to provide a strong foundation, a modular, distributed system architecture was created, which was then augmented by a multi-sensor analysis process. This enabled the analysis of data from multiple sources, combining video and other sensors in order to automatically detect critical events. Further, an intelligent mobile client, the video surveillance local control, was created to address remote access applications. Finally, a wireless self-contained surveillance system was introduced, a novel smart camera concept that enables ad hoc and mobile surveillance. The value of the created artifacts was proven by evaluation at two real-world sites: an international airport, which has a large-scale installation with high security requirements, and a security service provider, offering a multitude of video-based services by operating a video control center with thousands of cameras connected.
The last decade is known for harmful events ranging from economic crises to organized crime, terrorist attacks and natural catastrophes. This situation has changed attitudes towards security. Millions of surveillance cameras have been deployed, which has led to new challenges, because the systems and operations behind the cameras cannot cope with the large number of new video cameras and systems. In modern control rooms, hundreds or thousands of cameras produce images, and with them a great deal of unnecessary information for security officers to watch. The purpose of this research was to create a new video surveillance system with automatic analysis mechanisms that enable security actors and their operators to cope with this information flood. By means of an automated video surveillance process, video surveillance was transformed into a proactive information system. The development of technology and the increased demand for security proved to be significant drivers for security technology research such as this study. This research benefits the personal freedom, life and property of the individual as well as the community by preventing crimes and terrorist attacks. In this study, design science was applied to ensure scientific rigour when constructing and evaluating the artifacts. The requirements of the research were based on close cooperation with high-level security authorities, and prior research was analysed in detail. The created artifact, the "intelligent video surveillance system", is a distributed, scalable software framework that can serve as the basis for many kinds of high-performance video surveillance systems, from installations focusing on availability to flexible cloud-based installations that scale across multiple locations and tens of thousands of cameras. As a solid foundation for the system, a distributed system architecture was created and extended with a multi-sensor analysis process. This enabled the analysis of data from multiple sources, the combination of video and other sensor data, and the automatic detection of critical events. In addition, an intelligent mobile application, the video surveillance local control, was created in this work to handle remote access. Finally, a wireless self-contained surveillance system was produced, a novel smart camera concept that enables ad hoc and mobile surveillance. The value of the created artifacts was verified by evaluating them in two real-world environments: an international airport whose large-scale installation has high security requirements, and a security service provider that offers a wide range of video-based services through a video surveillance centre using thousands of cameras.
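A deliberately small sketch of the multi-sensor analysis idea described above: an alarm is raised only when a video analytics event is corroborated by another sensor within a short time window. The event names, window and rule are illustrative assumptions, not the fusion process defined in the thesis.

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str       # e.g. "video", "door_contact"
    kind: str         # e.g. "motion", "opened"
    timestamp: float  # seconds since some epoch

CORROBORATION_WINDOW = 5.0   # seconds (assumed)

def fuse(events):
    """Yield alarms when a video motion event is corroborated by a
    door-contact event within the time window (simple rule-based fusion)."""
    video = [e for e in events if e.source == "video" and e.kind == "motion"]
    doors = [e for e in events if e.source == "door_contact"]
    for v in video:
        for d in doors:
            if abs(v.timestamp - d.timestamp) <= CORROBORATION_WINDOW:
                yield (f"ALARM: motion at t={v.timestamp:.1f}s corroborated "
                       f"by {d.kind} at t={d.timestamp:.1f}s")

stream = [
    Event("video", "motion", 100.2),
    Event("door_contact", "opened", 101.0),
    Event("video", "motion", 500.0),           # uncorroborated -> no alarm
]
for alarm in fuse(stream):
    print(alarm)
```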
APA, Harvard, Vancouver, ISO, and other styles
47

Pritchard, Mark Anderson. "Numerical modelling of large scale toppling." Thesis, University of British Columbia, 1989. http://hdl.handle.net/2429/27991.

Full text
Abstract:
The principal purpose of this research is to resolve the mode of failure of the Heather Hill landslide, one of several well defined failures in the Beaver Valley, Glacier National Park, British Columbia. Field work led to the preliminary conclusion that some type of toppling process contributed to the failure. A literature review of toppling revealed that large-scale topples have never been quantitatively assessed, and that currently used analytical techniques are not adequate. Consideration of alternative numerical techniques resulted in the distinct element method being selected as the best technique for modelling toppling. The Universal Distinct Element Code (UDEC) was purchased and its suitability demonstrated by reevaluating examples of toppling analysis reported in the literature, and by evaluating a large-scale engineered slope at Brenda mine where toppling is known to occur. UDEC is used to examine and classify the mode of failure of the Heather Hill slide. This research leads to important general conclusions on toppling and specific conclusions relating to the Heather Hill landslide: UDEC is suitable for modelling all types of topples. The program can be used to back-analyze rock mass strength parameters and determine the shape and location of the final failure surface in flexural toppling. A quantitative assessment with UDEC confirms that the base of failure in flexural toppling may be planar or curvilinear, and that pore pressures significantly affect stability. The Heather Hill landslide failed by flexural toppling limiting to a curvilinear failure surface, and the slope immediately north of the Heather Hill landslide is deformed by flexural toppling. The locations of landslides in the Beaver Valley correspond with the occurrence of foliated pelitic rocks in the lower slopes, and the boundary between these rocks and stronger grits is the up-slope limit. The kinematic test of toppling potential is violated by the Heather Hill landslide. This test is shown to apply only to small-scale drained slopes.
Science, Faculty of
Earth, Ocean and Atmospheric Sciences, Department of
Graduate
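The "kinematic test of toppling potential" referred to above is commonly quoted in the Goodman-Bray form; a sketch of that textbook check follows, with an assumed dip-direction tolerance and friction angle, and it is of course distinct from the UDEC analyses performed in the thesis.

```python
import math

def flexural_toppling_possible(slope_dip, slope_dip_dir,
                               joint_dip, joint_dip_dir,
                               friction_angle, dir_tolerance=10.0):
    """Goodman-Bray style kinematic check for flexural toppling (degrees).

    Interlayer slip condition: joint_dip >= (90 - slope_dip) + friction_angle.
    Orientation condition: the discontinuities dip into the face, i.e. their
    dip direction is roughly opposite (within dir_tolerance) to the slope's.
    The 10 degree tolerance and the friction angle below are assumptions.
    """
    slip = joint_dip >= (90.0 - slope_dip) + friction_angle
    opposite = (joint_dip_dir - slope_dip_dir) % 360.0
    into_face = abs(opposite - 180.0) <= dir_tolerance
    return slip and into_face

# Illustrative values only (not the Heather Hill geometry):
print(flexural_toppling_possible(slope_dip=60, slope_dip_dir=270,
                                 joint_dip=75, joint_dip_dir=95,
                                 friction_angle=30))
```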
APA, Harvard, Vancouver, ISO, and other styles
48

Muthu, Raju D. "Large scale pullout testing of geosynthetics." Thesis, University of British Columbia, 1991. http://hdl.handle.net/2429/30027.

Full text
Abstract:
An evaluation of soil-geosynthetic interface friction is important to the design of any anchorage detail of a reinforced soil structure or membrane-lined waste containment facility. A large pullout apparatus has been designed and commissioned to evaluate the mobilization of pullout resistance in geosynthetic test specimens. Sand samples were prepared by pluviation into a rectangular box, 1.30 m x 0.64 m x 0.60 m. A stress-controlled top boundary was used to apply vertical stresses in the range 5 to 90 kPa. A rate of pullout displacement of 0.5 mm/min was used in the program of testing. A technique of strain gauging the geosynthetic test specimen has been developed. Variables examined in the program of testing were the type of geosynthetic and the confining stress. Measurements of pullout force, pullout displacement, horizontal pressure on the front face of the test box, strain in the geosynthetic material, water pressure in the surcharge bag, and volume change were taken during testing. Pullout resistance increases with confining stress and is described by a bond factor or a bond coefficient. Some test specimens failed in pullout, and some tended toward tensile yield. A progressive development of strain was observed.
Applied Science, Faculty of
Civil Engineering, Department of
Graduate
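The bond coefficient used above to characterise pullout resistance is commonly reported as the ratio of the mobilised interface shear stress to the soil shear strength at the same normal stress; the routine calculation is sketched below with made-up specimen dimensions, pullout force and friction angle.

```python
import math

def bond_coefficient(pullout_force_kN, length_m, width_m,
                     normal_stress_kPa, soil_friction_deg):
    """Interaction (bond) coefficient from a pullout test.

    The average interface shear stress acts on both faces of the specimen,
        tau = P / (2 * L * W),
    and is normalised by the soil shear strength sigma_n * tan(phi).
    """
    tau = pullout_force_kN / (2.0 * length_m * width_m)        # kPa
    return tau / (normal_stress_kPa * math.tan(math.radians(soil_friction_deg)))

# Made-up example: 28 kN pullout force on a 0.9 m x 0.5 m geogrid specimen
# under 50 kPa normal stress in sand with phi = 35 degrees.
alpha = bond_coefficient(28.0, 0.9, 0.5, 50.0, 35.0)
print(f"bond coefficient ~ {alpha:.2f}")
```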
APA, Harvard, Vancouver, ISO, and other styles
49

LeMoine, Pierre. "Large Scale Generation of Voxelized Terrain." Thesis, Linköpings universitet, Institutionen för systemteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-102761.

Full text
Abstract:
Computer-aided generation of virtual worlds is vital in modern content production. To manually create all the details which today's computers can visualize would be too daunting a task for any number of artists. Procedural algorithms can quickly generate content, but the content suffers from being repetitive. Simulation of geological processes produces good results but requires a lot of resources. In this report a solution is presented which combines procedural algorithms with geological simulation in the form of erosion. A pre-processing stage generates a heightfield using procedural noise, which is then eroded. The erosion is accelerated by being performed on the GPU. A road network is generated by connecting points scattered in the world. The pre-processed world is then used to define a field function. The function is sampled in a grid as needed to produce voxels with different materials. Roads are added to the world by changing the material of the voxels. The voxels are then rendered as textured tiles depending on material. The generated worlds are varied and interesting, much more so than worlds created purely by procedural methods. A world can be pre-processed within a few minutes and explored in real time.
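A compressed sketch of the pipeline the abstract outlines, namely a procedural heightfield, a simple erosion pass, and sampling into material voxels; the noise construction, the thermal-style erosion rule and the material thresholds are simplifications, and the thesis performs its erosion on the GPU and adds road networks on top.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 128

def fractal_heightfield(n, octaves=5):
    """Sum of upsampled random grids (a value-noise-like stand-in for proper
    procedural noise)."""
    h = np.zeros((n, n))
    for o in range(octaves):
        size = 2 ** (o + 2)
        coarse = rng.random((size, size))
        idx = np.arange(n) * size // n          # nearest-neighbour upsampling
        h += coarse[np.ix_(idx, idx)] / (2 ** o)
    return h / h.max()

def thermal_erode(h, iterations=50, talus=0.01, rate=0.25):
    """Very small thermal-style erosion: move material to a lower neighbour
    whenever the height difference exceeds a talus threshold."""
    for _ in range(iterations):
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            diff = h - np.roll(h, shift, axis=(0, 1))
            move = np.where(diff > talus, (diff - talus) * rate, 0.0)
            h -= move
            h += np.roll(move, (-shift[0], -shift[1]), axis=(0, 1))
    return h

height = thermal_erode(fractal_heightfield(N))

# Sample the field into a small voxel grid with one material per voxel.
DEPTH = 32
z = np.linspace(0.0, 1.0, DEPTH).reshape(DEPTH, 1, 1)
solid = z < height                      # below the surface -> solid
material = np.where(solid, np.where(z < height - 0.05, 1, 2), 0)  # 0 air, 1 rock, 2 topsoil
print("voxel counts (air, rock, topsoil):", np.bincount(material.ravel(), minlength=3))
```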
APA, Harvard, Vancouver, ISO, and other styles
50

Loke, Han Ying. "The large scale motions of galaxies." Thesis, University of Edinburgh, 1997. http://hdl.handle.net/1842/28449.

Full text
Abstract:
In my thesis, several methods of extracting some of the most important cosmological parameters such as density and bias are developed. All these methods rely on the analysis of the large-scale motions of galaxies. If galactic motions are affected mainly by gravity, then the study of large-scale flow should in principle be able to probe the underlying mass distribution and thus the density parameter. Comparison of this inferred mass distribution with the observed distribution of galaxies is however complicated by the lack of understanding of the relationship between mass and light. Nevertheless if a simple bias correlation between mass and light is assumed, then such study would be a very useful tool for testing some of the cosmological theories. The thesis starts off with a brief introduction to some of the basic cosmological principles that are relevant to my work. It then continues with the details of the analysis mentioned above using chiefly the data from the IRAS (Infrared Astronomical Satellite) survey along with the results. The thesis then ends with an overview of the current state of the large-scale motion study in the field of cosmology.
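The link between large-scale motions, the density parameter and bias that the abstract appeals to is usually quantified through the linear-theory relations below; these are standard expressions quoted for orientation, not equations taken from the thesis.

```latex
% Linear-theory link between peculiar velocities, mass density and galaxy bias
\begin{align}
  \nabla\cdot\mathbf{v} &= -H_0\, f(\Omega_m)\,\delta_m ,
  \qquad f(\Omega_m)\simeq \Omega_m^{0.6},\\
  \delta_g &= b\,\delta_m
  \;\;\Rightarrow\;\;
  \nabla\cdot\mathbf{v} = -H_0\,\beta\,\delta_g ,
  \qquad \beta \equiv \frac{\Omega_m^{0.6}}{b},
\end{align}
so that velocity-density comparisons such as those made with the IRAS survey
constrain the degenerate combination $\beta$ rather than $\Omega_m$ and $b$ separately.
```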
APA, Harvard, Vancouver, ISO, and other styles