To view other types of publications on this topic, follow the link: Point scale.

Dissertations on the topic "Point scale"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 50 dissertations for your research on the topic "Point scale".

Next to each work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these details are available in the item's metadata.

Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

Lindeberg, Tony. "Scale Selection Properties of Generalized Scale-Space Interest Point Detectors." KTH, Beräkningsbiologi, CB, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-101220.

Abstract:
Scale-invariant interest points have found several highly successful applications in computer vision, in particular for image-based matching and recognition. This paper presents a theoretical analysis of the scale selection properties of a generalized framework for detecting interest points from scale-space features presented in Lindeberg (Int. J. Comput. Vis. 2010, under revision) and comprising: an enriched set of differential interest operators at a fixed scale including the Laplacian operator, the determinant of the Hessian, the new Hessian feature strength measures I and II and the rescaled level curve curvature operator, as well as an enriched set of scale selection mechanisms including scale selection based on local extrema over scale, complementary post-smoothing after the computation of non-linear differential invariants and scale selection based on weighted averaging of scale values along feature trajectories over scale. A theoretical analysis of the sensitivity to affine image deformations is presented, and it is shown that the scale estimates obtained from the determinant of the Hessian operator are affine covariant for an anisotropic Gaussian blob model. Among the other purely second-order operators, the Hessian feature strength measure I has the lowest sensitivity to non-uniform scaling transformations, followed by the Laplacian operator and the Hessian feature strength measure II. The predictions from this theoretical analysis agree with experimental results of the repeatability properties of the different interest point detectors under affine and perspective transformations of real image data. A number of less complete results are derived for the level curve curvature operator.
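As an illustration of the scale-selection mechanism summarised in this abstract, here is a minimal Python sketch (not the generalized framework analysed in the paper): it evaluates two of the differential operators named above, the scale-normalized Laplacian and the determinant of the Hessian, over a range of scales and keeps local extrema over space and scale. Operator choice, scale grid, and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def scale_space_blobs(image, sigmas=(1, 2, 4, 8, 16), operator="det_hessian"):
    """Detect interest points as local extrema of a scale-normalized
    differential operator over space and scale (simplified sketch)."""
    image = image.astype(float)
    responses = []
    for s in sigmas:
        # Second-order Gaussian derivatives at scale t = s^2.
        Lxx = gaussian_filter(image, s, order=(0, 2))
        Lyy = gaussian_filter(image, s, order=(2, 0))
        Lxy = gaussian_filter(image, s, order=(1, 1))
        t = s ** 2
        if operator == "laplacian":
            resp = np.abs(t * (Lxx + Lyy))                   # scale-normalized Laplacian
        else:
            resp = np.abs(t ** 2 * (Lxx * Lyy - Lxy ** 2))   # scale-normalized det Hessian
        responses.append(resp)
    stack = np.stack(responses)                              # shape: (scale, y, x)
    # Scale selection: keep points that are maxima over a 3x3x3 (x, y, scale) neighbourhood.
    local_max = (stack == maximum_filter(stack, size=3)) & (stack > 0.1 * stack.max())
    scale_idx, ys, xs = np.nonzero(local_max)
    return [(y, x, sigmas[k]) for k, y, x in zip(scale_idx, ys, xs)]
```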



2

Griffin, Joshua D. "Interior-point methods for large-scale nonconvex optimization /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2005. http://wwwlib.umi.com/cr/ucsd/fullcit?p3167839.

3

Graehling, Quinn R. "Feature Extraction Based Iterative Closest Point Registration for Large Scale Aerial LiDAR Point Clouds." University of Dayton / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1607380713807017.

4

Siegl, Manuel. "Atomic-scale investigation of point defect interactions in semiconductors." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10043636/.

Abstract:
Miniaturisation of computer hardware has increased the transistor density in silicon devices significantly and is approaching the ultimate physical limit of single-atom transistors. A thorough understanding of the nature of materials at the atomic scale is needed in order to increase the transistor density further and exploit more recent technology proposals. Moreover, exploring other materials with more desirable characteristics, such as wide band gap semiconductors with a higher dielectric strength and optical addressability, is paramount in the effort of moving to a post-silicon era. Scanning Tunnelling Microscopy (STM) has been shown to be a suitable tool for the investigation of technologically important material surface properties at the atomic scale. In particular, Scanning Tunnelling Spectroscopy (STS) – and its spatial extension, Current Imaging Tunnelling Spectroscopy (CITS) – can reveal the electronic properties of single-atom point defects as well as quantum effects caused by the confinement of energetic states. Nanoscale device performance is governed by these effects. In order to control and exploit the quantum effects, they first need to be understood. In this thesis, three systems have been investigated with STM and STS/CITS to broaden the comprehension of confined quantum states and material surface properties. The first data chapter concentrates on the interaction of confined quantum states of dangling bonds (DB) on the Si(111)-(√3 × √3)R30° surface. The site-dependent interaction between neighbouring bound states is investigated by changing the distance and crystallographic direction between two DB point defects, revealing a non-linear constructive interference of the bound states and an antibonding state in resonance with the conduction band (CB). In the second data chapter we explore subsurface bismuth dopants in silicon, a system relevant to recent information processing proposals. Bismuth was ion-implanted in the Si(001) surface and hydrogen passivated before the STM study. The bismuth dopants form a bismuth-vacancy (Bi+V) complex, which acts as an acceptor and lowers the Fermi level. The Bi+V complex further induces in-band-gap states, which appear as square-like protrusions with a round depression in the centre. Interference of these states is energy dependent, and the antibonding state is found at a lower energy than the bonding state due to the acceptor-like nature of the Bi+V defect complex. The third investigated system concerns the silicon face of the wide band gap semiconductor silicon carbide (SiC(0001)). The influence of atomic hydrogen on the 4H-SiC(0001)-3 × 3 surface was investigated and found to result in surface etching at the lower and upper ends of the passivation temperature range. The electronic structure of two different surface defects of the 3 × 3 reconstruction is presented, and a new superstructure consisting of silicon atoms on top of the 4H-SiC(0001)-(√3 × √3)R30° surface was discovered. A Schottky barrier height study of different surface reconstructions finds a nearly optimal value for power device fabrication for the (√3 × √3)R30° prepared surface. In summary, I have found a quantum interference that results in bonding and antibonding states for DB bound states on the Si(111):B surface and Bi+V complex states in the Si(001):H surface. Additionally, a new silicon superstructure on the SiC surface and a silicon reconstruction dependent Schottky barrier height are found.
5

Marcia, Roummel F. "Primal-dual interior-point methods for large-scale optimization /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2002. http://wwwlib.umi.com/cr/ucsd/fullcit?p3044769.

6

Khoury, Rasha. "Nanometer scale point contacting techniques for silicon Photovoltaic devices." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX070/document.

Abstract:
Au cours de cette thèse, j’ai étudié la possibilité et les avantages d’utiliser des contacts nanométriques au-dessous de 1 µm. Des simulations analytiques et numériques ont montré que ces contacts nanométriques sont avantageux pour les cellules en silicium cristallin comme ils peuvent entrainer une résistance ohmique négligeable. Mon travail expérimental était focalisé sur le développement de ces contacts en utilisant des nanoparticules de polystyrène comme un masque. En utilisant la technique de floating transfert pour déposer les nanosphères, une monocouche dense de nanoparticules s’est formée. Cela nécessite une gravure par plasma de O2 afin de réduire la zone de couverture des NPs. Cette gravure était faite et étudiée en utilisant la technique de plasmas matriciels distribués à résonance cyclotronique électronique (MD-ECR). Une variété de techniques de créations de trous nanométriques était développée et testée dans des structures de couches minces et silicium cristallin. Des trous nanométriques étaient formés dans la couche de passivation, de SiO2 thermique, du silicium cristallin pour former des contacts nanométriques dopés. Un dopage local de bore était fait, à travers ces trous nanométriques par diffusion thermique et implantation ionique. En faisant la diffusion, le dopage local était observé par CP-AFM en mesurant des courbes de courant-tension à l’intérieur et à l’extérieur des zones dopées et en détectant des cellules solaires nanométriques. Par contre le processus de dopage local par implantation ionique a besoin d’être améliorer afin d’obtenir un résultat similaire à celui de diffusion
The use of point contacts has made the Passivated Emitter and Rear Cell design one of the most efficient monocrystalline-silicon photovoltaic cell designs in production. The main feature of such a solar cell is that the rear surface is partially contacted by periodic openings in a dielectric film that provides surface passivation. However, a trade-off between ohmic losses and surface recombination is found. Due to the technology used to locally open the contacts in the passivation layer, the distance between neighboring contacts is on the order of hundreds of microns, introducing a significant series resistance. In this work, I explore the possibility and potential advantages of using nanoscale contact openings with a pitch between 300 nm and 10 µm. Analytic and numerical simulations done during the course of this thesis have shown that such nanoscale contacts would result in negligible ohmic losses while still keeping the surface recombination velocity S_eff,rear at an acceptable level, as long as the recombination velocity at the contact (S_cont) is in the range of 10³–10⁵ cm/s. To achieve such contacts in a potentially cost-reducing way, my experimental work has focused on the use of polystyrene nanospheres as a sacrificial mask. The thesis is therefore divided into three sections. The first section develops and explores processes to enable the formation of such contacts using various nanosphere dispersion, thin-film deposition, and layer etching processes. The second section describes a test device using a thin-film amorphous silicon NIP diode to explore the electrical properties of the point contacts. Finally, the third section considers the application of such point contacts on crystalline silicon by exploring localized doping through the nanoholes formed. In the first section, I have explored using polystyrene nanoparticles (NPs) as a patterning mask. The first two tested NP deposition techniques (spray-coating, spin-coating) give poorly controlled distributions of nanospheres on the surface, but with very low values of coverage. The third tested NP deposition technique (the floating transfer technique) provided a closely-packed monolayer of NPs on the surface; this process was more repeatable but necessitated an additional O2 plasma step to reduce the coverage area of the spheres. This was performed using matrix distributed electron cyclotron resonance (MD-ECR), and a detailed study of the NP etching was carried out. The NPs have been used in two ways: as a direct deposition mask, or by depositing a secondary etching mask layer on top of them. In the second section of this thesis, I have tested the nanoholes as electrical point-contacts in thin-film a-Si:H devices. For low-diffusion-length technologies such as thin-film silicon, the distance between contacts must be on the order of a few hundred nanometers. Using spin-coated 100 nm polystyrene NPs as a sacrificial deposition mask, I could form randomly spaced contacts with an average spacing of a few hundred nanometers. A set of NIP a-Si:H solar cells, using RF-PECVD, have been deposited on back reflector substrates formed with metallic layers covered with dielectrics having nanoholes. Their electrical characteristics were compared to the same cells made with and without a complete dielectric layer.
These structures allowed me to verify that good electrical contact through the nanoholes was possible, but no enhanced performance was observed. In the third section of this thesis, I investigate the use of such nanoholes in crystalline silicon technology by the formation of passivated contacts through the nanoholes. Boron doping by both thermal diffusion and ion implantation techniques was investigated. A thermally grown oxide layer with holes was used as the doping barrier. These samples were characterized, after removing the oxide layer, by scanning electron microscopy (SEM) and conductive probe atomic force microscopy (CP-AFM).
7

Colombo, Marco. "Advances in interior point methods for large-scale linear programming." Thesis, University of Edinburgh, 2007. http://hdl.handle.net/1842/2488.

Abstract:
This research studies two computational techniques that improve the practical performance of existing implementations of interior point methods for linear programming. Both are based on the concept of symmetric neighbourhood as the driving tool for the analysis of the good performance of some practical algorithms. The symmetric neighbourhood adds explicit upper bounds on the complementarity pairs, besides the lower bound already present in the common N−1 neighbourhood. This allows the algorithm to keep under control the spread among complementarity pairs and reduce it with the barrier parameter μ. We show that a long-step feasible algorithm based on this neighbourhood is globally convergent and converges in O(nL) iterations. The use of the symmetric neighbourhood and the recent theoretical understanding of the behaviour of Mehrotra’s corrector direction motivate the introduction of a weighting mechanism that can be applied to any corrector direction, whether originating from Mehrotra’s predictor–corrector algorithm or as part of the multiple centrality correctors technique. Such modification in the way a correction is applied aims to ensure that any computed search direction can positively contribute to a successful iteration by increasing the overall stepsize, thus avoiding the case that a corrector is rejected. The usefulness of the weighting strategy is documented through complete numerical experiments on various sets of publicly available test problems. The implementation within the HOPDM interior point code shows remarkable time savings for large-scale linear programming problems. The second technique develops an efficient way of constructing a starting point for structured large-scale stochastic linear programs. We generate a computationally viable warm-start point by solving to low accuracy a stochastic problem of much smaller dimension. The reduced problem is the deterministic equivalent program corresponding to an event tree composed of a restricted number of scenarios. The solution to the reduced problem is then expanded to the size of the problem instance, and used to initialise the interior point algorithm. We present theoretical conditions that the warm-start iterate has to satisfy in order to be successful. We implemented this technique in both the HOPDM and the OOPS frameworks, and its performance is verified through a series of tests on problem instances coming from various stochastic programming sources.
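To make the symmetric-neighbourhood idea above concrete, here is a minimal sketch (not the HOPDM implementation): membership requires every complementarity product x_i s_i to stay within a factor gamma of the barrier parameter μ, both from below and from above, and the step length is chosen to stay inside that set. Function names and the coarse line search are illustrative.

```python
import numpy as np

def in_symmetric_neighbourhood(x, s, gamma=0.1):
    """Check whether a primal-dual point (x, s) lies in the symmetric
    neighbourhood: gamma * mu <= x_i * s_i <= mu / gamma for all i,
    where mu is the average complementarity product."""
    prod = x * s
    mu = prod.mean()
    return np.all(prod >= gamma * mu) and np.all(prod <= mu / gamma)

def largest_step(x, s, dx, ds, gamma=0.1, samples=100):
    """Longest step alpha in (0, 1] keeping (x + alpha*dx, s + alpha*ds)
    inside the symmetric neighbourhood (coarse search, for illustration only)."""
    for alpha in np.linspace(1.0, 0.0, samples, endpoint=False):
        if in_symmetric_neighbourhood(x + alpha * dx, s + alpha * ds, gamma):
            return alpha
    return 0.0
```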
8

Wehbe, Diala. "Simulations and applications of large-scale k-determinantal point processes." Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I012/document.

Abstract:
Avec la croissance exponentielle de la quantité de données, l’échantillonnage est une méthode pertinente pour étudier les populations. Parfois, nous avons besoin d’échantillonner un grand nombre d’objets d’une part pour exclure la possibilité d’un manque d’informations clés et d’autre part pour générer des résultats plus précis. Le problème réside dans le fait que l’échantillonnage d’un trop grand nombre d’individus peut constituer une perte de temps.Dans cette thèse, notre objectif est de chercher à établir des ponts entre la statistique et le k-processus ponctuel déterminantal(k-DPP) qui est défini via un noyau. Nous proposons trois projets complémentaires pour l’échantillonnage de grands ensembles de données en nous basant sur les k-DPPs. Le but est de sélectionner des ensembles variés qui couvrent un ensemble d’objets beaucoup plus grand en temps polynomial. Cela peut être réalisé en construisant différentes chaînes de Markov où les k-DPPs sont les lois stationnaires.Le premier projet consiste à appliquer les processus déterminantaux à la sélection d’espèces diverses dans un ensemble d’espèces décrites par un arbre phylogénétique. En définissant le noyau du k-DPP comme un noyau d’intersection, les résultats fournissent une borne polynomiale sur le temps de mélange qui dépend de la hauteur de l’arbre phylogénétique.Le second projet vise à utiliser le k-DPP dans un problème d’échantillonnage de sommets sur un graphe connecté de grande taille. La pseudo-inverse de la matrice Laplacienne normalisée est choisie d’étudier la vitesse de convergence de la chaîne de Markov créée pour l’échantillonnage de la loi stationnaire k-DPP. Le temps de mélange résultant est borné sous certaines conditions sur les valeurs propres de la matrice Laplacienne.Le troisième sujet porte sur l’utilisation des k-DPPs dans la planification d’expérience avec comme objets d’étude plus spécifiques les hypercubes latins d’ordre n et de dimension d. La clé est de trouver un noyau positif qui préserve le contrainte de ce plan c’est-à-dire qui préserve le fait que chaque point se trouve exactement une fois dans chaque hyperplan. Ensuite, en créant une nouvelle chaîne de Markov dont le n-DPP est sa loi stationnaire, nous déterminons le nombre d’étapes nécessaires pour construire un hypercube latin d’ordre n selon le n-DPP
With the exponentially growing amount of data, sampling remains the most relevant method to learn about populations. Sometimes a larger sample size is needed to generate more precise results and to exclude the possibility of missing key information. The problem lies in the fact that sampling a large number of items can be very time-consuming. In this thesis, our aim is to build bridges between applications of statistics and the k-Determinantal Point Process (k-DPP), which is defined through a matrix kernel. We have proposed different applications for sampling large data sets based on the k-DPP, a conditional DPP that models only sets of cardinality k. The goal is to select diverse sets that cover a much greater set of objects in polynomial time. This can be achieved by constructing different Markov chains which have the k-DPPs as their stationary distributions. The first application consists in sampling a subset of species in a phylogenetic tree while avoiding redundancy. By defining the k-DPP via an intersection kernel, the results provide a fast mixing sampler for the k-DPP, for which a polynomial bound on the mixing time is presented that depends on the height of the phylogenetic tree. The second application aims to clarify how k-DPPs offer a powerful approach to finding a diverse subset of nodes in a large connected graph, which gives an outline of different types of information related to the ground set. A polynomial bound on the mixing time of the proposed Markov chain is given, where the kernel used here is the Moore–Penrose pseudo-inverse of the normalized Laplacian matrix. The resulting mixing time is attained under certain conditions on the eigenvalues of the Laplacian matrix. The third application proposes to use the fixed-cardinality DPP in experimental designs as a tool to study Latin Hypercube Sampling (LHS) of order n. The key is to propose a DPP kernel that establishes the negative correlations between the selected points and preserves the constraint of the design, namely that each point occurs exactly once in each hyperplane. Then, by creating a new Markov chain which has the n-DPP as its stationary distribution, we determine the number of steps required to build an LHS in accordance with the n-DPP.
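A minimal sketch of the kind of Markov chain discussed above: a basis-exchange Metropolis chain whose stationary distribution is the k-DPP defined by a positive semidefinite kernel L. It swaps one selected item for one unselected item and accepts using a ratio of kernel-submatrix determinants. The kernel, proposal, and step count are illustrative assumptions; the mixing-time guarantees analysed in the thesis depend on the specific kernels it studies.

```python
import numpy as np

def sample_k_dpp_mcmc(L, k, n_steps=5000, rng=None):
    """Approximate sampling from a k-DPP with kernel L via a basis-exchange
    Metropolis chain: propose swapping one selected item for one unselected
    item, accept with probability min(1, det(L_proposal) / det(L_current))."""
    rng = np.random.default_rng(rng)
    n = L.shape[0]
    current = list(rng.choice(n, size=k, replace=False))
    cur_det = np.linalg.det(L[np.ix_(current, current)])
    for _ in range(n_steps):
        i = rng.integers(k)                                  # position to swap out
        in_item = rng.choice([j for j in range(n) if j not in current])
        proposal = current.copy()
        proposal[i] = in_item
        prop_det = np.linalg.det(L[np.ix_(proposal, proposal)])
        if cur_det <= 0 or rng.random() < min(1.0, prop_det / cur_det):
            current, cur_det = proposal, prop_det
    return sorted(current)

# Example: a small Gaussian similarity kernel over 20 points on a line.
pts = np.linspace(0, 1, 20)
L = np.exp(-(pts[:, None] - pts[None, :]) ** 2 / 0.05)
print(sample_k_dpp_mcmc(L, k=5, n_steps=2000, rng=0))
```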
9

Leaf, Kyle, and Fulvio Melia. "A two-point diagnostic for the H ii galaxy Hubble diagram." OXFORD UNIV PRESS, 2018. http://hdl.handle.net/10150/627132.

Abstract:
A previous analysis of starburst-dominated H II galaxies and H II regions has demonstrated a statistically significant preference for the Friedmann–Robertson–Walker cosmology with zero active mass, known as the R_h = ct universe, over Λ cold dark matter (ΛCDM) and its related dark-matter parametrizations. In this paper, we employ a two-point diagnostic with these data to present a complementary statistical comparison of R_h = ct with Planck ΛCDM. Our two-point diagnostic compares, in a pairwise fashion, the difference between the distance modulus measured at two redshifts with that predicted by each cosmology. Our results support the conclusion drawn by a previous comparative analysis demonstrating that R_h = ct is statistically preferred over Planck ΛCDM. But we also find that the reported errors in the H II measurements may not be purely Gaussian, perhaps due to a partial contamination by non-Gaussian systematic effects. The use of H II galaxies and H II regions as standard candles may be improved even further with a better handling of the systematics in these sources.
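A hedged sketch of the kind of pairwise comparison described above (illustrative only, not the paper's actual estimator or data): for a pair of measurements, the observed difference in distance modulus is compared with the model prediction. Because only a ratio of luminosity distances enters the model term, the Hubble constant cancels, which is part of the appeal of two-point diagnostics.

```python
import numpy as np
from scipy.integrate import quad

def dl_rh_ct(z):
    """Luminosity distance in units of c/H0 for the R_h = ct universe."""
    return (1.0 + z) * np.log(1.0 + z)

def dl_lcdm(z, om=0.31):
    """Luminosity distance in units of c/H0 for flat Lambda-CDM."""
    integral, _ = quad(lambda zp: 1.0 / np.sqrt(om * (1 + zp) ** 3 + 1 - om), 0.0, z)
    return (1.0 + z) * integral

def two_point_delta_mu(z_i, mu_i, z_j, mu_j, dl_model):
    """Observed minus predicted difference of distance moduli for one pair.
    H0 cancels in the model term: mu(z_i) - mu(z_j) = 5 log10(D_L(z_i)/D_L(z_j))."""
    model_diff = 5.0 * np.log10(dl_model(z_i) / dl_model(z_j))
    return (mu_i - mu_j) - model_diff

# Example with made-up numbers (illustration only, not real H II data):
print(two_point_delta_mu(2.0, 46.1, 0.5, 42.3, dl_rh_ct))
print(two_point_delta_mu(2.0, 46.1, 0.5, 42.3, dl_lcdm))
```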
10

Nxumalo, Jochonia Norman. "Cross-sectional imaging of semiconductor devices using nanometer scale point contacts." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0007/NQ32010.pdf.

11

Woodsend, Kristian. "Using interior point methods for large-scale support vector machine training." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/3310.

Abstract:
Support Vector Machines (SVMs) are powerful machine learning techniques for classification and regression, but the training stage involves a convex quadratic optimization program that is most often computationally expensive. Traditionally, active-set methods have been used rather than interior point methods, due to the Hessian in the standard dual formulation being completely dense. But as active-set methods are essentially sequential, they may not be adequate for machine learning challenges of the future. Additionally, training time may be limited, or data may grow so large that cluster-computing approaches need to be considered. Interior point methods have the potential to answer these concerns directly. They scale efficiently, they can provide good early approximations, and they are suitable for parallel and multi-core environments. To apply them to SVM training, it is necessary to address directly the most computationally expensive aspect of the algorithm. We therefore present an exact reformulation of the standard linear SVM training optimization problem that exploits separability of terms in the objective. By so doing, per-iteration computational complexity is reduced from O(n³) to O(n). We show how this reformulation can be applied to many machine learning problems in the SVM family. Implementation issues relating to specializing the algorithm are explored through extensive numerical experiments. They show that the performance of our algorithm for large dense or noisy data sets is consistent and highly competitive, and in some cases can outperform all other approaches by a large margin. Unlike active-set methods, performance is largely unaffected by noisy data. We also show how, by exploiting the block structure of the augmented system matrix, a hybrid MPI/OpenMP implementation of the algorithm enables data and linear algebra computations to be efficiently partitioned amongst parallel processing nodes in a clustered computing environment. The applicability of our technique is extended to nonlinear SVMs by low-rank approximation of the kernel matrix. We develop a heuristic designed to represent clusters using a small number of features. Additionally, an early approximation scheme reduces the number of samples that need to be considered. Both elements improve the computational efficiency of the training phase. Taken as a whole, this thesis shows that with suitable problem formulation and efficient implementation techniques, interior point methods are a viable optimization technology to apply to large-scale SVM training, and are able to provide state-of-the-art performance.
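The cost reduction described above rests on a standard piece of linear algebra: after a separable reformulation, the matrix factored at each interior point iteration is diagonal plus a low-rank term, so it can be inverted with the Sherman-Morrison-Woodbury identity in O(n m²) rather than O(n³). The sketch below shows only that generic kernel, not the thesis's actual reformulation or solver; all names are illustrative.

```python
import numpy as np

def solve_diag_plus_low_rank(d, V, b):
    """Solve (D + V V^T) x = b, with D = diag(d) of size n and V of size n x m, m << n.
    Sherman-Morrison-Woodbury:
        (D + V V^T)^{-1} = D^{-1} - D^{-1} V (I + V^T D^{-1} V)^{-1} V^T D^{-1}
    Only an m x m system is ever factored, hence O(n m^2) work per solve."""
    d_inv = 1.0 / d
    Dinv_b = d_inv * b
    Dinv_V = d_inv[:, None] * V
    small = np.eye(V.shape[1]) + V.T @ Dinv_V          # m x m system
    correction = Dinv_V @ np.linalg.solve(small, V.T @ Dinv_b)
    return Dinv_b - correction

# Quick check against a dense solve on a small random instance.
rng = np.random.default_rng(0)
n, m = 2000, 5
d = rng.uniform(1.0, 2.0, n)
V = rng.normal(size=(n, m))
b = rng.normal(size=n)
x = solve_diag_plus_low_rank(d, V, b)
assert np.allclose((np.diag(d) + V @ V.T) @ x, b)
```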
12

Rydberg, David. "GPU Predictor-Corrector Interior Point Method for Large-Scale Linear Programming." Thesis, KTH, Numerisk analys, NA, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168672.

Abstract:
This master’s thesis concerns the implementation of a GPU-accelerated version of Mehrotra’s predictor-corrector interior point algorithm for large-scale linear programming (LP). The implementations are tested on LP problems arising in the financial industry, where there is high demand for faster LP solvers. The algorithm was implemented in C++, MATLAB and CUDA, using double precision for numerical stability. A performance comparison showed that the algorithm can be accelerated from 2x to 6x using an Nvidia GTX Titan Black GPU compared to using only an Intel Xeon E5-2630v2 CPU. The amount of memory on the GPU restricts the size of problems that can be solved, but all tested problems that are small enough to fit on the GPU could be accelerated.
Detta masterexamensarbete behandlar implementeringen av en grafikkortsaccelererad inrepunktsmetod av predictor-corrector-typ för storskalig linjärprogrammering (LP). Implementeringarna testas på LP-problem som uppkommer i finansbranschen, där det finns ett stort behov av allt snabbare LP-lösare. Algoritmen implementeras i C++, MATLAB och CUDA, och dubbelprecision används för numerisk stabilitet. En prestandajämförelse visade att algoritmen kan accelereras 2x till 6x genom att använda ett Nvidia GTX Titan Black jämfört med att bara använda en Intel Xeon E5-2630v2. Mängden minne på grafikkortet begränsar problemstorleken, men alla testade problem som får plats i grafikkortsminnet kunde accelereras.
13

Whitfield, Brent. "SVAT calibration of point and regional scale water and energy dynamics." [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0000824.

14

Eggemeier, Alexander. "Challenges and prospects of probing galaxy clustering with three-point statistics." Thesis, University of Sussex, 2018. http://sro.sussex.ac.uk/id/eprint/80679/.

Abstract:
In this work we explore three-point statistics applied to the large-scale structure in our Universe. Three-point statistics, such as the bispectrum, encode information not accessible via the standard analysis method, the power spectrum, and thus provide the potential for greatly improving current constraints on cosmological parameters. They also present us with additional challenges, and we focus on two of these arising from a measurement as well as modelling point of view. The first challenge we address is the covariance matrix of the bispectrum, as its precise estimate is required when performing likelihood analyses. Covariance matrices are usually estimated from a set of independent simulations, whose minimum number scales with the dimension of the covariance matrix. Because there are many more possibilities of finding triplets of galaxies than pairs, compared to the power spectrum this approach becomes rather prohibitive. With this motivation in mind, we explore a novel alternative to the bispectrum: the line correlation function (LCF). It specifically targets information in the phases of density modes that are invisible to the power spectrum, making it a potentially more efficient probe than the bispectrum, which measures a combination of amplitudes and phases. We derive the covariance properties and the impact of shot noise for the LCF and compare these theoretical predictions with measurements from N-body simulations. Based on a Fisher analysis we assess the LCF's sensitivity on cosmological parameters, finding that it is particularly suited for constraining galaxy bias parameters and the amplitude of fluctuations. As a next step we contrast the Fisher information of the LCF with the full bispectrum and two other recently proposed alternatives. We show that the LCF is unlikely to achieve a lossless compression of the bispectrum information, whereas a modal decomposition of the bispectrum can reduce the size of the covariance matrix by at least an order of magnitude. The second challenge we consider in this work concerns the relation between the dark matter field and luminous tracers, such as galaxies. Accurate knowledge of this galaxy bias relation is required in order to reliably interpret the data gathered by galaxy surveys. On the largest scales the dark matter and galaxy densities are linearly related, but a variety of additional terms need to be taken into account when studying clustering on smaller scales. These have been fully included in recent power spectrum analyses, whereas the bispectrum model relied on simple prescriptions that were likely extended beyond their realm of validity. In addition, treating power spectrum and bispectrum on different footings means that the two models become inconsistent on small scales. We introduce a new formalism that allows us to elegantly compute the lacking bispectrum contributions from galaxy bias, without running into the renormalization problem. Furthermore, we fit our new model to simulated data by implementing these contributions into a likelihood code. We show that they are crucial in order to obtain results consistent with those from the power spectrum, and that the bispectrum retains its capability of significantly reducing uncertainties in measured parameters when combined with the power spectrum.
15

Acharya, Parash. "Small Scale Maximum Power Point Tracking Power Converter for Developing Country Application." Thesis, University of Canterbury. Electrical and Computer Engineering, 2013. http://hdl.handle.net/10092/8608.

Abstract:
This thesis begins with providing a basic introduction of electricity requirements for small developing country communities serviced by small scale generating units (focussing mainly on small wind turbine, small Photo Voltaic system and Micro-Hydro Power Plants). Scenarios of these small scale units around the world are presented. Companies manufacturing different size wind turbines are surveyed in order to propose a design that suits the most abundantly available and affordable turbines. Different Maximum Power Point Tracking (MPPT) algorithms normally employed for these small scale generating units are listed along with their working principles. Most of these algorithms for MPPT do not require any mechanical sensors in order to sense the control parameters like wind speed and rotor speed (for small wind turbines), temperature and irradiation (for PV systems), and water flow and water head (for Micro-Hydro). Models for all three of these systems were developed in order to generate Maximum Power Point (MPP) curves. Similarly, a model for Permanent Magnet Synchronous Generators (PMSGs) has been developed in the d-q reference frame. A boost rectifier which enables active Power Factor Correction (PFC) and has a DC regulated output voltage is proposed before implementing a MPPT algorithm. The proposed boost rectifier works on the principle of Direct Power Control Space Vector Modulation (DPC-SVM) which is based on instantaneous active and reactive power control loops. In this technique, the switching states are determined according to the errors between commanded and estimated values of active and reactive powers. The PMSG and Wind Turbine behaviour are simulated at various wind speeds. Similarly, simulation of the proposed PFC boost rectifier is performed in matlab/simulink. The output of these models are observed for the variable wind speeds which identifies PFC and boosted constant DC output voltage is obtained. A buck converter that employs the MPPT algorithm is proposed and modeled. The model of a complete system that consists of a variable speed small wind turbine, PMSG, DPC-SVM boost rectifier, and buck converter implementing MPPT algorithm is developed. The proposed MPPT algorithm is based upon the principle of adjusting the duty ratio of the buck converter in order reach the MPP for different wind speeds (for small wind turbines) and different water flow rates (Micro-Hydro). Finally, a prototype DPC-SVM boost rectifier and buck converter was designed and built for a turbine with an output power ranging from 50 W-1 kW. Inductors for the boost rectifier and buck DC-DC converter were designed and built for these output power ranges. A microcontroller was programmed in order to generate three switching signals for the PFC boost rectifier and one switching signal for the MPPT buck converter. Three phase voltages and currents were sensed to determine active and reactive power. The voltage vectors were divided into 12 sectors and a switching algorithm based on the DPC-SVM boost rectifier model was implemented in order to minimize the errors between commanded and estimated values of active and reactive power. The system was designed for charging 48 V battery bank. The generator three phase voltage is boosted to a constant 80 V DC. Simulation results of the DPC-SVM based rectifier shows that the output power could be varied by varying the DC load maintaining UPF and constant boosted DC voltage. 
A buck DC-DC converter is proposed after the boost rectifier stage in order to charge the 48 V battery bank. Duty ratio of the buck converter is varied for varying the output power in order to reach the MPP. The controller prototype was designed and developed. A laboratory setup connecting 4 kW induction motor (behaving as a wind turbine) with 1kW PMSG was built. Speed-torque characteristic of the induction motor is initially determined. The torque out of the motor varies with the motor speed at various motor supply voltages. At a particular supply voltage, the motor torque reaches peak power at a certain turbine speed. Hence, the control algorithm is tested to reach this power point. Although the prototype of the entire system was built, complete results were not obtained due to various time constraints. Results from the boost rectifier showed that the appropriate switching were performed according to the digitized signals of the active and reactive power errors for different voltage sectors. Simulation results showed that for various wind speed, a constant DC voltage of 80 V DC is achieved along with UPF. MPPT control algorithm was tested for induction motor and PMSG combination. Results showed that the MPPT could be achieved by varying the buck converter duty ratio with UPF achieved at various wind speeds.
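The abstract above centres on adjusting the buck converter duty ratio until the maximum power point is reached; a minimal, hedged sketch of that perturb-and-observe-style loop is shown below (generic MPPT logic, not the thesis's dSPACE or microcontroller code). The read/set callbacks, step size, and bounds are illustrative assumptions.

```python
def perturb_and_observe(read_power, set_duty, d0=0.5, step=0.01,
                        d_min=0.05, d_max=0.95, iterations=200):
    """Basic perturb-and-observe MPPT: keep moving the converter duty ratio in
    the same direction while measured power rises, reverse when it falls."""
    duty, direction = d0, +1
    set_duty(duty)
    prev_power = read_power()
    for _ in range(iterations):
        duty = min(d_max, max(d_min, duty + direction * step))
        set_duty(duty)
        power = read_power()
        if power < prev_power:          # power dropped -> reverse the perturbation
            direction = -direction
        prev_power = power
    return duty

# Example against a toy power curve with a maximum near duty = 0.62.
state = {"d": 0.5}
toy_curve = lambda d: 100.0 - 400.0 * (d - 0.62) ** 2
print(perturb_and_observe(lambda: toy_curve(state["d"]),
                          lambda d: state.__setitem__("d", d)))
```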
16

Ellis, Noah. "Design, fabrication, and characterization of nano-scale cross-point hafnium oxide-based resistive random access memory." Thesis, Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/55038.

Abstract:
Non-volatile memory (NVM) is a form of computer memory in which the logical value (1 or 0) of a bit is retained when the computer is in its powered-off state. Flash memory is a major form of NVM found in many computer-based technologies today, from portable solid state drives to numerous types of electronic devices. The popularity of flash memory is due in part to the successful development and commercialization of the floating gate transistor. However, as the floating gate transistor reaches its limits of performance and scalability, viable alternatives are being aggressively researched and developed. One such alternative is a memristor-based memory application often referred to as ReRAM or RRAM (Resistive Random Access Memory). A memristor (memory resistor) is a passive circuit element that exhibits programmable resistance when subjected to appropriate current levels. A high resistance state in the memristor corresponds to a logical '0', while the low resistance state corresponds to a logical '1'. One memristive system currently being actively investigated is the metal/metal oxide/metal material stack, in which the metal layers serve as contact electrodes for the memristor and the metal oxide provides the variable resistance functionality. Application of an appropriate potential difference across the electrodes creates oxygen vacancies throughout the thickness of the metal oxide layer, resulting in the formation of filaments of metal ions which span the metal oxide, allowing for electronic conduction through the stack. Creation and disruption of the filaments correspond to low and high resistance states in the memristor, respectively. For some time now, HfO2 has been researched and developed to serve as a high-k material for use in high performance CMOS MOSFETs. As it happens, HfO2-based RRAM devices have proven themselves as viable candidates for NVM as well, demonstrating high switching speed (< 10 ns), large OFF/ON ratio (> 100), good endurance (> 10⁶ cycles), long lifetime, and multi-bit storage capabilities. HfO2-based RRAM is also highly scalable, having been fabricated in cells as small as 10 × 10 nm² while still maintaining good performance. Previous work examining switching properties of micron-scale HfO2-based RRAM has been performed by the Vogel group. However, a viable process for fabrication of nano-scale RRAM is required in order to continue these studies. In this work, a fabrication process for nano-scale cross-point TiN/HfO2/TiN RRAM devices will be developed and described. Materials processing challenges will be addressed. The switching performance of devices fabricated by this process will be compared to the performance of similar devices from the literature in order to confirm process viability.
17

Rashidi, Abbas. "Improved monocular videogrammetry for generating 3D dense point clouds of built infrastructure." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52257.

Abstract:
Videogrammetry is an affordable and easy-to-use technology for spatial 3D scene recovery. When applied to the civil engineering domain, a number of issues have to be taken into account. First, videotaping large scale civil infrastructure scenes usually results in large video files filled with blurry, noisy, or simply redundant frames. This is often due to higher frame rate over camera speed ratio than necessary, camera and lens imperfections, and uncontrolled motions of the camera that results in motion blur. Only a small percentage of the collected video frames are required to achieve robust results. However, choosing the right frames is a tough challenge. Second, the generated point cloud using a monocular videogrammetric pipeline is up to scale, i.e. the user has to know at least one dimension of an object in the scene to scale up the entire scene. This issue significantly narrows applications of generated point clouds in civil engineering domain since measurement is an essential part of every as-built documentation technology. Finally, due to various reasons including the lack of sufficient coverage during videotaping of the scene or existence of texture-less areas which are common in most indoor/outdoor civil engineering scenes, quality of the generated point clouds are sometimes poor. This deficiency appears in the form of outliers or existence of holes or gaps on surfaces of point clouds. Several researchers have focused on this particular problem; however, the major issue with all of the currently existing algorithms is that they basically treat holes and gaps as part of a smooth surface. This approach is not robust enough at the intersections of different surfaces or corners while there are sharp edges. A robust algorithm for filling holes/gaps should be able to maintain sharp edges/corners since they usually contain useful information specifically for applications in the civil and infrastructure engineering domain. To tackle these issues, this research presents and validates an improved videogrammetric pipeline for as built documentation of indoor/outdoor applications in civil engineering areas. The research consists of three main components: 1. Optimized selection of key frames for processing. It is necessary to choose a number of informative key frames to get the best results from the videogrammetric pipeline. This step is particularly important for outdoor environments as it is impossible to process a large number of frames existing in a large video clip. 2. Automated calculation of absolute scale of the scene. In this research, a novel approach for the process of obtaining absolute scale of points cloud by using 2D and 3D patterns is proposed and validated. 3. Point cloud data cleaning and filling holes on the surfaces of generated point clouds. The proposed algorithm to achieve this goal is able to fill holes/gaps on surfaces of point cloud data while maintaining sharp edges. In order to narrow the scope of the research, the main focus will be on two specific applications: 1. As built documentation of bridges and building as outdoor case studies. 2. As built documentation of offices and rooms as indoor case studies. Other potential applications of monocular videogrammetry in the civil engineering domain are out of scope of this research. Two important metrics, i.e. accuracy, completeness and processing time, are utilized for evaluation of the proposed algorithms.
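The first component named above, optimized key-frame selection, can be illustrated with a simple, hedged heuristic: discard blurry frames using the variance of the Laplacian as a sharpness score, and drop near-duplicate frames using a frame-difference threshold. This is a generic sketch for illustration, not the selection criterion developed in the thesis; thresholds and function names are assumptions.

```python
import cv2
import numpy as np

def select_key_frames(video_path, sharpness_min=100.0, change_min=0.08):
    """Pick a reduced set of frame indices: sharp enough (variance of the
    Laplacian) and sufficiently different from the last kept frame."""
    cap = cv2.VideoCapture(video_path)
    key_frames, last_kept, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()    # simple blur metric
        if sharpness >= sharpness_min:
            changed = (last_kept is None or
                       np.mean(cv2.absdiff(gray, last_kept)) / 255.0 >= change_min)
            if changed:
                key_frames.append(idx)
                last_kept = gray
        idx += 1
    cap.release()
    return key_frames
```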
18

de, la Lama Zubiran Paula. "Solving large-scale two-stage stochastic optimization problems by specialized interior point method." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671488.

Abstract:
Two-stage stochastic optimization models give rise to very large linear problems (LP). Several approaches have been devised for efficiently solving them, among which are interior-point methods (IPM). However, using IPM, the linking columns that are associated with first-stage decisions cause excessive fill-ins for the solutions of the normal equations, thus making the procedure computationally expensive. We have taken a step forward on the road to a better solution by reformulating the LP through a variable splitting technique which has significantly reduced the solution time. This work presents a specialized IPM that first applies variable splitting, then exploits the structure of the deterministic equivalent formulation of the stochastic problem. The specialized IPM works with an algorithm that combines Cholesky factorization and preconditioned conjugate gradients for solving the normal equations when computing the Newton direction for stochastic optimization problems in which the first-stages variables are large enough. Our specialized approach outperforms standard IPM. This work provides the computational results of two stochastic problems from the literature: (1) a supply chain system and (2) capacity expansion in an electric system. Both linear and quadratic formulations were used to obtain instances of up to 39 million variables and six million constraints. When used in these applications, the computational results show that our procedure is more efficient than alternative state-of-the-art IP implementations (e.g., CPLEX) and other specialized methods for stochastic optimization.
Los modelos de optimización estocástica de dos etapas dan lugar a problemas lineales (PL) muy grandes. Se han ideado varios enfoques para resolverlos de manera eficiente, entre los que se encuentran los métodos de punto interior (MPI). Sin embargo, al usar MPI, las columnas de enlace que están asociados con las decisiones de la primera etapa provocan rellenos excesivos para las soluciones de las ecuaciones normales, lo que hace que el procedimiento sea computacionalmente costoso. Hemos dado un paso adelante en el camino hacia una mejor solución al reformular el PL mediante una técnica de división variable que ha reducido significativamente el tiempo de solución. Este trabajo presenta un MPI especializado que primero aplica la división de variables y luego explota la estructura de la formulación determinista equivalente del problema estocástico. El MPI especializado trabaja con un algoritmo que combina la factorización Cholesky y gradientes conjugados precondicionados para resolver las ecuaciones normales al calcular la dirección de Newton para problemas de optimización estocástica en los que las variables de las primeras etapas son lo suficientemente grandes. Nuestro enfoque especializado supera al MPI estándar. Este trabajo proporciona los resultados computacionales de dos problemas estocásticos de la literatura: (1) un sistema de cadena de suministro y (2) expansión de capacidad en un sistema eléctrico. Se utilizaron formulaciones tanto lineales como cuadráticas para obtener instancias de hasta 39 millones de variables y seis millones de restricciones. Cuando se utiliza en estas aplicaciones, los resultados computacionales muestran que nuestro procedimiento es más eficiente que las implementaciones de PI alternativas de última generación (por ejemplo, CPLEX) y otros métodos especializados para la optimización estocástica.
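To show the structure the abstract above exploits, here is a hedged sketch of the deterministic equivalent of a two-stage stochastic LP written with variable splitting: each scenario carries its own copy of the first-stage variables, tied together by non-anticipativity equalities. The sketch hands the resulting LP to a generic solver, whereas the thesis exploits this structure inside a specialized interior point method; the scenario dictionary layout and use of scipy are assumptions.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.linalg import block_diag

def split_variable_deterministic_equivalent(c, A, b, scenarios):
    """Deterministic equivalent of  min c'x + sum_s p_s q_s'y_s
       s.t. A x <= b,  T_s x + W_s y_s <= h_s,  x, y_s >= 0,
    with one first-stage copy x_s per scenario plus x_s = x_{s+1} (splitting).
    scenarios: list of dicts with keys p, q, T, W, h."""
    S = len(scenarios)
    n, m = len(c), len(scenarios[0]["q"])
    # Objective: split the first-stage cost equally across the copies.
    cost = np.concatenate([np.concatenate([np.asarray(c) / S,
                                           s["p"] * np.asarray(s["q"])])
                           for s in scenarios])
    # Per-scenario inequality block [A 0; T_s W_s].
    blocks = [np.block([[np.asarray(A), np.zeros((len(b), m))],
                        [np.asarray(s["T"]), np.asarray(s["W"])]]) for s in scenarios]
    A_ub = block_diag(*blocks)
    b_ub = np.concatenate([np.concatenate([b, s["h"]]) for s in scenarios])
    # Non-anticipativity: x_s - x_{s+1} = 0 for consecutive scenario copies.
    A_eq = np.zeros(((S - 1) * n, S * (n + m)))
    for s in range(S - 1):
        A_eq[s * n:(s + 1) * n, s * (n + m):s * (n + m) + n] = np.eye(n)
        A_eq[s * n:(s + 1) * n, (s + 1) * (n + m):(s + 1) * (n + m) + n] = -np.eye(n)
    return linprog(cost, A_ub=A_ub, b_ub=b_ub,
                   A_eq=A_eq, b_eq=np.zeros((S - 1) * n),
                   bounds=(0, None), method="highs")
```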
19

Kokaew, Vorrapath. "Maximum power point tracking of a small-scale compressed air energy storage system." Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/404178/.

Abstract:
The thesis is concerned with a small-scale compressed air energy storage (SS-CAES) system. Although these systems have relatively low energy density, they offer advantages of low environmental impact and ease of maintenance. The thesis focuses on solving a number of commonly known problems related to the perturb and observe (P&O) maximum power point tracking (MPPT) system for SS-CAES, including confusion under input power fluctuation conditions and operating point dither. A test rig was designed and built to be used for validation of the theoretical work. The rig comprised an air motor driving a permanent magnet DC generator whose power output is controlled by a buck converter. A speed control system was designed and implemented using a dSPACE controller. This enabled fast convergence of MPPT. Four MPPT systems were investigated. In the first system, the air motor characteristics were used to determine the operating speed corresponding to MPP for a given pressure. This was compared to a maximum efficiency point tracking (MEPT) system. Operating at the maximum power point resulted in 1% loss of efficiency compared to operating at the maximum efficiency point. But MPPT does not require an accurate model of the system that is needed for MEPT, which also requires more sensors. The second system that was investigated uses a hybrid MPPT approach that did not require a prior knowledge system model. It used the rate of change of power output with respect to the duty cycle of the buck converter as well as the change in duty cycle to avoid confusion under input power fluctuations. It also used a fine speed step in the vicinity of the MPP and a coarse speed step when the operating point was far from the MPP. Both simulation and experimental results demonstrate the efficiency of this proposed system. The third P&O MPPT system used a fuzzy logic approach which avoided confusion and eliminated operating point dither. This system was also implemented experimentally. A speed control system improved the controllable speed-range by using a buck-boost converter instead. The last MPPT system employed a hybrid P&O and incremental inductance (INC) approach to avoid confusion and eliminate operating point dither. The simulation results validate the design. Although the focus of the work is on SS-CAES, the results are generic in nature and could be applied to MPPT of other systems such as PV and wind turbine.
20

Leonardini, Quelca Gonzalo Americo. "Point-scale evaluation of the Soil, Vegetation, and Snow (SVS) land surface model." Doctoral thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/67450.

Abstract:
Le modèle de surface Soil, Vegetation, and Snow (SVS) a été récemment mis au point par Environnement et Changement climatique Canada (ECCC) à des fins opérationnelles de prévisions météorologique et hydrologique. L’objectif principal de cette étude est d’évaluer la capacité de SVS, en mode hors ligne à l’échelle de points de grille, à simuler différents processus par rapport aux observations in situ. L’étude est divisée en deux parties: (1) une évaluation des processus de surface terrestre se produisant dans des conditions sans neige, et (2) une évaluation des processus d’accumulation et de fonte de la neige. Dans la première partie, les flux d’énergie de surface et la teneur en eau ont été évalués sous des climats arides, méditerranéens et tropicaux pour six sites sélectionnés du réseau FLUXNET ayant entre 4 et 12 ans de données. Dans une seconde partie, les principales caractéristiques de l’enneigement sont examinées pour dix sites bien instrumentés ayant entre 8 et 21 ans de données sous climats alpins, maritimes et taiga du réseau ESM-SnowMIP. Les résultats de la première partie montrent des simulations réalistes de SVS du flux de chaleur latente (NSE = 0.58 en moyenne), du flux de chaleur sensible (NSE = 0.70 en moyenne) et du rayonnement net (NSE = 0.97 en moyenne). Le flux de chaleur dans sol est raisonnablement bien simulé pour les sites arides et un site méditerranéen, et simulé sans succès pour les sites tropicaux. Pour sa part, la teneur en eau de surface a été raisonnablement bien simulée aux sites arides (NSE = 0.30 en moyenne) et méditerranéens (NSE = 0.42 en moyenne) et mal simulée aux sites tropicaux (NSE = -16.05 en moyenne). Les performances du SVS étaient comparables aux simulations du Canadian Land surface Scheme (CLASS) non seulement pour les flux d’énergie et le teneur en eau, mais aussi pour des processus plus spécifiques tels que l’évapotranspiration et le bilan en eau. Les résultats de la deuxième partie montrent que SVS est capable de reproduire de manière réaliste les principales caractéristiques de l’enneigement de ces sites. Sur la base des résultats, une distinction claire peut être faite entre les simulations aux sites ouverts et forestiers. SVS simule adéquatement l’équivalent en eau de la neige, la densité et la hauteur de la neige des sites ouverts (NSE = 0.64, 0.75 et 0.59, respectivement), mais présente des performances plus faibles aux sites forestiers (NSE = - 0.40, 0.15 et 0.56, respectivement), ce qui est principalement attribué aux limites du module de tassement et à l’absence d’un module d’interception de la neige. Les évaluations effectuées au début, au milieu et à la fin de l’hiver ont révélé une tendance à la baisse de la capacité de SVS à simuler SWE, la densité et l’épaisseur de la neige à la fin de l’hiver. Pour les sites ouverts, les températures de la neige en surface sont bien représentées (RMSE = 3.00 _C en moyenne), mais ont montré un biais négatif (PBias = - 1.6 % en moyenne), qui était dû à une mauvaise représentation du bilan énergétique de surface sous conditions stables la nuit. L’albédo a montré une représentation raisonnable (RMSE = 0.07 en moyenne), mais une tendance à surestimer les valeurs de fin d’hiver (biais = 0,04 sur la fin de l’hiver), en raison de la diminution progressive pendant les longues périodes de fonte. Enfin, un test de sensibilité a conduit à des suggestions aux développeurs du modèles. 
Les tests de sensibilité du processus de fonte de la neige suggèrent l’utilisation de la température de surface de la neige au lieu de la température moyenne lors du calcul. Cela permettrait d’améliorer les simulations SWE, à l’exception de deux sites ouverts et d’un site forestier. Les tests de sensibilité à la partition des précipitations permettent d’identifier une transition linéaire de la température de l’air entre 0 et 1 _C comme le meilleur choix en l’absence de partitions observées ou plus sophistiquées.
The Soil, Vegetation, and Snow (SVS) land surface model has been recently developed at Environment and Climate Change Canada (ECCC) for operational numerical weather prediction (NWP) and hydrological forecasting. The main goal of this study is to evaluate the ability of SVS, in offline point-scale mode, to simulate different processes when compared to in-situ observations. The study is divided into two parts: (1) an evaluation of land-surface processes occurring under snow-free conditions, and (2) an evaluation of the snow accumulation and melting processes. In the first part, surface heat fluxes and soil moisture were evaluated under arid, Mediterranean, and tropical climates at six selected sites of the FLUXNET network having between 4 and 12 years of data. In the second part, the main characteristics of the snow cover are examined at ten well-instrumented sites of the ESM-SnowMIP network, having between 8 and 21 years of data under alpine, maritime and taiga climates. Results of the first part show SVS's realistic simulations of latent heat flux (NSE = 0.58 on average), sensible heat flux (NSE = 0.70 on average), and net radiation (NSE = 0.97 on average). Soil heat flux is reasonably well simulated for the arid sites and one Mediterranean site, and poorly simulated for the tropical sites. On the other hand, surface soil moisture was reasonably well simulated at the arid (NSE = 0.30 on average) and Mediterranean sites (NSE = 0.42 on average) and poorly simulated at the tropical sites (NSE = -16.05 on average). SVS performance was comparable to simulations of the Canadian Land Surface Scheme (CLASS) not only for energy fluxes and soil moisture, but also for more specific processes such as evapotranspiration and the water balance. Results of the second part show that SVS is able to realistically reproduce the main characteristics of the snow cover at these sites. Based on the results, a clear distinction between simulations at open and forest sites can be made. SVS simulates snow water equivalent, density and snow depth well at open sites (NSE = 0.64, 0.75 and 0.59, respectively), but exhibits lower performance over forest sites (NSE = -0.40, 0.15 and 0.56, respectively), which is attributed mainly to the limitations of the compaction scheme and the absence of a snow interception scheme. Evaluations over early, mid and late winter periods revealed a tendency for SVS's ability to simulate SWE, density and snow depth to decrease in late winter. At open sites, SVS's snow surface temperatures are well represented (RMSE = 3.00 °C on average), but exhibited a cold bias (PBias = -1.6% on average), which was due to a poor representation of the surface energy balance under stable conditions at nighttime. Albedo showed a reasonable representation (RMSE = 0.07 on average), but a tendency to overestimate late winter albedo (bias = 0.04 over late winter), due to the slow decrease rate during long melting periods. Finally, sensitivity tests on the snow melting process suggest the use of the surface snow temperature instead of the average temperature when computing the melting rate. This would improve the SWE simulations, with the exception of two open sites and one forest site. Sensitivity tests on the partitioning of precipitation identify a linear transition with air temperature between 0 and 1 °C as the best choice in the absence of observed or more sophisticated partitions.
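For reference, the scores quoted throughout the evaluation above (NSE, RMSE, PBias) are straightforward to compute; a minimal sketch is given below. Sign conventions for percent bias vary between communities, so the definition here is one common choice, not necessarily the exact one used in the thesis.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean of the
    observations, negative values are worse than the observed mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(sim, obs):
    """Root mean square error."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return np.sqrt(np.mean((sim - obs) ** 2))

def pbias(sim, obs):
    """Percent bias; with this convention, negative values indicate that the
    model underestimates the observations on average."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)
```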
Стилі APA, Harvard, Vancouver, ISO та ін.
21

Bayram, Ilker. "Interest Point Matching Across Arbitrary Views." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605114/index.pdf.

Повний текст джерела
Анотація:
Making a computer 'see' is certainly one of the greatest challenges for today. Apart from possible applications, the solution may also shed light on, or at least give some idea of, how biological vision actually works. Many problems faced en route to successful algorithms require finding corresponding tokens in different views, which is termed the correspondence problem. For instance, given two images of the same scene from different views, if the camera positions and their internal parameters are known, it is possible to obtain the 3-dimensional coordinates of a point in space, relative to the cameras, if the same point may be located in both images. Interestingly, the camera positions and internal parameters may be extracted solely from the images if a sufficient number of corresponding tokens can be found. In this sense, two subproblems, namely the choice of the tokens and how to match these tokens, are examined. Due to the arbitrariness of the image pairs, invariant schemes for extracting and matching interest points, which were taken as the tokens to be matched, are utilised. In order to appreciate the ideas of the mentioned schemes, topics such as scale-space, rotational and affine invariants are introduced. The geometry of the problem is briefly reviewed and the epipolar constraint is imposed using statistical outlier rejection methods. Despite the satisfactory matching performance of simple correlation-based matching schemes on small-baseline pairs, the simulation results show the improvement when the mentioned invariants are used in the cases for which they are strictly necessary.
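As a hedged illustration of the matching step discussed in this abstract, the sketch below applies Lowe's distance-ratio test together with a mutual nearest-neighbour check to two hypothetical descriptor arrays; an epipolar outlier-rejection stage (e.g., RANSAC on the fundamental matrix) would normally follow. This is not the thesis's implementation, and all names and values are illustrative.

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.8):
    """Return index pairs (i, j) passing the ratio test and a mutual nearest-neighbour check.
    d1, d2: (n1, k) and (n2, k) arrays of interest point descriptors."""
    dists = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=2)  # pairwise distances
    nn2_for_1 = np.argsort(dists, axis=1)   # candidates in image 2 for each point of image 1
    nn1_for_2 = np.argmin(dists, axis=0)    # best match in image 1 for each point of image 2
    matches = []
    for i, order in enumerate(nn2_for_1):
        j, j2 = order[0], order[1]
        # ratio test: best distance clearly smaller than the second best, plus mutual check
        if dists[i, j] < ratio * dists[i, j2] and nn1_for_2[j] == i:
            matches.append((i, j))
    return matches

# toy descriptors: the first 50 rows of d2 are noisy copies of d1, the rest are distractors
rng = np.random.default_rng(0)
d1 = rng.random((50, 128))
d2 = np.vstack([d1 + 0.05 * rng.random((50, 128)), rng.random((30, 128))])
matches = match_descriptors(d1, d2)
print(len(matches), matches[:5])
```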
Стилі APA, Harvard, Vancouver, ISO та ін.
22

Leaf, Kyle, and Fulvio Melia. "Analysing H(z) data using two-point diagnostics." OXFORD UNIV PRESS, 2017. http://hdl.handle.net/10150/625514.

Повний текст джерела
Анотація:
Measurements of the Hubble constant H(z) are increasingly being used to test the expansion rate predicted by various cosmological models. But the recent application of two-point diagnostics, such as Om(z_i, z_j) and Omh²(z_i, z_j), has produced considerable tension between ΛCDM's predictions and several observations, with other models faring even worse. Part of this problem is attributable to the continued mixing of truly model-independent measurements using the cosmic-chronometer approach, and model-dependent data extracted from baryon acoustic oscillations. In this paper, we advance the use of two-point diagnostics beyond their current status, and introduce new variations, which we call Δh(z_i, z_j), that are more useful for model comparisons. But we restrict our analysis exclusively to cosmic-chronometer data, which are truly model independent. Even for these measurements, however, we confirm the conclusions drawn by earlier workers that the data have strongly non-Gaussian uncertainties, requiring the use of both 'median' and 'mean' statistical approaches. Our results reveal that previous analyses using two-point diagnostics greatly underestimated the errors, thereby misinterpreting the level of tension between theoretical predictions and H(z) data. Instead, we demonstrate that as of today, only Einstein-de Sitter is ruled out by the two-point diagnostics at a level of significance exceeding ~3σ. The R_h = ct universe is slightly favoured over the remaining models, including Λ cold dark matter and Chevallier-Polarski-Linder, though all of them (other than Einstein-de Sitter) are consistent to within 1σ with the measured mean of the Δh(z_i, z_j) diagnostics.
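For reference, the widely used two-point diagnostic built from H(z) measurements takes the form Om(z_i, z_j) = [h²(z_i) − h²(z_j)] / [(1+z_i)³ − (1+z_j)³] with h = H/H0, which is constant (equal to Ωm) for flat ΛCDM; the Δh diagnostics introduced in the paper are variations on the same pairwise idea. A small illustrative computation with invented sample points, not the paper's data:

```python
from itertools import combinations
import numpy as np

H0 = 70.0  # km/s/Mpc, assumed fiducial value

def om_two_point(z1, H1, z2, H2, H0=H0):
    """Om(z_i, z_j) two-point diagnostic; constant (= Omega_m) for flat LCDM."""
    h1, h2 = H1 / H0, H2 / H0
    return (h1 ** 2 - h2 ** 2) / ((1 + z1) ** 3 - (1 + z2) ** 3)

# made-up cosmic-chronometer style points (z, H in km/s/Mpc)
data = [(0.1, 72.0), (0.5, 88.0), (1.0, 120.0), (1.5, 160.0)]
values = [om_two_point(z1, H1, z2, H2)
          for (z1, H1), (z2, H2) in combinations(data, 2)]
print(np.median(values), np.mean(values))
```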
Стилі APA, Harvard, Vancouver, ISO та ін.
23

Zhao, Mengyu. "The design of HACCP plan for a small-scale cheese plant." Online version, 2003. http://www.uwstout.edu/lib/thesis/2003/2003zhaom.pdf.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
24

Bozgeyikli, Evren. "Locomotion in Virtual Reality for Room Scale Tracked Areas." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6470.

Повний текст джерела
Анотація:
In recent years, virtual reality has been used as an effective tool in a wide range of areas such as training, rehabilitation, education and games. The affordability of the new generation of headsets has helped this medium become more widespread. However, in order for virtual reality to become mainstream, more content that is specifically designed for this medium is needed. Since virtual reality is a different technology than conventional computer systems, different design principles may be required for this content to provide a better user experience. One of the crucial components of virtual reality applications is locomotion, since the viewpoint of the user is very important in immersing users into virtual reality, and locomotion is used for moving the user's viewpoint in virtual environments. Locomotion in virtual reality is expected to have a direct effect on user experience in terms of many elements such as effort, enjoyment, frustration, motion sickness and presence. To date, many locomotion techniques for virtual reality have been studied in the literature. However, many of these techniques were evaluated in large tracked areas. Although professional motion tracking systems can track large areas, today's new generation of affordable commercial virtual reality systems can only track room scale environments. This dissertation aims at evaluating different locomotion techniques in room scale tracked areas for neurotypical individuals and individuals with ASD. Several previous studies concurred that virtual reality is an effective medium for the training and rehabilitation of individuals with ASD. However, no previous study evaluated locomotion in virtual reality for this specific population. Thus, this dissertation aims at finding suitable virtual reality locomotion techniques for individuals with ASD. With these motivations, in this dissertation, locomotion techniques for room scale virtual reality were evaluated in three experiments: a virtual reality vocational rehabilitation system, an evaluation of eight virtual reality locomotion techniques, and a point & teleport direction specification experiment. In the first experiment, the virtual reality vocational rehabilitation system, the locomotion, interaction, and display components of an immersive virtual reality system for vocational rehabilitation were evaluated by 10 neurotypical individuals and 9 individuals with high functioning ASD. The results indicated that neurotypical individuals favored real walking over walk-in-place; tangible interaction over haptic device, touch & snap and touch screen; and head mounted display over curtain screen. For the participants with high functioning ASD, real walking was favored over walk-in-place; touch screen was favored over haptic device, tangible interaction and touch & snap; and curtain screen was favored over head mounted display. In the second experiment, eight virtual reality locomotion techniques were evaluated in a room scale tracked area (2 m by 2 m). These eight locomotion techniques were: redirected walking, walk-in-place, stepper machine, point & teleport, joystick, trackball, hand flapping and flying. Among these locomotion techniques, three were commonly used in virtual reality (redirected walking, walk-in-place and joystick), two were relatively unexplored, having previously been explored only in a few related studies (stepper machine and point & teleport), and three were selected and/or modified for individuals with ASD based on their common characteristics (trackball, hand flapping and flying).
These eight techniques were evaluated in an immersive virtual reality test environment. A user study was performed with 16 neurotypical participants and 15 participants with high functioning ASD. The results indicated that for neurotypical individuals, point & teleport, joystick and redirected walking were suitable virtual reality locomotion techniques for room scale tracked areas, whereas hand flapping and flying were not suitable. For individuals with high functioning ASD, point & teleport, joystick and walk-in-place were suitable virtual reality locomotion techniques for room scale tracked areas, whereas hand flapping and flying were not suitable. Locomotion techniques similar to point & teleport have started to be used in commercial video games but have not been evaluated in the literature. For this reason, a third experiment was performed separately to investigate the effects of an additional direction specification component for point & teleport. Since this direction specification component added cognitive load to the use of the technique, which the literature recommends avoiding for individuals with ASD, it was evaluated only by neurotypical individuals. An immersive virtual maze environment was developed and a user study was performed with 16 neurotypical users. The results indicated that the additional direction specification feature did not improve the user experience.
Стилі APA, Harvard, Vancouver, ISO та ін.
25

Emir, Erdem. "A Comparative Performance Evaluation Of Scale Invariant Interest Point Detectors For Infrared And Visual Images." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12610159/index.pdf.

Повний текст джерела
Анотація:
In this thesis, the performance of four state-of-the-art feature detectors, along with SIFT and SURF descriptors, in matching object features across mid-wave infrared, long-wave infrared and visual-band images is evaluated under changing viewpoint and distance conditions. The utilized feature detectors are the Scale Invariant Feature Transform (SIFT), multiscale Harris-Laplace, multiscale Hessian-Laplace and Speeded Up Robust Features (SURF) detectors, all of which are invariant to image scale and rotation. Features are extracted from different blackbody, human face and vehicle images, and the performance of reliable matching is explored between different views of these objects, each in its own category. All of these feature detectors provide good matching performance in infrared-band images compared with visual-band images. The comparison of matching performance for mid-wave and long-wave infrared images is also explored in this study, and it is observed that long-wave infrared images provide good matching performance for objects at lower temperatures, whereas mid-wave infrared-band images provide good matching performance for objects at higher temperatures. The SURF detector and descriptor are found to outperform the other detectors and descriptors for human face images in the long-wave infrared band.
Стилі APA, Harvard, Vancouver, ISO та ін.
26

Levkovitz, Ron. "An investigation of interior point methods for large scale linear programs : theory and computational algorithms." Thesis, Brunel University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316541.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Sanjab, Anibal Jean. "Statistical Analysis of Electric Energy Markets with Large-Scale Renewable Generation Using Point Estimate Methods." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/74356.

Повний текст джерела
Анотація:
The restructuring of the electric energy market and the proliferation of intermittent renewable-energy based power generation have introduced serious challenges to power system operation, emanating from the uncertainties introduced to the system variables (electricity prices, congestion levels, etc.). In order to economically operate the system and efficiently run the energy market, a statistical analysis of the system variables under uncertainty is needed. Such a statistical analysis can be performed through an estimation of the statistical moments of these variables. In this thesis, Point Estimate Methods (PEMs) are applied to the optimal power flow (OPF) problem to estimate the statistical moments of the locational marginal prices (LMPs) and total generation cost under system uncertainty. An extensive mathematical examination and risk analysis of existing PEMs are performed and a new PEM scheme is introduced. The applied PEMs consist of two schemes introduced by H. P. Hong, namely the 2n and 2n+1 schemes, and a proposed combination of Hong's and M. E. Harr's schemes. The accuracy of the applied PEMs in estimating the statistical moments of system LMPs is illustrated and the performance of the suggested combination of Harr's and Hong's PEMs is shown. Moreover, the risks of applying Hong's 2n scheme to the OPF problem are discussed by showing that it can potentially yield inaccurate LMP estimates or run into infeasibility of the OPF problem. In addition, a new PEM configuration is introduced. This configuration is derived from a PEM introduced by E. Rosenblueth. It can accommodate asymmetry and correlation of input random variables in a more computationally efficient manner than its Rosenblueth counterpart.
Master of Science
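A sketch of the 2m+1 point estimate scheme referred to above, restricted for simplicity to independent input variables; the locations and weights follow the standard Hong formulas, but the code and the toy cost function are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def hong_2m_plus_1(g, mu, sigma, skew=None, kurt=None):
    """Estimate mean and std of Y = g(X) for independent inputs X_i using
    Hong's 2m+1 point estimate method (one central evaluation shared by all variables)."""
    m = len(mu)
    skew = np.zeros(m) if skew is None else np.asarray(skew, float)
    kurt = 3.0 * np.ones(m) if kurt is None else np.asarray(kurt, float)
    ey, ey2, w_center = 0.0, 0.0, 0.0
    for i in range(m):
        root = np.sqrt(kurt[i] - 0.75 * skew[i] ** 2)
        xi1, xi2 = skew[i] / 2 + root, skew[i] / 2 - root
        w1 = 1.0 / (xi1 * (xi1 - xi2))
        w2 = -1.0 / (xi2 * (xi1 - xi2))
        w_center += 1.0 / m - 1.0 / (kurt[i] - skew[i] ** 2)
        for xi, w in ((xi1, w1), (xi2, w2)):
            x = np.array(mu, float)
            x[i] = mu[i] + xi * sigma[i]     # perturb one variable, keep the others at their means
            y = g(x)
            ey += w * y
            ey2 += w * y ** 2
    y0 = g(np.array(mu, float))              # single evaluation at the mean vector
    ey += w_center * y0
    ey2 += w_center * y0 ** 2
    return ey, np.sqrt(max(ey2 - ey ** 2, 0.0))

# toy "total cost" function of two uncertain, normally distributed injections
g = lambda x: 10 * x[0] + 0.5 * x[1] ** 2
print(hong_2m_plus_1(g, mu=[1.0, 2.0], sigma=[0.1, 0.3]))
```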
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Johnson, Jared M. "Atomic Scale Characterization of Point Defects in the Ultra-Wide Band Gap Semiconductor β-Ga2O3". The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1577916628182296.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Ojo, Emmanuel Rotimi. "In situ and modelled soil moisture determination and upscaling from point-based to field scale." Soil Science Society of America, 2015. http://hdl.handle.net/1993/32026.

Повний текст джерела
Анотація:
The relevance, value and multi-dimensional application of soil moisture in many areas such as hydrological, meteorological and agricultural sciences have increased the focus on this important part of the ecosystem. However, due to its spatial and temporal variability, accurate soil moisture determination is an ongoing challenge. In the fall of 2013 and spring of 2014, the accuracy of five soil moisture instruments was tested in heavy clay soils, and the Root Mean Squared Error (RMSE) values of the default calibration ranged from 0.027 to 0.129 m³ m⁻³. However, after calibration, the range was improved to 0.014–0.040 m³ m⁻³. The need for calibration has led to the development of generic calibration procedures such as soil texture-based calibrations. As a result of the differences in soil mineralogy, especially in clay soils, the texture-based calibrations often yield very high RMSE. A novel approach that uses Cation Exchange Capacity (CEC) grouping was independently tested at three sites and, out of seven different calibration equations tested, the CEC-based calibration was the second best, behind the in situ derived calibration. The high cost of installing and maintaining a network of soil moisture instruments to obtain measurements at limited points has influenced the development of models that can estimate soil moisture. The Versatile Soil Moisture Budget (VSMB) is one such model and was used in this study. The comparison of the VSMB modelled output to the observed soil moisture data from a single, temporally continuous, in-field calibrated Hydra probe gave mean RMSE values of 0.052 m³ m⁻³ at the eight site-years in coarse textured soils and 0.059 m³ m⁻³ at the six site-years in fine textured soils. At the field scale, the representativeness of an arbitrarily placed soil moisture station was compared to the mean of 48 data samples collected across the field. The single location underestimated soil moisture at 3 of the 4 coarse textured fields, with an average RMSE of 0.038 m³ m⁻³, and at only one of the four fine textured sites monitored, with an average RMSE of 0.059 m³ m⁻³.
February 2017
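The in situ calibrations discussed above are, in the simplest case, linear rescalings of the probe output fitted against reference volumetric water content. A hypothetical least-squares example with made-up readings, not the study's data:

```python
import numpy as np

# hypothetical paired samples: probe default output vs. gravimetric reference (m^3 m^-3)
probe = np.array([0.18, 0.22, 0.30, 0.35, 0.42, 0.47])
ref   = np.array([0.15, 0.20, 0.27, 0.33, 0.40, 0.46])

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

# fit a linear in situ calibration: theta_ref ~ a * probe + b
a, b = np.polyfit(probe, ref, 1)
calibrated = a * probe + b

print("RMSE default   :", rmse(probe, ref))
print("RMSE calibrated:", rmse(calibrated, ref))
```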
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Guo, Bingyong. "Study of scale modelling, verification and control of a heaving point absorber wave energy converter." Thesis, University of Hull, 2017. http://hydra.hull.ac.uk/resources/hull:16419.

Повний текст джерела
Анотація:
This study focuses on scale modelling of a heaving Point Absorber Wave Energy Converter (PAWEC), model verification via wave tank tests and power maximisation control development. Starting from the boundary element method simulation of the wave-PAWEC interaction, linear and non-linear modelling approaches of Wave-To-Excitation-Force (W2EF), Force-To-Motion (F2M), Wave-To-Motion (W2M) are studied. To verify the proposed models, a 1/50 scale PAWEC has been designed, simulated, constructed and tested in a wave tank under a variety of regular and irregular wave conditions. To study the coupling between the PAWEC hydrodynamics and the Power Take-Off (PTO) mechanism, a Finite Element Method (FEM) is applied to simulate and optimise a Tubular Permanent Magnet Linear Generator (TPMLG) as the PTO system and control actuator. Thus linear and non-linear Wave-To-Wire (W2W) models are proposed via combining the W2M and PTO models for the study and development of power maximisation control. The main contributions of this study are summarised as follows: Linear and non-linear F2M models are derived with the radiation force approximated by a finite order state-space model. The non-linear friction is modelled as the Tustin model, a summation of the Stribeck, Coloumb and damping friction forces, whilst the non-linear viscous force is simulated as the drag term in the Morison equation. Thus a non-linear F2M model is derived considering the non-linear friction and viscous forces as a correction or calibration to the linear F2M model. A wide variety of free-decay tests are conducted in the wave tank and the experimental data fit the non-linear F2M modelling results to a high degree. Further, the mechanism how these non-linear factors influence the PAWEC dynamics and energy dissipations is discussed with numerical and experimental results. Three approaches are proposed in this thesis to approximate the wave excitation force: (i) identifying the excitation force from wave elevation, referred to as the W2EF method, (ii) estimating the excitation force from the measurements of pressure, acceleration and displacement, referred to as the Pressure-Acceleration-Displacement-To-Excitation-Force (PAD2EF) approach and (iii) observing the excitation force via an unknown input observer, referred to as the Unknown-Input-Observation-of-Excitation-Force (UIOEF) technique. The W2EF model is integrated with the linear/non-linear F2M models to deduce linear/non-linear W2M models. A series of excitation tests are conducted under regular and irregular wave conditions to verify the W2EF model in both the time- and frequency-domains. The numerical results of the proposed W2EF model show a high accordance to the excitation test data and hence the W2EF method is valid for the 1/50 scale PAWEC. Meanwhile, a wide range of forced-motion tests are conducted to compare the excitation force approximation results between the W2EF, PAD2EF and UIOEF approaches and to verify the linear and non-linear W2M models. Comparison of the PAWEC displacement responses between the linear/non-linear W2M models and forced-motion tests indicates that the non-linear modelling approach considering the friction and viscous forces can give more accurate PAWEC dynamic representation than the linear modelling approach. Based on the 1/50 scale PAWEC dimension and wave-maker conditions, a three-phase TPMLG is designed, simulated and optimised via FEM simulation with special focus on cogging force reduction. 
The cogging force reduction is achieved by optimising the geometric design of the TPMLG permanent magnets, slots, pole-shoe and back iron. The TPMLG acts as the PTO mechanism and control actuator. The TPMLG is rigidly connected with the buoy and hence the coupling is achieved through the PTO force. Linear and non-linear W2W models are derived for the study of power maximisation control. To investigate the control performance on the linear and non-linear W2W models, reactive control and phase control by latching are developed numerically with electrical implementation on the TPMLG. Further, a W2W tracking control structure is proposed to achieve power maximisation and displacement constriction under both regular and irregular wave conditions.
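To illustrate the kind of non-linear force-to-motion correction described above (viscous drag and friction added on top of a linear heave model), a single-degree-of-freedom toy sketch follows. All coefficients and the excitation are invented, and the full W2M model also includes a state-space radiation force that is omitted here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# invented small-scale buoy parameters
m, k = 20.0, 900.0                         # mass plus added mass (kg), hydrostatic stiffness (N/m)
b_lin, c_visc, f_coul = 15.0, 40.0, 2.0    # linear damping, quadratic drag, Coulomb friction

def excitation(t):
    return 25.0 * np.sin(2 * np.pi * 0.8 * t)   # toy regular-wave excitation force (N)

def rhs(t, y):
    z, v = y
    f = (excitation(t) - k * z - b_lin * v
         - c_visc * v * abs(v)              # Morison-type quadratic viscous drag
         - f_coul * np.sign(v))             # Coulomb (Tustin-style) friction term
    return [v, f / m]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], max_step=0.01)
print("max heave (m):", float(np.max(np.abs(sol.y[0]))))
```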
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Shaffer, Daniel Alan. "An FPGA Implementation of Large-Scale Image Orthorectification." University of Dayton / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1523624621509277.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Dalala', Zakariya Mahmoud. "Design and Analysis of a Small-Scale Wind Energy Conversion System." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/51846.

Повний текст джерела
Анотація:
This dissertation aims to present a detailed analysis of small scale wind energy conversion system (WECS) design and implementation. The dissertation focuses on implementing a hardware prototype to be used for testing different control strategies applied to small scale WECSs. Novel control algorithms are proposed for the WECS and verified experimentally in detail. The wind turbine aerodynamics are presented and a mathematical model is derived, which is then used to build a wind simulator using a motor-generator (MG) set. The motor is torque controlled based on the turbine mathematical model and the generator is controlled using the power electronic conversion circuits. The power converter consists of a three-phase diode bridge followed by a boost converter. The small signal modeling of the motor, generator, and power converter is presented in detail to help build the needed controllers. The main objectives of the small scale WECS controller are discussed. This dissertation focuses on two main regions of wind turbine operation: the maximum power point tracking (MPPT) region and the stall region. The concept of MPPT is investigated, and a review of the most common MPPT algorithms is presented. The advantages and disadvantages of each method are clearly outlined, and practical implementation limitations are also considered. Then, an MPPT algorithm for small scale wind energy conversion systems is proposed to address the common drawbacks of the conventional methods. The proposed algorithm uses the dc current as the perturbing variable, and the dc link voltage is considered a degree of freedom that is utilized to enhance the performance of the algorithm. The algorithm detects sudden wind speed changes indirectly through the dc link voltage slope. The voltage slope is also used to enhance the tracking speed of the algorithm and to prevent the generator from stalling under rapid wind speed slow-down conditions. The proposed method uses two modes of operation: a perturb and observe (P&O) mode with adaptive step size under slow wind speed fluctuation conditions, and a prediction mode employed under fast wind speed change conditions. The dc link capacitor voltage slope reflects the acceleration information of the generator, which is then used to predict the next step size and direction of the current command. The proposed algorithm shows enhanced stability and fast tracking capability under both high and low rate-of-change wind speed conditions and is verified using a 1.5-kW prototype hardware setup. This dissertation also deals with the WECS control design under over-power and over-speed conditions. The main job of the controller is to maintain MPPT while the wind speed is below the rated value and to limit the electrical power and mechanical speed to within the system ratings when the wind speed is above the rated value. The concept of the stall region and stall control is introduced and a stability analysis for the overall system is derived and presented. Various stall region control techniques are investigated and a new stall controller is proposed and implemented. Two main stall control strategies are discussed in detail and implemented: constant power stall control and constant speed stall control. The WECS is expected to work optimally under different wind speed conditions. The system should be designed to handle both MPPT control and stall region control at the same time.
Thus, the control transition between the two modes of operation is of vital interest. In this dissertation, light is shed on the optimization and stabilization of the control transition between different operating modes. All controllers under different wind speed conditions, as well as the transition controller, are designed to require no prior knowledge of the system parameters and are mechanically sensorless, which highlights the advantage and cost effectiveness of the proposed control strategy. The proposed control method is experimentally validated using the developed WECS prototype. Finally, the proposed control strategies in the different regions of operation are successfully applied to a battery charger application, where the constraints of the wind energy battery charger control system are analyzed and a stable and robust control law is proposed to deal with different operating scenarios.
Ph. D.
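A minimal sketch of the basic perturb-and-observe idea underlying the MPPT discussion above (not the dissertation's adaptive-step/prediction algorithm): perturb the dc current command, observe the change in power, and keep moving in the direction that increases power. The power curve used for the closed-loop test is invented.

```python
def po_mppt_step(i_ref, p_now, p_prev, di_prev, step=0.05):
    """One perturb-and-observe iteration on the dc current command.
    Keep perturbing in the same direction while power increases, otherwise reverse."""
    direction = 1.0 if di_prev >= 0 else -1.0
    if p_now - p_prev < 0:
        direction = -direction
    di = direction * step
    return i_ref + di, di

# toy closed-loop test against a made-up power curve with a maximum at i = 4 A
def power(i):
    return -(i - 4.0) ** 2 + 16.0

i_ref, di, p_prev = 1.0, 0.05, power(1.0)
for _ in range(200):
    p_now = power(i_ref)
    i_ref, di = po_mppt_step(i_ref, p_now, p_prev, di)
    p_prev = p_now
print(round(i_ref, 2))  # settles into a small oscillation around the maximum power point (i = 4 A)
```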
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Vest, Jeffrey D. "Robust, location-free scale estimators for the linear regression and k-sample models." Diss., This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-06062008-151058/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Digne, Julie. "Inverse geometry : from the raw point cloud to the 3d surface : theory and algorithms." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2010. http://tel.archives-ouvertes.fr/tel-00610432.

Повний текст джерела
Анотація:
Many laser devices directly acquire 3D objects and reconstruct their surface. Nevertheless, the final reconstructed surface is usually smoothed out as a result of the scanner internal de-noising process and the offsets between different scans. This thesis, working on results from high precision scans, adopts the somewhat extreme conservative position not to lose or alter any raw sample throughout the whole processing pipeline, and to attempt to visualize them. Indeed, it is the only way to discover all surface imperfections (holes, offsets). Furthermore, since high precision data can capture the slightest surface variation, any smoothing and any sub-sampling can incur the loss of textural detail. The thesis attempts to prove that one can triangulate the raw point cloud with almost no sample loss. It solves the exact visualization problem on large data sets of up to 35 million points made of 300 different scan sweeps and more. Two major problems are addressed. The first one is the orientation of the complete raw point set, and the building of a high precision mesh. The second one is the correction of the tiny scan misalignments which can cause strong high frequency aliasing and completely hamper a direct visualization. The second development of the thesis is a general low-high frequency decomposition algorithm for any point cloud. Thus classic image analysis tools, the level set tree and the MSER representations, are extended to meshes, yielding an intrinsic mesh segmentation method. The underlying mathematical development focuses on an analysis of a half dozen discrete differential operators acting on raw point clouds which have been proposed in the literature. By considering the asymptotic behavior of these operators on a smooth surface, a classification by their underlying curvature operators is obtained. This analysis leads to the development of a discrete operator consistent with the mean curvature motion (the intrinsic heat equation), defining a remarkably simple and robust numerical scale space. By this scale space all of the above mentioned problems (point set orientation, raw point set triangulation, scan merging, segmentation), usually addressed by separate techniques, are solved in a unified framework.
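One way to picture the scale-space idea described above is a smoothing iteration in which every raw point moves toward the centroid of its nearest neighbours, a crude stand-in for the mean-curvature-motion operator developed in the thesis. A hypothetical sketch (not the thesis's algorithm; the noisy sphere is an invented stand-in for a raw scan):

```python
import numpy as np
from scipy.spatial import cKDTree

def smooth_once(points, k=10, step=0.5):
    """Move each point a fraction 'step' toward the centroid of its k nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)       # the first neighbour is the point itself
    centroids = points[idx[:, 1:]].mean(axis=1)
    return points + step * (centroids - points)

# noisy unit sphere as a stand-in for a raw scan
rng = np.random.default_rng(1)
p = rng.normal(size=(2000, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
p += 0.02 * rng.normal(size=p.shape)

for _ in range(5):
    p = smooth_once(p)
print(np.std(np.linalg.norm(p, axis=1)))  # the radius spread shrinks as noise is smoothed away
```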
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Garstka, Jens [Verfasser]. "Learning strategies to select point cloud descriptors for large-scale 3-D object classification / Jens Garstka." Hagen : Fernuniversität Hagen, 2016. http://d-nb.info/1082301027/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Baylis, Samuel Andrew. "Tunable patch antenna using semiconductor and nano-scale Barium Strontium Titanate varactors." [Tampa, Fla.] : University of South Florida, 2007. http://purl.fcla.edu/usf/dc/et/SFE0001970.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
37

ISHIHARA, T., M. KANEDA, K. YOKOKAWA, K. ITAKURA, and A. UNO. "Small-scale statistics in high-resolution direct numerical simulation of turbulence: Reynolds number dependence of one point." Taylor & Francis, 2007. http://hdl.handle.net/2237/11132.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Youness, Chebli. "L'e-réputation du point de vue client : modèle intégrateur et échelle de mesure." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAG016.

Повний текст джерела
Анотація:
Although online reputation has attracted particular attention among marketing practitioners, research in this area is still limited. In this doctoral work, the authors examine the antecedents and consequences of online reputation from the customer's point of view. A structural equation modeling approach is used to test the model based on data from a survey of 1097 French online buyers. The results show the impact of trust, heritage and website quality on online reputation, as well as how online reputation affects customer commitment, word of mouth, perceived risk and perceived value. Several managerial and theoretical implications are then discussed.
Although online reputation has attracted significant attention among marketing practitioners, research in this area is still limited. In this research dissertation, the authors examine the antecedents and consequences of online reputation from the customer’s perspective. A structural equation modeling approach is used to test the model based on data from a survey of 1097 French online buyers. The results show the impact of trust, heritage, and website quality on online reputation, as well as how online reputation affects customer commitment, word of mouth, perceived risk, and perceived value. Several implications either in terms of conceptual or managerial insights are then discussed
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Abayowa, Bernard Olushola. "Automatic Registration of Optical Aerial Imagery to a LiDAR Point Cloud for Generation of Large Scale City Models." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1372508452.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Wei, Junhui. "A hypothetical urban design approach for rethinking mega-scale podium redevelopment in Hong Kong North Point Harbour redevelopment /." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B42930893.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Österdahl, Mathias. "Water treatment at personal level : An examination of five products intended for a small scale, personal point-of-use." Thesis, Mittuniversitetet, Avdelningen för ekoteknik och hållbart byggande, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-30793.

Повний текст джерела
Анотація:
Water, and particularly clean water, is essential for humans, with a profound effect on health and the capacity to reduce illness. Paradoxically, water is also a medium through which disease-causing agents can be transmitted into the human body. Water can cause illness both by carrying pathogenic organisms into the human system and, if not consumed in the required amount, by leading to dehydration and other complications. Today, catastrophes and disasters hit different areas in various forms. When such an event occurs, infrastructure is often disturbed or destroyed, and the supply of fresh water may be threatened. The Swedish Civil Contingencies Agency (MSB) is a government agency in Sweden tasked with developing society's ability to prevent and handle emergencies, accidents and crises. The agency supports various actors when a crisis or an accident occurs, both abroad and at a national level. The personnel providing support in a crisis zone sometimes work under extreme conditions where basic needs, such as access to food and fresh water, can be in short supply. To ensure that the personnel working at these sites can continue to solve problems without risking their own health through dehydration or other waterborne diseases, different methods can be used to treat water for personal use. The five products intended for personal water treatment examined here are chlorine dioxide pills, chlorine dioxide liquid, the Katadyn filter bottle, the Lifesaver filter bottle and the UV-lamp SteriPEN. These products use different water treatment techniques to purify water and secure access to fresh water under exposed conditions. The aim of this study is to create an information basis for MSB to use when choosing a water treatment product for future international missions. This is done by examining four parameters of these products: purification capacity, manageability, environmental impact and economic aspects. The study showed that no product is consistently best across all parameters; they all have their pros and cons. The product that performed best on average throughout the study is the SteriPEN, but only if used during 10 or more missions. If a product is to be used for only five or fewer missions, the chlorine dioxide liquid is recommended. At sites where the raw water is heavily contaminated, a combination of two products could be an option; based on the results of this thesis, combining the chlorine dioxide liquid and the SteriPEN is recommended. This study is qualitative and the results are based on literature, laboratory reports and the author's own measurements and calculations. Actual field tests are needed to further evaluate the products. What matters is that the product works in practice under MSB's working conditions, so that relief workers actually apply the product to purify water and do not refrain from doing so because it is not compatible with the working situation. If a product is not used for these reasons, it should not be deployed, since that puts the relief workers' health at risk.
Vatten, och i synnerhet rent vatten är livsavgörande för människor och har en grundlig hälsoeffekt med en förmåga att reducera hälsoåkommor. Paradoxalt nog är vatten samtidigt ett transportmedium för ämnen som orsakar sjukdomar. Vatten kan orsaka sjukdom och illamående både från distributionen av patogena ämnen in i människokroppen men också om intaget av vatten inte är tillräckligt för kroppen, vilket kan leda till uttorkning med stora komplikationer. Idag drabbas vissa områden av katastrofer och olyckor i varierande form. När sådana kriser och katastrofer sker, blir ofta infrastruktur skadad eller förstörd vilket kan medföra att tillgången till rent vatten hotas. Myndigheten för Samhällsskydd och Beredskap (MSB) är en myndighet i Sverige, med uppgift att utveckla social kapacitet för att motverka och hantera nödsituationer, olyckor och kriser. Myndigheten stödjer olika aktörer när en kris eller olycka uppstår, både utomlands och på nationell nivå. Personalen som stödjer på plats i en kriszon arbetar ibland under extrema förhållanden där basala nödvändigheter, som t.ex. tillgången till mat och rent vatten kan vara en bristvara. För att säkerhetsställa att personalen som arbetar på dessa platser kan fortsätta att lösa sin uppgift utan att riskera sitt eget välbefinnande på grund av vattenbrist eller andra vattenrelaterade sjukdomar, kan olika metoder för vattenrening på personlig nivå användas. I den här studien valdes fem olika produkter avsedda för personlig vattenrening ut; klordioxid i tablettform, klordioxid i vätskeform, Katadynflaskan, Lifesaverflaskan och UV-lampan SteriPEN. Dessa produkter utnyttjar olika tekniker för att rena vatten och säkerhetsställa tillgången av rent vatten under utsatta situationer. Målet med den här studien är att skapa en informationsbas som underlag för MSB att använda sig av när de väljer vattenreningsmetod för kommande internationella insatser. Fem produkter utvärderats därför utifrån fyra parametrar; reningskapacitet, handhavande, miljöpåverkan och ekonomisk aspekt. Studien visade att det inte var någon enskild produkt som genomgående var bäst utifrån alla parametrar, de hade alla sina för och nackdelar. Produkten som överlag fick bäst resultat genom studien var SteriPEN men det utifrån att produkten används under tio insatser eller mer. Om en produkt endast ska användas under ett fåtal insatser är klordioxid i vätskeform att föredra. På platser där råvattnet är skarpt kontaminerat kan en kombination av två olika produkter vara aktuell, rekommenderat är att kombinera klordioxid i vätskeform med SteriPEN, draget som slutsats av resultatet av denna studie. Det här är en kvalitativ studie och resultatet grundar sig på litteratur, analysresultat från laboratorietester samt egna mätningar och beräkningar. Faktiska tester i fält är nödvändiga för att vidare utvärdera produkterna. Det viktiga är att produkten faktiskt fungerar praktiskt baserat på förhållandena MSB arbetar under så att hjälparbetare verkligen använder produkten för att rena kontaminerat vatten och inte avstår att använda produkten på grund av att den inte är kompatibel med arbetsförhållandena. Om produkten inte används på grund av den anledningen ska den inte användas i fält då den utsätter hjälparbetarnas hälsa för risk.

2016-12-01

Стилі APA, Harvard, Vancouver, ISO та ін.
42

Wang, Qihe. "Scheduling and Simulation of Large Scale Wireless Personal Area Networks." University of Cincinnati / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1148050113.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
43

GANESAN, GAUTHAM. "Accessibility Studies of Potentially Hazardous Asteroids from the Sun-Earth L2 Libration Point." Thesis, Luleå tekniska universitet, Rymdteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-81630.

Повний текст джерела
Анотація:
A newly proposed F-class mission by the European Space Agency (ESA) in 2019, Comet Interceptor, aims to dynamically intercept a New Solar System Object such as a Dynamically New Comet (DNC). The spacecraft will be placed in a periodic (Halo) orbit around the Sun-Earth L2 Lagrangian point, waiting for further instructions about the passage of a comet or an asteroid, which could well be reached within the stipulated mission constraints. A major part of the detection of these bodies will be owed to the Large Synoptic Survey Telescope (currently under construction in Chile), which hopes to vastly increase the ability to discover a possible target using the catalogue of Long Period Comets and a set of its orbits. It is suggested that, in a mission length of <5 years, discoveries and warnings are possible so that optimization of the trajectory and characterisation of the object are done within the set windows. This thesis is aimed at facilitating a transfer to a Potentially Hazardous Asteroid (PHA), a subset of the Near-Earth Objects (NEO), as a secondary choice on the off-chance that the discovered comet could not be reached from the L2 Libration point within the mission constraints. The first section of this thesis deals with the selection of a Potentially Hazardous Asteroid for our mission from the larger database of the Near-Earth Objects, based on a measure of impact hazard called the Palermo Scale, while the second section of the thesis aims to obtain a suitable Halo orbit around L2 through an analytical construction method. After a desired orbit is found, the invariant manifolds around the Halo orbit are constructed and analysed in an attempt to reduce the ΔV, wherefrom the spacecraft can intercept the Potentially Hazardous Asteroid through the trajectory demanding the least energy.
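For orientation, the location of the Sun-Earth L2 point used above can be found by root-finding the collinear equilibrium condition of the circular restricted three-body problem in normalized rotating-frame units. An illustrative sketch, where the mass parameter is an assumed approximate value:

```python
from scipy.optimize import brentq

mu = 3.0035e-6   # assumed approximate Sun-Earth mass parameter m2/(m1+m2)

def collinear_eq(x, mu=mu):
    """dU/dx = 0 along the x-axis of the rotating frame (primaries at -mu and 1-mu)."""
    r1, r2 = x + mu, x - (1 - mu)
    return x - (1 - mu) * r1 / abs(r1) ** 3 - mu * r2 / abs(r2) ** 3

# L2 lies just beyond the smaller primary (x > 1 - mu); bracket the root and solve
x_l2 = brentq(collinear_eq, 1.0 + 1e-6, 1.1)
print("L2 at x =", x_l2, "(about", (x_l2 - 1) * 1.496e8, "km beyond Earth)")
```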
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Mallon, Kelsey N. "Altering the Gag Reflex via a Hand Pressure Device: Perceptions of Pressure." Miami University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=miami1398622026.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Izadi, Saeed. "Optimal Point Charge Approximation: from 3-Atom Water Molecule to Million-Atom Chromatin Fiber." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/81539.

Повний текст джерела
Анотація:
Atomistic modeling and simulation methods enable a modern molecular approach to bio-medical research. Issues addressed range from structure-function relationships to structure-based drug design. The ability of these methods to address biologically relevant problems is largely determined by their accurate treatment of electrostatic interactions in the target biomolecular structure. In practical molecular simulations, the electrostatic charge density of molecules is approximated by an arrangement of fractional "point charges" throughout the molecule. While chemically intuitive and straightforward in technical implementation, models based exclusively on atom-centered charge placement, a major workhorse of the biomolecular simulations, do not necessarily provide a sufficiently detailed description of the molecular electrostatic potentials for small systems, and can become prohibitively expensive for large systems with thousands to millions of atoms. In this work, we propose a rigorous and generally applicable approach, Optimal Point Charge Approximation (OPCA), for approximating electrostatic charge distributions of biomolecules with a small number of point charges to best represent the underlying electrostatic potential, regardless of the distance to the charge distribution. OPCA places a given number of point charges so that the lowest order multipole moments of the reference charge distribution are optimally reproduced. We provide a general framework for calculating OPCAs to any order, and introduce closed-form analytical expressions for the 1-charge, 2-charge and 3-charge OPCA. We demonstrate the advantage of OPCA by applying it to a wide range of biomolecules of varied sizes. We use the concept of OPCA to develop a different, novel approach of constructing accurate and simple point charge water models. The proposed approach permits a virtually exhaustive search for optimal model parameters in the sub-space most relevant to electrostatic properties of the water molecule in liquid phase. A novel rigid 4-point Optimal Point Charge (OPC) water model constructed based on the new approach is substantially more accurate than commonly used models in terms of bulk water properties, and delivers critical accuracy improvement in practical atomistic simulations, such as RNA simulations, protein folding, protein-ligand binding and small molecule hydration. We also apply our new approach to construct a 3-point version of the Optimal Point Charge water model, referred to as OPC3. OPCA can be employed to represent large charge distributions with only a few point charges. We use this capability of OPCA to develop a multi-scale, yet fully atomistic, generalized Born approach (GB-HCPO) that can deliver up to 2 orders of magnitude speedup compared to the reference MD simulation. As a practical demonstration, we exploit the new multi-scale approach to gain insight into the structure of million-atom 30-nm chromatin fiber. Our results suggest important structural details consistent with experiment: the linker DNA fills the core region and the H3 histone tails interact with the linker DNA. OPC, OPC3 and GB-HCPO are implemented in AMBER molecular dynamics software package.
Ph. D.
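The idea of reproducing the lowest-order multipole moments with very few charges can be illustrated with a toy calculation (not the OPCA code itself): for a distribution with non-zero net charge, a single point charge placed at the "center of charge" reproduces both the monopole and the dipole moment. All charges and positions below are invented.

```python
import numpy as np

# toy charge distribution: fractional charges (e) at given positions (Angstrom)
q = np.array([0.4, 0.4, -0.3, 0.1])
r = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.5, 0.5, 0.5]])

Q = q.sum()                       # monopole (net charge)
p = (q[:, None] * r).sum(axis=0)  # dipole moment about the origin

# one-charge approximation: charge Q placed at p/Q matches monopole and dipole exactly
r1 = p / Q
print("net charge:", Q, "placed at", r1)
print("dipole of approximation:", Q * r1, "vs original:", p)
```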
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Nicholson, John Corbett. "Design of a large-scale constrained optimization algorithm and its application to digital human simulation." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5583.

Повний текст джерела
Анотація:
A new optimization algorithm, which can efficiently solve large-scale constrained non-linear optimization problems and leverage parallel computing, is designed and studied. The new algorithm, referred to herein as LASO or LArge Scale Optimizer, combines the best features of various algorithms to create a computationally efficient algorithm with strong convergence properties. Numerous algorithms were implemented and tested in its creation. Bound-constrained, step-size, and constrained algorithms have been designed that push the state-of-the-art. Along the way, five novel discoveries have been made: (1) a more efficient and robust method for obtaining second order Lagrange multiplier updates in Augmented Lagrangian algorithms, (2) a method for directly identifying the active constraint set at each iteration, (3) a simplified formulation of the penalty parameter sub-problem, (4) an efficient backtracking line-search procedure, (5) a novel hybrid line-search trust-region step-size calculation method. The broader impact of these contributions is that, for the first time, an Augmented Lagrangian algorithm is made to be competitive with state-of-the-art Sequential Quadratic Programming and Interior Point algorithms. The present work concludes by showing the applicability of the LASO algorithm to simulate one step of digital human walking and to accelerate the optimization process using parallel computing.
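A minimal sketch of the classic first-order Augmented Lagrangian loop on which methods of this kind build (equality constraints only, with SciPy's BFGS as the inner solver); the dissertation's second-order multiplier update and active-set identification are not reproduced here, and the toy problem is invented.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, c, x0, rho=10.0, iters=10):
    """Minimize f(x) subject to c(x) = 0 with a basic augmented Lagrangian method."""
    x = np.asarray(x0, float)
    lam = np.zeros_like(c(x))
    for _ in range(iters):
        def L_a(x, lam=lam, rho=rho):
            cx = c(x)
            return f(x) + lam @ cx + 0.5 * rho * (cx @ cx)
        x = minimize(L_a, x, method="BFGS").x   # inner unconstrained solve
        lam = lam + rho * c(x)                  # first-order multiplier update
    return x, lam

# toy problem: minimize x0^2 + x1^2 subject to x0 + x1 - 1 = 0 (solution: [0.5, 0.5], lambda = -1)
f = lambda x: x[0] ** 2 + x[1] ** 2
c = lambda x: np.array([x[0] + x[1] - 1.0])
x, lam = augmented_lagrangian(f, c, x0=[0.0, 0.0])
print(np.round(x, 3), np.round(lam, 3))
```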
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Elsing, Sarah. "Border regulars : an ethnographic enquiry into the becomings of the Thai-Lao border from the vantage point of small-scale trade." Thesis, SOAS, University of London, 2016. http://eprints.soas.ac.uk/23641/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
48

Fryer, Rosemarie. "Quantification of the Bed-Scale Architecture of Submarine Depositional Environments and Application to Lobe Deposits of the Point Loma Formation, California." Thesis, Colorado School of Mines, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10844938.

Повний текст джерела
Анотація:

Submarine-fan deposits form the largest sediment accumulations on Earth and host significant reservoirs for hydrocarbons. While many studies of ancient fan deposits qualitatively describe lateral architectural variability (e.g., axis-to-fringe, proximal-to-distal), these relationships are rarely quantified. In order to enable comparison of key relationships that control the lateral architecture of submarine depositional environments, I digitized published bed-scale outcrop correlation panels from five different environments (channel, levee, lobe, channel-lobe-transition-zone, basin plain). Measured architectural parameters (bed thickness, bed thinning rates, lateral correlation distance, net-to-gross) provide a quantitative framework to compare facies architecture between environments. The results show that sandstone and/or mudstone bed thickness alone or net-to-gross do not reliably differentiate between environments. However, environments are distinguishable using a combination of thinning rate, bed thickness, and correlation distance. For example, channel deposits generally display thicker sandstone beds than mudstone beds whereas levees display the opposite trend. Lobe deposits display the most variability in all parameters, and thus would be the most difficult to identify in the subsurface. I sub-classified lobe deposits to provide a more detailed analysis into unconfined, semiconfined and confined settings. However, the results for semiconfined lobes indicate that the degree of lobe confinement and subenvironment is not easily interpretable at the outcrop scale. This uncertainty could be partially caused by subjectivity of qualitative interpretations of environment, which demonstrates the need for more quantitative studies of bed-scale heterogeneity. These results can be used to constrain forward stratigraphic models and reservoir models of submarine lobe deposits as well as other submarine depositional environments.

This work is paired with a case study to refine the depositional environment of submarine lobe strata of the Upper Cretaceous Point Loma Formation at Cabrillo National Monument near San Diego, California. These fine-grained turbidites have been interpreted as distal submarine lobe deposits. The strike-oriented, laterally-extensive exposure offers a rare opportunity to observe bed-scale architecture and facies changes in turbidites over 1 km lateral distance. Beds show subtle compensation, likely related to evolving seafloor topography, while lobe elements show drastic compensation. This indicates more hierarchical method of compensational stacking as the degree of bed compensation is small compared to the degree of element compensation. Thinning rates and bed thicknesses are not statistically different between lobe elements. This signifies that the lateral exposure is necessary to distinguish lobe elements and it would be extremely difficult to accurately interpret elements in the subsurface using 1D data (e.g., core). The grain size, mudstone to sandstone bed thicknesses, element/bed compensation, and lack of erosion observed in the Cabrillo National Monument exposures of the Point Loma Formation are most similar to values of semiconfined lobe deposits; hence, I reinterpret that these exposures occupy a more medial position, perhaps with some degree of confinement.
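A thinning rate of the kind quantified above can be estimated as the fitted slope of bed thickness against lateral distance along a correlation panel. A hypothetical example with invented measurements, not the Point Loma data:

```python
import numpy as np

# hypothetical sandstone bed thickness (cm) measured at these lateral positions (m) along a panel
distance = np.array([0, 100, 250, 400, 600, 800])
thickness = np.array([62.0, 55.0, 48.0, 39.0, 30.0, 21.0])

slope, intercept = np.polyfit(distance, thickness, 1)
print(f"thinning rate: {abs(slope) * 100:.1f} cm per 100 m of lateral distance")
```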

Стилі APA, Harvard, Vancouver, ISO та ін.
49

Waters, Rafael. "Energy from Ocean Waves : Full Scale Experimental Verification of a Wave Energy Converter." Doctoral thesis, Uppsala universitet, Elektricitetslära, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9404.

Повний текст джерела
Анотація:
A wave energy converter has been constructed and its function and operational characteristics have been thoroughly investigated and published. The wave energy converter was installed in March of 2006 approximately two kilometers off the Swedish west coast in the proximity of the town Lysekil. Since then the converter has been submerged at the research site for over two and a half years and in operation during three time periods for a total of 12 months, the latest being during five months of 2008. Throughout this time the generated electricity has been transmitted to shore and operational data has been recorded. The wave energy converter and its connected electrical system have been continually upgraded, and each of the three operational periods has investigated a more advanced stage in the progression toward grid connection. The wave energy system has faced the challenges of the ocean and initial results and insights have been reached, the most important being that the overall wave energy concept has been verified. Experiments have shown that slowly varying power generation from ocean waves is possible. Apart from the wave energy converter, three shorter studies have been performed. A sensor was designed for measuring the air gap width of the linear generator used in the wave energy converter. The sensor consists of an etched coil, a search coil, that functions passively through induction. Theory and experiment showed good agreement. The Swedish west coast wave climate has been studied in detail. The study used eight years of wave data from 13 sites in the Skagerrak and Kattegatt, and data from a wave measurement buoy located at the wave energy research site. The study resulted in scatter diagrams, hundred-year extreme wave estimations, and a mapping of the energy flux in the area. The average energy flux was found to be approximately 5.2 kW/m in the offshore Skagerrak, 2.8 kW/m in the near shore Skagerrak, and 2.4 kW/m in the Kattegat. A method for evaluating renewable energy technologies in terms of economy and engineering solutions has been investigated. The match between the technologies and the fundamental physics of renewable energy sources can be given in terms of the technology's utilization. It is argued that engineers should strive for a high utilization if competitive technologies are to be developed.
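The energy flux figures quoted above follow from the standard deep-water estimate J ≈ ρ g² Hs² Te / (64π) per metre of wave crest. A quick illustrative computation with an assumed sea state, not measured values from the study:

```python
import math

def wave_energy_flux(hs, te, rho=1025.0, g=9.81):
    """Deep-water wave energy flux in kW per metre of crest length.
    hs: significant wave height (m), te: energy period (s)."""
    return rho * g ** 2 * hs ** 2 * te / (64 * math.pi) / 1000.0

# assumed, illustrative Skagerrak-like sea state
print(round(wave_energy_flux(hs=1.5, te=5.5), 1), "kW/m")
```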
Стилі APA, Harvard, Vancouver, ISO та ін.
50

Lu, Zhaosong. "Algorithm Design and Analysis for Large-Scale Semidefinite Programming and Nonlinear Programming." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7151.

Повний текст джерела
Анотація:
The limiting behavior of weighted paths associated with the semidefinite program (SDP) map $X^{1/2}SX^{1/2}$ was studied and some applications to error bound analysis and superlinear convergence of a class of primal-dual interior-point methods were provided. A new approach for solving large-scale well-structured sparse SDPs via a saddle point mirror-prox algorithm with $\mathcal{O}(\epsilon^{-1})$ efficiency was developed based on exploiting sparsity structure and reformulating SDPs into smooth convex-concave saddle point problems. An iterative solver-based long-step primal-dual infeasible path-following algorithm for convex quadratic programming (CQP) was developed. The search directions of this algorithm were computed by means of a preconditioned iterative linear solver. A uniform bound, depending only on the CQP data, on the number of iterations performed by a preconditioned iterative linear solver was established. A polynomial bound on the number of iterations of this algorithm was also obtained. One efficient 'nearly exact' type of method for solving large-scale 'low-rank' trust region subproblems was proposed by completely avoiding the computations of Cholesky or partial Cholesky factorizations. A computational study of this method was also provided by applying it to solve some large-scale nonlinear programming problems.
Стилі APA, Harvard, Vancouver, ISO та ін.