Dissertations / Theses on the topic 'Scaling'

Consult the top 50 dissertations / theses for your research on the topic 'Scaling.'

1

Sendrowski, Janek. "Feigenbaum Scaling." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-96635.

Abstract:
In this thesis I hope to provide a clear and concise introduction to Feigenbaum scaling that is accessible to undergraduate students. This is accompanied by a description of how to obtain numerical results by various means. A more intricate approach drawing on renormalization theory, as well as a short consideration of some of the topological properties, is also presented. Furthermore, I place great emphasis on diagrams throughout the text to make the contents more comprehensible and intuitive.
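A quick numerical illustration of the scaling in question (not taken from the thesis): assuming the standard logistic map x_{n+1} = r·x_n·(1 − x_n) and the commonly tabulated period-doubling bifurcation parameters r_n, the ratios of successive bifurcation intervals approach the Feigenbaum constant δ ≈ 4.669.

```python
# Estimate the Feigenbaum constant delta from the logistic map's
# period-doubling bifurcation parameters r_n (standard published values,
# not computed here; any textbook table of the first few r_n will do).
r = [3.0, 3.449490, 3.544090, 3.564407, 3.568759, 3.569692]

# delta_n = (r_n - r_{n-1}) / (r_{n+1} - r_n) approaches ~4.669
for n in range(1, len(r) - 1):
    delta_n = (r[n] - r[n - 1]) / (r[n + 1] - r[n])
    print(f"delta_{n} ~ {delta_n:.4f}")
```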
2

Bertrand, Allison R., Todd A. Newton, and Thomas B. Grace. "iNET System Management Scaling." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/604307.

Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
The integration of standard networking technologies into the test range allows for more capable and complex systems. As System Management provides the capability for dynamic allocation of resources, it is critical to support the level of network flexibility envisioned by the integrated Network-Enhanced Telemetry (iNET) project. This paper investigates the practical performance of managing the Telemetry Network System (TmNS) using the Simple Network Management Protocol (SNMP). It discusses the impacts and benefits of System Management as the size of the TmNS scales from small to large and as distributed and centralized management styles are applied. To support dynamic network states, it is necessary to be able to both collect the current status of the network and command (or modify the configuration of) the network. The management data needs to travel both ways over the telemetry link (in limited bandwidth) without interfering with critical data streams. It is important that the TmNS's status is collected in a timely manner so that the engineers are aware of any equipment failures or other problems; it is also imperative that System Management does not adversely affect the real-time delivery of data. This paper discusses measurements of SNMP traffic under various loading conditions. Statistics considered will include the performance of SNMP commands, queries, and events under various test article and telemetry network loads and the bandwidth consumed by SNMP commands, queries, and events under various conditions (e.g., pre-configuration, normal operation, and device error).
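As a rough, back-of-the-envelope illustration of the bandwidth question raised in the abstract (all device counts, message sizes, and polling intervals below are invented for the example, not measurements from the paper):

```python
# Back-of-the-envelope estimate of SNMP status-polling overhead on a
# bandwidth-limited telemetry link. All numbers are illustrative only.
devices           = 40         # managed TmNS devices (assumed)
oids_per_poll     = 25         # OIDs collected from each device per poll (assumed)
bytes_per_varbind = 60         # rough request+response size per OID, incl. headers (assumed)
poll_interval_s   = 5.0        # polling period (assumed)
link_budget_bps   = 1_000_000  # share of the telemetry link available to management (assumed)

bits_per_poll_cycle = devices * oids_per_poll * bytes_per_varbind * 8
mgmt_load_bps = bits_per_poll_cycle / poll_interval_s

print(f"management traffic ~ {mgmt_load_bps / 1e3:.1f} kbit/s "
      f"({100 * mgmt_load_bps / link_budget_bps:.1f}% of the assumed budget)")
```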
3

Kulakov, Y., and R. Rader. "Computing Resources Scaling Survey." Thesis, Sumy State University, 2017. http://essuir.sumdu.edu.ua/handle/123456789/55750.

Abstract:
The results of a survey of IT companies about their usage of scalable environments, peak-workload management, and automatic scaling configuration are presented and discussed in this paper. The hypothesis that most companies use automatic scaling based on static thresholds is tested. Insight into the most popular setups of manually and automatically scaled systems on the market is given.
4

Ricciardi, Anthony Pasquale. "Geometrically Nonlinear Aeroelastic Scaling." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/24913.

Abstract:
Aeroelastic scaling methodologies are developed for geometrically nonlinear applications. The new methods are demonstrated by designing an aeroelastically scaled model of a suitably nonlinear full-scale joined-wing aircraft. The best of the methods produce scaled models that closely replicate the target aeroelastic behavior. Internal loads sensitivity studies show that internal loads can be insensitive to axial stiffness, even for globally indeterminate structures. A derived transverse to axial stiffness ratio can be used as an indicator of axial stiffness importance. Two findings of the work extend to geometrically linear applications: new sources of local optima are identified, and modal mass is identified as a scaling parameter. Optimization procedures for addressing the multiple optima and modal mass matching are developed and demonstrated. Where justified, limitations of commercial software are avoided through development of custom tools for structural analysis and sensitivities, aerodynamic analysis, and nonlinear aeroelastic trim.
Ph. D.
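For orientation, the sketch below lists the classical, geometrically linear aeroelastic scale factors implied by chosen length, velocity, and density ratios; it is a textbook baseline only, not the geometrically nonlinear methodology developed in this dissertation, and the ratios are arbitrary examples.

```python
# Classical aeroelastic scale factors from length, velocity and density ratios
# (model / full scale). Textbook relations for a linear, dynamically scaled model.
L = 0.20    # length scale (assumed 1/5-scale model)
V = 0.50    # velocity scale (assumed)
rho = 1.0   # air-density scale (assumed: same test medium)

factors = {
    "time":                   L / V,
    "frequency":              V / L,
    "mass":                   rho * L**3,
    "mass moment of inertia": rho * L**5,
    "dynamic pressure":       rho * V**2,
    "force":                  rho * V**2 * L**2,
    "bending stiffness EI":   rho * V**2 * L**4,
}
for name, k in factors.items():
    print(f"{name:>24s}: {k:.4g}")
```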
5

Govindasamy, Saravana P. "Scaling Innovations in Healthcare." Diss., Temple University Libraries, 2019. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/543975.

Abstract:
Business Administration/Management Information Systems
D.B.A.
This research examines the adoption of technological innovation, specifically Artificial Intelligence (AI) implementations in hospitals, by exploring the capabilities that enable AI innovations using the dynamic capabilities (sensing, seizing and reconfiguring) framework, and clinicians' intentions to use AI innovations for patient care by applying the technology adoption/acceptance framework Unified Theory of Acceptance and Use of Technology (UTAUT), utilizing qualitative case study analysis and quantitative survey methodology respectively. This multi-disciplinary research has considerable relevance to both healthcare business leaders and clinical practitioners, as it identifies the key factors that drive decisions to adopt innovations that improve healthcare organizations' competitiveness, enhance patient care, and reduce overall healthcare costs. The main findings are: (1) On an organizational level, healthcare organizations with strong and versatile dynamic capabilities, which build on their existing knowledge and capabilities, are better able to integrate innovations into their internal operations and existing services; the identified barriers provide a clear sense of organizational barriers and resistance points for innovation adoption. (2) On an individual level, the impact on quality of care and organizational leadership support are the key factors that facilitate the adoption of innovation among clinicians. (3) Current trends and key impact areas of AI technology in the healthcare industry are identified. Keywords: Innovation, Innovation Adoption, Dynamic Capabilities, Healthcare, Artificial Intelligence, AI, Technology, Strategic Management.
Temple University--Theses
6

Jeffsell, Björn. "Game Balance by Scaling Damage : Scaling Game Difficulty by Changing Players Damage Output." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5022.

Abstract:
There are many different kinds of games, which creates many different ways of balancing a game's difficulty. This study examines whether the player's damage output is a good variable to scale in order to achieve better balance and make the game feel more rewarding overall, based on the premise that a game is enjoyable if the player finds it rewarding to play. Both inexperienced and avid players tested part of a game with different settings for damage output, to see whether players find the game more rewarding when the difficulty is set higher (lower damage output). The conclusion is that damage output cannot directly affect how rewarding a player finds the game overall, but it can affect other variables that in turn make the game feel more rewarding.
7

Läuter, Henning, and Ayad Ramadan. "Statistical Scaling of Categorical Data." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2011/4956/.

Abstract:
Estimation and testing of distributions in metric spaces are well known. R.A. Fisher, J. Neyman, W. Cochran and M. Bartlett achieved essential results on the statistical analysis of categorical data. In the last 40 years many other statisticians have found important results in this field. Data sets often contain categorical data, e.g. levels of factors or names, for which no ordering and no distance between the categories exist. At each level, some metric or categorical values are measured. We introduce a new method of scaling based on statistical decisions. For this we define empirical probabilities for the original observations and find a class of distributions in a metric space where these empirical probabilities can be found as approximations for equivalently defined probabilities. With this method we identify probabilities connected with the categorical data and probabilities in metric spaces. Here we get a mapping from the levels of factors or names into points of a metric space. This mapping yields the scale for the categorical data. From the statistical point of view we use multivariate statistical methods, we calculate maximum likelihood estimates and compare different approaches for scaling.
8

Tsang, Wai-Hung. "Scaling up support vector machines /." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20TSANG.

9

Urseanu, Maria Ioana. "Scaling up bubble column reactors." [S.l. : Amsterdam : s.n.] ; Universiteit van Amsterdam [Host], 2000. http://dare.uva.nl/document/83970.

10

Sachs, Michael Karl. "Earthquake Scaling, Simulation and Forecasting." Thesis, University of California, Davis, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3646390.

Abstract:

Earthquakes are among the most devastating natural events faced by society. In 2011, just two events, the magnitude 6.3 earthquake in Christchurch, New Zealand on February 22, and the magnitude 9.0 Tōhoku earthquake off the coast of Japan on March 11, caused a combined total of $226 billion in economic losses. Over the last decade, 791,721 deaths were caused by earthquakes. Yet, despite their impact, our ability to accurately predict when earthquakes will occur is limited. This is due, in large part, to the fact that the fault systems that produce earthquakes are non-linear: very small differences in the systems now result in very big differences in the future, making forecasting difficult. In spite of this, there are patterns that exist in earthquake data. These patterns are often in the form of frequency-magnitude scaling relations that relate the number of smaller events observed to the number of larger events observed. In many cases these scaling relations show consistent behavior over a wide range of scales. This consistency forms the basis of most forecasting techniques. However, the utility of these scaling relations is limited by the size of the earthquake catalogs which, especially in the case of large events, are fairly small and limited to a few hundred years of events.

In this dissertation I discuss three areas of earthquake science. The first is an overview of scaling behavior in a variety of complex systems, both models and natural systems. The focus of this area is to understand how this scaling behavior breaks down. The second is a description of the development and testing of an earthquake simulator called Virtual California designed to extend the observed catalog of earthquakes in California. This simulator uses novel techniques borrowed from statistical physics to enable the modeling of large fault systems over long periods of time. The third is an evaluation of existing earthquake forecasts, which focuses on the Regional Earthquake Likelihood Models (RELM) test: the first competitive test of earthquake forecasts in California.
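The frequency-magnitude scaling relations mentioned above are usually summarised by the Gutenberg-Richter law, log10 N(≥M) = a − b·M. A minimal sketch of estimating the b-value with Aki's maximum-likelihood formula is shown below; the magnitudes are synthetic, not from the catalogs used in the dissertation.

```python
import numpy as np

# Gutenberg-Richter scaling: log10 N(>=M) = a - b*M.  For magnitudes above a
# completeness threshold Mc, Aki's maximum-likelihood estimator of b is
#   b = log10(e) / (mean(M) - Mc)
# (with binned magnitudes, Mc is usually replaced by Mc - dM/2).
rng = np.random.default_rng(0)
b_true, Mc = 1.0, 2.5
# Synthetic catalog, illustrative only: G-R implies exponentially distributed
# magnitudes above Mc with rate b * ln(10).
mags = Mc + rng.exponential(scale=1.0 / (b_true * np.log(10)), size=5000)

b_hat = np.log10(np.e) / (mags.mean() - Mc)
print(f"estimated b-value: {b_hat:.2f} (true value {b_true})")
```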

11

Rusch, Thomas, Patrick Mair, and Kurt Hornik. "COPS Cluster Optimized Proximity Scaling." WU Vienna University of Economics and Business, 2015. http://epub.wu.ac.at/4465/1/COPS.pdf.

Abstract:
Proximity scaling (i.e., multidimensional scaling and related methods) is a versatile statistical method whose general idea is to reduce the multivariate complexity in a data set by employing suitable proximities between the data points and finding low-dimensional configurations where the fitted distances optimally approximate these proximities. The ultimate goal, however, is often not only to find the optimal configuration but to infer statements about the similarity of objects in the high-dimensional space based on the similarity in the configuration. Since these two goals are somewhat at odds, it can happen that the resulting optimal configuration makes inferring similarities rather difficult. In that case the solution lacks "clusteredness" in the configuration (which we call "c-clusteredness"). We present a version of proximity scaling, coined cluster optimized proximity scaling (COPS), which solves the conundrum by introducing a more clustered appearance into the configuration while adhering to the general idea of multidimensional scaling. In COPS, an arbitrary MDS loss function is parametrized by monotonic transformations and combined with an index that quantifies the c-clusteredness of the solution. This index, the OPTICS cordillera, has intuitively appealing properties with respect to measuring c-clusteredness. This combination of MDS loss and index is called "cluster optimized loss" (coploss) and is minimized to push any configuration towards a more clustered appearance. The effect of the method will be illustrated with various examples: assessing similarities of countries based on the history of banking crises in the last 200 years, scaling Californian counties with respect to the projected effects of climate change and their social vulnerability, and preprocessing a data set of hand-written digits for subsequent classification by nonlinear dimension reduction. (authors' abstract)
Series: Discussion Paper Series / Center for Empirical Research Methods
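The trade-off described in the abstract, MDS loss versus a clusteredness index, can be imitated with off-the-shelf tools. The sketch below uses scikit-learn's MDS and substitutes a silhouette score for the OPTICS cordillera, so it is only a rough stand-in for coploss, not the authors' method; the trade-off weight is arbitrary.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.manifold import MDS
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances, silhouette_score

# Toy data with some cluster structure (illustrative only).
X, _ = make_blobs(n_samples=120, centers=4, cluster_std=2.0, random_state=0)
D = pairwise_distances(X)

# Ordinary metric MDS on the precomputed dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
conf = mds.fit_transform(D)

# Crude "cluster optimized" score: MDS stress plus a penalty for low
# clusteredness, here measured by a silhouette score instead of the
# OPTICS cordillera used in COPS.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(conf)
c_index = silhouette_score(conf, labels)   # in [-1, 1], higher = more clustered
weight = 0.5 * mds.stress_                 # arbitrary trade-off weight
coploss_like = mds.stress_ + weight * (1.0 - c_index)
print(f"stress={mds.stress_:.1f}, clusteredness~{c_index:.2f}, combined={coploss_like:.1f}")
```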
12

Dewar, R. C. "Configurational studies of scaling phenomena." Thesis, University of Edinburgh, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.373376.

13

Bell, Paul W. "Statistical inference for multidimensional scaling." Thesis, University of Newcastle Upon Tyne, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.327197.

14

Nevell, Roger Thomas. "Scaling the thermal stability test." Thesis, University of Portsmouth, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310467.

15

Hammond, Simon P. "Adaptive scaling of evolvable systems." Thesis, University of Birmingham, 2007. http://etheses.bham.ac.uk//id/eprint/121/.

Abstract:
Neo-Darwinian evolution is an established natural inspiration for computational optimisation with a diverse range of forms. A particular feature of models such as Genetic Algorithms (GA) [18, 12] is the incremental combination of partial solutions distributed within a population of solutions. This mechanism in principle allows certain problems to be solved which would not be amenable to a simple local search. Such problems require these partial solutions, generally known as building-blocks, to be handled without disruption. The traditional means for this is a combination of a suitable chromosome ordering with a sympathetic recombination operator. More advanced algorithms attempt to adapt to accommodate these dependencies during the search. The recent approach of Estimation of Distribution Algorithms (EDA) aims to directly infer a probabilistic model of a promising population distribution from a sample of fitter solutions [23]. This model is then sampled to generate a new solution set. A symbiotic view of evolution is behind the recent development of the Compositional Search Evolutionary Algorithms (CSEA) [49, 19, 8] which build up an incremental model of variable dependencies conditional on a series of tests. Building-blocks are retained as explicit genetic structures and conditionally joined to form higher-order structures. These have been shown to be effective on special classes of hierarchical problems but are unproven on less tightly-structured problems. We propose that there exists a simple yet powerful combination of the above approaches: the persistent, adapting dependency model of a compositional pool with the expressive and compact variable weighting of probabilistic models. We review and deconstruct some of the key methods above for the purpose of determining their individual drawbacks and their common principles. By this reasoned approach we aim to arrive at a unifying framework that can adaptively scale to span a range of problem structure classes. This is implemented in a novel algorithm called the Transitional Evolutionary Algorithm (TEA). This is empirically validated in an incremental manner, verifying the various facets of the TEA and comparing it with related algorithms for an increasingly structured series of benchmark problems. This prompts some refinements to result in a simple and general algorithm that is nevertheless competitive with state-of-the-art methods.
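As a point of reference for the EDA idea summarised in the abstract (fit a probabilistic model to the fitter solutions, then sample new candidates from it), here is a minimal univariate EDA on the OneMax toy problem; it is a generic illustration, not the TEA algorithm developed in the thesis.

```python
import numpy as np

# Minimal univariate EDA (UMDA-style) on OneMax: fitness = number of ones.
# Each generation: select the fitter half, estimate per-bit probabilities,
# then sample a fresh population from that product distribution.
rng = np.random.default_rng(1)
n_bits, pop_size, n_gen = 60, 100, 40

p = np.full(n_bits, 0.5)                                # per-bit marginal model
for gen in range(n_gen):
    pop = (rng.random((pop_size, n_bits)) < p).astype(int)
    fitness = pop.sum(axis=1)
    elite = pop[np.argsort(fitness)[-pop_size // 2:]]   # fitter half
    p = elite.mean(axis=0).clip(0.05, 0.95)             # re-estimate, keep diversity
    if fitness.max() == n_bits:
        break
print(f"best fitness {fitness.max()} / {n_bits} after {gen + 1} generations")
```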
16

Smith, Micah J. (Micah Jacob). "Scaling collaborative open data science." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/117819.

Abstract:
Thesis: S.M. in Computer Science, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 103-107).
Large-scale, collaborative, open data science projects have the potential to address important societal problems using the tools of predictive machine learning. However, no suitable framework exists to develop such projects collaboratively and openly, at scale. In this thesis, I discuss the deficiencies of current approaches and then develop new approaches for this problem through systems, algorithms, and interfaces. A central theme is the restructuring of data science projects into scalable, fundamental units of contribution. I focus on feature engineering, structuring contributions as the creation of independent units of feature function source code. This then facilitates the integration of many submissions by diverse collaborators into a single, unified, machine learning model, where contributions can be rigorously validated and verified to ensure reproducibility and trustworthiness. I validate this concept by designing and implementing a cloud-based collaborative feature engineering platform, FeatureHub, as well as an associated discussion platform for real-time collaboration. The platform is validated through an extensive user study and modeling performance is benchmarked against data science competition results. In the process, I also collect and analyze a novel data set on the feature engineering source code submitted by crowd data science workers of varying backgrounds around the world. Within this context, I discuss paths forward for collaborative data science.
by Micah J. Smith.
S.M. in Computer Science
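The "fundamental unit of contribution" described in the abstract can be pictured as a plain function contract. The sketch below is a hypothetical interface with invented column and function names, not FeatureHub's actual API: each contributor submits a function mapping the raw data frame to one feature column, the platform validates it, and accepted features are stacked into a single model matrix.

```python
import pandas as pd
import numpy as np

# Hypothetical contribution contract: a feature is a function df -> Series
# aligned with df.index. All names below are invented for illustration.
def income_per_dependent(df: pd.DataFrame) -> pd.Series:
    return df["income"] / (df["n_dependents"] + 1)

def log_income(df: pd.DataFrame) -> pd.Series:
    return np.log1p(df["income"])

def validate(feature_fn, df: pd.DataFrame) -> pd.Series:
    """Basic checks a platform might run before accepting a submission."""
    col = feature_fn(df)
    assert isinstance(col, pd.Series) and len(col) == len(df), "wrong shape"
    assert col.notna().all(), "feature produced missing values"
    assert np.array_equal(col.values, feature_fn(df).values), "not deterministic"
    return col

raw = pd.DataFrame({"income": [30_000, 52_000, 71_000], "n_dependents": [0, 2, 1]})
submissions = {"income_per_dependent": income_per_dependent, "log_income": log_income}
features = pd.DataFrame({name: validate(fn, raw) for name, fn in submissions.items()})
print(features)  # unified feature matrix ready for a single downstream model
```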
17

Turner, Amanda Georgina. "Scaling limits of stochastic processes." Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612995.

18

Thiéry, Alexandre H. "Scaling analysis of MCMC algorithms." Thesis, University of Warwick, 2013. http://wrap.warwick.ac.uk/57609/.

Abstract:
Markov Chain Monte Carlo (MCMC) methods have become a workhorse for modern scientific computations. Practitioners utilize MCMC in many different areas of applied science, yet very few rigorous results are available for justifying the use of these methods. The purpose of this dissertation is to analyse random walk type MCMC algorithms in several limiting regimes that frequently occur in applications. Scaling limits arguments are used as a unifying method for studying the asymptotic complexity of these MCMC algorithms. Two distinct strands of research are developed: (a) We analyse and prove diffusion limit results for MCMC algorithms in high or infinite dimensional state spaces. Contrary to previous results in the literature, the target distributions that we consider do not have a product structure; this leads to Stochastic Partial Differential Equation (SPDE) limits. This proves, among other things, that optimal proposal results already known for product form target distributions extend to much more general settings. We then show how to use these MCMC algorithms in an infinite dimensional Hilbert space in order to imitate a gradient descent without computing any derivative. (b) We analyse the behaviour of the Random Walk Metropolis (RWM) algorithm when used to explore target distributions concentrating on the neighbourhood of a low dimensional manifold of R^n. We prove that the algorithm behaves, after being suitably rescaled, as a diffusion process evolving on a manifold.
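The kind of scaling result being generalised here is easy to observe empirically. In the classical product-form setting, the Random Walk Metropolis proposal standard deviation should shrink like 2.38/√d, driving the acceptance rate towards roughly 0.234. A minimal sketch on a standard Gaussian target (an illustration of the classical result, not of the thesis's SPDE or manifold settings):

```python
import numpy as np

# Random Walk Metropolis on a standard Gaussian target in dimension d,
# with the classic optimal proposal scaling sigma = 2.38 / sqrt(d).
# The observed acceptance rate should sit near the well-known 0.234.
rng = np.random.default_rng(0)

def rwm_acceptance(d, n_iter=20_000):
    x = np.zeros(d)
    log_pi = lambda z: -0.5 * z @ z          # log-density up to a constant
    sigma = 2.38 / np.sqrt(d)
    accepts = 0
    for _ in range(n_iter):
        y = x + sigma * rng.standard_normal(d)
        if np.log(rng.random()) < log_pi(y) - log_pi(x):
            x, accepts = y, accepts + 1
    return accepts / n_iter

for d in (5, 20, 100):
    print(f"d={d:4d}  acceptance ~ {rwm_acceptance(d):.3f}")
```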
19

Evans, Huw Gordon James. "Resonance scaling of circle maps." Thesis, University of Edinburgh, 1995. http://hdl.handle.net/1842/14805.

20

Emil, Axelsson. "Up-scaling of algae cultivation." Thesis, Luleå tekniska universitet, Industriell miljö- och processteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-60493.

Abstract:
Microalgae are one of the oldest lifeforms on this planet, and dead algae are one source of the oil that we extract from the ground. This oil has played a major part in humanity's technological advances, to levels unimaginable not long ago. Unfortunately, it is also one of the major causes of global warming and other human-caused environmental issues. Therefore, much effort is devoted to new technologies that decrease the use of fossil oil and other fossil materials in favor of so-called renewable sources. This work focuses on the production of biomass that can be processed into other bulk materials, mainly chemicals. This is also a market with high potential: the amount of material derived from fossil sources is at least 422 million metric tons per year. The issue, though, is that the production costs for algae are still fairly high and cannot compete with the market price of fossil raw materials. Two algae species, Scenedesmus obliquus and Coelastrella sp., were cultivated in 6 pilot-size ponds (500 L) and the results were compared to a lab experiment (0.5 L). The lab experiment had earlier been performed by the author's supervisors with the same species. The algae in the ponds were cultivated outdoors with flue gas in semi-closed ponds, and the resulting biomass was allowed to sediment spontaneously. Scenedesmus obliquus was successfully cultivated in the pilot, but the system was not suitable for cultivation of Coelastrella sp. The main aim of this work was to evaluate whether it is possible to predict the amount of biomass produced in the pilot cultivation based on the results from the previously performed lab cultivation. The conclusion based on the results in this work is that it is not possible to predict the biomass production in the pilot based on lab experiments: the properties and behavior of different algae species can differ greatly between systems, and the setups in this study differed too much. However, the results indicate that the pilot system has a high efficiency and can maintain a monoculture outdoors for at least 18 days, and that the supply of flue gas strongly affects the growth of the alga Scenedesmus obliquus.
21

Gonzalez, Perez Miryam Guadalupe. "Scaling up virtual MIMO systems." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31321.

Abstract:
Multiple-input multiple-output (MIMO) systems are a mature technology that has been incorporated into current wireless broadband standards to improve the channel capacity and link reliability. Nevertheless, due to the continuously increasing demand for wireless data traffic, new strategies are to be adopted. Very large MIMO antenna arrays represent a paradigm shift in terms of theory and implementation, where the use of tens or hundreds of antennas provides significant improvements in throughput and radiated energy efficiency compared to single-antenna setups. Since design constraints limit the number of usable antennas, virtual systems can be seen as a promising technique due to their ability to mimic and exploit the gains of multi-antenna systems by means of wireless cooperation. Considering these arguments, in this work, energy efficient coding and network design for large virtual MIMO systems are presented. Firstly, a cooperative virtual MIMO (V-MIMO) system that uses a large multi-antenna transmitter and implements compress-and-forward (CF) relay cooperation is investigated. Since constructing a reliable codebook is the most computationally complex task performed by the relay nodes in CF cooperation, reduced complexity quantisation techniques are introduced. The analysis is focused on the block error probability (BLER) and the computational complexity for the uniform scalar quantiser (U-SQ) and the Lloyd-Max algorithm (LM-SQ). Numerical results show that the LM-SQ is simpler to design and can achieve a BLER performance comparable to the optimal vector quantiser. Furthermore, due to its low complexity, U-SQ could be considered particularly suitable for very large wireless systems. Even though very large MIMO systems enhance the spectral efficiency of wireless networks, this comes at the expense of linearly increasing the power consumption due to the use of multiple radio frequency chains to support the antennas. Thus, the energy efficiency and throughput of the cooperative V-MIMO system are analysed and the impact of imperfect channel state information (CSI) on the system's performance is studied. Finally, a power allocation algorithm is implemented to reduce the total power consumption. Simulation results show that wireless cooperation between users is more energy efficient than using a high modulation order transmission and that the larger the number of transmit antennas, the lower the impact of imperfect CSI on the system's performance. Finally, the application of cooperative systems is extended to wireless self-backhauling heterogeneous networks, where the decode-and-forward (DF) protocol is employed to provide a cost-effective and reliable backhaul. The associated trade-offs for a heterogeneous network with inhomogeneous user distributions are investigated through the use of sleeping strategies. Three different policies for switching off base stations are considered: random, load-based and greedy algorithms. The probability of coverage for the random and load-based sleeping policies is derived. Moreover, an energy efficient base station deployment and operation approach is presented. Numerical results show that the average number of base stations required to support the traffic load at peak time can be reduced by using the greedy algorithm for base station deployment, and that highly clustered networks exhibit a smaller average serving distance and thus a better probability of coverage.
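The quantiser comparison at the heart of the first strand can be reproduced in miniature: a uniform scalar quantiser versus a Lloyd-Max quantiser (alternating centroid and boundary updates) on Gaussian samples, compared by mean-squared distortion. This is a generic illustration of the two quantisers, not the relay codebook design from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)        # stand-in for the samples to be quantised
levels = 8                              # 3-bit scalar quantiser

# Uniform scalar quantiser (U-SQ) over a clipped support [-4, 4].
lo, hi = -4.0, 4.0
step = (hi - lo) / levels
u_centers = lo + step * (np.arange(levels) + 0.5)
u_idx = np.clip(((x - lo) / step).astype(int), 0, levels - 1)
mse_uniform = np.mean((x - u_centers[u_idx]) ** 2)

# Lloyd-Max quantiser: alternate nearest-centre assignment and centroid update.
centers = np.linspace(-2.0, 2.0, levels)
for _ in range(50):
    bounds = (centers[:-1] + centers[1:]) / 2      # decision boundaries
    idx = np.searchsorted(bounds, x)               # cell index of each sample
    centers = np.array([x[idx == k].mean() for k in range(levels)])

bounds = (centers[:-1] + centers[1:]) / 2
idx = np.searchsorted(bounds, x)
mse_lloyd = np.mean((x - centers[idx]) ** 2)

print(f"U-SQ MSE: {mse_uniform:.4f}   Lloyd-Max MSE: {mse_lloyd:.4f}")
```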
22

SABINO, Melina Mongiovi Cunha Lima. "Scaling testing of refactoring engines." Universidade Federal de Campina Grande, 2016. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/883.

Abstract:
Defining and implementing refactorings is a nontrivial task since it is difficult to define preconditions that guarantee that the transformation preserves the program behavior. Therefore, refactoring engines may have overly weak preconditions, overly strong preconditions, and transformation issues related to the refactoring definition. In practice, developers manually write test cases to check their refactoring implementations. We find that 84% of the test suites of Eclipse and JRRT are concerned with identifying these kinds of bugs. However, bugs are still present. Researchers have proposed a number of techniques for testing refactoring engines. Nevertheless, they may have limitations related to the bug type, program generation, time consumption, and the number of refactoring engines necessary to evaluate the implementations. In this work, we propose a technique to scale testing of refactoring engines by extending a previous technique. It automatically generates programs as test inputs using Dolly, a Java and C program generator. We add more Java constructs to Dolly, such as abstract classes, abstract methods, and interfaces, and a skip parameter to reduce the time to test the refactoring implementations by skipping some consecutive test inputs. Our technique uses SafeRefactorImpact to identify failures related to behavioral changes; it generates test cases only for the methods impacted by a transformation. Also, we propose a new oracle to evaluate whether refactoring preconditions are overly strong by disabling a subset of them. Finally, we present a technique to identify transformation issues related to the refactoring definition. We evaluate our technique on 28 refactoring implementations of Java (Eclipse and JRRT) and C (Eclipse) and find 119 bugs related to compilation errors, behavioral changes, overly strong preconditions, and transformation issues. The technique reduces the time by 90% and 96% using skips of 10 and 25 in Dolly while missing only 3% and 6% of the bugs, respectively. Additionally, it in general finds the first failure in a few seconds when using skips. Finally, we evaluate our proposed technique using other test inputs, such as the input programs of the Eclipse and JRRT refactoring test suites. We find 31 bugs not detected by the developers of those tools.
23

Russ, Ricardo. "SCALING CHALLENGES IN DIGITAL VENTURES." Thesis, Umeå universitet, Institutionen för informatik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-150563.

Abstract:
The number of startups is on the rise, specifically digital startups with products entirely based on software. These companies face a persistent challenge when it comes to increasing their user base, revenue or market share. This process is called scaling, and it is an essential step for every startup seeking to establish itself in the market. While there are several generic models focusing on scaling a business, there seems to be a lack of scientific research focusing on the challenges encountered during the process of scaling. This paper describes a qualitative study of purely digital companies which have scaled or are trying to scale. By comparing and contrasting the growth literature with the data generated by this study, we identify several distinct challenges and barriers related to scaling digital companies. Besides these challenges, our findings suggest that B2B and B2C companies face different challenges during their scaling processes.
24

Deigmoeller, Joerg. "Intelligent image cropping and scaling." Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/4745.

Abstract:
Nowadays, there exist a huge number of end devices with different screen properties for watching television content, which is either broadcast or transmitted over the internet. To allow the best viewing conditions on each of these devices, different image formats have to be provided by the broadcaster. Producing content for every single format is, however, not practicable for the broadcaster, as it is much too laborious and costly. The most obvious solution for providing multiple image formats is to produce one high-resolution format and prepare formats of lower resolution from this. One possibility is to simply scale video images to the resolution of the target image format. Two significant drawbacks are the loss of image detail through downscaling and possibly unused image areas due to letter- or pillarboxes. A preferable solution is to first find the contextually most important region in the high-resolution format and then crop this area with the aspect ratio of the target image format. On the other hand, defining the contextually most important region manually is very time consuming, and trying to apply that to live productions would be nearly impossible. Therefore, some approaches exist that automatically define cropping areas. To do so, they extract visual features, like moving areas in a video, and define regions of interest (ROIs) based on those. ROIs are finally used to define an enclosing cropping area. The extraction of features is done without any knowledge about the type of content. Hence, these approaches are not able to distinguish between features that might be important in a given context and those that are not. The work presented within this thesis tackles the problem of extracting visual features based on prior knowledge about the content. Such knowledge is fed into the system in the form of metadata that is available from TV production environments. Based on the extracted features, ROIs are then defined and filtered depending on the analysed content. As proof of concept, this application finally adapts SDTV (Standard Definition Television) sports productions automatically to image formats with lower resolution through intelligent cropping and scaling. If no content information is available, the system can still be applied to any type of content through a default mode. The presented approach is based on the principle of a plug-in system. Each plug-in represents a method for analysing video content information, either on a low level by extracting image features or on a higher level by processing extracted ROIs. The combination of plug-ins is determined by the incoming descriptive production metadata and hence can be adapted to each type of sport individually. The application has been comprehensively evaluated by comparing the results of the system against alternative cropping methods. This evaluation utilised videos which were manually cropped by a professional video editor, statically cropped videos and simply scaled, non-cropped videos. In addition to purely subjective evaluations, the gaze positions of subjects watching sports videos have been measured and compared to the regions-of-interest positions extracted by the system.
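The final cropping-and-scaling step described above (fit a window with the target aspect ratio around a chosen ROI, then resize) is easy to sketch. The example below is a generic illustration using NumPy and OpenCV, not the thesis's metadata-driven plug-in system; the frame and ROI coordinates are placeholders.

```python
import numpy as np
import cv2  # OpenCV, used here only for the final resize

def crop_to_roi(frame: np.ndarray, roi_center, target_w, target_h):
    """Crop the largest window with the target aspect ratio centred (as far as
    possible) on the ROI, then scale it to the target resolution."""
    h, w = frame.shape[:2]
    aspect = target_w / target_h
    # Largest crop with the requested aspect ratio that fits in the frame.
    crop_w, crop_h = (int(h * aspect), h) if w / h > aspect else (w, int(w / aspect))
    cx, cy = roi_center
    # Keep the crop window inside the frame.
    x0 = int(np.clip(cx - crop_w / 2, 0, w - crop_w))
    y0 = int(np.clip(cy - crop_h / 2, 0, h - crop_h))
    crop = frame[y0:y0 + crop_h, x0:x0 + crop_w]
    return cv2.resize(crop, (target_w, target_h), interpolation=cv2.INTER_AREA)

hd_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # placeholder frame
small = crop_to_roi(hd_frame, roi_center=(1500, 400), target_w=480, target_h=360)
print(small.shape)   # (360, 480, 3)
```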
25

Ekstrom, Nathan Hyrum. "Increasing DOGMA Scaling Through Clustering." Diss., 2008. http://contentdm.lib.byu.edu/ETD/image/etd2359.pdf.

26

Lee, Karen. "Scaling up public health interventions." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/27829.

Abstract:
The scale-up of effective or efficacious public health interventions to prevent chronic disease is important if population wide impacts are to be achieved. However, scale-up is complex and doesn’t happen as often as it should. This is despite growing interest in the area of research translation and scale-up by researchers and policy makers and a plethora of conceptual frameworks developed to guide the scale-up of efficacious interventions. The objectives of this thesis are to understand how scale-up may be facilitated within a research translation framework as well as in the real-world by understanding the key factors that contribute to facilitating scale-up. A key finding from this thesis is that scale-up in the real-world does not occur in a linear fashion and is often influenced by a range of factors including the political and/or strategic context, values of key actors as well as community needs and the availability of funding. Furthermore, decisions to scale-up are not only determined by the level of evidence available, but also through the convergence of the abovementioned factors into an opportunity for scale-up, ‘the scale-up window’. The opportunities to facilitate scale-up in this thesis include: cementing ‘scale-up’ as the end goal within a research translation framework which places the emphasis on scale-up equally alongside the other research translation activities; conducting research that promotes greater understanding of implementation and scale-up (through replication and scale-up studies) while reducing the traditional focus of smaller efficacy trials that are not conducive for scale-up; encouraging the uptake of pragmatic tools that provide guidance to those considering scale-up, through assessing the potential scalability of interventions considered for scale-up to expedite more informed decision making; and by comprehensively reflecting on and documenting scale-up experiences which contribute to capturing lessons for researchers and policy makers. Finally, the field of scale-up may benefit from greater clarity around the ‘roles’ within research and policy settings on scale-up, which would increase the accountability for scaling up interventions as well as greater delineation between the growing field of implementation science and scale-up.
27

Falk, Donald Albert. "Scaling rules for fire regimes." Diss., The University of Arizona, 2004. http://hdl.handle.net/10150/290135.

Abstract:
Forest fire is a keystone ecological process in coniferous forests of southwestern North America. This dissertation examines a fire regime in the Jemez Mountains of northern New Mexico, USA, based on an original data set collected from Monument Canyon Research Natural Area (MCN). First, I examine scale dependence in the fire regime. Statistical descriptors of the fire regime, such as fire frequency and mean fire interval, are scale-dependent. I describe the theory of the event-area (EA) relationship, analogous to the species-area relationship, for events distributed in space and time; the interval-area (IA) relationship, is a related form for fire intervals. The EA and IA also allow estimation of the annual fire frame (AFF), the area within which fire occurs annually on average. The slope of the EA is a metric of spatio-temporal synchrony of events across multiple spatial scales. The second chapter concerns the temporal distribution of fire events. I outline a theory of fire interval probability from first principles in fire ecology and statistics. Fires are conditional events resulting from interaction of multiple contingent factors that must be satisfied for an event to occur. Outcomes of this kind represent a multiplicative process for which a lognormal model is the limiting distribution. I examine the application of this framework to two probability models, the Weibull and lognormal distributions, which can be used to characterize the distribution of fire intervals over time. The final chapter addresses the theory and effects of sample size in fire history. Analytical methods (including composite fire records) are used in fire history to minimize error in inference. I describe a theory of the collector's curve based on accumulation of sets of discrete events and the probability of recording a fire as a function of sample size. I propose a nonlinear regression method for the Monument Canyon data set to correct for differences in sample size among composite fire records. All measures of the fire regime reflected sensitivity to sample size, but these differences can be corrected in part by applying the regression correction, which can increase confidence in quantitative estimates of the fire regime.
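The interval-distribution comparison in the second chapter can be illustrated with SciPy: fit Weibull and lognormal models to a sample of fire intervals and compare their log-likelihoods. The intervals below are synthetic placeholders, not the Monument Canyon data.

```python
import numpy as np
from scipy import stats

# Synthetic fire-interval sample (years), illustrative only.
rng = np.random.default_rng(0)
intervals = rng.weibull(1.4, size=200) * 12.0

# Fit both candidate interval models with the location fixed at zero.
wb_shape, _, wb_scale = stats.weibull_min.fit(intervals, floc=0)
ln_shape, _, ln_scale = stats.lognorm.fit(intervals, floc=0)

ll_weibull = stats.weibull_min.logpdf(intervals, wb_shape, 0, wb_scale).sum()
ll_lognorm = stats.lognorm.logpdf(intervals, ln_shape, 0, ln_scale).sum()
print(f"Weibull   log-likelihood: {ll_weibull:.1f} (shape={wb_shape:.2f}, scale={wb_scale:.1f})")
print(f"Lognormal log-likelihood: {ll_lognorm:.1f}")
```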
28

Melo, Hygor Piaget Monteiro. "Nonlinear scaling in social Physics." reponame:Repositório Institucional da UFC, 2016. http://www.repositorio.ufc.br/handle/riufc/22441.

Abstract:
The application of statistical mechanics to the study of collective human behavior is not a novelty. However, in the past few decades we have seen a huge spike of interest in the study of society using physics. In this thesis we explore nonlinear scaling laws in social systems using physical techniques. First we perform data analysis and modeling applied to elections. We show that the number of votes of a candidate scales nonlinearly with the money spent on the campaign. To our surprise, the correlation revealed a sublinear scaling, which means that the average "price" of one vote grows as the number of votes increases. Using a mean-field model we find that the sublinearity emerges from the competition and that the distribution of votes is causally determined by the distribution of campaign money. Moreover, we show that the model is able to reasonably predict the final number of valid votes through a simple heuristic argument. Lastly, we present our work on allometric scaling of social indicators. We show how homicides, deaths in car crashes, and suicides scale with the population of Brazilian cities. In contrast to homicides (superlinear) and fatal events in car crashes (isometric), we find sublinear scaling behavior between the number of suicides and city population, which reveals possible evidence of social influence on the occurrence of suicides.
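The sub- and super-linear scaling described above is the standard allometric fit y ≈ a·N^β, with β < 1 sublinear and β > 1 superlinear. A minimal sketch on synthetic city data (not the Brazilian data analysed in the thesis):

```python
import numpy as np

# Allometric scaling y = a * N**beta, estimated by OLS on log-log data.
# Synthetic "cities": populations and an indicator with a known exponent.
rng = np.random.default_rng(0)
pop = 10 ** rng.uniform(4, 7, size=500)                    # 10k to 10M inhabitants
beta_true = 0.85                                           # sublinear example
y = 2e-4 * pop ** beta_true * np.exp(rng.normal(0, 0.3, size=500))

beta_hat, log_a = np.polyfit(np.log10(pop), np.log10(y), deg=1)
regime = "sublinear" if beta_hat < 1 else "superlinear" if beta_hat > 1 else "isometric"
print(f"estimated exponent beta = {beta_hat:.2f} ({regime})")
```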
29

Vanherweg, Joseph B. R. "HYBRID ROCKET MOTOR SCALING PROCESS." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1394.

Abstract:
Hybrid rocket propulsion technology shows promise for the next generation of sounding rockets and small launch vehicles. This paper seeks to provide details on the process of developing hybrid propulsion systems to the academic and amateur rocket communities to assist in future research and development. Scaling hybrid rocket motors for use in sounding rockets has been a challenge due to the inadequacies of traditional boundary layer analysis. Similarity scaling is an amendment to traditional boundary layer analysis which is helpful in removing some of the past scaling challenges. Maintaining geometric similarity, oxidizer and fuel similarity, and mass flow rate to port diameter similarity are the most important scaling parameters. Advances in composite technologies have also increased the performance of sounding rockets and launch vehicles through weight reduction. Technologies such as Composite Overwrapped Pressure Vessels (COPV) for use as fuel and oxidizer tanks on rockets promise great advantages in flight performance and manufacturing cost. A small-scale COPV, a carbon fiber ablative nozzle and an N-class hybrid rocket motor were developed, manufactured and tested to support the use of these techniques in future sounding rocket development. The COPV exhibited failure within 5% of the predicted pressure, and the scale motor testing was useful in identifying a number of improvements needed for future scaling work. The author learned that small-scale testing is an essential step in the process of developing hybrid propulsion systems and that ablative nozzle manufacturing techniques are difficult to develop. This project has primarily provided a framework for others to build upon in the quest for a method to easily develop hybrid propulsion systems for sounding rockets and launch vehicles.
30

CUGNATA, FEDERICA. "Bayesian three-way multidimensional scaling." Doctoral thesis, Università Bocconi, 2012. https://hdl.handle.net/11565/4054285.

31

Chouai, Said. "Mechanisms of scaling and scaling prevention in the wet processing of calcitic and dolomitic phosphate rock." Thesis, University of Leeds, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.277350.

32

Jones, Synthia S. "Multidimensional scaling of user information satisfaction." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA277230.

Abstract:
Thesis (M.S. in Information Technology Management) Naval Postgraduate School, December 1993.
Thesis advisor(s): William J. Haga ; Kishore Sengupta. "December 1993." Bibliography: p. 108-110. Also available online.
33

Wang, Lihui. "Quantum Mechanical Effects on MOSFET Scaling." Diss., Available online, Georgia Institute of Technology, 2006. http://etd.gatech.edu/theses/available/etd-07072006-111805/.

Abstract:
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2007.
Philip First, Committee Member ; Ian F. Akyildiz, Committee Member ; Russell Dupuis, Committee Member ; James D. Meindl, Committee Chair ; William R. Callen, Committee Member.
34

Zang, Peng. "Scaling solutions to Markov Decision Problems." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42906.

Abstract:
The Markov Decision Problem (MDP) is a widely applied mathematical model useful for describing a wide array of real world decision problems ranging from navigation to scheduling to robotics. Existing methods for solving MDPs scale poorly when applied to large domains where there are many components and factors to consider. In this dissertation, I study the use of non-tabular representations and human input as scaling techniques. I will show that the joint approach has desirable optimality and convergence guarantees, and demonstrates several orders of magnitude speedup over conventional tabular methods. Empirical studies of speedup were performed using several domains including a clone of the classic video game, Super Mario Bros. In the course of this work, I will address several issues including: how approximate representations can be used without losing convergence and optimality properties, how human input can be solicited to maximize speedup and user engagement, and how that input should be used so as to insulate against possible errors.
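For contrast with the scaling techniques studied in the dissertation, the conventional tabular approach it improves upon looks like the sketch below: value iteration over an explicitly enumerated state space, whose cost per sweep grows as O(|S|²·|A|). The random MDP is purely illustrative, not one of the dissertation's domains.

```python
import numpy as np

# Tabular value iteration on a small random MDP: V <- max_a [ R + gamma * P_a V ].
# Cost per sweep is O(|S|^2 * |A|), which is what breaks down on large domains.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 50, 4, 0.95

P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
R = rng.uniform(0, 1, size=(n_actions, n_states))                 # R[a, s]

V = np.zeros(n_states)
for sweep in range(1000):
    Q = R + gamma * (P @ V)          # Q[a, s]
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-6:
        break
    V = V_new
print(f"converged after {sweep + 1} sweeps; V[0] = {V[0]:.3f}")
```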
35

Kutluay, Umit. "Design Scaling of Aeroballistic Range Models." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605698/index.pdf.

Abstract:
The aim of this thesis is to develop a methodology for obtaining an optimum configuration for aeroballistic range models. In the design of aeroballistic range models, there are mainly three similarity requirements to be matched between the model and the actual munition: external geometry, location of the centre of gravity, and the ratio of the axial mass moment of inertia to the transverse mass moment of inertia. Furthermore, it is required to have a model with the least possible weight, so that the required test velocities can be obtained with minimum chamber pressure and minimum propellant while withstanding the enormous launch accelerations. This defines an optimization problem: to find the optimum model internal configuration and select materials to be used in the model such that the centre of gravity location and the inertia ratio are matched as closely as possible while the model withstands the launch forces and has minimum mass. To solve this problem a design methodology is devised and an optimization code is developed based on this methodology. The length, radius and end location of an optimum cylinder which has to be drilled out from the model are selected as the design variables for the optimization problem. Built-in functions from the Optimization Toolbox of Matlab® are used in the optimization routine, and a graphical user interface is designed for easy access to the design variables. The developed code is a very useful tool for the designer; although the results are not meant to be applied directly to the final product, they form the starting point for the detailed design.
36

Läuter, Henning, and Ayad Ramadan. "Modeling and Scaling of Categorical Data." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2011/4957/.

Abstract:
Estimation and testing of distributions in metric spaces are well known. R.A. Fisher, J. Neyman, W. Cochran and M. Bartlett achieved essential results on the statistical analysis of categorical data. In the last 40 years many other statisticians have found important results in this field. Data sets often contain categorical data, e.g. levels of factors or names, for which no ordering and no distance between the categories exist. At each level, some metric or categorical values are measured. We introduce a new method of scaling based on statistical decisions. For this we define empirical probabilities for the original observations and find a class of distributions in a metric space where these empirical probabilities can be found as approximations for equivalently defined probabilities. With this method we identify probabilities connected with the categorical data and probabilities in metric spaces. Here we get a mapping from the levels of factors or names into points of a metric space. This mapping yields the scale for the categorical data. From the statistical point of view we use multivariate statistical methods, we calculate maximum likelihood estimates and compare different approaches for scaling.
37

Finnighan, Grant Adam. "Computer image based scaling of logs." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26698.

Abstract:
Individual log scaling for the forest industry is a time-consuming operation. Presented here are the design and prototype test results of an automated technique that improves on the current speed of this operation while still achieving the required accuracy. The technique uses a television camera and graphics monitor to enable the operator to spot logs in images, which an attached processor can then scale automatically; the system must first be calibrated, however. In addition to the time savings, the accuracy is maintained, if not improved, and the operation may now be performed from a sheltered location.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
38

Videl, Markus, and Mattias Palo. "Scaling of popular Sudoku solving algorithms." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-146012.

Full text
Abstract:
In this bachelor thesis we study six popular Sudoku solving algorithms, found through Google, to identify the algorithm whose running time grows most slowly. The algorithms tested are: brute force, a pen-and-paper method, two exact cover reductions (in Python and Haskell), a SAT reduction, and a constraint satisfaction algorithm. The algorithms were evaluated by solving Sudoku puzzles of sizes n = {2, 3, 4, 5}, measuring the solution time. We conclude that the slowest-growing algorithm, by far, is the SAT reduction, followed by the exact cover reductions. Brute force, unsurprisingly, grows the fastest.
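As an illustration of the brute-force end of that comparison (not the code used in the thesis), the sketch below times a plain backtracking solver on empty grids of order n, i.e. n² × n² boards. An empty grid stands in for a real puzzle instance here, and only n = 2, 3 are attempted because naive backtracking grows too quickly beyond that.

```python
"""Minimal sketch: time a plain backtracking (brute-force) Sudoku solver on
empty grids of order n. The thesis used actual puzzles, so absolute times differ."""
import time

def solve(grid, n):
    side = n * n
    for idx in range(side * side):
        r, c = divmod(idx, side)
        if grid[r][c] == 0:                           # first empty cell
            used = set(grid[r]) | {grid[i][c] for i in range(side)}
            br, bc = r - r % n, c - c % n
            used |= {grid[br + i][bc + j] for i in range(n) for j in range(n)}
            for v in range(1, side + 1):
                if v not in used:
                    grid[r][c] = v
                    if solve(grid, n):
                        return True
                    grid[r][c] = 0
            return False                              # dead end, backtrack
    return True                                       # no empty cell left: solved

for n in (2, 3):                                      # n = 4, 5 are far too slow here
    grid = [[0] * (n * n) for _ in range(n * n)]
    t0 = time.perf_counter()
    solve(grid, n)
    print(n, f"{time.perf_counter() - t0:.3f}s")
```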
APA, Harvard, Vancouver, ISO, and other styles
39

Perry, Beth Gemma. "Re-thinking and re-scaling science?" Thesis, University of Salford, 2009. http://usir.salford.ac.uk/26858/.

Full text
Abstract:
This Ph.D. by Published Works examines the dynamic interaction between a re-thinking and a re-scaling of science. The ten Published Works are located at the intersection between debates on the changing governance of science and science policy governance in the context of a multi-scalar knowledge-based economy. Three critical themes are examined relating to the relationship between excellence and relevance, the roles of universities in knowledge-based coalitions and the relative significance of regional and local knowledge-based developments for science policy governance. The empirical emphasis is on two case studies of regional and local science policies in North West England between 2002 and 2008, in international comparative context. The Published Works collectively construct a powerful story-line of change and continuity within contemporary developments, undermining claims that a paradigm shift has occurred in either the re-thinking or re-scaling of science. Disembedded understandings of excellence predominate, despite the articulation of alternative discourses on the roles of different knowledges for sub-national socioeconomic development. Universities occupy ambiguous roles, with some better able than others to mobilise institutional power to provide relative shelter from external pressures. Sub-national interventions emphasise the re-development of physical spaces and are acquisition-oriented, rather than predicated upon shifts in modes of knowledge production. A critical contribution of the Published Works is the forging of an interdisciplinary agenda around re-thinking and re-scaling science that is both academically excellent and policy relevant. The Ph.D. illustrates the need for more integrated theoretical and practical understandings of the relationship between the governance of science and of science policy governance and both defines and seeks to populate critical gaps in this agenda.
APA, Harvard, Vancouver, ISO, and other styles
40

Xu, Liqun. "Bootstrap for dual scaling of rankings." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0001/NQ35373.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Safer, Hershel M. "Scaling algorithms for distributed max flow." Sloan School of Management, 1988. http://hdl.handle.net/1721.1/7409.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Verberkmoes, Alain. "Tiling models: phase behaviour and scaling." [S.l : Amsterdam : s.n] ; Universiteit van Amsterdam [Host], 2003. http://dare.uva.nl/document/71144.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ingram, Stephen F. "Multilevel multidimensional scaling on the GPU." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/409.

Full text
Abstract:
We present Glimmer, a new multilevel visualization algorithm for multidimensional scaling designed to exploit modern graphics processing unit (GPU) hardware. We also present GPU-SF, a parallel, force-based subsystem used by Glimmer. Glimmer organizes input into a hierarchy of levels and recursively applies GPU-SF to combine and refine the levels. The multilevel nature of the algorithm helps avoid local minima, while the GPU parallelism improves the speed of computation. We propose a robust termination condition for GPU-SF based on a filtered approximation of the normalized stress function. We demonstrate the benefits of Glimmer in terms of speed, normalized stress, and visual quality against several previous algorithms for a range of synthetic and real benchmark datasets. We show that the performance of Glimmer on GPUs is substantially faster than a CPU implementation of the same algorithm. We also propose a novel texture paging strategy called distance paging for working with precomputed distance matrices too large to fit in texture memory.
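The toy NumPy sketch below shows, on the CPU and in a much simpler form than Glimmer or GPU-SF, what "force-based MDS with a stress-based stopping rule" means: each point is nudged along the gradient of the pairwise distance error, and iteration stops once the normalized stress stops improving. The synthetic data, step size, and tolerance are assumptions.

```python
"""Tiny CPU sketch of force-based MDS with a normalized-stress stopping rule."""
import numpy as np

rng = np.random.default_rng(0)
X_high = rng.normal(size=(200, 10))                               # synthetic input data
D = np.linalg.norm(X_high[:, None] - X_high[None, :], axis=-1)    # target distances

def normalized_stress(Y, D):
    d = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    return np.sqrt(np.sum((d - D) ** 2) / np.sum(D ** 2))

Y = rng.normal(size=(len(D), 2))                                  # random 2-D start
prev = np.inf
for it in range(500):
    d = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
    np.fill_diagonal(d, 1.0)                                      # avoid divide-by-zero
    coeff = (d - D) / d                                           # spring-like residual
    np.fill_diagonal(coeff, 0.0)
    force = (coeff[:, :, None] * (Y[:, None] - Y[None, :])).sum(axis=1)
    Y -= 0.05 / len(D) * force                                    # gradient-style step
    s = normalized_stress(Y, D)
    if prev - s < 1e-5:                                           # crude termination test
        break
    prev = s
print(it, round(s, 4))
```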
APA, Harvard, Vancouver, ISO, and other styles
44

Golenetskaya, Natalia. "Adressing scaling challenges in comparative genomics." PhD thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00865840.

Full text
Abstract:
Comparative genomics is essentially a form of data mining in large collections of n-ary relations. The growth in the number of sequenced genomes puts a strain on comparative genomics, whose cost grows, at worst geometrically, with the growth in sequence data. Today even modest-sized laboratories routinely obtain several genomes at a time, and, like the large consortia, they expect to be able to perform all-against-all analyses as part of their multi-genome strategies. To meet these needs at every level, the algorithmic frameworks and data storage technologies used for comparative genomics have to be rethought. To address these scaling challenges, in this thesis we develop original methods based on NoSQL and MapReduce technologies. Starting from a characterization of the kinds of data used in comparative genomics and a study of typical usage patterns, we define a formalism for Big Data in genomics, implement it in the NoSQL platform Cassandra, and evaluate its performance. Then, starting from two very different global analyses in comparative genomics, we define two strategies for adapting these applications to the MapReduce paradigm and derive new algorithms. For the first, the identification of gene fusion and fission events within a phylogeny, we reformulate the problem as a bounded parallel traversal that avoids the latency of graph algorithms. For the second, the consensus clustering used to identify protein families, we define an iterative sampling procedure that converges rapidly to the desired global result. We implement each of these two algorithms in the MapReduce platform Hadoop and evaluate their performance. This performance is competitive and scales much better than existing algorithms, but it requires a particular (and ongoing) effort to devise the appropriate specific algorithms.
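As a generic illustration of how such analyses decompose into the MapReduce paradigm (this is not either of the thesis algorithms), the sketch below runs a mapper and a reducer locally over hypothetical homology records to count, per gene family, the number of distinct genomes containing a member; on Hadoop the same pair of functions would run as streaming jobs over far larger inputs.

```python
"""Generic MapReduce illustration: the mapper emits (gene_family, genome) pairs
from hypothetical homology records; the reducer counts distinct genomes per family.
The shuffle/sort phase between them is simulated locally with a sort and groupby."""
from itertools import groupby
from operator import itemgetter

records = [                                  # hypothetical homology hits
    ("famA", "genome1"), ("famA", "genome2"),
    ("famB", "genome1"), ("famA", "genome2"), ("famB", "genome3"),
]

def mapper(record):
    family, genome = record
    yield family, genome

def reducer(family, genomes):
    yield family, len(set(genomes))          # distinct genomes containing the family

intermediate = sorted(kv for rec in records for kv in mapper(rec))
for family, group in groupby(intermediate, key=itemgetter(0)):
    for family, count in reducer(family, [genome for _, genome in group]):
        print(family, count)
```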
APA, Harvard, Vancouver, ISO, and other styles
45

Warren, Patrick Bewick. "Scaling laws in cluster-cluster aggregation." Thesis, University of Cambridge, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Mehta, A. "Scaling approaches to interacting walk models." Thesis, University of Oxford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.370291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Lau, Shing-Tak Douglas. "Scaling Dispersion Processes in Surcharged Manholes." Thesis, University of Sheffield, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.489715.

Full text
Abstract:
Urban drainage network models are increasingly used in the water industry for hydraulic and water quality simulation. These models require energy loss and mixing coefficients as inputs to predict head loss and the transport of solutes or dissolved substances across hydraulic structures, such as sewer pipes and manholes. Laboratory-derived head loss and energy loss coefficients for manholes may be used in urban drainage modelling. However, the applicability of these laboratory-scale parameters to full-scale structures in the urban drainage system, i.e. the scalability of the parameters, is not clearly understood. The overall aim of the research was to derive a generic scaling methodology to describe the impact of the physical scale of manholes on the hydraulic and mixing processes, using laboratory- and CFD-based analyses. A 1:3.67 scale model of an 800 mm diameter manhole (the prototype) studied by Guymer et al. (2005) was constructed in the laboratory, and experiments were conducted to measure head loss and solute dispersion in the scale model. The solute dispersion results were analysed using advection dispersion equation (ADE) and aggregated dead zone (ADZ) models and compared with the prototype experimental data; the cumulative temporal concentration profiles (CTCPs) for the scale model were also compared with the prototype profiles. However, analysis of the laboratory-derived data failed to quantitatively identify the scale effects because the recorded data for the two manholes were not directly comparable. Computational fluid dynamics (CFD) was therefore used to investigate the effects of scale in the surcharged manhole. A thorough validation study was conducted to provide confidence in the CFD model predictions, and a standard modelling protocol for manhole simulations was developed through this validation study. Three differently sized manholes were created using CFD, and the scale effects on the flow field, energy loss and solute transport characteristics were investigated. The findings suggest that scale effects exist in the three manholes; however, the degree of the effects is very small. The scale effects were attributable to the dissimilarity in Reynolds number, which led to different characteristics of the jet in the manhole. Methodologies to scale the hydraulic and solute transport processes in surcharged manholes are presented.
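To make the solute-transport modelling concrete, here is a minimal sketch of the discrete first-order aggregated dead zone (ADZ) model of the kind such laboratory traces are fitted with: a pure advective time delay followed by a first-order lag. The upstream trace, time step, delay, and residence time below are invented toy values, not the thesis data.

```python
"""Route an upstream solute concentration trace through a first-order ADZ model:
c_out[k] = a*c_out[k-1] + (1-a)*c_in[k-d], with a = exp(-dt/residence_time) and
d the advective delay expressed in time steps."""
import math

def adz_route(c_in, dt, delay, residence_time):
    a = math.exp(-dt / residence_time)
    d = round(delay / dt)
    c_out = [0.0] * len(c_in)
    for k in range(1, len(c_in)):
        upstream = c_in[k - d] if k >= d else 0.0
        c_out[k] = a * c_out[k - 1] + (1.0 - a) * upstream
    return c_out

# Toy pulse injection sampled at 0.5 s intervals.
c_in = [0.0, 5.0, 10.0, 5.0, 1.0] + [0.0] * 35
print([round(c, 2) for c in adz_route(c_in, dt=0.5, delay=2.0, residence_time=3.0)])
```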
APA, Harvard, Vancouver, ISO, and other styles
48

Harwood, Michael J. "Scaling the pitch for junior cricketers." Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/35953.

Full text
Abstract:
Although cricket is played around the world by all ages, very little attention has been focused on junior cricket. The research presented here evaluated the effects on junior cricket of reducing the pitch length, developed a method for scaling the pitch to suit the players, and applied this method to the under-11 age group. In the first of four studies it was established that shortening the cricket pitch had positive effects for bowlers, batters and fielders at both club and county standards, resulting in matches that were more engaging. The second study found that top under-10 and under-11 seam bowlers released the ball on average 3.4° further below horizontal on a 16-yard pitch compared with a 19-yard pitch. This was closer to elite adult pace bowlers' release angles and should enable junior players to achieve greater success and develop more variety in their bowling. The third study calculated where a good length delivery should be pitched to under-10 and under-11 batters in order to provoke uncertainty, and also examined the influence of pitch length on batters' decisions to play front- or back-foot shots according to the length of the delivery. A shorter pitch should strengthen the coupling between the perception of delivery length and appropriate shot selection, and the increased task demand should lead to improved anticipation; both are key features of skilled batting. In the final study, a method of calculating the optimal pitch length for an age group was developed using age-specific bowling and batting inputs. This was applied to scale the pitch for under-11s, giving a pitch length of 16.22 yards (14.83 m), 19% shorter than previously recommended for the age group by the England and Wales Cricket Board. Scaled in this way across the junior age groups, pitch lengths would fit the players better as they develop, enabling more consistent ball release by bowlers and more consistent temporal demands for batters, as well as greater involvement for fielders.
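A quick arithmetic check of the figures quoted in this abstract; the roughly 20-yard baseline is back-computed from the stated 19% reduction rather than given explicitly, so treat it as an inference.

```python
# Consistency check of the quoted pitch-length figures (pure arithmetic).
scaled_yards = 16.22
metres = scaled_yards * 0.9144                   # 1 yard = 0.9144 m -> 14.83 m
implied_previous = scaled_yards / (1 - 0.19)     # "19% shorter than previously recommended"
print(f"{metres:.2f} m, implied previous length ~{implied_previous:.1f} yards")
```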
APA, Harvard, Vancouver, ISO, and other styles
49

Hou, Chong Ph D. Massachusetts Institute of Technology. "Fiber drawing : beyond the scaling paradigm." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104183.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 93-97).
The emergence of multimaterial fibers that combine a multiplicity of solid materials with disparate electrical, optical, and mechanical properties into a single fiber presents new opportunities for extending fiber applications. Various functional fiber devices have been fabricated with a thermal co-draw approach. To make thermal co-drawing feasible, only materials with similar viscosity at the draw temperature are used, which excludes a wide range of metals and semiconductors that have good electrical properties but incompatible viscosity profiles. From the structural point of view, the nature of the fiber drawing process makes it feasible to fabricate large quantities of fiber with identical inner structure. The scalability of the thermal drawing approach thus offers access to large quantities of devices, but constrains them to be translationally symmetric; lifting this symmetry to create discrete devices in fibers would increase their utility. In addition, the fiber surface has rarely been studied, even though complex inner structures have been fabricated for different functionalities; functionalizing the surface would let the fiber interact better with its environment. This thesis seeks to address these considerations, i.e. to expand the materials selection for the fiber co-draw process and to explore variations of the fiber structure, including breaking the translational symmetry of the inner structure and functionalizing the outer surface. On the materials side, a chemical reaction phenomenon is observed and studied in two different fiber drawing situations; in both cases, a new composition is formed during the draw and plays an important role in the resulting fiber devices. On the structure side, relying on the principle of Plateau-Rayleigh instability, the fiber inner structure is designed to break up, through a thermally induced selective breakup process, into a series of discrete semiconductor spheres contacting two metal buses. This gives rise to photodetecting devices in a silica-cladding fiber with a large working bandwidth. The fiber surface is also studied and successfully patterned with micron-scale features during the draw process; the patterned surface shows potential for structural coloration and directional wetting.
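A back-of-the-envelope sketch of the in-fiber breakup geometry mentioned above: if a semiconductor core of radius r breaks up with instability wavelength λ, volume conservation (πr²λ = 4/3·πR³) fixes the radius R of the resulting spheres. The classical Rayleigh estimate λ ≈ 9r is used here purely as a stand-in; in a viscous silica cladding the selected wavelength differs (Tomotika's analysis), so the numbers are illustrative only.

```python
# Sphere radius from volume conservation when a cylindrical core of radius r
# breaks up with period lam:  pi*r^2*lam = (4/3)*pi*R^3  =>  R = (0.75*r^2*lam)^(1/3)
core_radius_um = 2.0
lam_um = 9.0 * core_radius_um                       # assumed instability wavelength
sphere_radius_um = (0.75 * core_radius_um**2 * lam_um) ** (1.0 / 3.0)
print(f"sphere radius ~ {sphere_radius_um:.2f} um for a {core_radius_um} um core")
```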
by Chong Hou.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
50

Niesen, Urs. "Scaling laws for heterogeneous wireless networks." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54634.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 211-215).
This thesis studies the problem of determining achievable rates in heterogeneous wireless networks. We analyze the impact of location, traffic, and service heterogeneity. Consider a wireless network with n nodes located in a square area of size n communicating with each other over Gaussian fading channels. Location heterogeneity is modeled by allowing the nodes in the wireless network to be deployed in an arbitrary manner on the square area instead of the usual random uniform node placement. For traffic heterogeneity, we analyze the n x n dimensional unicast capacity region. For service heterogeneity, we consider the impact of multicasting and caching. This gives rise to the n x 2^n dimensional multicast capacity region and the 2^n x n dimensional caching capacity region. In each of these cases, we obtain an explicit information-theoretic characterization of the scaling of achievable rates by providing a converse and a matching (in the scaling sense) communication architecture.
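One natural way to read the capacity-region dimensions quoted above is as simple counting: one rate per source-destination pair for unicast, one per (source, destination-subset) pair for multicast, and one per (cached-subset, destination) pair for caching. The snippet below just evaluates those counts for small n; it is an illustration, not part of the thesis.

```python
# Dimension counts of the unicast (n x n), multicast (n x 2^n) and
# caching (2^n x n) capacity regions for small n.
for n in (2, 3, 4):
    print(n, "unicast:", n * n, "multicast:", n * 2**n, "caching:", 2**n * n)
```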
by Urs Niesen.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles