Dissertations on the topic "Large Scale Applications Implementing"
Cite your source in APA, MLA, Chicago, Harvard, and other citation styles
Explore the top 50 dissertations for research on the topic "Large Scale Applications Implementing".
Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, if these details are provided in the metadata.
Browse dissertations across a wide variety of disciplines and compile your bibliography correctly.
Smaragdakis, Ioannis. "Implementing large-scale object-oriented components." Digital version, 1999. http://wwwlib.umi.com/cr/utexas/main.
Martínez Trujillo, Andrea. "Dynamic Tuning for Large-Scale Parallel Applications." Doctoral thesis, Universitat Autònoma de Barcelona, 2013. http://hdl.handle.net/10803/125872.
The current large-scale computing era is characterised by parallel applications running on many thousands of cores. However, the performance obtained when executing these applications is not always what is expected. Dynamic tuning is a powerful technique which can be used to reduce the gap between real and expected performance of parallel applications. Currently, the majority of approaches that offer dynamic tuning follow a centralised scheme, where a single analysis module, responsible for controlling the entire parallel application, can become a bottleneck in large-scale contexts. The main contribution of this thesis is a novel model that enables decentralised dynamic tuning of large-scale parallel applications. Application decomposition and an abstraction mechanism are the two key concepts which support this model. The decomposition allows a parallel application to be divided into disjoint subsets of tasks which are analysed and tuned separately. Meanwhile, the abstraction mechanism permits these subsets to be viewed as a single virtual application so that global performance improvements can be achieved. A hierarchical tuning network of distributed analysis modules fits the design of this model. The topology of this tuning network can be configured to accommodate the size of the parallel application and the complexity of the tuning strategy being employed. It is from this adaptability that the model's scalability arises. To fully exploit this adaptable topology, this work proposes a method which calculates tuning network topologies composed of the minimum number of analysis modules required to provide effective dynamic tuning. The proposed model has been implemented in the form of ELASTIC, an environment for large-scale dynamic tuning. ELASTIC presents a plugin architecture, which allows different performance analysis and tuning strategies to be applied.
Using ELASTIC, experimental evaluation has been carried out on a synthetic and a real parallel application. The results show that the proposed model, embodied in ELASTIC, is able to not only scale to meet the demands of dynamic tuning over thousands of processes, but is also able to effectively improve the performance of these applications.
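The decentralised idea described in this abstract, disjoint subsets of tasks analysed locally and then abstracted into one virtual application, can be sketched as a small hierarchy. The class names and the averaged-iteration-time metric below are illustrative inventions, not ELASTIC's actual API:

```python
from statistics import mean

class AnalysisModule:
    """Analyses a disjoint subset of tasks and reports a local metric."""
    def __init__(self, tasks):
        self.tasks = tasks

    def analyse(self, iteration_times):
        # Local view: average iteration time of this module's tasks only.
        return mean(iteration_times[t] for t in self.tasks)

class AbstractionModule:
    """Presents its child modules as one virtual application to the level above."""
    def __init__(self, children):
        self.children = children

    def analyse(self, iteration_times):
        # Global view: aggregate the children's local metrics.
        return mean(c.analyse(iteration_times) for c in self.children)

# A two-level tuning network over 8 tasks, split into two disjoint subsets.
leaves = [AnalysisModule([0, 1, 2, 3]), AnalysisModule([4, 5, 6, 7])]
root = AbstractionModule(leaves)
times = {0: 1.0, 1: 1.2, 2: 0.9, 3: 1.1, 4: 2.0, 5: 2.2, 6: 1.9, 7: 2.1}
print(root.analyse(times))  # global mean iteration time
```

A deeper tree (more abstraction levels, more leaves) mirrors the thesis's point that the topology can be sized to the application.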
Dacosta, Italo. "Practical authentication in large-scale internet applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44863.
Roy, Yagnaseni. "Modeling nanofiltration for large scale desalination applications." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100096.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 91-94).
The Donnan Steric Pore Model with dielectric exclusion (DSPM-DE) is implemented over flat-sheet and spiral-wound leaves to develop a comprehensive model for nanofiltration modules. This model allows the user to gain insight into the physics of the nanofiltration process by allowing one to adjust and investigate the effects of membrane charge, pore radius, and other membrane characteristics. The study shows how operating conditions such as feed flow rate and pressure affect the recovery ratio and solute rejection across the membrane. A comparison between the flat-sheet and spiral-wound configurations showed that for the spiral-wound leaf, the maximum values of transmembrane pressure, flux and velocity occur at the feed entrance (near the permeate exit), and the lowest values of these quantities occur at the diametrically opposite corner. This is in contrast to the flat-sheet leaf, where all the quantities vary only in the feed flow direction. However, the extent of variation of these quantities along the permeate flow direction in the spiral-wound membrane is found to be negligibly small in most cases. Also, for identical geometries and operating conditions, the flat-sheet and spiral-wound configurations give similar results. Thus the computationally expensive and complex spiral-wound model can be replaced by the flat-sheet model for a variety of purposes. In addition, the model was used to predict the performance of a seawater nanofiltration system and validated with data obtained from a large-scale seawater desalination plant, thereby establishing a reliable model for desalination using nanofiltration.
by Yagnaseni Roy.
S.M.
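The full DSPM-DE model in this thesis is far more involved; as a hedged orientation, the basic driving-force balance behind any membrane flux calculation can be sketched with a textbook solution-diffusion flux and van't Hoff osmotic pressure. The permeability and reflection coefficients below are hypothetical values, not from the thesis:

```python
# Hypothetical illustration: water flux across one membrane element,
# J_w = A * (dP - sigma * d_pi), with van't Hoff osmotic pressure pi = i*c*R*T.
R = 8.314          # gas constant, J/(mol K)
T = 298.15         # temperature, K
i = 2              # van't Hoff factor for a 1:1 salt such as NaCl
A = 3e-12          # water permeability, m/(s Pa) (assumed value)
sigma = 0.95       # reflection coefficient (assumed value)

def osmotic_pressure(c_molar):
    """van't Hoff relation: pi = i * c * R * T (c in mol/m^3 gives Pa)."""
    return i * c_molar * R * T

def water_flux(dP, c_feed, c_perm):
    """Volumetric water flux in m/s for applied pressure dP (Pa)."""
    d_pi = osmotic_pressure(c_feed) - osmotic_pressure(c_perm)
    return A * (dP - sigma * d_pi)

# ~600 mol/m^3 (roughly seawater NaCl) feed, dilute permeate, 55 bar applied.
J = water_flux(55e5, 600.0, 10.0)
print(J)  # m/s; positive means net permeation
```

The DSPM-DE model replaces this lumped picture with pore-level steric, Donnan and dielectric exclusion terms per solute.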
Huang, Jen-Cheng. "Efficient simulation techniques for large-scale applications." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53963.
Verdugo Retamal, Cristian Andrés. "Photovoltaic power converter for large scale applications." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/672343.
Most large-scale photovoltaic systems use a centralised configuration in which converters with two or three output voltage levels are connected to photovoltaic panels. With the development of multilevel converters, new topologies have appeared to replace the configurations currently used in photovoltaic applications, reducing the need for large filters, increasing the operating voltage levels and improving power quality. One of the main challenges for multilevel converters in large-scale photovoltaic applications is the presence of leakage currents and floating voltages caused by the significant number of power modules connected in series. To address this problem, multilevel converters include high- or low-frequency transformers, which provide galvanic isolation to the photovoltaic panels. The Cascaded H-Bridge converter with high-frequency transformers in a second conversion stage has proved a promising solution for large-scale applications, since it eliminates the floating-voltage problem and also provides a control stage independent of the dc bus. In an effort to integrate transformers on the ac side and avoid a second conversion stage, Cascaded Transformer Multilevel Inverters (CTMI) have been proposed for photovoltaic applications. These configurations use the transformer secondary windings to create the series connection, while each primary is connected to a power module, satisfying the isolation requirements and offering different winding connections with which to build symmetric and asymmetric configurations.
Considering the requirements of multilevel converters for large-scale photovoltaic applications, the main purpose of this thesis is to develop a multilevel converter configuration that provides galvanic isolation to all of its modules, together with independent control of the generated power. The proposed configuration, called the Isolated Multi-Modular Converter (IMMC), provides galvanic isolation through transformers on the ac side. The IMMC consists of two groups of series-connected power modules, called branches, which are interconnected in parallel. The power modules are three-phase voltage source converters connected to individual groups of photovoltaic panels, while their ac sides are connected to low-frequency three-phase transformers, so several isolated modules can be connected in series. Because the power generated by photovoltaic panels depends on environmental conditions, the modules are likely to generate different power levels. The IMMC must tolerate this scenario and provide high flexibility in power regulation. This thesis therefore proposes two control strategies that regulate the power flow of each module through the dc-stage voltage and the branch current: Amplitude Voltage Compensation (AVC) regulates the amplitude of the modulation index, while Quadrature Voltage Compensation (QVC) regulates the phase angle. It is further shown that combining both strategies increases the capacity to tolerate power imbalances, providing greater flexibility. The IMMC was modelled and validated through simulation results, a control strategy was proposed to regulate the total generated power, and a 10 kW prototype was built to support the simulation results.
The study considers a grid-connected IMMC operating under different power conditions, demonstrating high flexibility.
Electric power systems
Branco, Miguel. "Distributed data management for large scale applications." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/72283/.
Van, Mai Vien. "Large-Scale Optimization With Machine Learning Applications." Licentiate thesis, KTH, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263147.
McKenzie, Donald. "Modeling large-scale fire effects: concepts and applications." Thesis, University of Washington (UW restricted), 1998. http://hdl.handle.net/1773/5602.
Lu, Haihao. "Large-scale optimization methods for data-science applications." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122272.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 203-211).
In this thesis, we present several contributions to large-scale optimization methods with applications in data science and machine learning. In the first part, we present new computational methods, and associated computational guarantees, for solving convex optimization problems using first-order methods. We consider a general convex optimization problem for which we presume knowledge of a strict lower bound (as arises, for example, in empirical risk minimization in machine learning). We introduce a new functional measure called the growth constant of the convex objective function, which measures how quickly the level sets grow relative to the function value and plays a fundamental role in the complexity analysis. Based on this measure, we present new computational guarantees for both smooth and non-smooth convex optimization that improve on existing guarantees in several ways, most notably when the initial iterate is far from the optimal solution set.
The usual approach to developing and analyzing first-order methods for convex optimization assumes that either the gradient of the objective function is uniformly continuous (in the smooth setting) or the objective function itself is uniformly continuous. However, in many settings, especially in machine learning applications, the convex function satisfies neither assumption; examples include the Poisson linear inverse model, the D-optimal design problem, and the support vector machine problem. In the second part, we develop notions of relative smoothness, relative continuity and relative strong convexity that are determined relative to a user-specified "reference function" (which should be computationally tractable for algorithms), and we show that many differentiable convex functions are relatively smooth or relatively continuous with respect to a correspondingly fairly simple reference function.
We extend the mirror descent algorithm to this new setting, with associated computational guarantees. Gradient Boosting Machine (GBM), introduced by Friedman, is an extremely powerful supervised learning algorithm that is widely used in practice: it routinely features as a leading algorithm in machine learning competitions such as Kaggle and the KDDCup. In the third part, we propose the Randomized Gradient Boosting Machine (RGBM) and the Accelerated Gradient Boosting Machine (AGBM). RGBM leads to significant computational gains compared to GBM by using a randomization scheme to reduce the search in the space of weak learners. AGBM incorporates Nesterov's acceleration techniques into the design of GBM, and is the first GBM-type algorithm with a theoretically-justified accelerated convergence rate. We demonstrate the effectiveness of RGBM and AGBM over GBM in obtaining a model with good training and/or testing data fidelity.
by Haihao Lu.
Ph. D. in Mathematics and Operations Research
Massachusetts Institute of Technology, Department of Mathematics
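The mirror descent setting mentioned in this abstract has a concrete classical instance: with the negative-entropy reference function on the probability simplex, the Bregman proximal step reduces to a multiplicative-weights update. The sketch below uses a toy quadratic objective chosen for illustration; it is not an example from the thesis:

```python
import math

def mirror_descent_simplex(grad, x0, L, iters):
    """Mirror descent with reference function h(x) = sum_i x_i log x_i.
    With this h, each Bregman proximal step with step size 1/L becomes
    a multiplicative-weights update followed by renormalisation."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        w = [xi * math.exp(-gi / L) for xi, gi in zip(x, g)]
        s = sum(w)
        x = [wi / s for wi in w]
    return x

# Toy objective: f(x) = 0.5 * ||x - y||^2 over the probability simplex,
# whose minimiser is y itself (y lies on the simplex).
y = [0.5, 0.3, 0.2]
grad = lambda x: [xi - yi for xi, yi in zip(x, y)]
x_star = mirror_descent_simplex(grad, [1/3, 1/3, 1/3], L=1.0, iters=500)
print(x_star)  # approaches y
```

Here f is 1-smooth relative to the entropy reference function on the simplex, so the fixed step 1/L is valid in the relative-smoothness sense the abstract describes.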
Justinia, Taghreed. "Implementing large-scale healthcare information systems : the technological, managerial and behavioural issues." Thesis, Swansea University, 2009. https://cronfa.swan.ac.uk/Record/cronfa42224.
Wang, Xudong. "Large-Scale Patterned Oxide Nanostructures: Fabrication, Characterization and Applications." Diss., Georgia Institute of Technology, 2005. http://etd.gatech.edu/theses/available/etd-11212005-142143/.
Committee: Wang, Zhong Lin (Chair); Summers, Christopher J. (Co-Chair); Wong, C. P.; Dupuis, Russell D.; Wagner, Brent.
Morari, Alessandro. "Scalable system software for high performance large-scale applications." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/144564.
Du, Jian, and 杜健. "Distributed estimation in large-scale networks: theories and applications." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/197090.
Electrical and Electronic Engineering
Doctor of Philosophy
Deng, Jie. "Profiling large-scale live video streaming and distributed applications." Thesis, Queen Mary, University of London, 2018. http://qmro.qmul.ac.uk/xmlui/handle/123456789/43948.
Fragkos, I. "Large-scale optimisation in operations management : algorithms and applications." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1413951/.
Uppala, Roshni. "Simulating Large Scale Memristor Based Crossbar for Neuromorphic Applications." University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1429296073.
Benjaminsson, Simon. "On large-scale neural simulations and applications in neuroinformatics." Doctoral thesis, KTH, Beräkningsbiologi, CB, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-122190.
Pottier, Loïc. "Co-scheduling for large-scale applications : memory and resilience." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEN039/document.
This thesis explores co-scheduling problems in the context of large-scale applications, with two main focuses: the memory side, in particular the cache memory, and the resilience side. With the recent advent of many-core architectures such as chip multiprocessors (CMP), the number of processing units is increasing. In this context, the benefits of co-scheduling techniques have been demonstrated. Recall that the main idea behind co-scheduling is to execute applications concurrently rather than in sequence in order to improve the global throughput of the platform. But sharing resources often generates interferences. With the growing number of processing units accessing the same last-level cache, these interferences among co-scheduled applications become critical. In addition, with that increasing number of processors, the probability of a failure increases too. Resilience aspects must therefore be taken into account, especially for co-scheduling, because failure-prone resources might be shared between applications. On the memory side, we focus on the interferences in the last-level cache; one solution used to reduce these interferences is cache partitioning. Extensive simulations demonstrate the usefulness of co-scheduling when our efficient cache partitioning strategies are deployed. We also investigate the same problem on a real cache-partitioned chip multiprocessor, using the Cache Allocation Technology recently provided by Intel. Still on the memory side, we then study how to model and schedule task graphs on the new many-core architectures, such as the Knights Landing architecture. These architectures offer a new level in the memory hierarchy through a new on-package high-bandwidth memory.
Current approaches usually do not take this new memory level into account, yet new scheduling algorithms and data partitioning schemes are needed to take advantage of this deep memory hierarchy. On the resilience side, we explore the impact of failures on co-scheduling performance. The co-scheduling approach has been demonstrated in a fault-free context, but large-scale computer systems are confronted with frequent failures, and resilience techniques must be employed for large applications to execute efficiently. Indeed, failures may create severe imbalance between applications and significantly degrade performance. We aim at minimizing the expected completion time of a set of co-scheduled applications in a failure-prone context by redistributing processors.
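Cache partitioning between co-scheduled applications, as discussed in this abstract, is often approached with a greedy marginal-gain allocation of cache ways; the sketch below shows that common textbook heuristic (which is optimal for concave speedup curves), not necessarily the thesis's own strategy, and the throughput curves are invented:

```python
# Hypothetical sketch: greedily partition last-level-cache ways between
# co-scheduled applications to maximise combined throughput.
def partition_cache(curves, total_ways):
    """curves[i][w] = throughput of app i when given w cache ways
    (monotone, concave lists). Returns ways assigned per application."""
    alloc = [0] * len(curves)
    for _ in range(total_ways):
        # Give the next way to the application with the largest marginal gain.
        gains = [curves[i][alloc[i] + 1] - curves[i][alloc[i]]
                 for i in range(len(curves))]
        best = gains.index(max(gains))
        alloc[best] += 1
    return alloc

# App 0 is cache-sensitive; app 1 barely benefits beyond two ways.
cache_sensitive = [0, 10, 18, 24, 28, 30, 31, 31.5, 31.8]
cache_insensitive = [0, 8, 12, 13, 13.5, 13.8, 14, 14.1, 14.2]
print(partition_cache([cache_sensitive, cache_insensitive], 8))
```

With Intel's Cache Allocation Technology, such an allocation would then be enforced by programming way masks per class of service.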
Nie, Bin. "GPGPU Reliability Analysis: From Applications to Large Scale Systems." W&M ScholarWorks, 2019. https://scholarworks.wm.edu/etd/1563898932.
Stroud, Caleb Zachary. "Implementing Differential Privacy for Privacy Preserving Trajectory Data Publication in Large-Scale Wireless Networks." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/84548.
Master of Science
Murphy, Kris. "A THEORY OF STEERING COMMITTEE CAPABILITIES FOR IMPLEMENTING LARGE SCALE ENTERPRISE-WIDE INFORMATION SYSTEMS." Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1458218732.
Feldman, Charlotte Hannah. "Smart X-ray optics for large and small scale applications." Thesis, University of Leicester, 2009. http://hdl.handle.net/2381/7833.
Muresan, Adrian. "Scheduling and deployment of large-scale applications on Cloud platforms." PhD thesis, École normale supérieure de Lyon - ENS LYON, 2012. http://tel.archives-ouvertes.fr/tel-00786475.
Wang, Jiechao. "Approaches for contextualization and large-scale testing of mobile applications." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49142.
Kim, Daeki. "Large scale transportation service network design : models, algorithms and applications." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10366.
Hurwitz, Jeremy Scott. "Error-correcting codes and applications to large scale classification systems." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53140.
Includes bibliographical references (p. 37-39).
In this thesis, we study the performance of distributed output coding (DOC) and error-correcting output coding (ECOC) as potential methods for expanding the class of tractable machine-learning problems. Using distributed output coding, we were able to scale a neural-network-based algorithm to handle nearly 10,000 output classes. In particular, we built a prototype OCR engine for Devanagari and Korean texts based upon distributed output coding. We found that the resulting classifiers performed better than existing algorithms, while maintaining small size. Error-correction, however, was found to be ineffective at increasing the accuracy of the ensemble. For each language, we also tested the feasibility of automatically finding a good codebook. Unfortunately, the results in this direction were primarily negative.
by Jeremy Scott Hurwitz.
M.Eng.
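The core ECOC mechanism referenced in this abstract is simple to illustrate: each class is assigned a binary codeword, one binary classifier predicts each bit, and decoding picks the class whose codeword is nearest in Hamming distance. The codewords and syllable labels below are invented for illustration, not taken from the thesis's codebooks:

```python
# Minimal error-correcting output coding (ECOC) sketch: well-separated
# codewords let decoding survive individual bit-classifier mistakes.
CODEBOOK = {
    "ka": (0, 0, 0, 0, 0, 0),
    "kha": (1, 1, 1, 0, 0, 0),
    "ga": (0, 0, 1, 1, 1, 0),
    "gha": (1, 1, 0, 1, 0, 1),
}

def hamming(a, b):
    """Number of positions where two bit tuples differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(predicted_bits):
    """Return the class whose codeword is closest to the predicted bits."""
    return min(CODEBOOK, key=lambda c: hamming(CODEBOOK[c], predicted_bits))

# One bit classifier misfires (first bit flipped from kha's codeword), yet
# decoding still recovers "kha" because the codewords are well separated.
print(decode((0, 1, 1, 0, 0, 0)))
```

The minimum pairwise Hamming distance of the codebook determines how many such bit errors decoding can absorb.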
Lee, John Jaesung. "Efficient object recognition and image retrieval for large-scale applications." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45637.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 91-93).
Algorithms for recognition and retrieval tasks generally call for both speed and accuracy. When scaling up to very large applications, however, we encounter additional significant requirements: adaptability and scalability. In many real-world systems, large numbers of images are constantly added to the database, requiring the algorithm to quickly tune itself to recent trends so it can serve queries more effectively. Moreover, the systems need to be able to meet the demands of simultaneous queries from many users. In this thesis, I describe two new algorithms intended to meet these requirements and give an extensive experimental evaluation for both. The first algorithm constructs an adaptive vocabulary forest, which is an efficient image-database model that grows and shrinks as needed while adapting its structure to tune itself to recent trends. The second algorithm is a method for efficiently performing classification tasks by comparing query images to only a fixed number of training examples, regardless of the size of the image database. These two methods can be combined to create a fast, adaptable, and scalable vision system suitable for large-scale applications. I also introduce LIBPMK, a fast implementation of common computer vision processing pipelines such as that of the pyramid match kernel. This implementation was used to build several successful interactive applications as well as batch experiments for research settings. This implementation, together with the two new algorithms introduced in this thesis, is a step toward meeting the speed, adaptability, and scalability requirements of practical large-scale vision systems.
by John Jaesung Lee.
M.Eng.
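A vocabulary tree, the static ancestor of the adaptive vocabulary forest described above, routes each image descriptor down a tree of centroids so that each leaf acts as a "visual word" feeding an inverted index. The toy tree below is fixed and hand-built for illustration; the thesis's forest additionally grows, shrinks and adapts over time:

```python
# Toy sketch of vocabulary-tree quantisation over 2-D descriptors.
def nearest(centroids, v):
    """Index of the centroid closest to v in squared Euclidean distance."""
    return min(range(len(centroids)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(centroids[i], v)))

class VocabNode:
    def __init__(self, centroids, children=None, word_id=None):
        self.centroids = centroids      # one centroid per child branch
        self.children = children or []  # empty for leaves
        self.word_id = word_id          # visual-word id at leaves

    def quantize(self, v):
        if not self.children:
            return self.word_id
        return self.children[nearest(self.centroids, v)].quantize(v)

# Depth-2 tree with 2 branches per level -> 4 visual words.
leaves = [VocabNode([], word_id=i) for i in range(4)]
tree = VocabNode(
    [(0.0, 0.0), (1.0, 1.0)],
    [VocabNode([(0.0, 0.0), (0.0, 1.0)], leaves[0:2]),
     VocabNode([(1.0, 0.0), (1.0, 1.0)], leaves[2:4])])

inverted_index = {}  # visual word -> set of image ids containing it
for image_id, descriptor in [(7, (0.1, 0.1)), (9, (0.9, 0.95)), (7, (0.05, 0.9))]:
    inverted_index.setdefault(tree.quantize(descriptor), set()).add(image_id)
print(inverted_index)
```

Retrieval then scores database images by how many (weighted) visual words they share with the query, touching only the index entries for the query's words.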
Zhou, Yi (Software engineer). "Uncertainty Evaluation in Large-scale Dynamical Systems: Theory and Applications." Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc700073/.
Mahadevan, Karthikeyan. "Estimating reliability impact of biometric devices in large scale applications." Morgantown, W. Va. : [West Virginia University Libraries], 2003. http://etd.wvu.edu/templates/showETD.cfm?recnum=3096.
Title from document title page. Document formatted into pages; contains vii, 66 p. : ill. (some col.). Vita. Includes abstract. Includes bibliographical references (p. 62-64).
Wehbe, Diala. "Simulations and applications of large-scale k-determinantal point processes." Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I012/document.
With the exponentially growing amount of data, sampling remains the most relevant method for learning about populations. Sometimes a larger sample size is needed to generate more precise results and to exclude the possibility of missing key information, but drawing a large sample can itself be prohibitively time-consuming. In this thesis, our aim is to build bridges between applications of statistics and the k-Determinantal Point Process (k-DPP), which is defined through a matrix kernel. We propose different applications for sampling large data sets based on the k-DPP, a conditional DPP that models only sets of cardinality k. The goal is to select diverse sets that cover a much greater set of objects in polynomial time. This can be achieved by constructing different Markov chains which have the k-DPPs as their stationary distribution. The first application consists in sampling a subset of species in a phylogenetic tree while avoiding redundancy. By defining the k-DPP via an intersection kernel, the results provide a fast-mixing sampler for the k-DPP, for which a polynomial bound on the mixing time is presented that depends on the height of the phylogenetic tree. The second application shows how k-DPPs offer a powerful approach to finding a diverse subset of nodes in a large connected graph, which gives an outline of the different types of information related to the ground set. A polynomial bound on the mixing time of the proposed Markov chain is given, where the kernel used is the Moore-Penrose pseudo-inverse of the normalized Laplacian matrix; the resulting mixing time is attained under certain conditions on the eigenvalues of the Laplacian matrix. The third application uses the fixed-cardinality DPP in experimental design as a tool to study a Latin Hypercube Sampling (LHS) of order n.
The key is to propose a DPP kernel that establishes negative correlations between the selected points and preserves the constraint of the design, namely that each point occurs exactly once in each hyperplane. By creating a new Markov chain which has the n-DPP as its stationary distribution, we determine the number of steps required to build an LHS in accordance with the n-DPP.
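The Markov chains mentioned in this abstract are typically exchange (swap) chains: propose replacing one selected item with one unselected item and accept with a Metropolis ratio of subkernel determinants, so the stationary law is P(S) ∝ det(L_S) over sets of size k. The sketch below is the generic textbook chain with a small invented similarity kernel, not the thesis's intersection or Laplacian-based kernels:

```python
import random

def det(m):
    """Determinant by Gaussian elimination with partial pivoting (small matrices)."""
    m = [row[:] for row in m]
    n, d = len(m), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))
        if abs(m[p][i]) < 1e-12:
            return 0.0
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

def submatrix(L, S):
    return [[L[i][j] for j in S] for i in S]

def kdpp_exchange(L, k, steps, rng):
    """Metropolis exchange chain whose stationary law is the k-DPP P(S) ~ det(L_S)."""
    n = len(L)
    S = rng.sample(range(n), k)
    for _ in range(steps):
        i = rng.choice(S)                                    # element to drop
        j = rng.choice([x for x in range(n) if x not in S])  # element to add
        T = [x for x in S if x != i] + [j]
        ratio = det(submatrix(L, T)) / max(det(submatrix(L, S)), 1e-300)
        if rng.random() < min(1.0, ratio):
            S = T
    return sorted(S)

# Similarity kernel over 5 items; items 0 and 1 are near-duplicates, so a
# size-2 sample rarely contains both (det of their subkernel is small).
L = [[1.0, 0.9, 0.1, 0.1, 0.1],
     [0.9, 1.0, 0.1, 0.1, 0.1],
     [0.1, 0.1, 1.0, 0.2, 0.1],
     [0.1, 0.1, 0.2, 1.0, 0.1],
     [0.1, 0.1, 0.1, 0.1, 1.0]]
print(kdpp_exchange(L, k=2, steps=200, rng=random.Random(0)))
```

The thesis's contribution lies in bounding how many such steps are needed (the mixing time) for specific structured kernels, not in the chain itself.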
Biel, Martin. "Distributed Stochastic Programming with Applications to Large-Scale Hydropower Operations." Licentiate thesis, KTH, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263616.
Kutlu, Mucahid. "Parallel Processing of Large Scale Genomic Data." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1436355132.
Wessman, Love, and Niklas Wessman. "Threat modeling of large-scale computer systems : Implementing and evaluating threat modeling at Company X." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280099.
Threat modeling is a growing field in cyber security. As computer systems grow larger and more complicated, knowledge of how to model and protect these systems becomes increasingly important. Threat modeling is a tool well suited to this task. This report focuses on improving cyber security at Company X and contributing to the development of threat modeling, which in turn helps strengthen cyber security research. The main question investigated is what results can be achieved by implementing the KTH Threat Modeling Method on specific systems at Company X. The question is answered by applying the method to the specified systems, after which the experience gained and the results produced are used to evaluate the threat modeling method. The resulting model indicates that the greatest risk in the examined systems at Company X lies in their internet-connected smoke detectors and the smart meters that measure water and electricity consumption. The recommendations given include protecting against SQL injections by keeping the systems updated and validating input data. The main impressions gained from implementing the threat modeling method at Company X are that the method is easy to use, learn and understand. Another finding is that the more information the threat modeler has about the systems being explored, the more accurate the threat model can become. The method is ideally suited to a single, unified piece of software, rather than to modeling several non-unified systems in one and the same iteration of the method, which is what this report does. To simplify the process of learning the method, a comprehensive written resource such as a book would be of great help. To improve the method itself, the integration of automated attack simulation and modeling tools is suggested. The KTH Threat Modeling Method is an iterative process.
The model can and should be improved by continuously iterating over the work several times, increasing the model's level of detail with each iteration. What is presented in this report is the first iteration of that process. The results show that although the threat modeling method is already a mature method that can produce meaningful threat modeling results, there are still parts that can be improved or added, which according to the authors would increase the strength of the method in general.
Papacharalampos, Georgios. "Small scale/large scale MFC stacks for improved power generation and implementation in robotic applications." Thesis, University of the West of England, Bristol, 2016. http://eprints.uwe.ac.uk/27396/.
Nytén, Anton. "Low-Cost Iron-Based Cathode Materials for Large-Scale Battery Applications." Doctoral thesis, Uppsala University, Department of Materials Chemistry, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-6842.
There are today clear indications that the Li-ion battery of the type currently used worldwide in mobile phones and laptops is also destined to soon become the battery of choice in more energy-demanding concepts such as electric and electric hybrid vehicles (EVs and EHVs). Since the currently used cathode materials (typically of the Li(Ni,Co)O2 type) are too expensive for large-scale applications, these new batteries will have to exploit some much cheaper transition metal. Ideally, this should be the very cheapest, iron (Fe), in combination with a graphite (C) based anode. In this context, the obvious Fe-based active cathode of choice appears to be LiFePO4. A second and in some ways even more attractive material, Li2FeSiO4, has emerged during the course of this work.
An effort has here been made to understand the Li extraction/insertion mechanism on electrochemical cycling of Li2FeSiO4. A fascinating picture has emerged (following a complex combination of Mössbauer, X-ray diffraction and electrochemical studies) in which the material is seen to cycle between Li2FeSiO4 and LiFeSiO4, but with the structure of the original Li2FeSiO4 transforming from a metastable short-range ordered solid-solution into a more stable long-range ordered structure during the first cycle. Density Functional Theory calculations on the Li2FeSiO4 and delithiated LiFeSiO4 structures provide an interesting insight into the experimental results.
Photoelectron spectroscopy was used to study the surface chemistry of both carbon-treated LiFePO4 and Li2FeSiO4 after electrochemical cycling. The surface-layer on both materials was concluded to be very thin and with incomplete coverage, giving the promise of good long-term cycling.
LiFePO4 and Li2FeSiO4 should both be seen as highly promising candidates as positive-electrode materials for large-scale Li-ion battery applications.
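The cycling between Li2FeSiO4 and LiFeSiO4 described above corresponds to one Li per formula unit, and Faraday's law gives the resulting theoretical capacity as a quick worked check (standard atomic weights; the well-known result is roughly 166 mAh/g):

```python
# Theoretical gravimetric capacity from Faraday's law:
# Q = n * F / (3.6 * M) in mAh/g, for n = 1 Li per Li2FeSiO4 formula unit.
F = 96485.0  # Faraday constant, C/mol
atomic_mass = {"Li": 6.941, "Fe": 55.845, "Si": 28.086, "O": 15.999}  # g/mol

M = (2 * atomic_mass["Li"] + atomic_mass["Fe"]
     + atomic_mass["Si"] + 4 * atomic_mass["O"])  # molar mass of Li2FeSiO4
capacity = 1 * F / (3.6 * M)                      # mAh per gram

print(round(M, 2), round(capacity, 1))  # ~161.81 g/mol, ~165.6 mAh/g
```

The same arithmetic for LiFePO4 (M about 157.8 g/mol) gives about 170 mAh/g, which is why both materials are in the same capacity class.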
Gutin, Eli. "Practical applications of large-scale stochastic control for learning and optimization." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120191.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 183-188).
This thesis explores a variety of techniques for large-scale stochastic control. These range from simple heuristics that are motivated by the problem structure and are amenable to analysis, to more general deep reinforcement learning (RL), which applies to broader classes of problems but is trickier to reason about. In the first part of this thesis, we explore a lesser-known application of stochastic control: multi-armed bandits. By assuming a Bayesian statistical model, we get enough problem structure to formulate an MDP to maximize total rewards. If the objective involved total discounted rewards over an infinite horizon, then the celebrated Gittins index policy would be optimal. Unfortunately, the analysis there does not carry over to the non-discounted, finite-horizon problem. In this work, we propose a tightening sequence of 'optimistic' approximations to the Gittins index. We show that the use of these approximations, together with an increasing discount factor, appears to offer a compelling alternative to state-of-the-art algorithms. We prove that these optimistic indices constitute a regret-optimal algorithm, in the sense of meeting the Lai-Robbins lower bound, including matching constants. The second part of the thesis focuses on the collateral management problem (CMP). In this work, we study the CMP, faced by a prime brokerage, through the lens of multi-period stochastic optimization. We find that, for a large class of CMP instances, algorithms that select collateral based on appropriately computed asset prices are near-optimal. In addition, we back-test the method on data from a prime brokerage and find substantial increases in revenue. Finally, in the third part, we propose novel deep reinforcement learning (DRL) methods for option pricing and portfolio optimization problems.
Our work on option pricing enables one to compute tighter confidence bounds on the price, using the same number of Monte Carlo samples, than existing techniques. We also examine constrained portfolio optimization problems and test out policy gradient algorithms that work with somewhat different objective functions. These new objectives measure the performance of a projected version of the policy and penalize constraint violation.
by Eli Gutin.
Ph. D.
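As background to the bandit abstract above: the classical way to define a Gittins-style index is by calibration against a "retirement" option. The sketch below computes such an index for a Bernoulli arm with a Beta posterior over a finite discounted horizon, by dynamic programming plus bisection. It is an illustrative toy of the classical construction only, not the thesis's sequence of optimistic approximations; the function name and parameters are ours.

```python
def optimistic_index(a, b, horizon=30, discount=0.95, tol=1e-4):
    """Finite-horizon, discounted index for a Bernoulli arm with a
    Beta(a, b) posterior: the per-step retirement reward lam at which
    playing the arm and retiring become equally attractive."""

    def plays_beat_retirement(lam):
        memo = {}

        def value(s, f, t):
            # Optimal value from step t after s successes / f failures,
            # when we may retire at any step for reward lam per step.
            if t == horizon:
                return 0.0
            key = (s, f, t)
            if key in memo:
                return memo[key]
            retire = lam * (1 - discount ** (horizon - t)) / (1 - discount)
            p = (a + s) / (a + s + b + f)   # posterior mean success rate
            play = p * (1 + discount * value(s + 1, f, t + 1)) \
                 + (1 - p) * discount * value(s, f + 1, t + 1)
            memo[key] = max(retire, play)
            return memo[key]

        retire_now = lam * (1 - discount ** horizon) / (1 - discount)
        return value(0, 0, 0) > retire_now + 1e-12

    lo, hi = 0.0, 1.0                       # bisect on the retirement reward
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if plays_beat_retirement(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

The index of a fresh Beta(1, 1) arm exceeds its posterior mean of 0.5: the surplus is exactly the exploration value that index policies capture.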
McVay, Elaine D. "Large scale applications of 2D materials for sensing and energy harvesting." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111925.
Cataloged from PDF version of thesis.
Includes bibliographical references.
In this project we demonstrate the fabrication and characterization of printed reduced graphene oxide strain sensors, Chemical Vapor Deposition (CVD) 2D material transistors, and tungsten diselenide (WSe₂) photovoltaic devices that were produced through a combination of printing and conventional microfabrication processes. Each of these components was designed to fit into a "smart skin" system that could be unobtrusively integrated into its environment and sense it. This thesis document will describe the modification of a 3D printer to give it inkjet capabilities that allow for the direct deposition of graphene oxide flakes onto a 3D printed surface. These graphene oxide flake traces were then reduced, making them more conductive and able to function as strain sensors. Next, this thesis will discuss the development of CVD molybdenum disulfide (MoS₂) and CVD graphene transistors and how they can be modified to function as chemical sensors. Finally, this work will detail the steps taken to design, fabricate, and test a WSe₂ photovoltaic device which is composed of a printed active layer. In summary, these devices can fit into the sensing, communication, and energy harvesting blocks required in realizing a ubiquitous sensing system.
by Elaine D. McVay.
S.M.
Ezeozue, Chidube Donald. "Large-scale consensus clustering and data ownership considerations for medical applications." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/86273.
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 97-101).
An intersection of events has led to a massive increase in the amount of medical data being collected from patients inside and outside the hospital. These events include the development of new sensors, the continuous decrease in the cost of data storage, the development of Big Data algorithms in other domains and the Health Information Technology for Economic and Clinical Health (HITECH) Act's $20 billion incentive for hospitals to install and use Electronic Health Record (EHR) systems. The data being collected presents an excellent opportunity to improve patient care. However, this opportunity is not without its challenges. Some of the challenges are technical in nature, not the least of which is how to efficiently process such massive amounts of data. At the other end of the spectrum, there are policy questions that deal with data privacy, confidentiality and ownership to ensure that research continues unhindered while preserving the rights and interests of the stakeholders involved. This thesis addresses both ends of the challenge spectrum. First of all, we design and implement a number of methods for automatically discovering groups within large amounts of data, otherwise known as clustering. We believe this technique would prove particularly useful in identifying patient states, segregating cohorts of patients and hypothesis generation. Specifically, we scale a popular clustering algorithm, Expectation-Maximization (EM) for Gaussian Mixture Models, to run on a cloud of computers. We also devote particular attention to consensus clustering, which allows multiple clusterings to be merged into a single ensemble clustering. Here, we scale one existing consensus clustering algorithm, which relies on EM for multinomial mixture models. We also develop and implement a more general framework for retrofitting any consensus clustering algorithm and making it amenable to streaming data as well as distribution on a cloud.
On the policy end of the spectrum, we argue that the issue of data ownership is essential and highlight how the law in the United States has handled this issue in the past several decades, focusing on common law and state law approaches. We proceed to identify the flaws, especially the fragmentation, in the current system and make recommendations for a more equitable and efficient policy stance. The recommendations center on codifying the policy stance in Federal Law and allocating the property rights of the data to both the healthcare provider and the patient.
by Chidube Donald Ezeozue.
S.M. in Technology and Policy
S.M.
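The reason EM for Gaussian mixtures scales to a cloud, as the abstract above describes, is that the E-step reduces each data shard to additive sufficient statistics that a single reduce step can merge. The sketch below illustrates this in one dimension; the function names, the 1-D simplification, and the four-shard split are our illustrative choices, not code from the thesis.

```python
import numpy as np

def shard_stats(X, pi, mu, var):
    """E-step on one data shard ("map"): responsibilities under a 1-D
    Gaussian mixture, reduced to additive sufficient statistics."""
    # log pi_k + log N(x | mu_k, var_k), shape (n, K)
    log_p = (-0.5 * (X[:, None] - mu[None, :]) ** 2 / var[None, :]
             - 0.5 * np.log(2 * np.pi * var[None, :]) + np.log(pi[None, :]))
    m = log_p.max(axis=1, keepdims=True)               # stable log-sum-exp
    log_norm = m + np.log(np.exp(log_p - m).sum(axis=1, keepdims=True))
    r = np.exp(log_p - log_norm)                       # responsibilities
    return r.sum(0), r.T @ X, r.T @ (X ** 2)           # N_k, sum x, sum x^2

def m_step(stats):
    """Reduce: merge per-shard statistics, then re-estimate parameters."""
    Nk = sum(s[0] for s in stats)
    Sx = sum(s[1] for s in stats)
    Sxx = sum(s[2] for s in stats)
    pi = Nk / Nk.sum()
    mu = Sx / Nk
    var = np.maximum(Sxx / Nk - mu ** 2, 1e-6)         # variance floor
    return pi, mu, var

# Synthetic data from two well-separated components.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 0.5, 500), rng.normal(5.0, 0.5, 500)])
shards = np.array_split(X, 4)                          # pretend 4 workers
pi, mu, var = np.array([0.5, 0.5]), np.array([X.min(), X.max()]), np.array([1.0, 1.0])
for _ in range(50):
    pi, mu, var = m_step([shard_stats(s, pi, mu, var) for s in shards])
```

Because only the three fixed-size statistics cross the network per shard and per iteration, the communication cost is independent of the shard sizes, which is what makes the cloud distribution practical.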
Scarciotti, Giordano. "Approximation, analysis and control of large-scale systems : theory and applications." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/30781.
Moise, Diana Maria. "Optimizing data management for MapReduce applications on large-scale distributed infrastructures." Thesis, Cachan, Ecole normale supérieure, 2011. http://www.theses.fr/2011DENS0067/document.
Data-intensive applications are nowadays widely used in various domains to extract and process information, to design complex systems, to perform simulations of real models, etc. These applications exhibit challenging requirements in terms of both storage and computation. Specialized abstractions like Google's MapReduce were developed to efficiently manage the workloads of data-intensive applications. The MapReduce abstraction has revolutionized the data-intensive community and has rapidly spread to various research and production areas. An open-source implementation of Google's abstraction was provided by Yahoo! through the Hadoop project. This framework is considered the reference MapReduce implementation and is currently heavily used for various purposes and on several infrastructures. To achieve high-performance MapReduce processing, we propose a concurrency-optimized file system for MapReduce frameworks. As a starting point, we rely on BlobSeer, a framework that was designed as a solution to the challenge of efficiently storing data generated by data-intensive applications running at large scales. We have built the BlobSeer File System (BSFS), with the goal of providing high throughput under heavy concurrency to MapReduce applications. We also study several aspects related to intermediate data management in MapReduce frameworks. We investigate the requirements of MapReduce intermediate data at two levels: inside the same job, and during the execution of pipeline applications. Finally, we show how BSFS can enable extensions to the de facto MapReduce implementation, Hadoop, such as support for the append operation.
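The MapReduce abstraction referred to in the abstract above can be condensed into three phases: map, shuffle, and reduce. The sketch below is a minimal in-memory word count, purely to fix the model's shape; it is not Hadoop or BSFS code, and the intermediate data the thesis studies is what the shuffle stage materializes at scale.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """map: emit (key, value) pairs; here, (word, 1) per word."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """shuffle: group intermediate values by key. At scale this is the
    phase that generates the intermediate data managed by the framework."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """reduce: fold each key's list of values into a final result."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(chain.from_iterable(map(map_phase, docs))))
```

In a real deployment each `map_phase` call runs on a different worker against a file-system chunk, which is why the throughput of the underlying storage (HDFS, or BSFS in this work) dominates job performance.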
Asif, Fayyaz Muhammad. "Achieving Robust Self Management for Large Scale Distributed Applications using Management Elements." Thesis, KTH, School of Information and Communication Technology (ICT), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-24229.
Autonomic computing is an approach proposed by IBM that enables a system to self-configure, self-heal, self-optimize, and self-protect, usually referred to as self-* or self-management. Humans should only specify higher-level policies to guide the self-* behavior of the system.
Self-management is achieved using control feedback loops that consist of four stages: monitor, analyze, plan, and execute. Management is more challenging in dynamic distributed environments where resources can join, leave, and fail. To address this problem, a Distributed Component Management System (DCMS), a.k.a. Niche, is being developed at KTH and SICS (Swedish Institute of Computer Science). DCMS provides abstractions that enable the construction of distributed control feedback loops. Each loop consists of a number of management elements (MEs) that perform one or more of the four stages of the control loop mentioned above.
The current implementation of DCMS assumes that management elements (MEs) are deployed on stable nodes that do not fail. This assumption is difficult to guarantee in many environments and application scenarios. One solution to this limitation is to replicate MEs so that if one fails, the other MEs can continue working and restore the failed one. The problem is that MEs are stateful. We need to keep the state consistent among replicas. We also want to be sure that all events are processed (nothing is lost) and all actions are applied exactly once.
This report explains a proposal for the replication of stateful MEs within the DCMS framework. For improved scalability, load balancing and fault tolerance, different advances in the field of replicated state machines have been taken into account and are discussed in this report. Chord has been used as the underlying structured overlay network (SON). This report also describes a prototype implementation of this proposal and discusses the results.
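The monitor-analyze-plan-execute loop described in the abstract above can be made concrete with a deliberately small, centralised sketch. Everything here is hypothetical illustration (the `system` dict, the load model, the `add_worker` action): DCMS/Niche instead splits these four stages across distributed MEs and delivers events between them, which is exactly why replicating stateful MEs matters.

```python
def control_loop(system, threshold=0.8, max_iters=10):
    """A toy MAPE feedback loop: scale out workers until load is acceptable.
    Each commented line corresponds to one ME stage in a DCMS-style loop."""
    for _ in range(max_iters):
        load = system["demand"] / system["workers"]      # monitor: sense state
        overloaded = load > threshold                    # analyze: detect symptom
        actions = ["add_worker"] if overloaded else []   # plan: choose a repair
        if not actions:
            break
        for action in actions:                           # execute: actuate
            if action == "add_worker":
                system["workers"] += 1
    return system

state = control_loop({"demand": 1.8, "workers": 1})
```

In the replicated-ME setting of this report, each stage above would be a state machine whose inputs (events) and outputs (actions) must be delivered exactly once to keep replicas consistent.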
Putthividhya, Wanida. "Quality of service (QoS) support for multimedia applications in large-scale networks." [Ames, Iowa : Iowa State University], 2006.
Seal, Sudip K. "Parallel methods for large-scale applications in computational electromagnetics and materials science." [Ames, Iowa : Iowa State University], 2007.
Tor, Ali Hakan. "Derivative Free Algorithms For Large Scale Non-smooth Optimization And Their Applications." PhD thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615700/index.pdf.
Liu, X. "Kernel methods for large-scale process monitoring with applications in power systems." Thesis, Queen's University Belfast, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.517392.
Wiersma, Mark Edward. "Standards and benchmark tests for evaluating large scale manipulators with construction applications." Thesis, Monterey, California. Naval Postgraduate School, 1995. http://hdl.handle.net/10945/26265.
Sunderland, Andrew Gareth. "Large scale applications on distributed-memory parallel computers using efficient numerical methods." Thesis, University of Liverpool, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.367976.
Wollenweber, Fritz Georg. "Programming environments and tools for massively parallel computers and large scale applications." Thesis, University of Southampton, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266943.
Eltayeb, Mohammed Soleiman. "Efficient data scheduling for real-time large-scale data-intensive distributed applications." The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1095719463.