Dissertations / Theses on the topic 'Electrically large analysis'

To see the other types of publications on this topic, follow the link: Electrically large analysis.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles.


Consult the top 50 dissertations / theses for your research on the topic 'Electrically large analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Keghie, Jules Alliance Fernand [Verfasser]. "Simplified Analysis of Electrically Large Multi-room Systems / Jules Alliance Fernand Keghie." Aachen : Shaker, 2014. http://d-nb.info/1050342194/34.

2

Zhao, Kezhong. "A domain decomposition method for solving electrically large electromagnetic problems." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1189694496.

3

Tseng, Huan-Wan. "Hybrid analysis of em radiation and scattering by composite slot-blade cavity backed antennas on the surface of electrically large smooth convex cylinders /." The Ohio State University, 1998. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487949508371203.

4

Zhang, Richard Yi. "Robust stability analysis for large-scale power systems." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/108846.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 145-154).
Innovations in electric power systems, such as renewable energy, demand-side participation, and electric vehicles, are all expected to increase variability and uncertainty, making stability verification more challenging. This thesis extends the technique of robust stability analysis to large-scale electric power systems under uncertainty. In the first part of this thesis, we examine the use of the technique to solve real problems faced by grid operators. We present two case studies: small-signal stability for distributed renewables on the IEEE 118-bus test system, and large-signal stability for a microgrid system. In each case study, we show that robust stability analysis can be used to compute stability margins for entire collections of uncertain scenarios. In the second part of this thesis, we develop scalable algorithms to solve robust stability analysis problems on large-scale power systems. We use preconditioned iterative methods to solve the Newton direction computation in the interior-point method, in order to avoid the O(n⁶) time complexity associated with a dense-matrix approach. The per-iteration costs of the iterative methods are reduced to O(n³) through a hierarchical block-diagonal-plus-low-rank structure in the data matrices. We provide evidence that the methods converge to an ε-accurate solution in O(1/√ε) iterations, and characterize two broad classes of problems for which the enhanced convergence is guaranteed.
by Richard Yi Zhang.
Ph. D.
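The complexity reduction described above can be illustrated with a generic sketch: when a Newton-system matrix has block-diagonal-plus-low-rank structure M = B + U Uᵀ, a matrix-vector product is cheap, so a Krylov solver such as conjugate gradients can drive each step without ever forming a dense matrix. Everything below (sizes, matrices, solver settings) is an illustrative assumption, not material from the thesis.

    import numpy as np
    from scipy.sparse import block_diag
    from scipy.sparse.linalg import LinearOperator, cg

    # Illustrative structure: k SPD blocks of size b on the diagonal, plus a rank-r term.
    k, b, r = 50, 20, 5
    n = k * b
    rng = np.random.default_rng(0)
    blocks = [rng.random((b, b)) for _ in range(k)]
    blocks = [Bi @ Bi.T + b * np.eye(b) for Bi in blocks]   # make each block SPD
    B = block_diag(blocks, format="csr")                    # block-diagonal part
    U = rng.random((n, r))                                  # low-rank part U @ U.T

    def matvec(x):
        # Costs O(k*b^2 + n*r) per product instead of O(n^2) for a dense matrix.
        return B @ x + U @ (U.T @ x)

    M = LinearOperator((n, n), matvec=matvec)
    x, info = cg(M, rng.random(n))
    print("CG converged:", info == 0)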
5

Hu, Xin 1979. "Full-wave analysis of large conductor systems over substrate." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35597.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (leaves 137-145).
Designers of high-performance integrated circuits are paying ever-increasing attention to minimizing problems associated with interconnects such as noise, signal delay, crosstalk, etc., many of which are caused by the presence of a conductive substrate. The severity of these problems increases as integrated circuit clock frequencies rise into the multiple gigahertz range. In this thesis, a simulation tool is presented for the extraction of full-wave interconnect impedances in the presence of a conducting substrate. The substrate effects are accounted for through the use of full-wave layered Green's functions in a mixed-potential integral equation (MPIE) formulation. In particular, the choice of implementation for the layered Green's function kernels motivates the development of accelerated techniques for both their 3D volume and 2D surface integrations, where each integration type can be reduced to a sum of 1D line integrals. In addition, a set of high-order, frequency-independent basis functions is developed with the ability to parameterize the frequency-dependent nature of the solution space, hence reducing the number of unknowns required to capture the interconnects' frequency-variant behavior.
(cont.) Moreover, a pre-corrected FFT acceleration technique, conventional for the treatment of scalar Green's function kernels, is extended in the solver to accommodate the dyadic Green's function kernels encountered in the substrate modeling problem. Overall, the integral-equation solver, combined with its numerous acceleration techniques, serves as a viable solution to full-wave substrate impedance extractions of large and complex interconnect structures.
by Xin Hu.
Ph.D.
6

Palmer, Nathan Patrick. "Data mining techniques for large-scale gene expression analysis." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68493.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 238-256).
Modern computational biology is awash in large-scale data mining problems. Several high-throughput technologies have been developed that enable us, with relative ease and little expense, to evaluate the coordinated expression levels of tens of thousands of genes, evaluate hundreds of thousands of single-nucleotide polymorphisms, and sequence individual genomes. The data produced by these assays has provided the research and commercial communities with the opportunity to derive improved clinical prognostic indicators, as well as develop an understanding, at the molecular level, of the systemic underpinnings of a variety of diseases. Aside from the statistical methods used to evaluate these assays, another, more subtle challenge is emerging. Despite the explosive growth in the amount of data being generated and submitted to the various publicly available data repositories, very little attention has been paid to managing the phenotypic characterization of their samples (i.e., managing class labels in a controlled fashion). If sense is to be made of the underlying assay data, the samples' descriptive metadata must first be standardized in a machine-readable format. In this thesis, we explore these issues, specifically within the context of curating and analyzing a large DNA microarray database. We address three main challenges. First, we acquire a large subset of a publicly available microarray repository and develop a principled method for extracting phenotype information from free-text sample labels, then use that information to generate an index of the sample's medically-relevant annotation. The indexing method we develop, Concordia, incorporates pre-existing expert knowledge relating to the hierarchical relationships between medical terms, allowing queries of arbitrary specificity to be efficiently answered. Second, we describe a highly flexible approach to answering the question: "Given a previously unseen gene expression sample, how can we compute its similarity to all of the labeled samples in our database, and how can we utilize those similarity scores to predict the phenotype of the new sample?" Third, we describe a method for identifying phenotype-specific transcriptional profiles within the context of this database, and explore a method for measuring the relative strength of those signatures across the rest of the database, allowing us to identify molecular signatures that are shared across various tissues and diseases. These shared fingerprints may form a quantitative basis for optimal therapy selection and drug repositioning for a variety of diseases.
by Nathan Patrick Palmer.
Ph.D.
7

Ford, Logan H. "Large-scale acoustic scene analysis with deep residual networks." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123026.

Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 63-66).
Many of the recent advances in audio event detection, particularly on the AudioSet dataset, have focused on improving performance using the released embeddings produced by a pre-trained model. In this work, we instead study the task of training a multi-label event classifier directly from the audio recordings of AudioSet. Using the audio recordings, not only are we able to reproduce results from prior work, we have also confirmed improvements of other proposed additions, such as an attention module. Moreover, by training the embedding network jointly with the additions, we achieve a mean Average Precision (mAP) of 0.392 and an area under ROC curve (AUC) of 0.971, surpassing the state-of-the-art without transfer learning from a large dataset. We also analyze the output activations of the network and find that the models are able to localize audio events when a finer time resolution is needed. In addition, we use this model in exploring multimodal learning, transfer learning, and realtime sound event detection tasks.
by Logan H. Ford.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
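For readers unfamiliar with the "deep residual networks" of the title, the building block is a short stack of convolutions whose output is added back to its input. The block below is a generic PyTorch example, not the architecture trained in the thesis; the spectrogram-shaped input is only an assumption.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """A basic 2-D residual block: two convolutions plus an identity skip."""
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + x)          # skip connection

    x = torch.randn(4, 64, 64, 100)            # (batch, channels, mel bins, frames)
    print(ResidualBlock(64)(x).shape)          # torch.Size([4, 64, 64, 100])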
8

Sridharan, Ramesh. "Visualization and analysis of large medical image collections using pipelines." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99849.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Title as it appears in MIT Commencement Exercises program, June 5, 2015: Visualization and analysis of computational pipelines for large medical image collections. Cataloged from PDF version of thesis.
Includes bibliographical references (pages 80-100).
Medical image analysis often requires developing elaborate algorithms that are implemented as computational pipelines. A growing number of large medical imaging studies necessitate development of robust and flexible pipelines. In this thesis, we present contributions of two kinds: (1) an open source framework for building pipelines to analyze large scale medical imaging data that addresses these challenges, and (2) two case studies of large scale analyses of medical image collections using our tool. Our medical image analysis pipeline construction tool, PipeBuilder, is designed for constructing pipelines to analyze complex data where iterative refinement and development are necessary. We provide a lightweight scripting framework that enables the use of existing and novel algorithms in pipelines. We also provide a set of tools to visualize the pipeline's structure, data processing status, and intermediate and final outputs. These visualizations enable interactive analysis and quality control, facilitating computation on large collections of heterogeneous images. We employ PipeBuilder first to analyze white matter hyperintensity in stroke patients. Our study of this cerebrovascular pathology consists of three main components: accurate registration to enable data fusion and population analysis, segmentation to automatically delineate pathology from the images, and statistical analysis to extract clinical insight using the images and the derived measures. Our analysis explores the relationship between the spatial distribution, quantity, and growth of white matter hyperintensity. Our next application of PipeBuilder is to a neuroimaging study of Alzheimer's patients, where we explicitly characterize changes over time using longitudinal data. As with the previous application, we introduce a workflow that involves registration, segmentation, and statistical analysis. Our registration pipeline aligns the large, heterogeneous group of populations while still accurately characterizing small changes in each patient over time. The statistical analysis exploits this alignment to explore the change in white matter hyperintensity over time.
by Ramesh Sridharan.
Ph. D.
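The pipeline idea described above, stages with declared dependencies that are executed in order and can be re-run as the analysis is refined, can be sketched in a few lines. This is a generic dependency-graph example, not PipeBuilder's actual API; the stage names are hypothetical.

    from graphlib import TopologicalSorter

    # Hypothetical stages: name -> (dependencies, action).
    stages = {
        "register": ((), lambda: print("aligning images")),
        "segment": (("register",), lambda: print("delineating pathology")),
        "stats": (("register", "segment"), lambda: print("running statistics")),
    }

    # Run every stage after its dependencies; each action could launch a per-subject batch job.
    for name in TopologicalSorter({n: d for n, (d, _) in stages.items()}).static_order():
        stages[name][1]()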
9

Lucas, Christopher G. "Patent semantics : analysis, search and visualization of large text corpora." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/33146.

Abstract:
Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (leaves 47-48).
Patent Semantics is a system for processing text documents by extracting features capturing their semantic content, and searching, clustering, and relating them by those same features. It is set apart from existing methodologies by combining a visualization scheme that integrates retrieval and clustering, providing users a variety of ways to find and relate documents depending on their goals. In addition, the system provides an explanatory mechanism that makes retrieval an understandable process rather than a black box. The domain in which the system currently works is biochemistry and molecular biology patents, but it is not intrinsically constrained to any document set.
by Christopher G. Lucas.
M.Eng. and S.B.
10

Gorham, LeRoy A. "Large Scene SAR Image Formation." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1452031174.

11

Novak, Erik Lowell 1971. "Measurement and analysis optimization of large aperture laser Fizeau interferometer." Diss., The University of Arizona, 1998. http://hdl.handle.net/10150/282616.

Abstract:
High-power laser systems, such as the National Ignition Facility (NIF) project at Lawrence Livermore National Laboratories (LLNL), require optics of extremely high quality. Surface errors, especially periodic surface relief structures, can lead to focal spot degradation at best and serious damage to downstream optics at worst. The optics in these systems must be characterized with a high degree of accuracy to ensure proper operation. To provide system optics of sufficient quality, the testing apparatus must measure surface structure with high fidelity and cannot introduce significant errors into the measurements. This paper deals with measurements taken on two WYKO phase-shifting laser Fizeau interferometers to optimize their ability to meet the measurement requirements for optics in high-power laser systems. Increasingly, tolerances on optics are being specified with the power spectral density function (PSD) of the surface height data, and thus the power spectrum is used to characterize the measurement system. The system transfer function, which is the ratio of the measured amplitude of frequency components to the actual, is calculated using several methods. The effects of various parameters on the calculated system transfer function are studied. First, the use of the finite Fourier transform to estimate the power spectrum from surface profile data was studied. Next, simulated measurements were analyzed to determine the effects of rotation, feature location, noise, windowing, and other variables on the calculated power spectral density. After the theoretical analysis, the interferometer transfer function was calculated using two techniques. The effects of wavefront propagation on the measurements were also studied. Measurements were first taken on a 150mm laser Fizeau system where the effect of changing various parameters was studied. Final measurements were taken on the 600mm system to verify system performance. The large aperture laser Fizeau interferometer as built surpassed the system requirements with regard to transfer function and measurement noise. The system measured frequency amplitudes with 70% fidelity up to half the Nyquist frequency. In addition, the power spectrum of the noise plots was below the system specification of 0.1ν⁻¹·⁵⁵ nm²mm over the spatial frequencies ν of interest, more than ten times lower than the specification on the parts to be measured.
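A rough sketch of the two quantities at the heart of the abstract: a power spectral density estimated from a sampled surface profile via the FFT, and a transfer function formed as the ratio of measured to actual amplitude at each spatial frequency. The windowing, normalization and toy data below are assumptions for illustration, not the procedures used in the dissertation.

    import numpy as np

    def psd_1d(profile, dx):
        """One-sided PSD estimate of a uniformly sampled surface profile."""
        n = len(profile)
        window = np.hanning(n)                         # taper to reduce leakage
        spectrum = np.fft.rfft(profile * window)
        freqs = np.fft.rfftfreq(n, d=dx)               # spatial frequency (cycles/mm)
        psd = (np.abs(spectrum) ** 2) * dx / (n * np.mean(window ** 2))
        return freqs, psd

    def transfer_function(measured, actual, dx):
        """Ratio of measured to actual amplitude at each spatial frequency."""
        f, p_meas = psd_1d(measured, dx)
        _, p_act = psd_1d(actual, dx)
        return f, np.sqrt(p_meas / (p_act + 1e-30))

    x = np.arange(2048) * 0.1                          # mm
    actual = np.sin(2 * np.pi * 0.5 * x)               # ripple at 0.5 cycles/mm
    measured = 0.7 * actual                            # instrument passes 70% of the amplitude
    f, tf = transfer_function(measured, actual, dx=0.1)
    print(round(tf[np.argmin(np.abs(f - 0.5))], 2))    # ~0.7 at the ripple frequency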
12

Vatne, Åshild. "Analysis of Large Scale Adoption of Electrical Vehicles and Wind Integration in Nord-Trøndelag." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elkraftteknikk, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18989.

Abstract:
With the ‘Agreement on Climate Policy’ (Klimaforliket) signed by the Norwegian government on January 17th 2008, Norway has set a goal of reducing transport-related emissions by 2.5–4 million tons of CO2 equivalents compared with the 2020 reference. To reach this goal, high penetration of electrical vehicles is essential, and new technologies and solutions for the infrastructure must be cleared early in the process. With the aim of triggering a discussion on the topic, this thesis presents a methodology for analysing the impact of large-scale adoption of EVs on the electrical grid. A specific portion of a real network was selected and two charging modalities for the electrical vehicles were investigated. The analysis focuses mainly on chargers located at residences, and then explores how the utility can put forward a system for smart charging strategies ("dumb" vs. "smart" charging). Data from a low voltage network located in Steinkjer in Nord-Trøndelag was provided by NTE. Three different scenarios were analysed. Scenario 1 was the base scenario, where the share of EVs was 0%; this was simulated to provide a proper comparison. In scenario 2, a share of 10% EVs was implemented in the grid. The share of EVs in scenario 3 was set to 60%. The results obtained in the analysis verified that the smart charging approach causes less strain on the grid. The low voltage network was not capable of handling a large share of EVs (>60%) without any charging scheduling. The smart charging strategy did not cause any extra strain on the grid during peak hours. In addition, smart charging can introduce the Vehicle-to-Home (V2H) solution: the EVs can provide ancillary services and support the network by matching supply and demand and providing reactive power support. A simplified analysis of V2H and reactive compensation was carried out to demonstrate how the grid could benefit from an implementation of EVs. In the second part of the analysis, a series of wind measurements was included in the simulation in order to see whether wind power can supply the load of the entire residential area. A design for suitable energy storage was also proposed in order for the system to operate as a stand-alone system. Grid stability and power quality were not included in the analysis. The results from the wind integration show that for the network to operate as a stand-alone system in the worst-case scenario, an enormous amount of storage is needed. Based on the results, it is assumed that the system is self-supplied for most of the year. This thesis proposes a storage consisting of 7 battery packs from old vehicles, with a capacity of 50 kWh each. This will result in a 30% reduction of the peak demand from the grid when wind power is integrated. The case study addressed in the thesis presents a methodology for analysing the impact of a large adoption of EVs on the distribution network. The results obtained from this analysis are considered transferable to similar networks. In order to achieve smart charging, there is a need for further research on scheduling algorithms.
13

Landivar, Chávez José Luis. "Complexity cost analysis in a large product line." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/38551.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; and, (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; in conjunction with the Leaders for Manufacturing Program at MIT, 2006.
Includes bibliographical references (p. 73).
Hewlett-Packard's Industry Standard Servers (ISS) organization offers a large variety of server computers and accessories. The large range of options available to its customers gives way to complex processes and less than optimal resource usage. In the work presented here, a method for quantifying the costs associated with these effects is reviewed and applied to the current ISS offering. Several areas for investigation are identified, including Research and Development, Sales and Marketing, Materials, Inventory, and Organizational costs, amongst other costs that erode profits. Each area is explored through interviews and detailed modeling to arrive at a cost estimate. The end goal of estimating the costs associated with complexity is to establish a simple process for evaluating the "real" profitability of a new product introduction (NPI). Such a process is enabled by implementing a set of complexity guidelines through the use of a complexity cost calculator. The end result is that HP's ISS division could see savings in the millions of dollars once the program is implemented. In the end, a set of wide-reaching conclusions is drawn from the study to assist in future complexity cost analyses.
by José Luis Landivar Chávez.
M.B.A.
S.M.
14

Cooper, Lee Alex Donald. "High Performance Image Analysis for Large Histological Datasets." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1250004647.

15

Tan, Vincent Yan Fu. "Large-deviation analysis and applications of learning tree-structured graphical models." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/64486.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 213-228).
The design and analysis of complexity-reduced representations for multivariate data is important in many scientific and engineering domains. This thesis explores such representations from two different perspectives: deriving and analyzing performance measures for learning tree-structured graphical models and salient feature subset selection for discrimination. Graphical models have proven to be a flexible class of probabilistic models for approximating high-dimensional data. Learning the structure of such models from data is an important generic task. It is known that if the data are drawn from tree-structured distributions, then the algorithm of Chow and Liu (1968) provides an efficient method for finding the tree that maximizes the likelihood of the data. We leverage this algorithm and the theory of large deviations to derive the error exponent of structure learning for discrete and Gaussian graphical models. We determine the extremal tree structures for learning, that is, the structures that lead to the highest and lowest exponents. We prove that the star minimizes the exponent and the chain maximizes the exponent, which means that among all unlabeled trees, the star and the chain are the worst and best for learning respectively. The analysis is also extended to learning forest-structured graphical models by augmenting the Chow-Liu algorithm with a thresholding procedure. We prove scaling laws on the number of samples and the number of variables for structure learning to remain consistent in high dimensions. The next part of the thesis is concerned with discrimination. We design computationally efficient tree-based algorithms to learn pairs of distributions that are specifically adapted to the task of discrimination and show that they perform well on various datasets vis-à-vis existing tree-based algorithms. We define the notion of a salient set for discrimination using information-theoretic quantities and derive scaling laws on the number of samples so that the salient set can be recovered asymptotically.
by Vincent Yan Fu Tan.
Ph.D.
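The Chow-Liu step cited in the abstract reduces to two operations: estimate pairwise empirical mutual information, then take a maximum-weight spanning tree. The sketch below uses plug-in estimates on discrete data and is only a generic illustration of that algorithm, not code from the thesis.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    def empirical_mi(x, y):
        """Plug-in mutual information estimate for two discrete samples."""
        joint, _, _ = np.histogram2d(x, y, bins=(x.max() + 1, y.max() + 1))
        pxy = joint / joint.sum()
        px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
        mask = pxy > 0
        return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

    def chow_liu_edges(data):
        """data: (samples, variables) integer array -> edges of the max-MI spanning tree."""
        d = data.shape[1]
        w = np.zeros((d, d))
        for i in range(d):
            for j in range(i + 1, d):
                w[i, j] = empirical_mi(data[:, i], data[:, j])
        # maximum-weight spanning tree = minimum spanning tree on negated weights
        mst = minimum_spanning_tree(-w).toarray()
        return list(zip(*np.nonzero(mst)))

    print(chow_liu_edges(np.random.default_rng(0).integers(0, 2, size=(500, 5))))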
16

Preciado, Víctor Manuel. "Spectral analysis for stochastic models of large-scale complex dynamical networks." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45873.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 179-196).
Research on large-scale complex networks has important applications in diverse systems of current interest, including the Internet, the World-Wide Web, social, biological, and chemical networks. The growing availability of massive databases, computing facilities, and reliable data analysis tools has provided a powerful framework to explore structural properties of such real-world networks. However, one cannot efficiently retrieve and store the exact or full topology for many large-scale networks. As an alternative, several stochastic network models have been proposed that attempt to capture essential characteristics of such complex topologies. Network researchers then use these stochastic models to generate topologies similar to the complex network of interest and use these topologies to test, for example, the behavior of dynamical processes in the network. In general, the topological properties of a network are not directly evident in the behavior of dynamical processes running on it. On the other hand, the eigenvalue spectra of certain matricial representations of the network topology do relate quite directly to the behavior of many dynamical processes of interest, such as random walks, Markov processes, virus/rumor spreading, or synchronization of oscillators in a network. This thesis studies spectral properties of popular stochastic network models proposed in recent years. In particular, we develop several methods to determine or estimate the spectral moments of these models. We also present a variety of techniques to extract relevant spectral information from a finite sequence of spectral moments. A range of numerical examples throughout the thesis confirms the efficacy of our approach. Our ultimate objective is to use such results to understand and predict the behavior of dynamical processes taking place in large-scale networks.
by Víctor Manuel Preciado.
Ph.D.
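As a concrete, deliberately simple illustration of the quantity being estimated: the k-th spectral moment of an adjacency matrix A is (1/n)·tr(Aᵏ), the normalized number of closed walks of length k, and it can be averaged over realizations of a stochastic network model. The model, sizes and estimator below are illustrative only, not the estimators developed in the thesis.

    import numpy as np

    def adjacency_gnp(n, p, rng):
        """Symmetric 0/1 adjacency matrix of an Erdos-Renyi G(n, p) graph, no self-loops."""
        upper = np.triu(rng.random((n, n)) < p, 1)
        return (upper + upper.T).astype(float)

    def spectral_moments(a, k_max):
        """m_k = trace(A^k) / n for k = 1..k_max."""
        n = a.shape[0]
        moments, power = [], np.eye(n)
        for _ in range(k_max):
            power = power @ a
            moments.append(np.trace(power) / n)
        return np.array(moments)

    rng = np.random.default_rng(0)
    samples = [spectral_moments(adjacency_gnp(200, 0.05, rng), 4) for _ in range(20)]
    print(np.round(np.mean(samples, axis=0), 2))   # m_2 ~ mean degree ~ (n - 1) * p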
17

Chen, Yu Ju. "A comprehensive electromagnetic analysis of AC losses in large superconducting cables." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/41418.

18

Webster, Mort David. "Analysis of uncertainty in large models with application to climate policy." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/11028.

Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 115-118).
by Mort David Webster.
M.S.
19

Janpugdee, Panuwat. "An Efficient Discrete Fourier Transform Based Ray Analysis of Large Finite Planar Phased Arrays." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1392817276.

20

Shah, Shahil. "Small and Large Signal Impedance Modeling for Stability Analysis of Grid-connected Voltage Source Converters." Thesis, Rensselaer Polytechnic Institute, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10786614.

Abstract:

Interactions between grid-connected converters and the networks at their terminals have resulted in stability and resonance problems in converter-based power systems, particularly in applications ranging from wind and PV farms to electric traction and HVDC transmission networks. Impedance-based modeling and analysis methods have found wide acceptance for the evaluation of these resonance problems.

This thesis presents small and large signal impedance modeling of grid-connected single- and three-phase voltage source converters (VSCs) to enable the analysis of resonance conditions involving multiple frequency components, and both the ac and dc power systems at the VSC terminals. A modular impedance modeling approach is proposed by defining the VSC impedance as a transfer matrix, which captures the frequency cross-coupling effects and also the coupling between the ac and dc power systems interfaced by the VSC. Ac and dc impedance models are developed for a VSC including the reflection of the network on the other side of the VSC. Signal-flow graphs for linear time-periodic (LTP) systems are proposed to streamline and visually describe the linearization of grid-connected converters including the frequency cross-coupling effects. Relationships between the impedance modeling in dq, sequence, and phasor domains are also developed. The phasor-domain impedance formulation links the impedance methods with the phasor-based state-space modeling approach generally used for bulk power systems. A large-signal impedance based method is developed for predicting the amplitude or severity of resonance under different grid conditions. The small-signal harmonic linearization method is extended for the large-signal impedance modeling of grid-connected converters. It is shown that the large-signal impedance of a converter is predominantly shaped by hard nonlinearities in the converter control system such as PWM saturation and limiters.

This thesis also deals with the problem of synchronizing a generator or microgrid with another power system. A VSC-based synchronizer is proposed for active phase synchronization and a distributed synchronization method is developed for microgrids.
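For orientation, the "transfer matrix" view mentioned above is commonly written in the sequence domain so that a voltage perturbation at frequency f_p couples to the response at f_p and at the image frequency f_p − 2f_1, with f_1 the fundamental. The notation below is the generic textbook form of such a coupled impedance relation, not necessarily the exact formulation used in the thesis:

    \begin{bmatrix} V_p(f_p) \\ V_n(f_p - 2f_1) \end{bmatrix}
    =
    \begin{bmatrix} Z_{pp}(f_p) & Z_{pn}(f_p) \\ Z_{np}(f_p) & Z_{nn}(f_p) \end{bmatrix}
    \begin{bmatrix} I_p(f_p) \\ I_n(f_p - 2f_1) \end{bmatrix}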

21

Dey, Sourav. "Large-Signal Analysis of Buck and Interleaved Buck DC-AC Converters." Wright State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=wright1409578634.

22

Jiao, Yu Ming. "MPI parallel computing on eigensystems of small signal stability analysis for large interconnected power grids." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=95151.

Abstract:
Eigenanalysis is widely used in power system stability studies. With the PC technologies available today, it takes a long time to compute the entire eigensystem of a large interconnected power grid. Since power transmission lines are connected and disconnected and line loads keep changing frequently, tracking eigensystems in real time requires parallel computation. Recently, a parallel eigensystem computation method, the Break and Bind (B & B) method, has been proposed by Dr. H. M. Banakar at McGill University. This method views the connection of two isolated sub-networks as equivalent to a rank-one modification (ROM) of the stiffness matrix and considers the two sub-networks as a single entity. The research in this thesis consists of implementing the B & B method based on Message Passing Interface (MPI) parallel programming in #C. The developed MPI codes were executed on supercomputers: the Krylov cluster of CLUMEQ and the Mammouth Series II cluster of RQCHP. The testing results have demonstrated that the eigensystem of a power system composed of around 4,000 generators can be updated within two seconds.
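The rank-one-modification idea mentioned in the abstract has a classical numerical form: if D = diag(d_1, ..., d_n) collects the eigenvalues of the decoupled sub-networks, the eigenvalues of D + ρvvᵀ are the roots of the secular equation f(λ) = 1 + ρ·Σᵢ vᵢ²/(dᵢ − λ), and each root lies in its own interval, so they can be found independently (hence in parallel). The sketch below shows that textbook update, not the B & B implementation itself.

    import numpy as np
    from scipy.optimize import brentq

    def rank_one_eigen_update(d, v, rho=1.0):
        """Eigenvalues of diag(d) + rho*outer(v, v), assuming distinct d, nonzero v, rho > 0."""
        order = np.argsort(d)
        d, v = d[order], v[order]
        f = lambda lam: 1.0 + rho * np.sum(v**2 / (d - lam))
        roots = []
        for i in range(len(d) - 1):
            # exactly one root strictly inside each interval (d_i, d_{i+1})
            roots.append(brentq(f, d[i] + 1e-10, d[i + 1] - 1e-10))
        # the last root lies between d_max and d_max + rho*||v||^2
        roots.append(brentq(f, d[-1] + 1e-10, d[-1] + rho * v @ v + 1.0))
        return np.array(roots)

    rng = np.random.default_rng(1)
    d, v = np.sort(rng.random(6)), rng.random(6) + 0.1
    exact = np.linalg.eigvalsh(np.diag(d) + np.outer(v, v))
    print(np.allclose(rank_one_eigen_update(d, v), exact, atol=1e-8))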
23

Shisha, Samer. "Analysis of Inverter-fed Losses on the Solid Rotor of Large-scale Synchronous Machines." Licentiate thesis, KTH, Elektriska maskiner och effektelektronik (stängd 20110930), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-64142.

24

Wang, Xiaochuan. "A Domain Decomposition Method for Analysis of Three-Dimensional Large-Scale Electromagnetic Compatibility Problems." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1338376950.

25

Planthaber, Gary Lee Jr. "MODBASE : a SciDB-powered system for large-scale distributed storage and analysis of MODIS earth remote sensing data." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77035.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 80-81).
MODBASE, a collection of tools and practices built around the open source SciDB multidimensional data management and analytics software system, provides the Earth Science community with a powerful foundation for direct, ad-hoc analysis of large volumes of Level-1B data produced by the NASA Moderate Resolution Imaging Spectroradiometer (MODIS) instrument. This paper details the reasons for building the MODBASE system, its design and implementation, and puts it to the test on a series of practical Earth Science benchmarks using standard MODIS data granules.
by Gary Lee Planthaber, Jr.
M.Eng.
26

Sakr, Daniel. "Smart Grid deployment and use in a large-scale demonstrator of the Smart and Sustainable City (SunRise) : comprehensive analysis of the electrical consumption." Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10088/document.

Abstract:
The Smart City and Smart Grids constitute a great opportunity to meet environmental challenges and to build inclusive cities that focus on the quality of life of citizens. However, these concepts are complex and recent, and their implementation requires learning from large experimentations. This work addresses this issue. It is carried out within the large-scale demonstrator of the Smart City (SunRise), which is conducted at the Scientific Campus of the University of Lille. It includes three parts. The first part focuses on a literature review of research and achievements in the field of the Smart City and Smart Grids. It presents the city's challenges, such as population growth, energy consumption, greenhouse gas emissions and climate change. It then discusses the digital mutation and its potential role in transforming the city into a Smart City and the conventional electrical grid into a Smart Grid. The second part describes the electrical grid of the Scientific Campus. It presents the SunRise project, which consists in the construction of a demonstrator of smart urban networks at the Scientific Campus, equivalent to a town of around 25,000 inhabitants. It then presents the electrical system of the campus as well as its management. The last part concerns the analysis of the electrical consumption of the campus. It presents the methodology developed for data analysis, including (i) recording of the electrical consumption and transmission to the server, (ii) data transmission, (iii) data cleaning, and (iv) construction of buildings' consumption profiles and consumption analysis. This methodology is applied to the analysis of the global consumption of the campus and of three buildings.
27

Jimenez, Saldana Cristhian Carim. "Large Scale Analysis of Massive Deployment of Converter-based Generation equipped with Grid- forming Strategies." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-292690.

Abstract:
To mitigate the carbon footprint and fulfil the energy goals in terms of sustainability, a large-scale integration of green energies is required. Therefore, in the previous and the coming years, there will be high research and technological interest in the high penetration of converter-based generation. With the replacement and integration of power converters into the bulk power grid, new challenges and issues must be faced to maintain the system's stability and reliability in terms of procedures for the transmission system operators. The main objective of this thesis project is to analyze and implement the current-limiting techniques used in voltage source converters equipped with grid-forming functionalities, so that these electronic devices are safeguarded during a severe transient event such as a three-phase short circuit and remain connected to the grid during the fault scenario. The model of the voltage source converter with grid-forming strategies is described, as well as the grid-forming strategies (droop control, virtual synchronous machine (VSM) and dispatchable virtual oscillator control (dVOC)) utilized in the outer loop. The low-inertia and zero-inertia systems in the IEEE 9-bus test system were shown to be resilient to three-phase fault events; their behavior shows neither significant oscillations during and after the fault nor a noticeable difference in performance with respect to the fault location. In this test system model, the current-limiting techniques were validated, and the analyzed results display good effectiveness for the current and the frequency. The Hydro-Québec network model was employed to study, in a more practical setting, the behavior of the current-limitation strategies in the power converters of a real power system. The fault location and the percentage of participation of the voltage source converters in the energy generation were the two main scenarios; in these scenarios the proposed current-limiting control strategies work, but at the same time an appropriate control is required to keep the system's stability.
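Of the grid-forming strategies named in the abstract, the simplest to state is droop control: the converter derives its frequency and voltage references from its measured active and reactive power output. The gains and set-points below are illustrative textbook values, not parameters from the thesis.

    # Textbook P-f / Q-V droop for a grid-forming converter.
    F_NOM, V_NOM = 50.0, 1.0          # nominal frequency (Hz) and voltage (pu)
    P_SET, Q_SET = 0.0, 0.0           # active/reactive power set-points (pu)
    M_P, N_Q = 0.01, 0.05             # droop gains (illustrative)

    def droop_references(p_meas, q_meas):
        """Frequency and voltage-magnitude references handed to the inner control loops."""
        f_ref = F_NOM - M_P * (p_meas - P_SET)
        v_ref = V_NOM - N_Q * (q_meas - Q_SET)
        return f_ref, v_ref

    print(droop_references(0.8, 0.2))  # -> (49.992, 0.99)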
28

Rahman, Brian M. "Sensor Placement for Diagnosis of Large-Scale, Complex Systems: Advancement of Structural Methods." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1562859497638274.

29

Hawash, Maher Mofeid. "Methods for Efficient Synthesis of Large Reversible Binary and Ternary Quantum Circuits and Applications of Linear Nearest Neighbor Model." PDXScholar, 2013. https://pdxscholar.library.pdx.edu/open_access_etds/1090.

Abstract:
This dissertation describes the development of automated synthesis algorithms that construct reversible quantum circuits for reversible functions with a large number of variables. Specifically, the research is focused on reversible, permutative and fully specified binary and ternary specifications and the applicability of the resulting circuits to the physical limitations of existing quantum technologies. Automated synthesis of arbitrary reversible specifications is an NP-hard, multiobjective optimization problem, where 1) the amount of time and computational resources required to synthesize the specification, 2) the number of primitive quantum gates in the resulting circuit (quantum cost), and 3) the number of ancillary qubits (variables added to hold intermediate calculations) are all minimized, while 4) the number of variables is maximized. Some of the existing algorithms in the literature ignored objective 2 by focusing on the synthesis of a single solution without the addition of any ancillary qubits, while others attempted to explore every possible solution in the search space in an effort to discover the optimal solution (i.e., sacrificed objectives 1 and 4). Other algorithms resorted to adding a huge number of ancillary qubits (counter to objective 3) in an effort to minimize the number of primitive gates (objective 2). In this dissertation, I first introduce the MMDSN algorithm, which is capable of synthesizing binary specifications of up to 30 variables, does not add any ancillary variables, produces better quantum cost (8-50% improvement) than algorithms which limit their search to a single solution, and does so within a minimal amount of time compared to algorithms which perform exhaustive search (seconds vs. hours). The MMDSN algorithm introduces an innovative method of using the Hasse diagram to construct candidate solutions that are guaranteed to be valid and then selects the solution with the minimal quantum cost out of this subset. I then introduce the Covered Set Partitions (CSP) algorithm, which expands the search space of valid candidate solutions and allows for exploring solutions outside the range of MMDSN. I show a method of subdividing the expansive search landscape into smaller partitions and demonstrate the benefit of focusing on partition sizes that are around half of the number of variables (15% to 25% improvements over MMDSN for functions with fewer than 12 variables, and more than 1000% improvement for functions with 12 and 13 variables). For a function of n variables, the CSP algorithm theoretically requires n times more time to synthesize; however, by focusing on the middle partition sizes it typically yields lower quantum cost than MMDSN. I also show that using a Tabu search for selecting the next set of candidates from the CSP subset results in discovering solutions with even lower quantum costs (up to 10% improvement over CSP with random selection). In Chapters 9 and 10 I question the predominant methods of measuring quantum cost and their applicability to physical implementations of quantum gates and circuits. I counter the prevailing literature by introducing a new standard for measuring the performance of quantum synthesis algorithms by enforcing the Linear Nearest Neighbor Model (LNNM) constraint, which is imposed by today's leading implementations of quantum technology.
In addition to enforcing physical constraints, the new LNNM quantum cost (LNNQC) allows for a level comparison amongst all methods of synthesis; specifically, methods which add a large number of ancillary variables to ones that add no additional variables. I show that, when LNNM is enforced, the quantum cost for methods that add a large number of ancillary qubits increases significantly (up to 1200%). I also extend the Hasse-based method to the ternary case and demonstrate synthesis of specifications of up to 9 ternary variables (compared to 3 ternary variables in the existing literature). I introduce the concept of ternary precedence order and its implication on the construction of the Hasse diagram and the construction of valid candidate solutions. I also provide a case study comparing the performance of ternary logic synthesis of large functions using both a CUDA graphics processor with 1024 cores and an Intel i7 processor with 8 cores. In the process of exploring large ternary functions I introduce to the literature eight families of ternary benchmark functions, along with a multiple-valued file specification (the Extended Quantum Specification, XQS). I also introduce a new composite quantum gate, the multiple-valued Swivel gate, which swaps the information of qubits around a centrally located pivot point. In summary, my research objectives are as follows:
* Explore and create automated synthesis algorithms for reversible circuits, both in binary and ternary logic, for large numbers of variables.
* Study the impact of enforcing the Linear Nearest Neighbor Model (LNNM) constraint for every interaction between qubits for reversible binary specifications.
* Advocate for a revised metric for measuring the cost of a quantum circuit in concordance with LNNM, where, on one hand, such a metric would provide a way for balanced comparison between the various flavors of algorithms, and on the other hand, represents a realistic cost of a quantum circuit with respect to an ion trap implementation.
* Establish an open source repository for sharing the results, software code and publications with the scientific community.
With the dwindling expectations for a new lifeline on silicon-based technologies, quantum computations have the potential of becoming the future workhorse of computations. Similar to the automated CAD tools of classical logic, my work lays the foundation for creating automated tools for constructing quantum circuits from reversible specifications.
30

Henriet, Simon. "On solving the non intrusive load monitoring problem in large buildings : analyses, simulations and factorization based unsupervised learning." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT007.

Abstract:
With the increasing awareness about the problem of climate change and the high level of energy consumption, a need for energy efficiency has emerged especially for electric power consumptions in buildings. To spur energy savings, industrials have been looking for measurement methods to monitor power consumptions. Appliance load monitoring has thus become an active research field. Monitoring and understanding the electrical consumption of appliances can also be useful for predictive maintenance, power quality analyses, demand forecasting or occupancy detection. Thirty years ago, a method called Non Intrusive Load Monitoring (NILM) has been introduced. It consists of estimating individual appliance energy consumptions from the measurement of the total consumption of the building. Its main advantage over traditional sub-metering methods is to use a single electric power meter at the main breaker of the building and then use a disaggregation algorithm to separate the contributions of each appliance. The goal of this thesis is to address the algorithmic challenge offered by NILM. The NILM problem can be formulated as a source separation problem, where the sources are the individual electric consumptions and the mixed observation is simply the sum of individual consumptions. Its main difficulties are: (i) the standardization of the formulation, (ii) the ill-posedness of the problem, (iii) the lack of knowledge and (iv) the machine learning algorithm design. All our contributions follow from the principal objective that is to solve the NILM problem for huge systems such as commercial or industrial buildings using high frequency current and voltage measurements. However, houses and the specific equipment found inside these buildings are not excluded of the study. This thesis is split into two parts.In the first part, we tackle the lack of knowledge and datasets for NILM in commercial buildings. First of all, the NILM community has mostly focused on both residential NILM application and using low frequency data provided by power meter installed by utility providers. To tackle the lack of knowledge on higher frequency data and on other kind of buildings such as commercial or industrial installations, we propose a statistical analysis based on public and private datasets. Our study on the rank of current matrix conducted for individual devices will serve as the base of a new device taxonomy and to prior assumptions on the rest of this thesis. Secondly, we address the lack of datasets especially for commercial buildings by developping an algorithm for generating synthetic current data based on a modelization of the current flowing through an electrical device. To encourage research on commercial buildings we release a synthesized dataset called SHED that can be used to evaluate NILM algorithms.In the second part, we deal with the NILM software challenges by exploring unsupervised source separation techniques. To overcome the unaddressed difficulties of processing high frequency current signals that are measured in large buildings, we propose a novel technique called Independent-Variation Matrix Factorization (IVMF), which expresses an observation matrix as the product of two matrices: the "signature" and the "activation". Motivated by the nature of the current signals, it uses a regularization term on the temporal variations of the activation matrix and a positivity constraint, and the columns of the signature matrix are constrained to lie in a specific set. 
To solve the resulting optimization problem, we rely on an alternating minimization strategy involving dual optimization and quasi-Newton algorithms. IVMF is the first proposed algorithm specifically designed for high-frequency NILM in large buildings. We finally show that IVMF outperforms competing methods (Independent Component Analysis, Semi Non-negative Matrix Factorization) on NILM datasets.
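As a rough illustration of the factorization scheme described in the abstract (a generic alternating-minimization sketch, not the thesis's IVMF with its dual optimization and quasi-Newton steps; the update rules, the penalty weight lam, the learning rate and the synthetic data shapes are all assumptions), one might write:

import numpy as np

def factorize(X, k, lam=0.1, iters=200, lr=1e-3, seed=0):
    """Toy alternating minimization: X (m x T) ~ S (m x k) @ A (k x T),
    with A >= 0 and a penalty lam * ||A[:, 1:] - A[:, :-1]||_F^2 on temporal variation.
    Generic sketch only; this is not the IVMF algorithm of the thesis."""
    rng = np.random.default_rng(seed)
    m, T = X.shape
    S = rng.standard_normal((m, k))
    A = np.abs(rng.standard_normal((k, T)))
    for _ in range(iters):
        # Update signatures by least squares, then normalize their columns.
        S = X @ A.T @ np.linalg.pinv(A @ A.T)
        S /= np.linalg.norm(S, axis=0, keepdims=True) + 1e-12
        # Projected gradient step on activations (data fit + temporal-variation penalty).
        grad = S.T @ (S @ A - X)
        dA = np.diff(A, axis=1)
        grad[:, :-1] -= 2 * lam * dA
        grad[:, 1:] += 2 * lam * dA
        A = np.maximum(A - lr * grad, 0.0)
    return S, A

# Usage on synthetic data: a small observation matrix split into two "appliances".
X = np.abs(np.random.default_rng(1).standard_normal((8, 500)))
S, A = factorize(X, k=2)
print(np.linalg.norm(X - S @ A) / np.linalg.norm(X))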
APA, Harvard, Vancouver, ISO, and other styles
31

Maalik, Abdul. "Novel Characteristic-Mode-Based Synthesis and Analysis Method for Reflectarray Antennas." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1593684814222685.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Tulpule, Pinak J. "Control and optimization of energy flow in hybrid large scale systems - A microgrid for photovoltaic based PEV charging station." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1313522717.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Chadha, Ankit. "Tapped-Inductor Buck DC-DC Converter." Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1578488939749599.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Eriksson, Ina, and Lina Fredriksson. "Data-driven methods for estimation of dynamic OD matrices." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177782.

Full text
Abstract:
The idea behind this report is based on the fact that it is not only the number of users in the traffic network that is increasing; the number of connected devices such as probe vehicles and mobile sources has also increased dramatically in the last decade. These connected devices provide large-scale mobility data and new opportunities to analyze the current traffic situation, as they traverse the network and continuously send out different types of information such as Global Positioning System (GPS) data and Mobile Network Data (MND). Travel demand is often described in terms of an Origin Destination (OD) matrix, which represents the number of trips from an origin zone to a destination zone in a geographic area. The aim of this master thesis is to develop and evaluate a data-driven method for estimation of dynamic OD matrices using unsupervised learning, sensor fusion and large-scale mobility data. Traditionally, OD matrices are estimated based on travel surveys and link counts. The problem is that these sources of information do not provide the quality required for online control of the traffic network. A method consisting of an offline process and an online process has therefore been developed. The offline process utilizes historical large-scale mobility data to improve an inaccurate prior OD matrix. The online process utilizes the results and tuning parameters from the offline estimation in combination with real-time observations to describe the current traffic situation. A simulation study on a toy network with synthetic data was used to evaluate the data-driven estimation method. Observations based on GPS data, MND and link counts were simulated via a traffic simulation tool. The results showed that the sensor fusion algorithms Kalman filter and Kalman filter smoothing can be used when estimating dynamic OD matrices. The results also showed that the quality of the data sources used for the estimation is of high importance. Aggregating large-scale mobility data such as GPS data and MND by using the unsupervised learning method Principal Component Analysis (PCA) improves the quality of the large-scale mobility data and thus the estimation results.
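As a minimal sketch of the sensor-fusion step mentioned above (a plain linear Kalman filter with a random-walk OD model; the assignment matrix A, the noise covariances Q and R, and the toy dimensions are assumptions and do not reproduce the thesis's implementation):

import numpy as np

def kalman_od_step(x, P, y, A, Q, R):
    """One predict/update cycle for OD flows x under a random-walk model.
    y are observed link counts; A maps OD flows to link counts (assignment matrix)."""
    # Predict: random-walk dynamics x_k = x_{k-1} + w, with w ~ N(0, Q).
    x_pred = x
    P_pred = P + Q
    # Update with the link-count measurement y = A x + v, with v ~ N(0, R).
    S = A @ P_pred @ A.T + R
    K = P_pred @ A.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (y - A @ x_pred)
    P_new = (np.eye(len(x)) - K @ A) @ P_pred
    return np.maximum(x_new, 0.0), P_new       # clip negative flows for realism

# Toy example: 3 OD pairs observed through 2 links.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x, P = np.ones(3) * 50.0, np.eye(3) * 100.0
y = np.array([120.0, 90.0])
x, P = kalman_od_step(x, P, y, A, Q=np.eye(3) * 4.0, R=np.eye(2) * 9.0)
print(x)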

The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.

APA, Harvard, Vancouver, ISO, and other styles
35

Foghammar, Nömtak Carl. "Automatic SLAMS detection and magnetospheric classification in MMS data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-285533.

Full text
Abstract:
Short Large-Amplitude Magnetic Structures (SLAMS) have been observed by spacecraft near Earth's quasi-parallel bow shock. They are characterized by a short and sudden increase of the magnetic field, usually by a factor of 2 or more. SLAMS studies have previously been limited to small sample sizes because SLAMS were identified through manual inspection of the spacecraft data. This makes it difficult to draw general conclusions, and the subjective element complicates collaboration between researchers. A solution is presented in this thesis: an automatic SLAMS detection algorithm. We investigate several moving-window methods and measure their performance on a set of manually identified SLAMS. The best algorithm is then used to identify 98406 SLAMS in data from the Magnetospheric Multiscale (MMS) mission. Of those, 66210 SLAMS were detected when the Fast Plasma Investigation (FPI) instrument was active. Additionally, we are interested in knowing whether a detected SLAMS is located in the foreshock or magnetosheath. Therefore, we implement a Gaussian mixture model classifier, based on hierarchical clustering of the FPI data, that can separate between the four distinct regions of the magnetosphere that MMS encounters: magnetosphere, magnetosheath, solar wind and (ion) foreshock. The identified SLAMS are compiled into a database which holds their start and stop dates, positional coordinates, B-field information and information from the magnetospheric classifier to allow for easy filtering to a specific SLAMS population. To showcase the potential of the database we use it to perform preliminary statistical analysis on how the properties of SLAMS are affected by their spatial and/or magnetospheric location. The database and Matlab implementation are available on github: https://github.com/cfognom/MMS_SLAMS_detection_and_magnetospheric_classification.
Short large-amplitude magnetic structures (SLAMS) have been observed by spacecraft near Earth's quasi-parallel bow shock. A short and sudden increase in magnetic field strength, usually by a factor of 2 or more, is a typical signature of SLAMS. Research on SLAMS has previously been limited to smaller case studies because SLAMS were identified through manual inspection of spacecraft data. This makes it difficult to draw general conclusions, and the subjective element complicates collaboration between researchers. A solution to this problem is presented in this thesis: an automatic SLAMS detection algorithm. We investigate several methods and measure their performance on a set of manually identified SLAMS. The best algorithm is then used to identify 98406 SLAMS in data from the MMS mission. Of these, 66210 SLAMS were detected while the FPI instrument was active. We are also interested in knowing whether a detected SLAMS is located in the foreshock or the magnetosheath. We therefore implement a Gaussian classifier based on hierarchical clustering of FPI data. It can separate the four distinct regions of the magnetosphere that MMS observes: magnetosphere, magnetosheath, solar wind and (ion) foreshock. The identified SLAMS are compiled into a database containing their start and stop dates, positional coordinates, B-field information and information from the magnetospheric classifier, allowing easy filtering to a specific SLAMS population. To show the potential of the database, we perform a preliminary statistical study of how the properties of SLAMS are affected by their spatial and/or magnetospheric position. The database and the Matlab implementation are available on Github: https://github.com/cfognom/MMS_SLAMS_detection_and_magnetospheric_classification.
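As a crude sketch of the moving-window detection idea described in the abstract (not the thesis's algorithm; the window length, the factor-of-2 threshold against a running background, and the synthetic field data are assumptions):

import numpy as np

def detect_slams(b_mag, window=256, factor=2.0):
    """Flag samples where |B| exceeds `factor` times a running background level.
    The background is a centred moving average of |B| over `window` samples."""
    kernel = np.ones(window) / window
    background = np.convolve(b_mag, kernel, mode="same")
    return b_mag > factor * background

# Synthetic magnetic field magnitude: a quiet background plus one SLAMS-like spike.
rng = np.random.default_rng(0)
b = 5.0 + 0.3 * rng.standard_normal(5000)
b[2400:2430] += 12.0                      # sudden jump by more than a factor of 2
mask = detect_slams(b)
print("flagged samples:", np.flatnonzero(mask).min(), "to", np.flatnonzero(mask).max())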
APA, Harvard, Vancouver, ISO, and other styles
36

Chen, Wei-Liang, and 陳威良. "Analysis of the Electrically Large Structure by Using FEKO." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/26941210628613039493.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Wu, Wei-Yang, and 吳維揚. "The Analysis of Electrically Large Left-Handed Metamaterial Based on Mushroom Structure Using FDTD Approach." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/06595476396464472991.

Full text
Abstract:
Doctoral dissertation
National Sun Yat-sen University
Department of Electrical Engineering
Academic year 94
A full-wave finite-difference time-domain (FDTD) method, combined with thin-wire and thin-slot algorithms, is proposed in this dissertation to analyze a metamaterial fabricated with periodic mushroom structures. The proposed method is suitable for analyzing problems involving large structures with fine structural details. A periodic analysis for mushroom structures is presented: with the help of periodic boundary conditions (PBCs), only a single unit mushroom cell is required to represent the behavior of the infinite periodic structure. The composite right-/left-handed (CRLH) transmission line (TL) approach is introduced and used to approximate the CRLH metamaterial through lumped L and C elements. Finally, several CRLH metamaterial mushroom-based structures are investigated, including a 19 by 8 flat microwave lens and a parabolic microwave lens composed of 410 unit mushroom cells. These structures demonstrate negative refractive index (NRI) characteristics while operating in the left-handed (LH) region. The simulation and measurement results of one- and two-dimensional CRLH mushroom-based structures are compared.
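As a minimal sketch of an FDTD update with periodic boundary conditions (a bare one-dimensional Yee scheme, not the dissertation's three-dimensional solver with thin-wire and thin-slot algorithms; the grid size, time step and source are assumptions):

import numpy as np

# 1-D FDTD (Yee scheme) in free space with periodic boundary conditions.
nz, nt = 200, 600
c0, dz = 3e8, 1e-3
dt = 0.5 * dz / c0                      # Courant number 0.5 for stability
ez = np.zeros(nz)                       # electric field
hy = np.zeros(nz)                       # magnetic field
eps0, mu0 = 8.854e-12, 4e-7 * np.pi

for n in range(nt):
    # Update H from the curl of E; np.roll implements the periodic wrap-around.
    hy += dt / (mu0 * dz) * (np.roll(ez, -1) - ez)
    # Update E from the curl of H, again with periodic boundaries.
    ez += dt / (eps0 * dz) * (hy - np.roll(hy, 1))
    # Soft Gaussian source injected at one cell.
    ez[50] += np.exp(-((n - 60) / 20.0) ** 2)

print("peak |Ez| after", nt, "steps:", np.abs(ez).max())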
APA, Harvard, Vancouver, ISO, and other styles
38

"Modeling and Large Signal Stability Analysis of A DC/AC Microgrid." Master's thesis, 2018. http://hdl.handle.net/2286/R.I.50493.

Full text
Abstract:
The concept of the microgrid is widely studied and explored in both academia and industry. A microgrid is a power system with distributed generation and loads that is intentionally planned and can be disconnected from the main utility grid. Nowadays, various distributed power generation resources (wind, photovoltaic, etc.) are emerging as significant power sources for microgrids. This thesis focuses on the system structure of a photovoltaics (PV)-dominated microgrid, on precise modeling, and on stability analysis of the specific system. The grid-connected mode is considered, and the system control objectives are: the PV panel operates at the maximum power point (MPP), the DC link voltage is regulated at a desired value, and the grid-side current is controlled in phase with the grid voltage. To simulate the real circuits of the whole system with high fidelity instead of performing real experiments, PLECS software is used to construct the detailed model in chapter 2. Meanwhile, a Simulink mathematical model of the microgrid system is developed in chapter 3 for faster simulation and energy management analysis. Simulation results of both the PLECS model and the Simulink model match expectations. The next chapter discusses state-space models of the different power stages for use in stability analysis. Finally, the large-signal stability analysis of a grid-connected inverter, based on cascaded control of both the DC link voltage and the grid-side current, is discussed. The large-signal stability analysis presented in this thesis mainly focuses on the impact of the inductor and capacitor capacity and of the controller parameters on the DC link stability region. A dynamic model with the cascaded control logic is proposed. A Lyapunov large-signal stability analysis tool is applied to derive the domain of attraction, which is the asymptotic stability region. Results show that both the DC-side capacitor and the inductor of the grid-side filter can significantly influence the stability region of the DC link voltage. The PLECS simulation models developed for the microgrid system are used to verify the stability regions estimated from the Lyapunov large-signal analysis method.
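As a rough sketch of a Lyapunov-based domain-of-attraction estimate of the kind mentioned above (a quadratic Lyapunov function on a hypothetical two-state system checked by sampling; the dynamics, sampling box and tolerance are assumptions and this is not the thesis's model or analysis tool):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical nonlinear dynamics x' = f(x) with a stable equilibrium at the origin.
def f(x):
    x1, x2 = x
    return np.array([x2, -2.0 * x1 - 1.5 * x2 + 0.5 * x1 ** 3])

A = np.array([[0.0, 1.0], [-2.0, -1.5]])            # Jacobian of f at the origin
P = solve_continuous_lyapunov(A.T, -np.eye(2))      # A'P + PA = -I, so V = x'Px decays locally

def V(x):
    return x @ P @ x

# Estimate the largest level set V(x) <= c on which V decreases along f (Monte Carlo check).
rng = np.random.default_rng(0)
samples = rng.uniform(-3, 3, size=(20000, 2))
vdot = np.array([2 * x @ P @ f(x) for x in samples])
bad = samples[vdot >= 0]                            # points where V does not decrease
c_est = V(bad[np.argmin([V(x) for x in bad])]) if len(bad) else np.inf
print("estimated domain-of-attraction level: V(x) <=", c_est)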
Dissertation/Thesis
Masters Thesis Engineering 2018
APA, Harvard, Vancouver, ISO, and other styles
39

Zhong, Yu. "Fast algorithms for the design and analysis of large power grids /." 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3314956.

Full text
Abstract:
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2008.
Source: Dissertation Abstracts International, Volume: 69-05, Section: B, page: 3199. Adviser: Martin D. F. Wong. Includes bibliographical references (leaves 89-93). Available on microfilm from ProQuest Information and Learning.
APA, Harvard, Vancouver, ISO, and other styles
40

Cortes-Medellin, German. "Analysis of segmented reflector antenna for a large millimeter wave radio telescope." 1993. https://scholarworks.umass.edu/dissertations/AAI9329588.

Full text
Abstract:
We have developed a computational tool which serves to characterize the performance of large segmented reflector antennas under different sets of conditions. We have applied this tool to the characterization of a large millimeter telescope. A 50 meter diameter instrument of this type, specified to operate at wavelengths as short as 1 mm, is being designed with an actively controlled main surface consisting of 126 hexagonal segments. To simulate the effect of the necessarily imperfect control system, we generate samples of tilt and piston errors for the segments, from which the antenna radiation patterns and aperture efficiencies are calculated. We make a comparison of these results with models of antenna tolerance theory developed by Ruze, which relate the aperture efficiency to the rms phase error. We find that Ruze's formula has a different range of validity when the aperture rms phase error, rather than the rms surface error, is used as a parameter. When appreciable tilt errors are present in large segmented antennas, the aperture rms phase error tends to a constant value, independent of the aperture illumination and of the shape of the segments. We conclude that the antenna rms surface error is a better tracer of the aperture efficiency than is the aperture rms phase error when Ruze's formula is used. We find that this well-known expression stands as a lower limit to the performance of large segmented reflector antennas. We have analyzed the effect that gaps between the segments of the active surface of this antenna, as well as the imperfect positioning of the subreflector surface, have on the aperture efficiency, antenna gain and radiation pattern of this antenna. We have found that the gaps produce a series of grating lobes distributed in a regular pattern in the far field of this antenna, whose relative position is correlated with the size and shape of the segments. We have found that the large millimeter telescope is very sensitive to axial subreflector positioning errors, requiring that the subreflector actuators be able to maintain its optimum position within a small fraction of a wavelength. With the interest of using a focal plane array in the LMT, we have made a comparative study of the imaging properties of the LMT with those of two aplanatic Cassegrain designs, namely the Schwarzschild and the Ritchey-Chretien telescopes. We found that, operating at millimeter wavelengths, the three Cassegrain systems have equivalent performance. This study also revealed the potential benefits of an aplanatic configuration at shorter wavelengths or smaller system focal ratios.
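For reference, the antenna tolerance relation discussed above is Ruze's formula, which ties aperture efficiency to the rms surface error, or equivalently to the rms phase error 4*pi*epsilon/lambda; a LaTeX statement of the standard expression (not reproduced from the dissertation) is:

\[
  \eta_{\mathrm{ap}} \;=\; \eta_0 \, e^{-\bar{\delta}^{\,2}}
  \;=\; \eta_0 \exp\!\left[-\left(\frac{4\pi\epsilon}{\lambda}\right)^{2}\right],
\]
% where $\eta_0$ is the efficiency of the error-free aperture, $\epsilon$ is the
% rms surface error, $\bar{\delta}$ the corresponding rms phase error, and
% $\lambda$ the operating wavelength; the gain degrades as $G = G_0\, e^{-\bar{\delta}^{2}}$.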
APA, Harvard, Vancouver, ISO, and other styles
41

Tsao, Shu-Wei, and 曹書瑋. "Electrical Analysis & Fabricated Investigation of Amorphous Active Layer Thin Film Transistor for Large Size Display Application." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/56670949870823913179.

Full text
Abstract:
Doctoral dissertation
National Sun Yat-sen University
Department of Photonics
Academic year 99
In this dissertation, the electrical characteristics of the hydrogenated amorphous silicon (a-Si:H) TFTs generally used in LCDs and of the newly emerging amorphous indium-gallium-zinc oxide (a-IGZO) TFTs were studied. For modern mobile display and large-size flat panel display applications, traditional thin-film transistor-liquid crystal display (TFT-LCD) technology faces many challenges and problems. In general, flexible displays must exhibit some bending ability; however, bending applies mechanical strain to electronic circuits and affects device characteristics. Therefore, the electrical characteristics of a-Si:H TFTs fabricated on stainless steel foil substrates under uniaxial bending were investigated at different temperatures. Experimental results showed that the on-state current and threshold voltage degraded under outward bending. This is because outward bending induces an increase of band tail states, affecting the transport mechanism at different temperatures. In addition, for practical operation, the electrical characteristics of a-Si:H TFTs under flat and bending conditions after AC/DC stress at different temperatures were studied. It was found that high temperature and mechanical bending played important roles under AC stress. The dependence of the threshold voltage shift under AC stress on the accumulated sum of bias rising and falling times was also observed. Because a-Si:H is a photosensitive material, high-intensity backlight illumination degrades the performance of a-Si:H TFTs. Thus, the photo-leakage current of a-Si:H TFTs under illumination was investigated at different temperatures. Experimental results showed that a-Si:H TFTs exhibited poor performance at lower temperatures. The indirect recombination rate and the parasitic resistance (Rp) are responsible for the different photo-leakage-current trends of a-Si:H TFTs under varied temperature operations. To investigate the photo-leakage current, the a-Si:H TFTs were exposed to ultraviolet (UV) light irradiation. It was found that the photo current of a-Si:H TFTs was reduced after UV light irradiation. The detailed mechanisms of reducing/increasing the photo-leakage current by UV light irradiation are discussed. Recently, oxide-based semiconductor TFTs, especially a-IGZO TFTs, have been considered promising candidates for active-matrix flat-panel displays. However, a-IGZO TFTs suffer from significant electrical instability issues and manufacturing problems. As a consequence, we investigated the effect of hydrogen incorporation on a-IGZO TFTs to reduce interface states between the active layer and the insulator. Experimental results showed that the electrical characteristics of hydrogen-incorporated a-IGZO TFTs were improved. The threshold voltage shift (ΔVth) in the hysteresis loop is suppressed from 4 V to 2 V due to the hydrogen-induced passivation of the interface trap states. Finally, we report the effect of the ambient environment on a-IGZO TFT instability. When a-IGZO TFTs were stored in an atmospheric environment for 40 days, transfer characteristics with an anomalous hump were observed during bias stress. The hump phenomenon is attributed to the absorption of H2O molecules. Additionally, a sufficient electric field is also necessary to cause this anomalous transfer characteristic.
APA, Harvard, Vancouver, ISO, and other styles
42

"Modal analysis of large-scale power systems' voltage stability and voltage collapse." Tulane University, 1992.

Find full text
Abstract:
In this research, a theoretical foundation for modeling, analysis, and testing of a system for voltage collapse is developed. The boundary theorem of the load flow feasibility region (FR) is presented. Based on the proposed boundary theorem, a method of voltage stability analysis referred to as the Eigen-Structure Analysis (ESA) method is developed that does not require complicated nonlinear programming calculations for the evaluation of the closest unfeasible or boundary injection corresponding to a given power network operating point with voltage-controlled and load buses. Furthermore, the steady-state stability margin and the sensitivity of the stability margin to bus voltages and bus injections are defined. An algorithm for determining the stability margin and its sensitivity to bus voltages and bus injections is proposed which is capable of handling large-scale power systems by utilizing sparse matrix techniques to save computation time and memory space. The unification of the concept of the feasibility region and the concept of multiple load flow solutions is also presented in this dissertation. The Eigen-Structure Analysis method is applied to a number of test system models. The simulation results confirm the theory and show that the proposed stability margin decreases monotonically to zero when the system approaches voltage collapse. The voltage-weak points and key contributing factors affecting system voltage instability can be identified according to the values of the sensitivity of the stability margin to bus voltages and bus injections.
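As a generic illustration of eigen-structure-style proximity indicators for voltage collapse (not the dissertation's ESA method or its boundary theorem; the toy Jacobian below is invented), the smallest singular value of the load-flow Jacobian and its singular vectors are commonly used to flag voltage-weak buses:

import numpy as np

def voltage_weakness_indicator(J):
    """Return the smallest singular value of a load-flow Jacobian and the
    right/left singular vectors indicating the most voltage-sensitive directions.
    sigma_min tends to 0 as the operating point approaches collapse."""
    U, s, Vt = np.linalg.svd(J)
    return s[-1], Vt[-1], U[:, -1]

# Toy 3x3 "Jacobian" that is nearly singular (close to the feasibility boundary).
J = np.array([[10.0, -4.0, -6.0],
              [-4.0,  8.0, -4.0],
              [-6.0, -4.0, 10.01]])
sigma_min, v, u = voltage_weakness_indicator(J)
weak_bus = int(np.argmax(np.abs(v)))
print(f"sigma_min = {sigma_min:.4f}, weakest direction dominated by bus {weak_bus}")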
APA, Harvard, Vancouver, ISO, and other styles
43

Ke, Huajie. "Fabrication, characterization and analysis of patterned nano-sized material with large magnetic permeability at high frequency." 2013. https://scholarworks.umass.edu/dissertations/AAI3603105.

Full text
Abstract:
Magnetic mesoscopic and nano-sized structures have promising applications such as high-density data storage, magnetic field sensors, and microwave devices. Patterned magnetic structures are especially interesting because their constitutive material, sizes and geometry are easily adjustable in fabrication. This makes manipulation of electromagnetic properties possible and creates many novel features never observed in conventional bulk materials. The artificial magnetic structures that can be engineered to meet specific application purposes are called magnetic metamaterials. This thesis aims to investigate magnetic materials nanostructured to produce high permeability and low-loss performance in the gigahertz (GHz) frequency region. Such a property is highly desired for communication devices with miniaturized size, reduced energy consumption and enhanced signal detection sensitivity; antennas and microwave field sensors are examples of such applications. We first analyze the single domain model for ac magnetization to obtain theoretical understanding and prediction. Then we evaluate all free energy terms for a magnetic dipole to determine which energies (or fields) contribute to the effective magnetic field in our experiments. Second, experimental work is covered, including fabrication, dc characterization and ac characterization of Permalloy and cobalt nanoscale magnetic structures, as well as FePt nanoparticles. Different microwave techniques for sensitive magnetic permeability measurements are discussed in detail for comparison. In the last chapter, micromagnetic simulations are performed to obtain the broadband ac magnetization response spectrum for a single Permalloy nanowire and for two interacting Permalloy nanowires.
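For context, single-domain analyses of ac magnetization of the kind mentioned above conventionally start from the Landau-Lifshitz-Gilbert equation; a standard LaTeX statement (with conventional symbols, not taken from the thesis) is:

\[
  \frac{d\mathbf{M}}{dt}
  \;=\; -\gamma\, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
  \;+\; \frac{\alpha}{M_s}\, \mathbf{M} \times \frac{d\mathbf{M}}{dt},
\]
% where $\gamma$ is the gyromagnetic ratio, $\alpha$ the Gilbert damping constant,
% $M_s$ the saturation magnetization, and $\mathbf{H}_{\mathrm{eff}}$ the effective
% field collecting applied, demagnetizing, anisotropy and exchange contributions.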
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Yujia. "Development and application of the FETI-DPEM algorithm for analysis of three-dimensional large-scale electromagnetic problems /." 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3363018.

Full text
Abstract:
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2009.
Source: Dissertation Abstracts International, Volume: 70-06, Section: B, page: 3691. Adviser: Jianming Jin. Includes bibliographical references (leaves 156-162). Available on microfilm from ProQuest Information and Learning.
APA, Harvard, Vancouver, ISO, and other styles
45

(7023038), Thomas E. Craddock. "Ensuring Large-Displacement Stability in ac Microgrids." Thesis, 2019.

Find full text
Abstract:
Aerospace and shipboard power systems, as well as emerging terrestrial microgrids, typically include a large percentage of regulated power-electronic loads. It is well known that such systems are prone to so-called negative-impedance instabilities that may lead to deleterious oscillations and/or the complete collapse of bus voltage. Numerous small-displacement criteria have been developed to ensure dynamic stability for small load perturbations, and techniques for estimating the regions of asymptotic stability about specific equilibrium points have previously been established. However, these criteria and analysis techniques do not guarantee system stability following large and/or rapid changes in net load power. More recent research has focused on establishing criteria that ensure large-displacement stability for arbitrary time-varying loads provided that the net load power is bounded. These Lyapunov-based techniques and recent advancements in reachability analysis described in this thesis are applied to example dc and ac microgrids to not only introduce a large-displacement stability margin, but to demonstrate that the selected systems can be designed to be large-displacement stable with practicable constraints and parameters.
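As a very rough companion to the idea of checking behavior under bounded, arbitrarily time-varying load power (a Monte Carlo simulation check on a hypothetical first-order constant-power-load bus model, not the formal Lyapunov or reachability machinery of the thesis; the dynamics, load bound and parameters are assumptions):

import numpy as np

def simulate_bus(p_load, v0=1.0, dt=1e-4, c=0.01, g=1.0, vs=1.0):
    """Crude constant-power-load bus model: C dv/dt = g*(vs - v) - p(t)/v.
    Returns the simulated voltage trajectory for a given load power profile p_load."""
    v = np.empty(len(p_load) + 1)
    v[0] = v0
    for k, p in enumerate(p_load):
        v[k + 1] = v[k] + dt / c * (g * (vs - v[k]) - p / max(v[k], 1e-3))
    return v

# Sample many bounded, arbitrarily time-varying load profiles and record the worst voltage.
rng = np.random.default_rng(0)
worst = np.inf
for _ in range(200):
    p = rng.uniform(0.0, 0.2, size=5000)          # net load power bounded by 0.2 pu
    worst = min(worst, simulate_bus(p).min())
print("worst bus voltage over sampled load profiles:", round(worst, 3))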
APA, Harvard, Vancouver, ISO, and other styles
46

Chiang, Chung-Hua, and 江俊華. "Analysis of Schedule Risk Management and Prevention on Large Mechanical and Electrical Engineering Project - Taking Kaohsiung Underground Railway Project as an Example." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/e3w69g.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
In-service Master's Program, Department of Industrial Engineering and Management
Academic year 104
Project risk management over the project life cycle is very important to the progress of construction projects. In the construction process, uncertainties are often encountered. If project risks are not identified, quantified and controlled, they are likely to cause project difficulties, and the project will not be completed within the contract deadline. As a result, the best interests of the project objectives cannot be realized. This study adopted a questionnaire built on the Analytic Hierarchy Process (AHP). By having experts in project management fields fill out questionnaires involving decision-making problems with multiple evaluation criteria, priority weights for the assessment factors can be obtained through a systematic approach. By preventing bad decisions, the purpose of project risk control is achieved. This study uses the electrical and mechanical engineering project of the Kaohsiung railway underground program as an example to identify commonly seen engineering project risk factors in the project planning stage. It is found that the subject company treats five items, namely poor cash flow, unqualified engineers, procurement omissions, construction sequence errors, and non-compliance of as-built work with the design, as the main influencing factors for planning risk control of an engineering project. By analyzing these important factors and proposing methods for improvement, the purpose of prevention and risk control can be achieved.
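As a small illustration of the AHP machinery referenced above (a generic priority-vector and consistency-ratio computation; the pairwise comparison matrix is invented and does not come from the study's questionnaires):

import numpy as np

def ahp_priorities(pairwise):
    """Priority weights from a reciprocal pairwise comparison matrix via the
    principal eigenvector, plus Saaty's consistency ratio CR = CI / RI."""
    n = pairwise.shape[0]
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]   # Saaty's random indices
    return w, ci / ri

# Hypothetical comparison of five planning-stage risk factors (Saaty 1-9 scale).
A = np.array([[1,   3,   5,   3,   7],
              [1/3, 1,   3,   1,   5],
              [1/5, 1/3, 1,   1/3, 3],
              [1/3, 1,   3,   1,   5],
              [1/7, 1/5, 1/3, 1/5, 1]], dtype=float)
weights, cr = ahp_priorities(A)
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))

A consistency ratio below about 0.1 is conventionally taken to mean the expert judgments are acceptably consistent.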
APA, Harvard, Vancouver, ISO, and other styles
47

Pereira, Mário Pascoal Santos. "Operational Analysis of Distribution Systems Featuring Large-scale Variable RES: Contributions of Energy Storage Systems and Switchable Capacitor Banks." Master's thesis, 2017. https://repositorio-aberto.up.pt/handle/10216/105327.

Full text
Abstract:
In the last decade, the level of variable renewable energy sources (RESs) integrated in distribution network systems has been continuously growing. This adds more uncertainty to these systems, which also face many traditional sources of uncertainty, as well as those pertaining to other emerging technologies such as demand response and electric vehicles. As a result, distribution system operators are finding it increasingly difficult to maintain an optimal operation of such network systems. These challenges/limitations are, however, expected to be alleviated when distribution systems undergo the transformation process to smart grids, equipped with appropriate technologies such as energy storage systems (ESSs) and switchable capacitor banks (SCBs). These technologies offer more flexibility in the system, allowing effective management of the uncertainty and variability pertaining to most RESs (such as wind and solar PV power sources). This dissertation presents a stochastic mixed integer linear programming (S-MILP) model aiming to optimally operate distribution network systems featuring large-scale variable renewables and to alleviate the negative impacts of RESs on the overall performance of such systems by means of ESSs and SCBs. The optimization model is based on a linearized AC network model. Furthermore, the proposed operational model is formulated in a stochastic environment, particularly accounting for both the variability and the uncertainty pertaining to demand, wind and solar power production. Such considerations allow one to make a more realistic analysis under various operational conditions. The objective function of the proposed model is to minimize the sum of the expected costs of operation, unserved power and emissions while meeting the most relevant technical and economic constraints. The analysis covers several issues, but with the perspective of maximizing the utilization level of variable RESs, and most importantly, without endangering the stability and integrity of the system or the quality of power delivered to the consumers. In this line, the dissertation presents an extensive analysis concerning the impacts of SCBs and ESSs of different efficiencies (either collectively or individually) in the system. In particular, the overall system performance in terms of costs, losses, voltages and energy mix has been extensively analysed, which is one of the main contributions of this dissertation. Simulation results indicate that strategically placed ESSs and SCBs can substantially increase the usage level of RES power, and simultaneously alleviate the negative impacts of RES intermittency in the considered system. For example, network losses are slashed by more than 70% and total system costs by 69%. Furthermore, the presence of ESSs and SCBs leads to as high as a 96.1% share of RESs in the overall energy mix in the considered system. The energy imported through the substation in this case is limited to 3.9%, which means that the system operates in island mode for most of the time during the 24-hour period. This means that distribution network systems can go "carbon-free" by meeting a large portion of the demand using "cleaner" power produced locally.
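As a toy illustration of the kind of MILP scheduling model described above (a deliberately tiny single-bus example with one battery and one switchable capacitor step, written with the open-source PuLP package; the horizon, prices, loads, efficiencies and device limits are invented, and the model does not reproduce the dissertation's S-MILP formulation):

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

T = range(4)                                  # four illustrative time periods
price = [40.0, 60.0, 120.0, 80.0]             # import price per period
load = [1.0, 1.2, 1.5, 1.1]                   # demand (MW)
pv = [0.2, 0.8, 0.6, 0.1]                     # PV production (MW)

prob = LpProblem("toy_ess_scb_dispatch", LpMinimize)
imp = {t: LpVariable(f"import_{t}", lowBound=0) for t in T}          # grid import
ch = {t: LpVariable(f"charge_{t}", lowBound=0, upBound=0.5) for t in T}
dis = {t: LpVariable(f"discharge_{t}", lowBound=0, upBound=0.5) for t in T}
soc = {t: LpVariable(f"soc_{t}", lowBound=0, upBound=1.0) for t in T}
cap = {t: LpVariable(f"scb_on_{t}", cat="Binary") for t in T}        # capacitor step on/off

prob += lpSum(price[t] * imp[t] for t in T)                          # minimize import cost
for t in T:
    # Power balance: import + PV + discharge = load + charge; the binary SCB term is a
    # stand-in so an integer decision appears in this toy (real SCBs supply reactive
    # power, which the dissertation handles through its linearized AC network model).
    prob += imp[t] + pv[t] + dis[t] + 0.05 * cap[t] == load[t] + ch[t]
    # Battery state of charge with losses split between charging and discharging.
    prev = soc[t - 1] if t > 0 else 0.5
    prob += soc[t] == prev + 0.95 * ch[t] - dis[t] / 0.95

prob.solve()
print([round(value(imp[t]), 3) for t in T], [int(value(cap[t])) for t in T])

The binary SCB variables are what make this a mixed-integer program rather than a plain LP.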
APA, Harvard, Vancouver, ISO, and other styles
48

Pereira, Mário Pascoal Santos. "Operational Analysis of Distribution Systems Featuring Large-scale Variable RES: Contributions of Energy Storage Systems and Switchable Capacitor Banks." Dissertação, 2017. https://repositorio-aberto.up.pt/handle/10216/105327.

Full text
Abstract:
In the last decade, the level of variable renewable energy sources (RESs) integrated in distribution network systems has been continuously growing. This adds more uncertainty to these systems, which also face many traditional sources of uncertainty, as well as those pertaining to other emerging technologies such as demand response and electric vehicles. As a result, distribution system operators are finding it increasingly difficult to maintain an optimal operation of such network systems. These challenges/limitations are, however, expected to be alleviated when distribution systems undergo the transformation process to smart grids, equipped with appropriate technologies such as energy storage systems (ESSs) and switchable capacitor banks (SCBs). These technologies offer more flexibility in the system, allowing effective management of the uncertainty and variability pertaining to most RESs (such as wind and solar PV power sources). This dissertation presents a stochastic mixed integer linear programming (S-MILP) model aiming to optimally operate distribution network systems featuring large-scale variable renewables and to alleviate the negative impacts of RESs on the overall performance of such systems by means of ESSs and SCBs. The optimization model is based on a linearized AC network model. Furthermore, the proposed operational model is formulated in a stochastic environment, particularly accounting for both the variability and the uncertainty pertaining to demand, wind and solar power production. Such considerations allow one to make a more realistic analysis under various operational conditions. The objective function of the proposed model is to minimize the sum of the expected costs of operation, unserved power and emissions while meeting the most relevant technical and economic constraints. The analysis covers several issues, but with the perspective of maximizing the utilization level of variable RESs, and most importantly, without endangering the stability and integrity of the system or the quality of power delivered to the consumers. In this line, the dissertation presents an extensive analysis concerning the impacts of SCBs and ESSs of different efficiencies (either collectively or individually) in the system. In particular, the overall system performance in terms of costs, losses, voltages and energy mix has been extensively analysed, which is one of the main contributions of this dissertation. Simulation results indicate that strategically placed ESSs and SCBs can substantially increase the usage level of RES power, and simultaneously alleviate the negative impacts of RES intermittency in the considered system. For example, network losses are slashed by more than 70% and total system costs by 69%. Furthermore, the presence of ESSs and SCBs leads to as high as a 96.1% share of RESs in the overall energy mix in the considered system. The energy imported through the substation in this case is limited to 3.9%, which means that the system operates in island mode for most of the time during the 24-hour period. This means that distribution network systems can go "carbon-free" by meeting a large portion of the demand using "cleaner" power produced locally.
APA, Harvard, Vancouver, ISO, and other styles
49

(7485122), Miaomiao Ma. "Accuracy Explicitly Controlled H2-Matrix Arithmetic in Linear Complexity and Fast Direct Solutions for Large-Scale Electromagnetic Analysis." Thesis, 2019.

Find full text
Abstract:
The design of advanced engineering systems generally results in large-scale numerical problems, which require efficient computational electromagnetic (CEM) solutions. Among existing CEM methods, iterative methods have been a popular choice since conventional direct solutions are computationally expensive. The optimal complexity of an iterative solver is O(N · Nit · Nrhs), with N being the matrix size, Nit the number of iterations and Nrhs the number of right-hand sides. How to invert or factorize a dense matrix or a sparse matrix of size N in O(N) (optimal) complexity with explicitly controlled accuracy has been a challenging research problem. For solving a dense matrix of size N, the computational complexity of a conventional direct solution is O(N^3); for solving a general sparse matrix arising from a 3-D EM analysis, the best computational complexity of a conventional direct solution is O(N^2). Recently, an H2-matrix based mathematical framework has been developed to obtain fast dense matrix algebra. However, existing linear-complexity H2-based matrix-matrix multiplication and matrix inversion lack an explicit accuracy control. If the accuracy is to be controlled, the inverse as well as the matrix-matrix multiplication algorithm must be completely changed, as the original formatted framework does not offer a mechanism to control the accuracy without increasing complexity.
In this work, we develop a series of new accuracy-controlled fast H2 arithmetic operations, including matrix-matrix multiplication (MMP) without formatted multiplications, minimal-rank MMP, new accuracy-controlled H2 factorization and inversion, new accuracy-controlled H2 factorization and inversion with concurrent change of cluster bases, an H2-based direct sparse solver, and a new HSS recursive inverse with directly controlled accuracy. For constant-rank H2-matrices, the proposed accuracy-controlled H2 arithmetic has a strict O(N) complexity in both time and memory. For ranks that grow linearly with the electrical size, the complexity of the proposed H2 arithmetic is O(N log N) in factorization and inversion time, and O(N) in solution time and memory for solving volume IEs. Applications to large-scale interconnect extraction as well as large-scale scattering analysis, and comparisons with state-of-the-art solvers, have demonstrated the clear advantages of the proposed new H2 arithmetic and the resulting fast direct solutions with explicitly controlled accuracy. In addition to electromagnetic analysis, the new H2 arithmetic developed in this work can also be applied to other disciplines where fast and large-scale numerical solutions are being pursued.
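As a minimal illustration of the explicit accuracy control that underlies hierarchical low-rank (H2/HSS-type) compression (a single off-diagonal block compressed by a truncated SVD to a prescribed relative tolerance; a generic sketch, not the thesis's H2 arithmetic or cluster-basis construction):

import numpy as np

def compress_block(block, tol=1e-6):
    """Return low-rank factors U, V with ||block - U @ V||_2 <= tol * ||block||_2,
    keeping only as many singular values as the tolerance requires."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    rank = int(np.sum(s > tol * s[0]))               # explicit accuracy control via tol
    return U[:, :rank] * s[:rank], Vt[:rank, :]

# A smooth far-field-like interaction block is numerically low rank.
x = np.linspace(0.0, 1.0, 300)
y = np.linspace(5.0, 6.0, 300)                       # well-separated clusters
block = 1.0 / np.abs(x[:, None] - y[None, :])        # 1/r kernel interaction
U, V = compress_block(block, tol=1e-8)
err = np.linalg.norm(block - U @ V, 2) / np.linalg.norm(block, 2)
print("rank:", U.shape[1], "relative error:", f"{err:.2e}")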
APA, Harvard, Vancouver, ISO, and other styles
50

Shah, Kalpesh. "Power Grid Analysis In VLSI Designs." Thesis, 2007. http://hdl.handle.net/2005/503.

Full text
Abstract:
Power has become an important design closure parameter in today's ultra-deep-submicron digital designs. The impact of increasing power spans multiple disciplines, ranging from power supply design, power converter or voltage regulator design, system, board and package thermal analysis, power grid design and signal integrity analysis, to minimizing power itself. This work focuses on the challenges that increasing power poses to power grid design and analysis. Challenges arising from smaller geometries and higher power are well-researched topics, and there is still a lot of scope for further work. Traditionally, designs go through average IR drop analysis. Average IR drop analysis is highly dependent on the estimation of current dissipation. This work proposes a vectorless probabilistic toggle estimation, which is an extension of one of the approaches proposed in the literature. We have further used the toggles computed with this approach to estimate the power of ISCAS89 benchmark circuits, which provides insight into the quality of the generated toggles. The power estimation work is further extended to compare with various state-of-the-art methodologies, i.e., SPICE-based power estimation, logic-simulation-based power estimation, commercially available tool comparisons, etc. We finally arrive at an optimum flow recommendation that can be used according to design needs and schedule. Today's design complexity (high frequencies, high logic densities, and multi-level clock and power gating) has forced the design community to look beyond average IR drop. High switching activity induces power supply fluctuations at the cells in a design, which is known as instantaneous IR drop. However, there is no good methodology in place to analyze this phenomenon. Ad hoc decoupling planning and on-chip intrinsic decoupling capacitance help to contain this noise, but there is no guarantee. This work also applies the average toggle computation approach to compute instantaneous IR drop for designs. Instantaneous IR drop is also known as dynamic IR drop or power supply noise. We propose a cell characterization methodology for standard cells. These data are used to build a power grid model of the design. Finally, the power network is solved to compute the instantaneous IR drop. Leakage power minimization has forced design teams to adopt complex power gating, i.e., multi-level MTCMOS usage in the power grid. This poses additional analysis challenges for the power grid in terms of ON/OFF sequencing and the noise injected by it. This work explains the state of the art here and highlights some of the issues and trade-offs in using MTCMOS logic. It further suggests a simple approach to quickly assess the impact of MTCMOS gates on the power grid in terms of peak currents and IR drop. Alternatively, the suggested approach also helps in MTCMOS gate optimization. The overhead of early leakage optimization can be computed using this approach.
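As a small sketch of the nodal analysis behind static IR-drop computation (a resistive power-grid mesh solved as G v = i; the grid size, segment resistance, cell currents and pad placement are assumptions and this is not the thesis's characterization methodology):

import numpy as np

def ir_drop(n, r_seg=0.05, i_cell=0.001, vdd=1.0):
    """Static IR drop on an n x n resistive mesh tied to Vdd at the four corners.
    Builds the conductance matrix G and solves G v = i for the node voltages."""
    g = 1.0 / r_seg
    N = n * n
    G = np.zeros((N, N))
    I = np.full(N, -i_cell)                  # each node sinks i_cell amps
    idx = lambda r, c: r * n + c
    for r in range(n):
        for c in range(n):
            k = idx(r, c)
            for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= rr < n and 0 <= cc < n:
                    G[k, k] += g
                    G[k, idx(rr, cc)] -= g
    # Corner nodes are pad connections: tie them to Vdd through a stiff conductance.
    for k in (idx(0, 0), idx(0, n - 1), idx(n - 1, 0), idx(n - 1, n - 1)):
        G[k, k] += 1e6
        I[k] += 1e6 * vdd
    v = np.linalg.solve(G, I)
    return vdd - v.max(), vdd - v.min(), vdd - v   # best-case drop, worst-case drop, full map

best, worst, _ = ir_drop(20)
print(f"IR drop ranges from {best*1e3:.2f} mV to {worst*1e3:.2f} mV")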
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography