
Dissertations / Theses on the topic 'Electrical complex'



Consult the top 50 dissertations / theses for your research on the topic 'Electrical complex.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and the bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Dandache, Abbas. "Evaluations électriques et temporelles des PLA complexes (COMPLETE: COMplex PLA Electrical and Temporal Evaluator)." Thesis, directed by François Anceau, Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00307779.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kravchuk, Tetiana, and Mykola Kravchuk. "Complex alternative source of electrical energy." Thesis, National Aviation University, 2018. http://er.nau.edu.ua/handle/NAU/38613.

Full text
Abstract:
A new complex alternative source of electric energy is proposed that combines two renewable forms of energy, solar and wind, in the form of a helicoidal wind generator. Using solar panels in combination with the wind installation is effective in reducing the threshold wind speed required to bring the installation up to its nominal rotation speed, by drawing on the energy produced by the solar panels.
APA, Harvard, Vancouver, ISO, and other styles
3

Leung, Hing Tong Lucullus. "Development of an electrical impedance tomograph for complex impedance imaging." Thesis, University of South Wales, 1991. https://pure.southwales.ac.uk/en/studentthesis/development-of-an-electrical-impedance-tomograph-for-complex-impedance-imaging(b3f26e76-490d-4364-a270-28cff1dccd70).html.

Full text
Abstract:
This project concerns the development of electrical impedance tomography towards the production of complex impedance images. The prime intention was to investigate the feasibility of developing suitable instrumentation, not clinical applications. It also aimed to develop techniques for the performance evaluation of data collection systems. To achieve this it was necessary to design and develop a multi-current-source impedance tomography system, to act as a platform for the current study and for future work. The system developed is capable of producing conductivity and permittivity images. It employs microprocessor-based data collection electronics, providing portability between a range of possible host computers. The development of the system included a study of constant-amplitude current source circuits, leading to the design and employment of a novel circuit. In order to aid system testing, a surface-mount-technology resistor-mesh test object was produced. This has been adopted by the EEC Concerted Action on Impedance Tomography (CAIT) programme as the first standard test object. A computer model of the phantom was produced using the industry-standard ASTEC3 circuit simulation package. This development allows the theoretical performance of any system topology, at any level of detail, to be established. The imaging system has been used to produce images from test objects, as well as forearm and lung images on humans. Whilst the conductivity images produced were good, the permittivity in-vivo images were noisy, despite good permittivity images from test objects. A study of the relative merits of multiple- and single-stimulus systems was carried out as a result of the discrepancies between the in-vivo and test-object images. This study involved a comparison of the author's system with that of Griffiths at the University Hospital of Wales.
The results showed that the multi-current-source system, whilst able to reduce stray capacitance, creates other more significant errors due to circuit matching; future developments in semiconductor device technology may help to overcome this difficulty. It was identified that contact impedances, together with the effective capacitance between the measurement electrode pairs in four-electrode systems, reduce the measurability of changes in phase. A number of benchmarking indices were developed and implemented, both for system characterisation and for practical/theoretical design comparisons.
APA, Harvard, Vancouver, ISO, and other styles
4

Mercer, Phillip Jr. "Steptorials : stepping through complex applications." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91848.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 31).
The steptorial is a new form of help system designed to assist new users in learning complex applications. Steptorials walk through examples of high-level user goals within an application using a hierarchical instruction set, breaking each goal into smaller goals, down to the level of individual interface interactions. Steptorials are unique in that they have the control structure of a reversible programming-language stepper, allowing the user to step over, step back over, or step into any instruction. Further, the user may choose, at any time, to be shown how to do a step, to be guided through it, or to use the application interface without constraint. This allows the autonomy of the user to vary at any step while reducing the risk of trying new operations.
by Phillip Mercer, Jr.
M. Eng.
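The stepper control structure that abstract describes can be sketched as a minimal reversible walk over a hierarchical instruction set. This is an illustrative reconstruction, not the thesis's implementation; all class and method names are hypothetical:

```python
class Step:
    """A hierarchical instruction: a goal label with optional sub-steps."""
    def __init__(self, label, substeps=()):
        self.label, self.substeps = label, list(substeps)

class Steptorial:
    """Minimal reversible stepper: step over, step into, or step back
    through a tree of instructions. Snapshots make every step reversible."""
    def __init__(self, steps):
        self.steps = list(steps)   # current (possibly expanded) sequence
        self.pos = 0               # index of the next instruction
        self.history = []          # (steps, pos) snapshots for step_back

    def current(self):
        return self.steps[self.pos].label if self.pos < len(self.steps) else None

    def step_over(self):
        """Treat the current instruction as atomic and advance past it."""
        self.history.append((self.steps, self.pos))
        self.pos += 1

    def step_into(self):
        """Expand the current instruction into its smaller sub-goals."""
        step = self.steps[self.pos]
        if step.substeps:
            self.history.append((self.steps, self.pos))
            self.steps = self.steps[:self.pos] + step.substeps + self.steps[self.pos + 1:]

    def step_back(self):
        """Reverse the most recent step over / step into."""
        if self.history:
            self.steps, self.pos = self.history.pop()

tour = Steptorial([
    Step("Open a document", [Step("Click File"), Step("Click Open")]),
    Step("Apply a filter"),
])
```

A "show me" or "guide me" mode for each step would layer on top of this control structure; the reversibility comes entirely from the snapshot history.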
APA, Harvard, Vancouver, ISO, and other styles
5

Sanatkar, Mohammad Reza. "Epidemics on complex networks." Thesis, Kansas State University, 2012. http://hdl.handle.net/2097/14097.

Full text
Abstract:
Master of Science
Department of Electrical and Computer Engineering
Karen Garrett
Bala Natarajan
Caterina Scoglio
In this thesis, we propose a statistical model to predict disease dispersal in dynamic networks. We model the process of disease spreading as a discrete-time Markov chain, in which the vector of infection probabilities is the state vector and every element of the state vector is a continuous variable between zero and one. In a discrete-time Markov chain, the state probability vector at each time step depends on the state probability vector at the previous time step and the one-step transition probability matrix, which can be time-variant or time-invariant. If the elements of this matrix are functions of the elements of the state probability vector at the previous step, the corresponding Markov chain is a nonlinear dynamical system; if those elements are independent of the state probability vector, it is a linear dynamical system. We focus especially on the dispersal of soybean rust. In our problem, we have a network of US counties, and we aim to predict which counties are more likely to become infected by soybean rust during a year, based on observations of soybean rust up to that time as well as corresponding observations from previous years. Other data, such as soybean and kudzu densities in each county, daily wind data, and distances between counties, help us build the model. The rapid growth in the number of Internet users in recent years has led malware generators to exploit this potential to attack computer users around the world. Internet users are frequent targets of malicious software every day. The ability of malware to exploit the infrastructures of networks for propagation determines how detrimental it can be to the network's security. Malicious software can cause large outbreaks if it is able to exploit the structure of the Internet and interactions between users to propagate. Epidemics typically start with some initial infected nodes.
Infected nodes can cause their healthy neighbors to become infected with some probability. With time, and in some cases with external intervention, infected nodes can be cured and return to a healthy state. The study of epidemic dispersal on networks aims to explain how epidemics evolve and spread in networks. One of the most interesting questions regarding an epidemic spread in a network is whether the epidemic dies out or results in a massive outbreak. The epidemic threshold is a parameter that addresses this question by considering both the network topology and the epidemic strength.
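The discrete-time Markov-chain view of spreading can be sketched for a simple SIS (susceptible-infected-susceptible) process, where each element of the state vector is a node's infection probability. The 4-node network and the rates beta (infection) and delta (curing) below are illustrative, not the thesis's soybean-rust model:

```python
def sis_step(adj, p_infected, beta, delta):
    """One step of a discrete-time SIS chain. Because the transition
    depends on the current state vector, this is a nonlinear dynamical
    system, as the abstract notes."""
    nxt = []
    for i, neighbors in enumerate(adj):
        # Probability of escaping infection from all infected neighbors.
        p_escape = 1.0
        for j in neighbors:
            p_escape *= 1.0 - beta * p_infected[j]
        p_stay = p_infected[i] * (1.0 - delta)            # infected, not cured
        p_new = (1.0 - p_infected[i]) * (1.0 - p_escape)  # newly infected
        nxt.append(p_stay + p_new)
    return nxt

# Toy 4-node ring network, one initially infected node.
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
state = [1.0, 0.0, 0.0, 0.0]
for _ in range(50):
    state = sis_step(adj, state, beta=0.2, delta=0.4)
```

Whether `state` decays to zero or settles at a positive endemic level as iterations continue is exactly the epidemic-threshold question the abstract raises.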
APA, Harvard, Vancouver, ISO, and other styles
6

OLIVEIRA, WELLINGTON GALDINO ALVES DE. "STUDY AND APPLICATIONS OF COMPLEX NUMBERS: THE USE OF COMPLEX NUMBERS IN THE ANALYSIS OF ELECTRICAL CIRCUITS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=36390@1.

Full text
Abstract:
The teaching of complex number theory in high school is sometimes presented in a way that understates its importance to students. One observable gap is the lack of examples of applications in students' daily lives, so the material ends up generating little meaning in their learning. However, the applications of complex numbers are far more comprehensive than one might imagine, especially in the exact sciences, with engineering as a prime example. This work is intended to broaden the view of high school students by presenting applications and showing how complex numbers are used in other contexts, such as the study of electrical circuits.
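The circuit application is easy to make concrete: in AC (phasor) analysis, resistors, inductors, and capacitors combine into a single complex impedance Z = R + jωL + 1/(jωC), and Ohm's law carries over as I = V/Z. A minimal sketch with illustrative component values:

```python
import cmath
import math

def series_rlc_impedance(R, L, C, f):
    """Total impedance of a series RLC circuit at frequency f (Hz):
    Z = R + j*omega*L + 1/(j*omega*C)."""
    omega = 2 * math.pi * f
    return R + 1j * omega * L + 1 / (1j * omega * C)

# 100-ohm resistor, 10 mH inductor, 1 uF capacitor, driven at 1 kHz.
Z = series_rlc_impedance(R=100.0, L=10e-3, C=1e-6, f=1000.0)
V = 10.0                          # 10 V phasor source, zero phase
I = V / Z                         # complex phasor current
magnitude, phase = abs(I), cmath.phase(I)
```

The magnitude of the complex current gives the amplitude and its argument gives the phase shift relative to the source, which is precisely what makes complex numbers the natural language for AC circuits.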
APA, Harvard, Vancouver, ISO, and other styles
7

Foo, Catherine 1980. "Complex population models with coalescent simulations." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87417.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kennell, Jonathan 1980. "Generative temporal planning with complex processes." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/28414.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2004.
Includes bibliographical references (leaves 101-104).
Autonomous vehicles are increasingly being used in mission-critical applications, and robust methods are needed for controlling these inherently unreliable and complex systems. This thesis advocates the use of model-based programming, which allows mission designers to program autonomous missions at the level of a coach or wing commander. To support such a system, this thesis presents the Spock generative planner. To generate plans, Spock must be able to piece together vehicle commands and team tactics that have a complex behavior represented by concurrent processes. This is in contrast to traditional planners, whose operators represent simple atomic or durative actions. Spock represents operators using the RMPL language, which describes behaviors using parallel and sequential compositions of state and activity episodes. RMPL is useful for controlling mobile autonomous missions because it allows mission designers to quickly encode expressive activity models using object-oriented design methods and an intuitive set of activity combinators. Spock also is significant in that it uniformly represents operators and plan-space processes in terms of Temporal Plan Networks, which support temporal flexibility for robust plan execution. Finally, Spock is implemented as a forward progression optimal planner that walks monotonically forward through plan processes, closing any open conditions and resolving any conflicts. This thesis describes the Spock algorithm in detail, along with example problems and test results.
by Jonathan Kennell.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
9

Mercer, Sean R. "Online microwave measurement of complex dielectric constant." Doctoral thesis, University of Cape Town, 1990. http://hdl.handle.net/11427/8342.

Full text
Abstract:
Includes bibliographical references.
This dissertation examines the problem of on-line measurement of complex dielectric constant for the purpose of dielectric discrimination or product evaluation using microwave techniques. Various methods of signal/sample interaction were studied, and consideration was given to the problem of sorting irregularly shaped discrete samples. The use of microwave transmission and reflection measurements was evaluated. The signal reflection methods were deemed best suited to applications with constant-geometry feed presentation (i.e., a continuous, homogeneous product stream with little variation in surface geometry).
APA, Harvard, Vancouver, ISO, and other styles
10

Hovorka, Ondrej Friedman Gary. "Hysteresis behavior patterns in complex systems /." Philadelphia, Pa. : Drexel University, 2007. http://hdl.handle.net/1860/1791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Fox, Emily Beth. "Bayesian nonparametric learning of complex dynamical phenomena." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/55111.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 257-270).
The complexity of many dynamical phenomena precludes the use of linear models for which exact analytic techniques are available. However, inference on standard nonlinear models quickly becomes intractable. In some cases, Markov switching processes, with switches between a set of simpler models, are employed to describe the observed dynamics. Such models typically rely on pre-specifying the number of Markov modes. In this thesis, we instead take a Bayesian nonparametric approach in defining a prior on the model parameters that allows for flexibility in the complexity of the learned model and for development of efficient inference algorithms. We start by considering dynamical phenomena that can be well-modeled as a hidden discrete Markov process, but in which there is uncertainty about the cardinality of the state space. The standard finite state hidden Markov model (HMM) has been widely applied in speech recognition, digital communications, and bioinformatics, amongst other fields. Through the use of the hierarchical Dirichlet process (HDP), one can examine an HMM with an unbounded number of possible states. We revisit this HDP-HMM and develop a generalization of the model, the sticky HDP-HMM, that allows more robust learning of smoothly varying state dynamics through a learned bias towards self-transitions. We show that this sticky HDP-HMM not only better segments data according to the underlying state sequence, but also improves the predictive performance of the learned model. Additionally, the sticky HDP-HMM enables learning more complex, multimodal emission distributions.
We demonstrate the utility of the sticky HDP-HMM on the NIST speaker diarization database, segmenting audio files into speaker labels while simultaneously identifying the number of speakers present. Although the HDP-HMM and its sticky extension are very flexible time series models, they make a strong Markovian assumption that observations are conditionally independent given the discrete HMM state. This assumption is often insufficient for capturing the temporal dependencies of the observations in real data. To address this issue, we develop extensions of the sticky HDP-HMM for learning two classes of switching dynamical processes: the switching linear dynamical system (SLDS) and the switching vector autoregressive (SVAR) process. These conditionally linear dynamical models can describe a wide range of complex dynamical phenomena from the stochastic volatility of financial time series to the dance of honey bees, two examples we use to show the power and flexibility of our Bayesian nonparametric approach. For all of the presented models, we develop efficient Gibbs sampling algorithms employing a truncated approximation to the HDP that allows incorporation of dynamic programming techniques, greatly improving mixing rates. In many applications, one would like to discover and model dynamical behaviors which are shared among several related time series. By jointly modeling such sequences, we may more robustly estimate representative dynamic models, and also uncover interesting relationships among activities.
In the latter part of this thesis, we consider a Bayesian nonparametric approach to this problem by harnessing the beta process to allow each time series to have infinitely many potential behaviors, while encouraging sharing of behaviors amongst the time series. For this model, we develop an efficient and exact Markov chain Monte Carlo (MCMC) inference algorithm. In particular, we exploit the finite dynamical system induced by a fixed set of behaviors to efficiently compute acceptance probabilities, and reversible jump birth and death proposals to explore new behaviors. We present results on unsupervised segmentation of data from the CMU motion capture database.
by Emily B. Fox.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
12

Xu, Wenjing. "Crystal structure of paired domain--DNA complex." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/32666.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Khurshid, Sarfraz 1972. "Generating structurally complex tests from declarative constraints." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/30083.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2004.
Includes bibliographical references (leaves 119-126).
This dissertation describes a method for systematic constraint-based test generation for programs that take as inputs structurally complex data, presents an automated SAT-based framework for testing such programs, and provides evidence on the feasibility of using this approach to generate high quality test suites and find bugs in non-trivial programs. The framework tests a program systematically on all nonisomorphic inputs (within a given bound on the input size). Test inputs are automatically generated from a given input constraint that characterizes allowed program inputs. In unit testing of object-oriented programs, for example, an input constraint corresponds to the representation invariant; the test inputs are then objects on which to invoke a method under test. Input constraints may additionally describe test purposes and test selection criteria. Constraints are expressed in a simple (first-order) relational logic and solved by translating them into propositional formulas that are handed to an off-the-shelf SAT solver. Solutions found by the SAT solver are lifted back to the relational domain and reified as tests. The TestEra tool implements this framework for testing Java programs. Experiments on generating several complex structures indicate the feasibility of using off-the-shelf SAT solvers for systematic generation of nonisomorphic structures. The tool also uncovered previously unknown errors in several applications including an intentional naming scheme for dynamic networks and a fault-tree analysis system developed for NASA.
by Sarfraz Khurshid.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
14

Deshpande, Pawan. "Decoding algorithms for complex natural language tasks." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41647.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 79-85).
This thesis focuses on developing decoding techniques for complex Natural Language Processing (NLP) tasks. The goal of decoding is to find an optimal or near optimal solution given a model that defines the goodness of a candidate. The task is challenging because in a typical problem the search space is large, and the dependencies between elements of the solution are complex. The goal of this work is two-fold. First, we are interested in developing decoding techniques with strong theoretical guarantees. We develop a decoding model based on the Integer Linear Programming paradigm which is guaranteed to compute the optimal solution and is capable of accounting for a wide range of global constraints. As an alternative, we also present a novel randomized algorithm which can guarantee an arbitrarily high probability of finding the optimal solution. We apply these methods to the task of constructing temporal graphs and to the task of title generation. Second, we are interested in carefully investigating the relations between learning and decoding. We build on the Perceptron framework to integrate the learning and decoding procedures into a single unified process. We use the resulting model to automatically generate tables-of-contents, structures with deep hierarchies and rich contextual dependencies. In all three natural language tasks, our experimental results demonstrate that theoretically grounded and stronger decoding strategies perform better than existing methods. As a final contribution, we have made the source code for these algorithms publicly available for the NLP research community.
by Pawan Deshpande.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
15

Agrawal, Janak. "Distributed parameter estimation for complex energy systems." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/129082.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, September, 2020
Cataloged from student-submitted PDF of thesis.
Includes bibliographical references (pages 81-83).
With multiple energy sources, diverse energy demands, and heterogeneous socioeconomic factors, energy systems are becoming increasingly complex. Multifaceted components have non-linear dynamics and interact with each other as well as the environment. In this thesis, we model components in terms of their own internal dynamics and output variables at the interfaces with neighboring components. We then propose to use a distributed estimation method for obtaining the parameters of the component's internal model based on the measurements at its interfaces. We check whether the theoretical conditions for the distributed estimation approach are met and validate the results obtained. The estimated parameters of the system can then be used for advanced control purposes in the HVAC system. We also use the measurements at the terminals to model and verify the components in the energy space, which is a novel approach proposed by our group. The energy-space approach reflects conservation of power and the rate of change of reactive power. Both power and the rate of change of generalized reactive power are obtained from measurements at the input and output ports of the components by measuring the flows and efforts associated with their ports. A pair of flow and effort variables is measured for electrical and gas ports, as well as for fluids. We show that the energy-space model agrees with the conventional state-space model with high accuracy and that standard measurements available in a commercial HVAC system can be used for calculating the interaction variables in the energy-space model. A novel finding is that unless measurements of both flow and effort variables are used, the sub-model representing the rate of change of reactive power cannot be validated. This implies that commonly used models in engineering which assume constant effort variables may not be sufficiently accurate to support the most efficient control of complex interconnected systems comprising multiple energy conversion processes.
by Janak Agrawal.
M. Eng.
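The flow/effort measurement idea can be sketched in miniature: the power at each port is the product of its effort and flow variables, and the imbalance across ports is what an energy-space model tracks. The component and numbers below are hypothetical, for illustration only:

```python
def port_power(effort, flow):
    """Power at a port as the product of its effort and flow variables
    (voltage * current for electrical ports, pressure * volumetric flow
    rate for fluid ports)."""
    return effort * flow

# Hypothetical component with an electrical input port and a fluid output port.
p_electrical = port_power(effort=230.0, flow=4.0)    # 230 V * 4 A = 920 W in
p_fluid      = port_power(effort=2.0e5, flow=0.004)  # 200 kPa * 4 L/s ~ 800 W out
p_internal   = p_electrical - p_fluid                # stored or dissipated inside
```

Conservation of power requires the input power to equal the output power plus whatever is stored or lost inside the component; note that both an effort and a flow measurement are needed at each port, which is the point the abstract's "novel finding" makes.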
APA, Harvard, Vancouver, ISO, and other styles
16

Yan, Yu. "Photonic microwave filters with negative and complex coefficients." Thesis, University of Ottawa (Canada), 2008. http://hdl.handle.net/10393/28041.

Full text
Abstract:
Photonic microwave filters have been a topic of interest for over two decades because of the many advantageous features offered by photonics, such as large time-bandwidth product, wide tunability, high Q-factor, low loss, light weight, and immunity to electromagnetic interference. Theoretical and experimental studies of photonic microwave filters are presented in this thesis. Three novel techniques to implement photonic microwave filters with negative coefficients and complex coefficients are proposed and experimentally demonstrated. In the first technique, negative coefficients are generated based on cross-polarization modulation (XPolM) in a semiconductor optical amplifier (SOA). In the second technique, negative coefficients are generated based on phase modulation and phase-modulation-to-intensity-modulation (PM-IM) conversion using linearly chirped fiber Bragg gratings (LCFBGs) with positive and negative dispersions. In the third technique, a tunable photonic microwave filter with complex coefficients is implemented using a wideband tunable optical RF phase shifter. A technique to improve the dynamic range of a photonic microwave bandpass filter is also investigated.
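The role of negative and complex coefficients can be illustrated with the underlying signal model: a tapped delay-line photonic microwave filter has an FIR response H(ω) = Σₖ aₖ e^(−jkωT). A negative tap creates a notch, while a complex tap shifts the passband. A sketch with illustrative two-tap filters (not the thesis's optical implementations):

```python
import cmath
import math

def fir_response(coeffs, omega_T):
    """Frequency response H(w) = sum_k a_k * exp(-j*k*w*T) of a tapped
    delay-line filter, evaluated at normalized frequency omega_T = w*T."""
    return sum(a * cmath.exp(-1j * k * omega_T) for k, a in enumerate(coeffs))

# Two equal positive taps: passband peak at w*T = 0.
peak_real = abs(fir_response([1.0, 1.0], 0.0))

# A negative second tap: null (notch) at w*T = 0.
notch_dc = abs(fir_response([1.0, -1.0], 0.0))

# A complex second tap exp(j*pi/2): the passband peak moves to w*T = pi/2.
peak_shifted = abs(fir_response([1.0, cmath.exp(1j * math.pi / 2)], math.pi / 2))
```

With only positive (incoherent-intensity) taps the passband is pinned to DC and its multiples; generating negative and complex coefficients optically, as the thesis does, is what makes notch placement and passband tuning possible.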
APA, Harvard, Vancouver, ISO, and other styles
17

Mueller, Jonas Weylin. "Flexible models for understanding and optimizing complex populations." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118097.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 143-152).
Data analysis is often driven by the goals of understanding or optimizing some population of interest. The first of these two objectives aims to produce insights regarding characteristics of the underlying population, often to facilitate scientific understanding. Crucially, this requires models which produce results that are highly interpretable to the analyst. On the other hand, notions of interpretability are not necessarily as central for determining how to optimize populations, where the aim is to build data-driven systems which learn how to act upon individuals in a manner that maximally improves certain outcomes of interest across the population. In this thesis, we develop interpretable yet flexible modeling frameworks for addressing the former goal, as well as black-box nonparametric methods for addressing the latter. Throughout, we demonstrate various empirical applications of our algorithms, primarily in the biological context of modeling gene expression in large cell populations. For better understanding populations, we introduce two nonparametric models that can accurately reflect interesting characteristics of complex distributions without reliance on restrictive assumptions, while simultaneously remaining highly interpretable through their use of the Wasserstein (optimal transport) metric to summarize changes over an entire population. One approach is principal differences analysis, a projection-based technique that interpretably characterizes differences between two arbitrary high-dimensional probability distributions. Another approach is the TRENDS model, which quantifies the underlying effects of temporal progression in an evolving sequence of distributions that also vary due to confounding noise. While the aforementioned techniques fall under the frequentist regime, we subsequently present a Bayesian framework for the task of optimizing populations. 
Drawing upon the Gaussian process toolkit, our method learns how to best conservatively intervene upon heterogeneous populations in settings with limited data and substantial uncertainty about the underlying relationship between actions and outcomes.
by Jonas W. Mueller.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
18

Wang, Ye M. Eng Massachusetts Institute of Technology. "Fab trees for designing complex 3D printable materials." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85798.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
Title as it appears in Degrees awarded booklet, September 2013: Material design by fab trees for 3D printing. Cataloged from PDF version of thesis.
Includes bibliographical references (pages 67-68).
With more 3D printable materials being invented, 3D printers nowadays can replicate not only geometries, but also appearance and physical properties. On the software side, however, the tight coupling between geometry and material specification, and the lack of tools for specifying materials volumetrically, hinder full usage of the multi-material capability of 3D printers. The heavy dependency on traditional modeling software also makes casual users, who are becoming one of the most important user groups, unwelcome in this rising area. This thesis aims to solve the above problems by proposing fab trees for creating and combining procedural material specifications defined in OpenFL, a language for fabrication similar to the shading language for rendering. The fab tree representation allows users 1) to decouple material specification from geometry, and hence to reuse the created materials on different models; 2) to easily create complicated materials systematically; 3) to have enough freedom to design materials procedurally, and to fully utilize the functionality of today's multi-material 3D printers. In addition, I provide a fully functional user interface to explore desired visualization methods and user interactions for casual users in the 3D printing context.
by Ye Wang.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
19

Marinov, Darko 1976. "Automatic testing of software with structurally complex inputs." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/30161.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 123-132).
Modern software pervasively uses structurally complex data such as linked data structures. The standard approach to generating test suites for such software, manual generation of the inputs in the suite, is tedious and error-prone. This dissertation proposes a new approach for specifying properties of structurally complex test inputs; presents a technique that automates generation of such inputs; describes the Korat tool that implements this technique for Java; and evaluates the effectiveness of Korat in testing a set of data-structure implementations. Our approach allows the developer to describe the properties of valid test inputs using a familiar implementation language such as Java. Specifically, the user provides an imperative predicate--a piece of code that returns a truth value--that returns true if the input satisfies the required property and false otherwise. Korat implements our technique for solving imperative predicates: given a predicate and a bound on the size of the predicate's inputs, Korat automatically generates the bounded-exhaustive test suite that consists of all inputs, within the given bound, that satisfy the property identified by the predicate. To generate these inputs, Korat systematically searches the bounded input space by executing the predicate on the candidate inputs. Korat does this efficiently by pruning the search based on the predicate's executions and by generating only nonisomorphic inputs. Bounded-exhaustive testing is a methodology for testing the code on all inputs within the given small bound.
(cont.) Our experiments on a set of ten linked and array- based data structures show that Korat can efficiently generate bounded-exhaustive test suites from imperative predicates even for very large input spaces. Further, these test suites can achieve high statement, branch, and mutation coverage. The use of our technique for generating structurally complex test inputs also enabled testers in industry to detect faults in real, production-quality applications.
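The predicate-plus-bound recipe in this abstract can be illustrated with a minimal sketch (plain Python, not Korat's actual Java machinery; the naive generate-and-filter below omits Korat's execution-based pruning and isomorphism breaking, which are what make the real tool efficient):

```python
from itertools import product

def is_valid(candidate):
    # Imperative predicate: True iff the list is strictly sorted
    # (stands in for a repOk/invariant check on a linked structure).
    return all(a < b for a, b in zip(candidate, candidate[1:]))

def bounded_exhaustive(max_len, values):
    # Naive search: run the predicate on every candidate within the bound.
    # Korat instead prunes the search using the fields the predicate reads.
    suite = []
    for n in range(max_len + 1):
        for cand in product(values, repeat=n):
            if is_valid(list(cand)):
                suite.append(list(cand))
    return suite

# All valid inputs of length <= 2 over the domain {1, 2, 3}.
suite = bounded_exhaustive(2, [1, 2, 3])
```

Running the predicate over the whole bounded space is exactly what "bounded-exhaustive" means; the generated `suite` can then be fed to the code under test.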
by Darko Marinov.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
20

Moy, Milyn C. (Milyn Cecilia) 1975. "Real-time hand gesture recognition in complex environments." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/50054.

Full text
Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (leaves 65-68).
by Milyn C. Moy.
S.B. and M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
21

Gogia, Sumit. "Insight : interactive machine learning for complex graphics selection." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106021.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 89-91).
Modern vector graphics editors support the creation of a wonderful variety of complex designs and artwork. Users produce highly realistic illustrations, stylized representational art, even nuanced data visualizations. In light of these complex graphics, selections, representations of sets of objects that users want to manipulate, become more complex as well. Direct manipulation tools that artists and designers find accessible and useful for editing graphics such as logos and icons do not have the same applicability in these more complex cases. Given that selection is the first step for nearly all editing in graphics, it is important to enable artists and designers to express these complex selections. This thesis explores the use of interactive machine learning techniques to improve direct selection interfaces. To investigate this approach, I created Insight, an interactive machine learning selection tool for making a relevant class of complex selections: visually similar objects. To make a selection, users iteratively provide examples of selection objects by clicking on them in the graphic. Insight infers a selection from the examples at each step, allowing users to quickly understand results of actions and reactively shape the complex selection. The interaction resembles the direct manipulation interactions artists and designers have found accessible, while helping express complex selections by inferring many parameter changes from simple actions. I evaluated Insight in a user study of digital designers and artists, finding that Insight enabled users to effectively and easily make complex selections not supported by state-of-the-art vector graphics editors. My results contribute to existing work by both indicating a useful approach for providing complex representation access to artists and designers, and showing a new application for interactive machine learning.
by Sumit Gogia.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
22

Du, Dawei. "Biogeography-based optimization for combinatorial problems and complex systems." Cleveland State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=csu1400504249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Peralta, Michael Olivas. "Statistical simulation of complex correlated semiconductor devices." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/284048.

Full text
Abstract:
The various devices (transistors, resistors, etc.) in an integrated semiconductor circuit have very highly coupled or correlated parametric inter-relationships. Adding to the complexity are changes in the parametric values as the sizes and spacings between the devices change. This coupling is not in the form of interaction fields or forces but rather takes place through the correlation of parameters between different devices. These parametric correlations occur because of the processing of the semiconductor wafers through its manufacturing stages. The devices on each wafer have many n-type or p-type doped semiconductor layers in common because of being processed at the same temperature, or in the same gaseous environments, or in the same implantation sessions. In addition, each doped layer has variations over its different regions. All this results in very complex parametric interrelationships between the various devices within the integrated circuit. In turn, these have very influential effects on the variation of key circuit characteristics. In spite of the tremendous importance of knowing and predicting these relationships, accurate methods of predicting these complex relationships between devices have evaded the semiconductor industry. The current methods used, such as statistically independent Monte Carlo simulation and Corner Models, either severely underestimate or severely overestimate the variation of key integrated circuit characteristics of interest. Either way, the current methods are very inaccurate. In order to meet this challenge, the methods covered in this dissertation have been developed and applied to the case at hand. They are based on applications of probability, statistics, stochastic, and random field theory, and various computer algorithms.
Because of the accuracy, the ease with which device correlations are specified, and the use of computer algorithms, it is expected that the techniques described in this dissertation will be the way that accurate statistical integrated circuit simulations will be done by everyone in the industry. In addition, many of the concepts developed here can be applied to other complex correlated systems not necessarily involving semiconductors.
APA, Harvard, Vancouver, ISO, and other styles
24

Hamrick, Jessica B. (Jessica Blake Chandler). "Physical reasoning in complex scenes is sensitive to mass." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77012.

Full text
Abstract:
Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 57-58).
Many human activities require precise judgments about the dynamics and physical properties - for example, mass - of multiple objects. Classic work suggests that people's intuitive models of physics in mass-sensitive situations are relatively poor and error-prone, based on highly simplified heuristics that apply only in special cases. These conclusions seem at odds with the breadth and sophistication of naive physical reasoning in real-world situations. Our work measures the boundaries of people's physical reasoning in mass-sensitive scenarios and tests the richness of intuitive physics knowledge in more complex scenes. We asked participants to make quantitative judgments about stability and other physical properties of virtual 3D towers composed of heavy and light blocks. We found their judgments correlated highly with a model observer that uses simulations based on realistic physical dynamics and sampling-based approximate probabilistic inference to efficiently and accurately estimate these properties. Several alternative heuristic accounts provide substantially worse fits. In a separate task, participants observed virtual 3D billiards-like movies and judged which balls were lighter. In contrast to the previous experiments, we found their judgments to be more consistent with simple, visual heuristics than a simulation-based model that updates its beliefs about mass in response to prediction errors. We conclude that rich internal physics models are likely to play a key role in guiding human common-sense reasoning in prediction-based tasks and emphasize the need for further investigation in inference-based tasks.
by Jessica B. Hamrick.
M.Eng. and S.B.
APA, Harvard, Vancouver, ISO, and other styles
25

Yu, Joseph Hon. "Auto-configuration of Savants in a complex, variable network." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33378.

Full text
Abstract:
Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 63-64).
In this thesis, I present a system design that enables Savants to automatically configure both their network settings and their required application programs when connected to an intelligent data management and application system. Savants are intelligent routers in a large network used to manage the data and events related to communications with electronic identification tags [10]. The ubiquitous nature of the identification tags and the access points that communicate with them requires an information and management system that is equally ubiquitous and able to deal with huge volumes of data. The Savant systems were designed to be such a ubiquitous information and management system. Deploying any ubiquitous system is difficult, and automation is required to streamline its deployment and improve system management, reliability, and performance. My solution to this auto-configuration problem uses NETCONF as a standard language and protocol for configuration communication among Savants. It also uses the Content-Addressable Network (CAN) as a discovery service to help Savants locate configuration information, since a new Savant may not have information about the network structure. With these tools, new Savants can configure themselves automatically with the help of other Savants.
(cont.) Specifically, they can configure their network settings, download and set up software, and integrate with network distributed applications. Future work could expand upon my project by studying an implementation, making provisions for resource-limited Savants, or improving security.
by Joseph Hon Yu.
M.Eng. and S.B.
APA, Harvard, Vancouver, ISO, and other styles
26

Dukellis, John N. (John Nicholas) 1977. "Applications of auction algorithms to complex problems with constraints." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/28455.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (leaves 87-88).
Linear and nonlinear assignment problems are addressed by the use of auction algorithms. The application of auction to the standard linear assignment problem is reviewed. The extension to nonlinear problems is introduced and illustrated with two examples. Techniques that are employed for model reduction include discretization, classification, and imposition of assignment constraints. The tradeoff between solution speed and optimality for the nonlinear problem is analyzed and demonstrated for the sample problem.
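The auction mechanism reviewed here can, for the linear assignment case, be sketched as follows (a minimal Gauss-Seidel epsilon-auction in Python; the function and variable names are illustrative, not from the thesis):

```python
def auction_assignment(benefit, eps=0.01):
    """Assign n persons to n objects, maximizing total benefit.

    Each unassigned person bids for its best object at current prices,
    raising the price by the bidder's value margin plus eps; the final
    assignment is within n*eps of optimal.
    """
    n = len(benefit)
    prices = [0.0] * n
    owner = [None] * n      # owner[j]: person currently holding object j
    assigned = [None] * n   # assigned[i]: object held by person i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        # Net values of all objects for person i at current prices.
        values = [benefit[i][j] - prices[j] for j in range(n)]
        best = max(range(n), key=lambda j: values[j])
        second = max(values[j] for j in range(n) if j != best) if n > 1 else 0.0
        # Bid: raise the price so the margin over the runner-up is eps.
        prices[best] += values[best] - second + eps
        if owner[best] is not None:      # evict the previous holder
            assigned[owner[best]] = None
            unassigned.append(owner[best])
        owner[best] = i
        assigned[i] = best
    return assigned

result = auction_assignment([[10, 1, 1], [1, 10, 1], [1, 1, 10]])
```

For integer benefits, choosing eps smaller than 1/n guarantees an exactly optimal assignment; larger eps trades optimality for faster termination, the speed/optimality tradeoff the abstract analyzes.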
by John N. Dukellis.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
27

Oates, John H. (John Harvey). "Propagation and scattering of electromagnetic waves in complex environments." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/34062.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Rueda, Guerrero María Ximena. "Robustness of complex supply chain networks to targeted attacks." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119719.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 65-67).
In this thesis, we study the robustness of complex supply chain systems from a network science perspective. Through the simulation of targeted attacks on nodes and edges, using different hierarchical measures from network science to select the most relevant components, we evaluate the extent to which local centrality measures can estimate the relevance of a node in maintaining connectivity and efficient communication across the network. We perform the experiments on two real-world supply chain data sets, and on an ensemble of networks generated from network growth models that share simple topological properties with the real-world networks. It is found that all models produce more robust networks than the chosen data sets. In addition, the removal of high average neighbor degree nodes seems to have little impact on the connectivity of the network, and a highly varying impact on the efficiency of the network. Finally, robustness against targeted node and edge removal is found to be more strongly associated with the number of nodes and links in the network than with more complex network measures such as the degree distribution.
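The style of experiment described, removing the highest-centrality nodes and watching connectivity degrade, can be sketched in plain Python (a hypothetical degree-based attack on an adjacency-list graph; the thesis itself uses richer centrality measures and real supply chain data):

```python
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component after deleting `removed`."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:              # plain BFS over the surviving nodes
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def degree_attack(adj, k):
    """Remove the k highest-degree nodes (a local centrality heuristic)."""
    targets = sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:k]
    return largest_component(adj, set(targets))

# A star graph: one hub holds the whole network together.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
```

Plotting `largest_component` against the number of removed nodes, for each centrality measure used to pick targets, is the standard way such robustness comparisons are reported.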
by María Ximena Rueda Guerrero.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
29

Van, Roy Benjamin. "Learning and value function approximation in complex decision processes." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9960.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (p. 127-133).
by Benjamin Van Roy.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
30

Pendock, Graeme. "An investigation of minerals using microwave measurement of complex permittivity." Master's thesis, University of Cape Town, 1990. http://hdl.handle.net/11427/8339.

Full text
Abstract:
Includes bibliography.
Microwave measurement techniques have found many industrial and commercial applications. This measurement potential of microwaves, together with observations that different minerals show different microwave heating characteristics, suggests the possibility of applying microwave techniques to various forms of mineral analysis. Simple, low cost, on-line mineral analysis techniques are of interest to the mining industry. The objectives of this research project were to cover the background theory of microwave interaction with minerals and to investigate different microwave measurement techniques that could possibly be applied to mineral measurement. Measurements were then to be performed on selected minerals in order to observe any differences between them. Finally, to comment on the feasibility of using microwave measurement techniques for the differentiation, identification and analysis of minerals.
APA, Harvard, Vancouver, ISO, and other styles
31

Bergkvist, Johannes. "Testing degradation in a complex vehicle electrical system using Hardware-In-the-Loop." Thesis, Linköping University, Vehicular Systems, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-16549.

Full text
Abstract:

Functionality in the automotive industry is becoming more complex, with complex communication networks between control systems. Information is shared among many control systems and extensive testing ensures high quality.

Degradation testing, which has the objective of testing functionality with some fault present, is performed on single control systems, but is not frequently performed on the entire electrical system. There is a wish to test degradation automatically on the complete electrical system in a so-called Hardware-In-the-Loop laboratory.

A technique is needed to perform these tests on a regular basis. Problems with testing degradation in complex communication systems are described. Methods and solutions to tackle these problems are suggested, finally resulting in two independent test strategies. One strategy is suited to testing degradation in new functionality. The other is to investigate effects in the entire electrical system. Both strategies have been implemented in a Hardware-In-the-Loop laboratory and evaluated.

APA, Harvard, Vancouver, ISO, and other styles
32

Saarela, A. (Asmo). "Deployment of the agile risk management with Jira into complex product development ecosystem." Bachelor's thesis, University of Oulu, 2017. http://urn.fi/URN:NBN:fi:oulu-201710042941.

Full text
Abstract:
This work is a descriptive case study of the deployment of agile risk management with Jira into a complex product development ecosystem. The context is a Development Team in Finland that is leading distributed product development for a small cell radio base station (RBS). The quality management requirements in the ISO 9001:2015 standard have been updated, and the main impact is the deployment of risk-based thinking. The Development Team in Finland has gone through a transformation from release-project-driven development to continuous software development. This major change requires that development processes and development support processes are also updated. Focus is on the improvement of the risk management process, methods and tools. The rationale for risk management improvement is based on the findings of the internal quality audit, and the management's intent is also taken into account. The implementation contains a new risk management process, a customized issue type for the Jira tool, hands-on risk management training, and replacement of the old spreadsheet-based risk management tools. The new process replaces two earlier processes, making things simpler. The unified approach covers both normal risks in product development and product security and privacy-related risks. Agile product development emphasizes the empowerment of the cross-functional team. The results show that deployment of agile risk management has shifted risk management ownership from the release management function to the development teams. The benefits of risk management for the software industry and for the case study are included to motivate the use of risk management in daily operations.
APA, Harvard, Vancouver, ISO, and other styles
33

Kesavareddigari, Himaja. "Analysis and Control of the Propagation of Failures and Misinformation over Complex Networks." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1588320404278331.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Sufian, Amr T. "Monitoring honey and complex liquids by optical chromaticity." Thesis, University of Liverpool, 2014. http://livrepository.liverpool.ac.uk/19633/.

Full text
Abstract:
The potential of optically monitoring complex composite liquids such as honey has been demonstrated using optical light properties. The novel approach has the potential for distinguishing between various honey samples to quantify and discriminate deviations from the norm and to give early warning of contamination/adulteration in honey using readily available, cost-effective and portable instrumentation that can be used robustly in the field, replacing individual absolute measuring instruments. The novel approach was developed based upon chromatically processing test data with three different optical methods simultaneously (transmission, polarization and fluorescence) for use as primary and secondary chromatic maps and an assessment flow chart to provide a rapid decision capability on the condition of the honey according to quality attributes. As such it can provide insight into conditions important to the food industry. Novel methods for compensation, calibration, normalization and ambient light rejection procedures have been developed to allow operation in a range of lighting conditions such as in the field and factory. The chromatic approach sensitivity for identifying the correct classification of high quality honey samples was 91% and the sensitivity for identifying very poor quality samples was 75%. The portable honey monitoring system was tested in field trials at various locations across Yemen for monitoring the condition of honey samples. The sensitivity for correct classification of the high quality honey samples was 88% and the sensitivity for identifying very poor quality honey samples was 63%. The chromatic methodology provided robustness for field use.
APA, Harvard, Vancouver, ISO, and other styles
35

Anderson, Christopher Jean B. Randall. "Determining the complex permittivity of materials with the Waveguide-Cutoff method." Waco, Tex. : Baylor University, 2006. http://hdl.handle.net/2104/4206.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Hermas, Wael. "Approach to coverage-driven functional verification of complex multimillion gate ASICs." Thesis, University of Ottawa (Canada), 2006. http://hdl.handle.net/10393/27370.

Full text
Abstract:
Today's ASIC designs are very complex, each consisting of multiple millions of gates. This makes it difficult, almost impossible, for verification engineers to verify the whole design thoroughly. The complexity of the verification effort grows exponentially. The most common verification methodology used today is deterministic testing. For today's large designs, hundreds of deterministic test cases are required to verify the whole design. This is very time consuming and requires substantial engineering manpower. Since time to market is essential, engineers are forced to send the ASIC for fabrication even though many functionalities are not covered. A solution to this problem is Coverage-Driven Functional Verification (CDV). The CDV approach is based on ASIC functionalities. The verification process is done in the early stages of the design; in fact, it is done right after the ASIC specifications are outlined and in parallel with RTL development. Functional coverage is performed by collecting coverage items that correspond to ASIC functionalities. Using functional coverage minimizes the number of test cases and enhances the verification process. In this thesis, the "Ethernet IP Core" Verilog design from OpenCores is used. The Ethernet IP Core is a 10/100 Media Access Controller (MAC). It consists of a synthesizable Verilog RTL core that provides all features necessary to implement the Layer 2 protocol of the Ethernet standard. Both the deterministic testing approach and Coverage-Driven Functional Verification are applied to the Ethernet IP Core design. The two verification methodologies are compared, and a conclusion is drawn that CDV is the way to verify today's complex multi-million-gate ASICs. Cadence Specman (e language - IEEE 1647 e) is chosen as the verification tool because it provides different features for the Coverage-Driven Functional Verification methodology.
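The idea of collecting coverage items can be illustrated with a toy sketch (Python standing in for Specman's e language; the coverage bins below, frame-size class crossed with destination type, are invented for illustration, not taken from the thesis):

```python
import random

# Toy functional-coverage model: count how often each coverage bin
# (frame-size class x destination type) is exercised by random stimulus.
bins = {(size, dest): 0
        for size in ("short", "long")
        for dest in ("unicast", "broadcast")}

def classify(frame_len, dest_addr):
    """Map one stimulus item onto its coverage bin."""
    size = "short" if frame_len < 512 else "long"
    dest = "broadcast" if dest_addr == 0xFFFFFFFFFFFF else "unicast"
    return size, dest

random.seed(0)
for _ in range(200):  # randomized stimulus in place of hand-written tests
    frame_len = random.randint(64, 1518)
    dest_addr = random.choice([0xFFFFFFFFFFFF, 0x123456789ABC])
    bins[classify(frame_len, dest_addr)] += 1

# Fraction of functional bins hit at least once: the CDV progress metric.
coverage = sum(1 for hits in bins.values() if hits) / len(bins)
```

Any bin left at zero points at a functionality the random stimulus never reached, which is exactly the feedback CDV uses to steer test generation instead of writing one deterministic test per feature.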
APA, Harvard, Vancouver, ISO, and other styles
37

Niessen, Christopher Charles. "A VLSI systolic array processor for complex singular value decomposition." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/34099.

Full text
Abstract:
Thesis (B.S. and M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (leaves 219-221).
The singular value decomposition is one example of a variety of more complex routines that are finding use in modern high performance signal processing systems. In the interest of achieving the maximum possible performance, a systolic array processor for computing the singular value decomposition of an arbitrary complex matrix was designed using a silicon compiler system. This system allows for ease of design by specification of the processor architecture in a high level language, utilizing parts from a variety of cell libraries, while still benefiting from the power of custom VLSI. The level of abstraction provided by this system allowed more complex functional units to be built up from existing simple library parts. A novel fast interpolation cell for computation of square roots and inverse square roots was designed, allowing for a new algebraic approach to the singular value decomposition problem. The processors connect together in a systolic array to maximize computational efficiency while minimizing overhead due to high communication requirements.
by Christopher Charles Niessen.
B.S. and M.S.
APA, Harvard, Vancouver, ISO, and other styles
38

Louie, Raymond (Raymond T. ). 1976. "Hybrid intelligent systems integration into complex multi-source information systems." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86533.

Full text
Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (leaves 98-100).
by Raymond Louie.
S.B. and M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
39

Hwang, Jung-Taik. "A fragmentation technique for parsing complex sentences for machine translation." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10204.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (leaves 110-111).
by Jung-Taik Hwang.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
40

Leedekerken, Jacques Chadwick. "Mapping of complex marine environments using an unmanned surface craft." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68492.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 185-199).
Recent technology has combined accurate GPS localization with mapping to build 3D maps in a diverse range of terrestrial environments, but the mapping of marine environments lags behind. This is particularly true in shallow water and coastal areas with man-made structures such as bridges, piers, and marinas, which can pose formidable challenges to autonomous underwater vehicle (AUV) operations. In this thesis, we propose a new approach for mapping shallow water marine environments, combining data from both above and below the water in a robust probabilistic state estimation framework. The ability to rapidly acquire detailed maps of these environments would have many applications, including surveillance, environmental monitoring, forensic search, and disaster recovery. Whereas most recent AUV mapping research has been limited to open waters, far from man-made surface structures, in our work we focus on complex shallow water environments, such as rivers and harbors, where man-made structures block GPS signals and pose hazards to navigation. Our goal is to enable an autonomous surface craft to combine data from the heterogeneous environments above and below the water surface - as if the water were drained, and we had a complete integrated model of the marine environment, with full visibility. To tackle this problem, we propose a new framework for 3D SLAM in marine environments that combines data obtained concurrently from above and below the water in a robust probabilistic state estimation framework. Our work makes systems, algorithmic, and experimental contributions in perceptual robotics for the marine environment. We have created a novel Autonomous Surface Vehicle (ASV), equipped with substantial onboard computation and an extensive sensor suite that includes three SICK lidars, a Blueview MB2250 imaging sonar, a Doppler Velocity Log, and an integrated global positioning system/inertial measurement unit (GPS/IMU) device. 
The data from these sensors is processed in a hybrid metric/topological SLAM state estimation framework. A key challenge to mapping is extracting effective constraints from 3D lidar data despite GPS loss and reacquisition. This was achieved by developing a GPS trust engine that uses a semi-supervised learning classifier to ascertain the validity of GPS information for different segments of the vehicle trajectory. This eliminates the troublesome effects of multipath on the vehicle trajectory estimate, and provides cues for submap decomposition. Localization from lidar point clouds is performed using octrees combined with Iterative Closest Point (ICP) matching, which provides constraints between submaps both within and across different mapping sessions. Submap positions are optimized via least squares optimization of the graph of constraints, to achieve global alignment. The global vehicle trajectory is used for subsea sonar bathymetric map generation and for mesh reconstruction from lidar data for 3D visualization of above-water structures. We present experimental results in the vicinity of several structures spanning or along the Charles River between Boston and Cambridge, MA. The Harvard and Longfellow Bridges, three sailing pavilions and a yacht club provide structures of interest, having both extensive superstructure and subsurface foundations. To quantitatively assess the mapping error, we compare against a georeferenced model of the Harvard Bridge using blueprints from the Library of Congress. Our results demonstrate the potential of this new approach to achieve robust and efficient model capture for complex shallow-water marine environments. Future work aims to incorporate autonomy for path planning of a region of interest while performing collision avoidance to enable fully autonomous surveys that achieve full sensor coverage of a complete marine environment.
by Jacques Chadwick Leedekerken.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
41

Larsen, Erik Ph D. Massachusetts Institute of Technology. "Pitch representations in the auditory nerve : two concurrent complex tones." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45830.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 39-43).
Pitch differences between concurrent sounds are important cues used in auditory scene analysis and also play a major role in music perception. To investigate the neural codes underlying these perceptual abilities, we recorded from single fibers in the cat auditory nerve in response to two concurrent harmonic complex tones with missing fundamentals and equal-amplitude harmonics. We investigated the efficacy of rate-place and interspike-interval codes to represent both pitches of the two tones, which had fundamental frequency (F0) ratios of 15/14 or 11/9. We relied on the principle of scaling invariance in cochlear mechanics to infer the spatiotemporal response patterns to a given stimulus from a series of measurements made in a single fiber as a function of F0. Templates created by a peripheral auditory model were used to estimate the F0s of double complex tones from the inferred distribution of firing rate along the tonotopic axis. This rate-place representation was accurate for F0s above about 900 Hz. Surprisingly, rate-based F0 estimates were accurate even when the two-tone mixture contained no resolved harmonics, so long as some harmonics were resolved prior to mixing. We also extended methods used previously for single complex tones to estimate the F0s of concurrent complex tones from interspike-interval distributions pooled over the tonotopic axis. The interval-based representation was accurate for F0s below about 900 Hz, where the two-tone mixture contained no resolved harmonics. Together, the rate-place and interval-based representations allow accurate pitch perception for concurrent sounds over the entire range of human voice and cat vocalizations.
by Erik Larsen.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
42

Kamon, Mattan 1969. "Efficient techniques for inductance extraction of complex 3-D geometries." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/12275.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Hartwell, Glenn D. "Improved geo-spatial resolution using a modified approach to the complex ambiguity function (CAF)." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Sep%5FHartwell.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, September 2005.
Thesis Advisor(s): Herschel H. Loomis, Jr. Includes bibliographical references (p. 99-101). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
44

Yilmaz, Sener. "Generalized Area Tracking Using Complex Discrete Wavelet Transform: The Complex Wavelet Tracker." PhD thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/3/12608643/index.pdf.

Full text
Abstract:
In this work, a new method is proposed that can be used for area tracking. This method is based on the Complex Discrete Wavelet Transform (CDWT) developed by Magarey and Kingsbury. The CDWT has advantages over the traditional Discrete Wavelet Transform such as approximate shift invariance, improved directional selectivity, and robustness to noise and illumination changes. The proposed method is a generalization of the CDWT-based motion estimation method developed by Magarey and Kingsbury. The Complex Wavelet Tracker extends the original method to estimate the true motion of regions according to a parametric motion model. In this way, rotation, scaling, and shear types of motion can be handled in addition to pure translation. Simulations have been performed on the proposed method, including both quantitative and qualitative tests. Quantitative tests are performed on synthetically created test sequences, with results compared to ground truth and to intensity-based methods. Qualitative tests are performed on real sequences and evaluations are presented empirically, again compared with intensity-based methods. It is observed that the proposed method is very accurate in handling affine deformations over long-term sequences and is robust to different target signatures and illumination changes. The accuracy of the proposed method is comparable to that of intensity-based methods. In addition, it can handle a wider range of cases and is more robust to illumination changes than intensity-based methods. The method can be implemented in real-time and could be a powerful replacement for current area trackers.
APA, Harvard, Vancouver, ISO, and other styles
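The key idea behind CDWT-style motion estimation is that displacement shows up as a phase difference between complex filter responses. As a heavily simplified 1-D sketch (illustrative only; the thesis works with 2-D subbands and a parametric affine model, and every name and parameter below is hypothetical):

```python
import numpy as np

def estimate_shift(x, y, f0=0.1, sigma=6.0, half=16):
    """Estimate a global sub-sample delay of y relative to x from the
    phase difference of one complex (Gabor-like) filter response."""
    n = np.arange(-half, half + 1)
    g = np.exp(-n**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f0 * n)
    X = np.convolve(x, g, mode="valid")
    Y = np.convolve(y, g, mode="valid")
    dphi = np.angle(np.sum(X * np.conj(Y)))   # magnitude-weighted mean phase
    return dphi / (2 * np.pi * f0)

# Toy check: a narrowband signal delayed by 0.4 samples
t = np.arange(200)
x = np.cos(2 * np.pi * 0.1 * t)
y = np.cos(2 * np.pi * 0.1 * (t - 0.4))
print(estimate_shift(x, y))                   # close to 0.4
```

Because the estimate comes from phase rather than intensity, it is naturally insensitive to global illumination scaling, which is one of the robustness properties the abstract highlights.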
45

Sosa, Daniel N. "Principled "convergence" non-coding rare variant association testing in complex disease." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113171.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 29-31).
Although many genetic loci pertinent to complex diseases have been identified and despite the fact that complex diseases remain an immense burden to healthcare globally, many details about the mechanism of these diseases are still unknown. Thus far, genome-wide association studies (GWAS) have only explained a small proportion of disease heritability, indicating that there is a large number of additional loci that contribute to complex diseases like type 2 diabetes (T2D), which is the primary case study in this work. We overcome some of the limitations of rare variant studies by conducting weighted aggregate association tests in a framework we call "Convergence". We compare potential cell type specific regulatory loci assigned to genes, which serve as the basis for grouping variants and integrated predictors of functional consequence of variants, which serve as variant weights. We demonstrate that this methodology is able to detect significant association to T2D for genes relevant for body weight homeostasis, adipocyte proliferation, and inflammation. As a result, this work provides a principled framework for improving the efficacy of RVAS by successfully converging the abundant epigenetic information available to understand complex disease.
by Daniel N. Sosa.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
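The "weighted aggregate association tests" mentioned in this abstract belong to the burden-test family. As a generic, hedged sketch of that family (this is not the thesis's Convergence framework; the simulation, weights, and permutation scheme are invented for illustration):

```python
import numpy as np

def burden_test(G, y, w, n_perm=2000, seed=0):
    """Weighted burden test: collapse rare-variant genotypes (samples x
    variants) into one weighted score per sample, then compare mean scores
    of cases vs. controls with a one-sided permutation p-value."""
    rng = np.random.default_rng(seed)
    score = G @ w                              # per-sample weighted burden
    def stat(labels):
        return score[labels == 1].mean() - score[labels == 0].mean()
    observed = stat(y)
    perms = np.array([stat(rng.permutation(y)) for _ in range(n_perm)])
    return (1 + np.sum(perms >= observed)) / (1 + n_perm)

# Simulated gene region: cases (y = 1) enriched for rare alleles
rng = np.random.default_rng(1)
n_cases, n_ctrl, n_var = 100, 100, 20
G = np.vstack([rng.binomial(1, 0.08, size=(n_cases, n_var)),
               rng.binomial(1, 0.03, size=(n_ctrl, n_var))]).astype(float)
y = np.array([1] * n_cases + [0] * n_ctrl)
w = np.ones(n_var)        # stand-in for functional-consequence weights
print(burden_test(G, y, w))
```

In the framework the abstract describes, the variant grouping would come from cell-type-specific regulatory annotations and `w` from integrated functional-consequence predictors rather than the uniform weights used here.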
46

Marti, Uttara P. "Enhancement of electromagnetic propagation through complex media for Radio Frequency Identification." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33287.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 93-94).
In this thesis, I present and examine the fundamental limitations involved in Radio Frequency Identification (RFID) as well as provide a means to improve reader-tag communication in ultra high frequency RFID systems. The ultimate goal in an RFID system is to maximize the communication link between the reader and the tags while, at the same time, minimizing the effect of product material, geometry and orientation. Reader-tag communication has improved significantly over the past five years; however, tag operations continue to be extremely sensitive to their environment. Ultra high frequencies present unique problems in transmission, generation and circuit design that are not encountered at lower frequencies. Based on the fundamental constraints on these passive RFID systems, such as electromagnetics, power limitations and government regulations, I analyzed electromagnetic propagation through materials as applied to RFID tagged cases and pallets. Applying the electromagnetic concept of conductive parallel plates to enhance electromagnetic power to RFID tagged cases and pallets, I suggest an alternative to the current pallet structure.
by Uttara P. Marti.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
47

Zhu, Zhenhai 1970. "Efficient techniques for wideband impedance extraction of complex 3-D geometries." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87322.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Preciado, Víctor Manuel. "Spectral analysis for stochastic models of large-scale complex dynamical networks." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45873.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 179-196).
Research on large-scale complex networks has important applications in diverse systems of current interest, including the Internet, the World-Wide Web, social, biological, and chemical networks. The growing availability of massive databases, computing facilities, and reliable data analysis tools has provided a powerful framework to explore structural properties of such real-world networks. However, one cannot efficiently retrieve and store the exact or full topology for many large-scale networks. As an alternative, several stochastic network models have been proposed that attempt to capture essential characteristics of such complex topologies. Network researchers then use these stochastic models to generate topologies similar to the complex network of interest and use these topologies to test, for example, the behavior of dynamical processes in the network. In general, the topological properties of a network are not directly evident in the behavior of dynamical processes running on it. On the other hand, the eigenvalue spectra of certain matricial representations of the network topology do relate quite directly to the behavior of many dynamical processes of interest, such as random walks, Markov processes, virus/rumor spreading, or synchronization of oscillators in a network. This thesis studies spectral properties of popular stochastic network models proposed in recent years. In particular, we develop several methods to determine or estimate the spectral moments of these models. We also present a variety of techniques to extract relevant spectral information from a finite sequence of spectral moments. A range of numerical examples throughout the thesis confirms the efficacy of our approach. Our ultimate objective is to use such results to understand and predict the behavior of dynamical processes taking place in large-scale networks.
by Víctor Manuel Preciado.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
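The spectral moments this abstract centers on have a simple empirical counterpart: the k-th moment of a graph's eigenvalue spectrum equals trace(A^k)/n, which counts closed walks of length k. A minimal sketch of that identity (illustrative only, not the thesis's analytical machinery; the Erdős–Rényi sample below is just a convenient test case):

```python
import numpy as np

def spectral_moments(A, k_max):
    """Empirical spectral moments m_k = trace(A^k) / n, k = 1..k_max,
    computed from matrix powers instead of an eigendecomposition."""
    n = A.shape[0]
    moments, P = [], np.eye(n)
    for _ in range(k_max):
        P = P @ A
        moments.append(np.trace(P) / n)
    return moments

# Erdos-Renyi G(n, p) sample: trace-based moments match eigenvalue moments
rng = np.random.default_rng(1)
n, p = 200, 0.1
upper = np.triu(rng.random((n, n)) < p, k=1).astype(float)
A = upper + upper.T                       # symmetric, zero diagonal
m = spectral_moments(A, 4)
eig = np.linalg.eigvalsh(A)
print(m[1], np.mean(eig**2))              # m_2 equals the mean degree
```

For simple graphs m_1 = 0 (no self-loops) and m_2 is the mean degree, which is why low-order moments already constrain dynamical quantities such as mixing rates of random walks on the network.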
49

Moss, Christopher D. Q. (Christopher Doniert Q.). 1973. "Numerical methods for electromagnetic wave propagation and scattering in complex media." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/26909.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Vita.
Includes bibliographical references (p. 227-242).
Numerical methods are developed to study various applications in electromagnetic wave propagation and scattering. Analytical methods are used where possible to enhance the efficiency, accuracy, and applicability of the numerical methods. Electromagnetic induction (EMI) sensing is a popular technique to detect and discriminate buried unexploded ordnance (UXO). Time domain EMI sensing uses a transient primary magnetic field to induce currents within the UXO. These currents induce a secondary field that is measured and used to determine characteristics of the UXO. It is shown that the EMI response is difficult to calculate in early time when the skin depth is small. A new numerical method is developed to obtain an accurate and fast solution of the early time EMI response. The method is combined with the finite element method to provide the entire time domain response. The results are compared with analytical solutions and experimental data, and excellent agreement is obtained. A fast Method of Moments is presented to calculate electromagnetic wave scattering from layered one dimensional rough surfaces. To facilitate the solution, the Forward Backward method with Spectral Acceleration is applied. As an example, a dielectric layer on a perfect electric conductor surface is studied. First, the numerical results are compared with the analytical solution for layered flat surfaces to partly validate the formulation. Second, the accuracy, efficiency, and convergence of the method are studied for various rough surfaces and layer permittivities. The Finite Difference Time Domain (FDTD) method is used to study metamaterials exhibiting both negative permittivity and permeability in certain frequency bands. The structure under study is the well-known periodic arrangement of rods and split-ring resonators, previously used in experimental setups. For the first time, the numerical results of this work show that fields propagating inside the metamaterial with a forward power direction exhibit a backward phase velocity and negative index of refraction. A new metamaterial design is presented that is less lossy than previous designs. The effects of numerical dispersion in the FDTD method are investigated for layered, anisotropic media. The numerical dispersion relation is derived for diagonally anisotropic media. The analysis is applied to minimize the numerical dispersion error of Huygens' plane wave sources in layered, uniaxial media. For usual discretization sizes, a typical reduction of the scattered field error on the order of 30 dB is demonstrated. The new FDTD method is then used to study the Angular Correlation Function (ACF) of the scattered fields from continuous random media with and without a target object present. The ACF is shown to be as much as 10 dB greater when a target object is present for situations where the target is undetectable by examination of the radar cross section only.
by Christopher D. Moss.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
50

Singh, Vineeta. "Segmentation of Regions with Complex Boundaries." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1479821009146599.

Full text
APA, Harvard, Vancouver, ISO, and other styles
