
Dissertations / Theses on the topic 'Very large scale motion'



Consult the top 50 dissertations / theses for your research on the topic 'Very large scale motion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Sack, Warren. "Design for very large-scale conversations." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/62349.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2000.
Includes bibliographical references (leaves 184-200).
On the Internet there are now very large-scale conversations (VLSCs) in which hundreds, even thousands, of people exchange messages across international borders in daily, many-to-many communications. It is my thesis that VLSC is an emergent communication medium that engenders new social and linguistic connections between people. VLSC poses fundamental challenges to the analytic tools and descriptive methodologies of linguistics and sociology previously developed to understand conversations of a much smaller scale. Consequently, the challenge for software design is this: How can the tools of social science be appropriated and improved upon to create better interfaces for participants and interested observers to understand and critically reflect upon conversation? This dissertation accomplishes two pieces of work. Firstly, the design, implementation, and demonstration of a proof-of-concept, VLSC interface is presented. The Conversation Map system provides a means to explore and question the social and linguistic structure of very large-scale conversations (e.g., Usenet newsgroups). Secondly, the thinking that went into the design of the Conversation Map system is generalized and articulated as an aesthetics, ethics, and epistemology of design for VLSC. The goal of the second, theoretical portion of the thesis is to provide a means to describe the emergent phenomenon of VLSC and a vocabulary for critiquing software designed for VLSC and computer-mediated conversation in general.
Warren Sack.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
2

Hughes, John Barry. "Analogue techniques for very large scale integrated circuits." Thesis, University of Southampton, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333583.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gross, Peter Alan. "Rapid single flux quantum very large scale integration." Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/49734.

Full text
Abstract:
Thesis (MScEng)--University of Stellenbosch, 2002.
ENGLISH ABSTRACT: Very Large Scale Integration (VLSI) of the Rapid Single Flux Quantum (RSFQ) superconducting logic family is researched. The design methodologies used for large-scale digital systems and the related logistics are reviewed. A brief overview of basic RSFQ logic gates is given, with their application in a cell-based layout scheme suited to RSFQ in mind. A standard cell model incorporating these cells is then proposed, on which a library of low-temperature superconducting (LTS) cells is laid out. Computer techniques for storing and manipulating large-scale circuit netlists are investigated. On this basis, a method for technology-mapping Boolean circuits to an RSFQ equivalent is achieved. On-chip placements are made, optimized for minimum net length, routed and exported to a popular electronic mask format. Finally, the convergent technology fields of solid-state cooling and high-temperature superconducting (HTS) electronics are investigated. This leads to a proposal for a low-profile, low-cost HTS cryopackaging concept.
AFRIKAANSE OPSOMMING: Grootskaalse integrasie (VLSI) van die "Rapid Single Flux Quantum" (RSFQ) supergeleidende familie van logiese hekke word uiteengesit. Insig in die ontwerpmetodes vir grootskaaIse digitale stelsels en verwante aspekte word ondersoek. 'n Kort oorsig van basiese RSFQ logiese hekke word gegee, met hulle toepassing in 'n uitlegskema wat geskik is vir RSFQ. 'n Standaard sel model, wat bogenoemde selle insluit, word voorgestel en 'n selbiblioteek word uitgele vir lae temperatuur supergeleidende bane. Ondersoek word ingestel na die manipulasie van die beskrywing van elektroniese bane en 'n manier om logiese Boolese baanbeskrywings om te skakel na fisiese RSFQ bane. Die fisiese plasing van selle word bespreek ten einde die verbindingslengte tussen selle te minimeer. Die finale uitleg word omgeskakel na 'n staandaard elektroniese formaat vir baanuitlegte. Die konvergerende tegnologievelde van "soliede toestand" verkoeling en hoe-temperatuur supergeleidende elektroniese bane word bespreek. Ten slotte word 'n nuwe tipe, lae profiel en lae koste kriogeniese verpakking voorgestel.
APA, Harvard, Vancouver, ISO, and other styles
4

Berghold, Gerd. "Towards very large scale DFT electronic structure calculations." [S.l. : s.n.], 2001. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB9519379.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rosas, José Humberto Ablanedo. "Algorithms for very large scale set covering problems /." Full text available from ProQuest UM Digital Dissertations, 2007. http://0-proquest.umi.com.umiss.lib.olemiss.edu/pqdweb?index=0&did=1609001671&SrchMode=2&sid=1&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1244747021&clientId=22256.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Qin, Tian. "Nano-electromechanical relay-based very-large-scale integrated circuits." Thesis, University of Bristol, 2017. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.723513.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Wei. "A reconfigurable architecture for very large scale microelectronic systems." Thesis, University of Edinburgh, 1986. http://hdl.handle.net/1842/14510.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sena, Giuseppe A. (Giuseppe Antonio). "Very large scale finite differences in modeling of seismic waves." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/58055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jha, Krishna Chandra. "Very large-scale neighborhood search heuristics for combinatorial optimization problems." [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0004352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ahadian, Joseph F. (Joseph Farzin). "Development of a monolithic very large scale optoelectronic integrated circuit technology." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9120.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (p. 507-527).
Optical interconnects have been proposed for use in high-speed digital systems as a means of overcoming the performance limitations of electrical interconnects at length scales ranging from one millimeter to one hundred meters. To achieve this goal, an optoelectronic very large scale integration (OE-VLSI) technology is needed which closely couples large numbers of optoelectronic devices, such as light emitters and photodetectors, with complex electronics. This thesis has been concerned with the development of an optoelectronic integration technology known as Epitaxy-on-Electronics (EoE). EoE produces monolithic optoelectronic integrated circuits (OEICs) by combining conventional epitaxial growth and fabrication techniques with commercial GaAs VLSI electronics. Proceeding from previous feasibility demonstrations, the growth and fabrication practices underlying the EoE integration process have been extensively revised and extended. The effectiveness of the resulting process has been demonstrated by fabricating the first monolithic, VLSI-complexity OEICs featuring light-emitting diodes (LEDs). As part of a research foundry project, components of this type were designed and tested by a number of groups involved in optical interconnect system development. To further realize the potential of the EoE technology, and to make its capabilities accessible to a broader user community, the focus of this work was extended beyond the development of the integration process to encompass a study of high-speed photodetectors implemented in the GaAs VLSI process, to examine the role of the EoE technology within optical interconnect applications, to formulate an analytical framework for the design of digital optical interconnects, and to implement compact, low power laser driver and optical receiver circuitry needed to implement these interconnects.
by Joseph F. Ahadian.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
11

Bampi, Sergio. "A modified lightly doped drain mosfet for very large scale integration." Biblioteca Digital de Teses e Dissertações da UFRGS, 1987. http://hdl.handle.net/10183/17967.

Full text
Abstract:
Reducing MOSFET dimensions while maintaining a constant supply voltage leads to higher electric fields inside the active regions of VLSI transistors. Operation of micron and submicron MOSFETs in the presence of high-field effects has required design innovations so that a constant supply voltage, acceptable punchthrough voltage, and long-term reliability are possible as device scaling continues. Drain engineering is necessary to cope with the susceptibility of MOSFETs to hot-carrier-related degradation. Reducing the electric fields at the drain end of the channel is critical to device reliability because degradation is related to carrier heating as carriers traverse regions with field strength in excess of 100 kV/cm. Optimized lightly doped drain (LDD) structures that spread the high electric field at the drain ensure the reliable 5 V operation of micron-sized n-channel MOSFETs. Recent experimental evidence revealed that LDDFETs are less reliable than conventional transistors if the n⁻ region is too lightly doped. The JMOS transistor, a new n-MOS structure, is introduced to resolve the reliability problems in LDD devices with peak doping densities below 1 × 10¹⁸ cm⁻³. A JFET is merged into the n-MOS structure to reduce the high fields under the gate. Two-dimensional simulations and experimental results demonstrate for the first time the operation of this device and its potential for VLSI applications requiring maximum supply voltage. A major experimental finding is that the JMOS can sustain 5 V operation even for submicron effective channel lengths because of the designer-controlled reduction of the maximum electrical field in the region under the gate traversed by carriers. The modification introduced in the LDD design is advantageous in terms of lower gate and substrate currents. Reliability can potentially be improved but at the expense of performance; however, the advantages of 5 V operation in micron-sized devices can outweigh this performance loss.
APA, Harvard, Vancouver, ISO, and other styles
12

Gudmunsson, Gylfi Thor. "Parallelism and distribution for very large scale content-based image retrieval." Thesis, Rennes 1, 2013. http://www.theses.fr/2013REN1S082/document.

Full text
Abstract:
Les volumes de données multimédia ont fortement crus ces dernières années. Facebook stocke plus de 100 milliards d'images, 200 millions sont ajoutées chaque jour. Cela oblige les systèmes de recherche d'images par le contenu à s'adapter pour fonctionner à ces échelles. Les travaux présentés dans ce manuscrit vont dans cette direction. Deux observations essentielles cadrent nos travaux. Premièrement, la taille des collections d'images est telle, plusieurs téraoctets, qu'il nous faut obligatoirement prendre en compte les contraintes du stockage secondaire. Cet aspect est central. Deuxièmement, tous les processeurs sont maintenant multi-cœurs et les grilles de calcul largement disponibles. Du coup, profiter de parallélisme et de distribution semble naturel pour accélérer tant la construction de la base que le débit des recherches par lots. Cette thèse décrit une technique d'indexation multidimensionnelle s'appelant eCP. Sa conception prend en compte les contraintes issues de l'usage de disques et d'architectures parallèles et distribuées. eCP se fonde sur la technique de quantification vectorielle non structurée et non itérative. eCP s'appuie sur une technique de l'état de l'art qui est toutefois orientée mémoire centrale. Notre première contribution se compose d'extensions destinées à permettre de traiter de très larges collections de données en réduisant fortement le coût de l'indexation et en utilisant les disques au mieux. La seconde contribution tire profit des architectures multi-cœurs et détaille comment paralléliser l'indexation et la recherche. Nous évaluons cet apport sur près de 25 millions d'images, soit près de 8 milliards de descripteurs SIFT. La troisième contribution aborde l'aspect distribué. Nous adaptons eCP au paradigme Map-Reduce et nous utilisons Hadoop pour en évaluer les performances. Là, nous montrons la capacité de eCP à traiter de grandes bases en indexant plus de 100 millions d'images, soit 30 milliards de SIFT. Nous montrons aussi la capacité de eCP à utiliser plusieurs centaines de cœurs
The scale of multimedia collections has grown very fast over the last few years. Facebook stores more than 100 billion images, and 200 million are added every day. In order to cope with this growth, methods for content-based image retrieval must adapt gracefully. The work presented in this thesis goes in this direction. Two observations drove the design of the high-dimensional indexing technique presented here. Firstly, the collections are so huge, typically several terabytes, that they must be kept on secondary storage. Addressing disk-related issues is thus central to our work. Secondly, all CPUs are now multi-core and clusters of machines are commonplace. Parallelism and distribution are both key for fast indexing and high-throughput batch-oriented searching. We describe in this manuscript a high-dimensional indexing technique called eCP. Its design includes the constraints associated with using disks, parallelism and distribution. At its core is a non-iterative unstructured vectorial quantization scheme. eCP builds on an existing indexing scheme that is main-memory oriented. Our first contribution is a set of extensions for processing very large data collections, reducing indexing costs and best using disks. The second contribution proposes multi-threaded algorithms for both building and searching, harnessing the power of multi-core processors. Datasets for evaluation contain about 25 million images, or over 8 billion SIFT descriptors. The third contribution addresses distributed computing. We adapt eCP to the MapReduce programming model and use the Hadoop framework and HDFS for our experiments. This time we evaluate eCP's ability to scale up with a collection of 100 million images, more than 30 billion SIFT descriptors, and its ability to scale out by running experiments on more than 100 machines.
APA, Harvard, Vancouver, ISO, and other styles
13

Agarwal, Richa. "Composite very large-scale neighborhood structure for the vehicle-routing problem." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1001111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Gudmundsson, Gylfi Thor. "Parallelism and distribution for very large scale content-based image retrieval." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00926069.

Full text
Abstract:
The scale of multimedia collections has grown very fast over the last few years. Facebook stores more than 100 billion images, and 200 million are added every day. In order to cope with this growth, methods for content-based image retrieval must adapt gracefully. The work presented in this thesis goes in this direction. Two observations drove the design of the high-dimensional indexing technique presented here. Firstly, the collections are so huge, typically several terabytes, that they must be kept on secondary storage. Addressing disk-related issues is thus central to our work. Secondly, all CPUs are now multi-core and clusters of machines are commonplace. Parallelism and distribution are both key for fast indexing and high-throughput batch-oriented searching. We describe in this manuscript a high-dimensional indexing technique called eCP. Its design includes the constraints associated with using disks, parallelism and distribution. At its core is a non-iterative unstructured vectorial quantization scheme. eCP builds on an existing indexing scheme that is main-memory oriented. Our first contribution is a set of extensions for processing very large data collections, reducing indexing costs and best using disks. The second contribution proposes multi-threaded algorithms for both building and searching, harnessing the power of multi-core processors. Datasets for evaluation contain about 25 million images, or over 8 billion SIFT descriptors. The third contribution addresses distributed computing. We adapt eCP to the MapReduce programming model and use the Hadoop framework and HDFS for our experiments. This time we evaluate eCP's ability to scale up with a collection of 100 million images, more than 30 billion SIFT descriptors, and its ability to scale out by running experiments on more than 100 machines.
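As an illustration of the kind of non-iterative, unstructured quantization scheme described above, here is a minimal in-memory sketch (hypothetical code, not the eCP implementation itself): cluster leaders are sampled rather than trained, every descriptor is assigned to its nearest leader, and a query probes only a few of the closest clusters.

```python
import numpy as np

def build_index(descriptors, n_clusters, seed=0):
    """Non-iterative quantization: sample leaders, assign each vector to its nearest leader."""
    rng = np.random.default_rng(seed)
    leaders = descriptors[rng.choice(len(descriptors), n_clusters, replace=False)]
    # squared distances via ||a-b||^2 = ||a||^2 - 2ab + ||b||^2 (memory friendly)
    d2 = (descriptors ** 2).sum(1, keepdims=True) - 2 * descriptors @ leaders.T + (leaders ** 2).sum(1)
    assign = d2.argmin(axis=1)
    clusters = {c: np.where(assign == c)[0] for c in range(n_clusters)}
    return leaders, clusters

def search(query, descriptors, leaders, clusters, n_probe=3, k=5):
    """Probe only the n_probe nearest clusters, then rank the candidates exactly."""
    order = np.argsort(np.linalg.norm(leaders - query, axis=1))[:n_probe]
    candidates = np.concatenate([clusters[c] for c in order])
    dist = np.linalg.norm(descriptors[candidates] - query, axis=1)
    return candidates[np.argsort(dist)[:k]]

if __name__ == "__main__":
    data = np.random.rand(20000, 128).astype(np.float32)   # stand-in for SIFT descriptors
    leaders, clusters = build_index(data, n_clusters=200)
    print(search(data[42], data, leaders, clusters))        # descriptor 42 should rank first
```

In eCP the cluster granularity is chosen with disk I/O units in mind; this sketch keeps everything in memory purely for clarity.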
APA, Harvard, Vancouver, ISO, and other styles
15

Wang, Xing. "Efficient Full-Wave Simulation for Very Large Scale Off-Chip Interconnects." Diss., The University of Arizona, 2006. http://hdl.handle.net/10150/195106.

Full text
Abstract:
The requirement to simulate larger and more complex interconnect circuits is being driven by the rapid developments that are taking place in the integrated circuit industry, where more complex circuits are continually being designed. Since full-wave analyses rigorously account for all the higher-order modes, in addition to the transmission line mode (i.e., the Transverse ElectroMagnetic (TEM) mode), they provide more accurate results than conventional 2-D analysis tools, which are based on the assumption that only a TEM mode exists. Furthermore, a full-wave analysis is required to accurately model the physics of complex 3-D interconnects. In order to address this need, a Full-Wave Layered Interconnect Simulator (UA-FWLIS) was previously developed. UA-FWLIS is a Method of Moments (MoM) based tool for the analysis of stripline interconnects. However, UA-FWLIS could only handle a maximum of 10,000 unknowns for signal traces in a single layer. Our final goal is to simulate complex practical systems, which have hundreds of thousands of unknowns and consist of multiple layers with vias interconnecting the different layers. In this dissertation, we extend the prototype full-wave simulator so that it can handle reactions between signal traces and vias, as well as reactions between layers. This is accomplished by employing analytical techniques for the reaction elements, thereby avoiding the use of inefficient numerical integration algorithms. This leads to substantial reductions in the matrix filling time, e.g., two orders of magnitude for moderate-size problems. In addition to improving the matrix filling time, we also dramatically reduce the matrix solution time by employing sparse matrix solution techniques. We demonstrate that sparse reaction matrices are produced when modeling stripline interconnects provided that a parallel-plate Green's function is employed in the analysis. We found that by applying sparse matrix storage techniques and a sparse matrix solver, it is possible to dramatically improve the matrix solution time when compared with a commercial MoM-based simulator. This also makes it possible to solve much larger problems. The contribution of this dissertation empowers the current full-wave simulator to handle more realistic problems and makes full-wave simulations of very large scale stripline interconnect structures feasible.
APA, Harvard, Vancouver, ISO, and other styles
16

Zhao, Yan Ph D. Massachusetts Institute of Technology. "A hierarchical Markov chain based solver for very-large-scale capacitance extraction." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/71502.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 79-80).
This thesis presents two hierarchical algorithms, FastMarkov and FD-MTM, for computing the capacitance of very-large-scale layouts with non-uniform media. FastMarkov is Boundary Element Method (BEM) based and FD-MTM is Finite Difference (FD) based. In our algorithms, the layout is first partitioned into small blocks and the capacitance matrix of each block is solved using standard deterministic methods, BEM for FastMarkov and FDM for FD-MTM. We connect the blocks by enforcing the boundary condition on the interfaces, forming a Markov chain containing the capacitive characteristics of the layout. The capacitance of the full layout is then extracted with the random walk method. By employing the "divide and conquer" strategy, our algorithm does not need to assemble or solve a linear system of equations at the level of the full layout and thus eliminates the memory problem. We also propose a modification to the FastMarkov algorithm (FastMarkov with boundary fix) to address the block interface issue when using the finite difference method. We implemented FastMarkov with boundary fix in C++ and parallelized the solver with the Message Passing Interface. Compared with a standard FD capacitance solver, our solver is able to achieve a speedup almost linear in the number of blocks the layout is partitioned into. On top of that, FastMarkov is easily parallelizable because the computation of the capacitance matrix of one block is independent of other blocks and one path of a random walk is independent of other paths. Results and comparisons are presented for a parallel-plates example and for a large Intel example.
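The random-walk extraction step rests on the standard probabilistic reading of the discrete Laplace equation: the potential at a grid node equals the expected boundary potential at the point where a symmetric random walk started from that node is absorbed. Below is a minimal, self-contained illustration of that principle on a toy geometry; it is a hypothetical sketch of the general idea, not the block-partitioned FastMarkov solver.

```python
import random

def walk_potential(start, boundary, n_walks=5000):
    """Estimate the Laplace potential at a grid node by absorbing random walks.

    `boundary(i, j)` returns the fixed potential on boundary nodes and None elsewhere.
    Each walk steps to a random 4-neighbour until it is absorbed; the estimate is
    the average absorbed potential.
    """
    total = 0.0
    for _ in range(n_walks):
        i, j = start
        while boundary(i, j) is None:
            di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        total += boundary(i, j)
    return total / n_walks

def box(i, j, n=20):
    """Toy geometry: left wall held at 1 V, the other three walls grounded."""
    if i <= 0:
        return 1.0
    if i >= n - 1 or j <= 0 or j >= n - 1:
        return 0.0
    return None

if __name__ == "__main__":
    print(walk_potential((10, 10), box))   # ~0.25 at the centre, by symmetry of the four walls
```

In FastMarkov the walk moves between pre-solved block interfaces rather than single grid nodes, which is what removes the need to assemble a full-layout system of equations.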
by Yan Zhao.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
17

Smith, Russell Julian. "Streaming motions of Abell clusters : new evidence for a high-amplitude bulk flow on very large scales." Thesis, Durham University, 1998. http://etheses.dur.ac.uk/4826/.

Full text
Abstract:
Streaming motions of galaxies and clusters provide the only method for probing the distribution of mass, as opposed to light, on scales of 20-100 h⁻¹ Mpc. This thesis presents a new survey of the local peculiar velocity field, based upon Fundamental Plane (FP) distances for an all-sky sample of 56 clusters to cz = 12000 km s⁻¹. Central velocity dispersions have been determined from new spectroscopic data for 429 galaxies. From new R-band imaging data the FP photometric parameters (effective diameter and effective surface brightness) have been measured for 324 galaxies. The new spectroscopic and photometric data have been carefully combined with an extensive body of measurements compiled from the literature, to yield a closely homogeneous catalogue of FP data for 725 early-type galaxies. Fitting the inverse FP relation to the merged catalogue yields distance estimates with a scatter of 22% per galaxy, resulting in cluster distance errors of 2-13%. The distances are consistent, on a cluster-by-cluster basis, with those determined from Tully-Fisher studies and from earlier FP determinations. The distances are marginally inconsistent with distance estimates based on brightest cluster galaxies, but this disagreement can be traced to a few highly discrepant clusters. The resulting peculiar velocity field is dominated by a bulk streaming component, with an amplitude of 810 ± 180 km s⁻¹ (directed towards l = 260°, b = -5°), a result which is robust against a range of potential systematic effects. The flow direction is ~35° from the CMB dipole and ~15° from the X-ray cluster dipole direction. Two prominent superclusters (the Shapley Concentration and the Horologium-Reticulum Supercluster) may contribute significantly to the generation of this flow. More locally, there is no far-side infall into the 'Great Attractor' (GA), apparently due to the opposing pull of the Shapley Concentration. A simple model of the flow in this direction suggests that the GA region generates no more than ~60% of the Local Group's motion in this direction. Contrary to some previous studies, the Perseus-Pisces supercluster is found to exhibit no net streaming motion. On small scales the velocity field is extremely quiet, with an rms cluster peculiar velocity of < 270 km s⁻¹ in the frame defined by the bulk flow. The results of this survey suggest that very distant mass concentrations contribute significantly to the local peculiar velocity field. This result is difficult to accommodate within currently popular cosmological models, which have too little large-scale power to generate the observed flow. The results may instead favour models with excess fluctuation power on 60-150 h⁻¹ Mpc scales.
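For readers unfamiliar with the method, the Fundamental Plane distance indicator has the standard form below (the coefficients are fitted to the cluster sample; the specific values from this thesis are not reproduced here), and a cluster's peculiar velocity follows from comparing its redshift distance with its FP distance:

```latex
\log R_e \;=\; a\,\log\sigma \;+\; b\,\langle\mu\rangle_e \;+\; \gamma,
\qquad
v_{\mathrm{pec}} \;\simeq\; cz \;-\; H_0\, D_{\mathrm{FP}},
```

where R_e is the effective radius, σ the central velocity dispersion, ⟨μ⟩_e the mean effective surface brightness, and D_FP the distance inferred from the relation.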
APA, Harvard, Vancouver, ISO, and other styles
18

Li, Bo. "Real-time Simulation and Rendering of Large-scale Crowd Motion." Thesis, University of Canterbury. Computer Science and Software Engineering, 2013. http://hdl.handle.net/10092/7870.

Full text
Abstract:
Crowd simulations are attracting increasing attention from both academia and industry and are implemented across a vast range of applications, from scientific demonstrations to video games and films. As such, the demand for greater realism in their aesthetics and in the number of agents involved is always growing. A successful crowd simulation must simulate large numbers of pedestrians' behaviours as realistically as possible in real time. This thesis looks at two important aspects of crowd simulation and real-time animation. First, it introduces a new data structure called the Extended Oriented Bounding Box (EOBB) and related methods for fast collision detection and obstacle avoidance in the simulation of crowd motion in virtual environments. The EOBB is extended to contain a region whose size is defined based on the instantaneous velocity vector, thus allowing a bounding volume representation of both geometry and motion. Such a representation is also found to be highly effective in motion planning using the locations of vertices of bounding boxes in the immediate neighbourhood of the current crowd member. Second, we present a detailed analysis of the effectiveness of spatial subdivision data structures, specifically for large-scale crowd simulation. For large-scale crowd simulation, the computational time for collision detection is huge, and many studies use spatial partitioning data structures to reduce it, each with its own strengths and weaknesses, but few compare multiple methods in an effort to present the best solution. This thesis attempts to address this by implementing and comparing four popular spatial partitioning data structures with the EOBB.
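To make the EOBB idea concrete, here is a hypothetical 2D simplification (not the thesis code): each agent's bounding box is stretched along its instantaneous velocity over a short look-ahead horizon, and a separating-axis test detects overlap between the motion-extended boxes before the bodies themselves collide.

```python
import numpy as np

class EOBB:
    """2D oriented bounding box extended along the velocity vector (simplified sketch)."""
    def __init__(self, center, half_extents, velocity, horizon=1.0):
        velocity = np.asarray(velocity, dtype=float)
        speed = np.linalg.norm(velocity)
        # Orient the box along the motion direction and stretch it by the
        # distance covered within the look-ahead horizon.
        axis0 = velocity / speed if speed > 0 else np.array([1.0, 0.0])
        axis1 = np.array([-axis0[1], axis0[0]])
        reach = 0.5 * speed * horizon
        self.axes = np.stack([axis0, axis1])               # rows are unit axes
        self.half = np.array([half_extents[0] + reach, half_extents[1]])
        self.center = np.asarray(center, dtype=float) + axis0 * reach

def overlaps(a, b):
    """Separating-axis test between two oriented boxes."""
    for axis in np.vstack([a.axes, b.axes]):
        ra = np.abs(a.axes @ axis) @ a.half                # projection radius of a
        rb = np.abs(b.axes @ axis) @ b.half                # projection radius of b
        if abs((b.center - a.center) @ axis) > ra + rb:
            return False                                   # separating axis found
    return True

if __name__ == "__main__":
    p = EOBB(center=[0.0, 0.0], half_extents=[0.3, 0.3], velocity=[1.0, 0.0])
    q = EOBB(center=[1.2, 0.1], half_extents=[0.3, 0.3], velocity=[-1.0, 0.0])
    print(overlaps(p, q))   # True: the motion-extended boxes already intersect
```

Because the extension grows with speed, faster agents reserve more space ahead of themselves, which is the property exploited for both collision detection and local motion planning.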
APA, Harvard, Vancouver, ISO, and other styles
19

Pizarro, Oscar. "Large scale structure from motion for autonomous underwater vehicle surveys." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/39185.

Full text
Abstract:
Thesis (Ph. D.)--Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Ocean Engineering; and the Woods Hole Oceanographic Institution), 2004.
Includes bibliographical references (p. 177-190).
Our ability to image extended underwater scenes is severely limited by attenuation and backscatter. Generating a composite view from multiple overlapping images is usually the most practical and flexible way around this limitation. In this thesis we look at the general constraints associated with imaging from underwater vehicles for scientific applications - low overlap, non-uniform lighting and unstructured motion - and present a methodology for dealing with these constraints toward a solution of the problem of large area 3D reconstruction. Our approach assumes navigation data is available to constrain the structure from motion problem. We take a hierarchical approach where the temporal image sequence is broken into subsequences that are processed into 3D reconstructions independently. These submaps are then registered to infer their overall layout in a global frame. From this point a bundle adjustment refines camera and structure estimates. We demonstrate the utility of our techniques using real data obtained during a SeaBED AUV coral reef survey. Test tank results with ground truth are also presented to validate the methodology.
by Oscar Pizarro.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
20

Yu, Chen. "SCHEDULING AND RESOURCE MANAGEMENT FOR COMPLEX SYSTEMS: FROM LARGE-SCALE DISTRIBUTED SYSTEMS TO VERY LARGE SENSOR NETWORKS." Doctoral diss., Orlando, Fla. : University of Central Florida, 2009. http://purl.fcla.edu/fcla/etd/CFE0002907.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Boukorca, Ahcène. "Hypergraphs in the Service of Very Large Scale Query Optimization. Application : Data Warehousing." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2016. http://www.theses.fr/2016ESMA0026/document.

Full text
Abstract:
L'apparition du phénomène Big-Data a conduit à l'arrivée de nouveaux besoins croissants et urgents de partage de données, ce qui a engendré un grand nombre de requêtes que les SGBD doivent gérer. Ce problème a été aggravé par d'autres besoins de recommandation et d'exploration des requêtes. Vu que le traitement de données est toujours possible grâce aux solutions liées à l'optimisation de requêtes, la conception physique et l'architecture de déploiement, où ces solutions sont des résultats de problèmes combinatoires basés sur les requêtes, il est indispensable de revoir les méthodes traditionnelles pour répondre aux nouveaux besoins de passage à l'échelle. Cette thèse s'intéresse à ce problème de nombreuses requêtes et propose une approche, implémentée par un framework appelé Big-Queries, qui passe à l'échelle et qui est basée sur l'hypergraphe, une structure de données flexible qui a une grande puissance de modélisation et permet des formulations précises de nombreux problèmes de combinatoire informatique. Cette approche est le fruit d'une collaboration avec l'entreprise Mentor Graphics. Elle vise à capturer l'interaction des requêtes dans un plan unifié de requêtes et à utiliser des algorithmes de partitionnement pour assurer le passage à l'échelle et obtenir des structures d'optimisation optimales (vues matérialisées et partitionnement de données). Ce plan unifié est utilisé dans la phase de déploiement des entrepôts de données parallèles, par le partitionnement des données en fragments et l'allocation de ces fragments dans les nœuds de calcul correspondants. Une étude expérimentale intensive a montré l'intérêt de notre approche en termes de passage à l'échelle des algorithmes et de réduction du temps de réponse des requêtes.
The emergence of the Big-Data phenomenon has led to new, growing and urgent needs to share data between users and communities, which have engendered a large number of queries that DBMSs must handle. This problem has been compounded by further needs for query recommendation and exploration. Since data processing still relies on solutions for query optimization, physical design and deployment architectures, and since these solutions are the results of query-based combinatorial problems, it is essential to revisit traditional methods to respond to the new needs of scalability. This thesis focuses on the problem of numerous queries and proposes a scalable approach, implemented in a framework called Big-Queries, based on the hypergraph, a flexible data structure with great modelling power that allows accurate formulation of many problems of combinatorial scientific computing. This approach is the result of a collaboration with the company Mentor Graphics. It aims to capture query interactions in a unified query plan and to use partitioning algorithms to ensure scalability and to obtain optimal optimization structures (materialized views and data partitioning). The unified plan is also used in the deployment phase of parallel data warehouses, by partitioning data into fragments and allocating these fragments to the corresponding processing nodes. An intensive experimental study showed the interest of our approach in terms of algorithm scalability and query response time reduction.
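The central modelling step (each query becomes a hyperedge over the elements it references, and partitioning the hypergraph groups interacting queries) can be pictured with a deliberately tiny, hypothetical sketch; a real deployment would hand the hypergraph to a dedicated partitioner rather than the naive connected-component grouping used here.

```python
from collections import defaultdict

def build_hypergraph(queries):
    """Each query is a hyperedge over the relations/sub-expressions it references."""
    pins = defaultdict(set)                  # node -> set of queries (hyperedges) touching it
    for qid, nodes in queries.items():
        for n in nodes:
            pins[n].add(qid)
    return pins

def interaction_groups(queries):
    """Group queries that share at least one node (a crude stand-in for partitioning)."""
    pins = build_hypergraph(queries)
    parent = {q: q for q in queries}
    def find(q):
        while parent[q] != q:
            parent[q] = parent[parent[q]]    # path halving
            q = parent[q]
        return q
    for edge_set in pins.values():
        edge_list = list(edge_set)
        for q in edge_list[1:]:
            parent[find(q)] = find(edge_list[0])
    groups = defaultdict(list)
    for q in queries:
        groups[find(q)].append(q)
    return list(groups.values())

if __name__ == "__main__":
    workload = {
        "q1": {"lineorder", "dates"},
        "q2": {"lineorder", "customer"},
        "q3": {"supplier", "part"},
    }
    print(interaction_groups(workload))   # [['q1', 'q2'], ['q3']]
```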
APA, Harvard, Vancouver, ISO, and other styles
22

Sayers, I. L. "An investigation of 'design for testability' techniques in very large scale integrated circuits." Thesis, University of Newcastle Upon Tyne, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.370643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Le, Riguer E. M. J. "Generic VLSI architectures : chip designs for image processing applications." Thesis, Queen's University Belfast, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368593.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Rabbitt, Michael John. "The effect of internal gravity waves on large scale atmospheric flows." Thesis, University of Leeds, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.328943.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Park, Dong-Jun. "Video event detection framework on large-scale video data." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/2754.

Full text
Abstract:
Detection of events and actions in video entails substantial processing of very large, even open-ended, video streams. Video data presents a unique challenge for the information retrieval community because properly representing video events is difficult. We propose a novel approach to analyze temporal aspects of video data. We consider video data as a sequence of images that form a 3-dimensional spatiotemporal structure, and perform multiview orthographic projection to transform the video data into 2-dimensional representations. The projected views allow a unique way to represent video events and capture the temporal aspect of video data. We extract local salient points from the 2D projection views and apply a detection-via-similarity approach to a wide range of events against real-world surveillance data. We demonstrate that our example-based detection framework is competitive and robust. We also investigate synthetic-example-driven retrieval as a basis for query-by-example.
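The core transformation (treating a clip as a 3D spatiotemporal volume and collapsing it along each axis into 2D views) can be sketched in a few lines; this is a hypothetical NumPy illustration using max-projection, not the dissertation's pipeline.

```python
import numpy as np

def orthographic_views(video):
    """Collapse a (time, height, width) grayscale volume into three 2D projection views.

    Max-projection is used here for simplicity; mean- or motion-energy
    projections are equally easy to substitute.
    """
    front = video.max(axis=0)   # ordinary image plane: H x W
    top   = video.max(axis=1)   # time x width: horizontal motion traces
    side  = video.max(axis=2)   # time x height: vertical motion traces
    return front, top, side

if __name__ == "__main__":
    clip = np.random.rand(120, 240, 320)       # stand-in for a 120-frame clip
    for name, view in zip(("front", "top", "side"), orthographic_views(clip)):
        print(name, view.shape)
```

Salient-point extraction and similarity matching would then operate on these 2D views rather than on individual frames.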
APA, Harvard, Vancouver, ISO, and other styles
26

Voo, Thart Fah. "Tunable techniques for robust high frequency analogue VLSI." Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369050.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Jafar, Mutaz 1960. "THERMAL MODELING/SIMULATION OF LEVEL 1 AND LEVEL 2 VLSI PACKAGING." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/276959.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Voranantakul, Suwan 1962. "CONDUCTIVE AND INDUCTIVE CROSSTALK COUPLING IN VLSI PACKAGES." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/277037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Jian, Yong-Dian. "Support-theoretic subgraph preconditioners for large-scale SLAM and structure from motion." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52272.

Full text
Abstract:
Simultaneous localization and mapping (SLAM) and Structure from Motion (SfM) are important problems in robotics and computer vision. One of the challenges is to solve a large-scale optimization problem associated with all of the robot poses, camera parameters, landmarks and measurements. Yet neither of the two reigning paradigms, direct and iterative methods, scales well to very large and complex problems. Recently, the subgraph-preconditioned conjugate gradient method has been proposed to combine the advantages of direct and iterative methods. However, how to find a good subgraph is still an open problem. The goal of this dissertation is to address the following two questions: (1) What are good subgraph preconditioners for SLAM and SfM? (2) How to find them? To this end, I introduce support theory and support graph theory to evaluate and design subgraph preconditioners for SLAM and SfM. More specifically, I make the following contributions: First, I develop graphical and probabilistic interpretations of support theory and use them to visualize the quality of subgraph preconditioners. Second, I derive a novel support-theoretic metric for the quality of spanning tree preconditioners and design an MCMC-based algorithm to find high-quality subgraph preconditioners. I further improve the efficiency of finding good subgraph preconditioners by using heuristics and domain knowledge available in the problems. Our results show that the support-theoretic subgraph preconditioners significantly improve the efficiency of solving large SLAM problems. Third, I propose a novel Hessian factor graph representation, and use it to develop a new class of preconditioners, generalized subgraph preconditioners, that combine the advantages of subgraph preconditioners and Hessian-based preconditioners. I apply them to solve large SfM problems and obtain promising results. Fourth, I develop the incremental subgraph-preconditioned conjugate gradient method for large-scale online SLAM problems. The main idea is to combine the advantages of two state-of-the-art methods, incremental smoothing and mapping, and the subgraph-preconditioned conjugate gradient method. I also show that the new method is efficient, optimal and consistent. To sum up, preconditioning can significantly improve the efficiency of solving large-scale SLAM and SfM problems. While existing preconditioning techniques do not utilize the problem structure and have no performance guarantee, I take the first step toward a more general setting and have promising results.
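For context, the preconditioned conjugate gradient iteration that all of these preconditioners plug into is summarised below; this is a generic sketch using a Jacobi (diagonal) preconditioner as a stand-in for the support-theoretic subgraph preconditioners developed in the dissertation.

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradient for a symmetric positive-definite matrix A.

    `M_inv(r)` applies the preconditioner; any callable returning an
    approximation to A^{-1} r will do (e.g. a factorised subgraph system).
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

if __name__ == "__main__":
    n = 200
    A = np.random.rand(n, n)
    A = A @ A.T + n * np.eye(n)                 # SPD test matrix
    b = np.random.rand(n)
    jacobi = 1.0 / np.diag(A)
    x = pcg(A, b, lambda r: jacobi * r)
    print(np.linalg.norm(A @ x - b))            # residual should be tiny
```

The quality of the preconditioner, i.e. how well M approximates A while remaining cheap to solve, is exactly what the support-theoretic metrics in this dissertation quantify.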
APA, Harvard, Vancouver, ISO, and other styles
30

Koelmans, Albertus Maria. "STRICT : a language and tool set for the design of very large scale integrated circuits." Thesis, University of Newcastle Upon Tyne, 1996. http://hdl.handle.net/10443/2076.

Full text
Abstract:
An essential requirement for the design of large VLSI circuits is a design methodology which would allow the designer to overcome the complexity and correctness issues associated with the building of such circuits. We propose that many of the problems of the design of large circuits can be solved by using a formal design notation based upon the functional programming paradigm, that embodies design concepts that have been used extensively as the framework for software construction. The design notation should permit parallel, sequential, and recursive decompositions of a design into smaller components, and it should allow large circuits to be constructed from simpler circuits that can be embedded in a design in a modular fashion. Consistency checking should be provided as early as possible in a design. Such a methodology would structure the design of a circuit in much the same way that procedures, classes, and control structures may be used to structure large software systems. However, such a design notation must be supported by tools which automatically check the consistency of the design, if the methodology is to be practical. In principle, the methodology should impose constraints upon circuit design to reduce errors and provide 'correctness by construction'. It should be possible to generate efficient and correct circuits, by providing a route to a large variety of design tools commonly found in design systems: simulators, automatic placement and routing tools, module generators, schematic capture tools, and formal verification and synthesis tools.
APA, Harvard, Vancouver, ISO, and other styles
31

Hong, Won-kook. "Single layer routing : mapping topological to geometric solutions." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66030.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Taylor, Sylvia C. "Interactions of large-scale tropical motion systems during the 1996-1997 Australian monsoon." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA356568.

Full text
Abstract:
Thesis (M.S. in Meteorology) Naval Postgraduate School, September 1998.
"September 1998." Thesis advisor(s): Chih-Pei Chang. Includes bibliographical references (p. 105-106). Also Available online.
APA, Harvard, Vancouver, ISO, and other styles
33

Nowrozi, Mojtaba Faiz. "A systematic study of LPCVD refractory metal/silicide interconnect materials for very large scale integrated circuits." Diss., The University of Arizona, 1988. http://hdl.handle.net/10150/184396.

Full text
Abstract:
Recently, refractory materials have been proposed as a strong alternative to poly-silicon and aluminum alloys as metallization systems for Very Large Scale Integrated (VLSI) circuits because of their improved performance at smaller Integrated Circuit (IC) feature size and higher interconnect current densities. However, processing and reliability problems associated with the use of refractory materials have limited their widespread acceptance. The hot-wall low pressure chemical vapor deposition (LPCVD) of molybdenum and tungsten from their respective hexacarbonyl sources has been studied as a potential remedy to such problems, in addition to providing the potential for higher throughput and better step coverage. Using deposition chemistries based on carbonyl sources, Mo and W deposits have been characterized with respect to their electrical, mechanical, structural, and chemical properties as well as their compatibility with conventional IC processing. Excellent film step coverage and uniformity were obtained by low temperature (300-350 °C) deposition at pressures of 400-600 mTorr. As-deposited films were observed to be amorphous, with a resistivity of 250 and 350 microohm-cm for Mo and W respectively. On annealing at high temperatures in a reducing or inert atmosphere, the films crystallize with an attendant reduction in resistivity to 9.3 and 12 microohm-cm for Mo and W, respectively. The average grain size also increases as a function of time and temperature to a maximum of 2500-3000 Å. The metals and their silicides that are deposited, using silane as a silicon source, are integratable to form the desired metal-silicide gate contact structures. Thus, use of the low resistivity of the elemental metal coupled with the oxidation resistance of its silicide manifests the quality and economy of the process. MOS capacitors with Mo and W as the gate material have been fabricated on n-type (100) silicon. A work function of 4.7 ± 0.1 eV was measured by means of MOS capacitance-voltage techniques. The experimental results further indicate that the characteristics of W-gate MOS devices related to the charges in SiO₂ are comparable to those of poly-silicon, while the resistivity is about two orders of magnitude lower than poly-silicon. It is therefore concluded that hot-wall low pressure chemical vapor deposition of Mo and W from their respective carbonyl sources is a viable technique for the deposition of reliable, high performance refractory metal/silicide contact and interconnect structures on very large scale integrated circuits.
APA, Harvard, Vancouver, ISO, and other styles
34

Matsumori, Barry Alan. "QUALIFICATION RESEARCH FOR RELIABLE, CUSTOM LSI/VLSI ELECTRONICS." Thesis, The University of Arizona, 1985. http://hdl.handle.net/10150/275313.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Howells, Michael C. "A cluster-proof approach to yield enhancement of large area binary tree architectures /." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Hum, Herbert Hing-Jing. "A linear unification processor /." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Dagenais, Michel R. "Timing analysis for MOSFETS, an integrated approach." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=75459.

Full text
Abstract:
Timing and electrical verification is an essential part of the design of VLSI digital MOS circuits. It consists of determining the maximum operating frequency of a circuit, and verifying that the circuit will always produce the expected logical behavior at or under this frequency. This complex task requires considerable computer and human resources.
The classical simulation approach cannot be used to ensure the timing and electrical correctness of the large circuits that are now being designed. The huge number of possible states in large circuits renders this method impractical. Worst-case analysis tools alleviate the problem by restricting the analysis to a limited set of states which correspond to the worst-case operating conditions. However, existing worst-case analysis tools for MOS circuits present several problems. Their accuracy is inherently limited since they use a switch-level model. Also, these procedures have a high computational complexity because they resort to path enumeration to find the latest path in each transistor group. Finally, they lack the ability to analyze circuits with arbitrarily complex clocking schemes.
In this text, a new procedure for circuit-level timing analysis is presented. Because it works at the electronic circuit level, the procedure can detect electrical errors, and achieves an accuracy that is impossible to attain by other means. Efficient algorithms, based on graph theory, have been developed to partition the circuits in a novel way, and to recognize series and parallel combinations. This enables the efficient computation of worst-case, earliest and latest, waveforms in the circuit, using specially designed algorithms. The new procedure automatically extracts the timing requirements from these waveforms and can compute the clocking parameters, including the maximum clock frequency, for arbitrarily complex clocking schemes.
A computer program was written to demonstrate the effectiveness of the new procedure and algorithms developed. It has been used to determine the clocking parameters of circuits using different clocking schemes. The accuracy obtained on these parameters is around 5 to 10% when compared with circuit-level simulations. The analysis time grows linearly with the circuit size and is approximately 0.5s per transistor, on a microVAX II computer. This makes the program suitable for VLSI circuits.
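For orientation, the basic worst-case timing computation underlying conventional static timing analysis (latest arrival time as a longest path over a delay-annotated DAG, evaluated in topological order) fits in a few lines; the example below is a hypothetical gate-level sketch, not the circuit-level waveform procedure developed in this thesis.

```python
from graphlib import TopologicalSorter

def latest_arrival(fanin, delay, primary_inputs):
    """Worst-case (latest) arrival time at every node of a gate-level DAG.

    `fanin` maps each node to its predecessor nodes, `delay[(u, v)]` is the
    worst-case delay of edge u -> v, and `primary_inputs` gives arrival times
    at the primary inputs.
    """
    arrival = dict(primary_inputs)
    for node in TopologicalSorter(fanin).static_order():
        if node in arrival:
            continue
        arrival[node] = max(arrival[u] + delay[(u, node)] for u in fanin[node])
    return arrival

if __name__ == "__main__":
    fanin = {"a": [], "b": [], "g1": ["a", "b"], "g2": ["g1", "b"]}
    delay = {("a", "g1"): 2.0, ("b", "g1"): 1.5, ("g1", "g2"): 3.0, ("b", "g2"): 1.0}
    print(latest_arrival(fanin, delay, {"a": 0.0, "b": 0.0}))   # g2 arrives at 5.0
```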
APA, Harvard, Vancouver, ISO, and other styles
38

Chu, Chung-kwan, and 朱頌君. "Computationally efficient passivity-preserving model order reduction algorithms in VLSI modeling." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38719551.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Zhao, Wenhui, and 趙文慧. "Efficient circuit simulation via adaptive moment matching and matrix exponential techniques." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/197488.

Full text
Abstract:
This dissertation presents two efficient circuit simulation techniques for very large scale integrated (VLSI) circuits. Model order reduction (MOR) plays a significant role in VLSI circuit simulation, as nowadays the system model may contain millions of equations or variables. MOR is needed to reduce the order of the original system to allow the simulation to be performed with an acceptable amount of time, reasonable storage and reliable accuracy. Multi-point moment matching is one of the state-of-the-art methods for MOR. However, the moment order and expansion points are usually selected in a heuristic way, which cannot guarantee the global accuracy of the reduced-order model (ROM). Therefore, it is important to utilize an adaptive algorithm in exercising multi-point moment matching. In this regard, we propose a novel automatic adaptive multi-point moment matching algorithm for MOR of linear descriptor systems. The algorithm implements both adaptive frequency expansion point selection and automatic moment order control guided by a transfer function-based error metric. Without a priori information about the system response, the proposed algorithm leads to a much higher global accuracy compared with standard multi-point moment matching without adaptation. The moments are computed via a generalized Sylvester equation, which is subsequently solved by a newly proposed generalized alternating direction implicit (GADI) method. Another technique for circuit simulation proposed in this thesis is the matrix exponential (MEXP) method. The MEXP method has been demonstrated to be a competitive candidate for transient simulation of VLSI circuits. Nevertheless, the performance of MEXP based on the ordinary Krylov subspace is unsatisfactory for stiff circuits, because the underlying Arnoldi process tends to oversample the high-magnitude part of the system spectrum while under-sampling the low-magnitude part that is important to the final accuracy. In this thesis, we explore the use of the extended Krylov subspace to generate a more accurate and efficient approximation for MEXP. We also develop a formulation, called the generalized extended Krylov subspace, that allows unequal positive and negative dimensions in the subspace for better performance, and propose an adaptive scheme based on the generalized extended Krylov subspace to select the ratio between the positive and negative dimensions.
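The MEXP step referred to above approximates the action of the matrix exponential on a vector from a Krylov subspace. A minimal ordinary-Krylov (Arnoldi) sketch is shown below for orientation; the thesis's contribution is the extended and generalized extended subspace variants, which this simple version does not include.

```python
import numpy as np
from scipy.linalg import expm

def expm_krylov(A, v, t=1.0, m=30):
    """Approximate exp(t*A) @ v from an m-dimensional ordinary Krylov subspace.

    Builds an Arnoldi basis V and Hessenberg matrix H, then returns
    ||v|| * V @ expm(t*H) @ e1, the standard projection-based approximation.
    """
    n = len(v)
    m = min(m, n)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt orthogonalisation
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                 # happy breakdown: exact subspace found
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)

if __name__ == "__main__":
    n = 500
    A = -np.diag(np.arange(1.0, n + 1))         # stand-in for a stable (stiff) circuit matrix
    v = np.random.rand(n)
    exact = expm(0.01 * A) @ v
    approx = expm_krylov(A, v, t=0.01, m=40)
    print(np.linalg.norm(exact - approx))       # approximation error
```

For stiff matrices the error of this ordinary-Krylov version decays slowly, which is precisely the weakness the extended Krylov subspace formulation addresses by also sampling the inverse spectrum.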
published_or_final_version
Electrical and Electronic Engineering
Master
Master of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
40

Aaramaa, S. (Sanja). "Developing a requirements architecting method for the requirement screening process in the Very Large-Scale Requirements Engineering Context." Doctoral thesis, Oulun yliopisto, 2017. http://urn.fi/urn:isbn:9789526217079.

Full text
Abstract:
Abstract Requirements engineering (RE) is an important process in systems development. This research was carried out in the context of Very Large-Scale Requirements Engineering (VLSRE) within the scope of a requirement screening (RS) process. The RS process is defined as a front-end process for screening incoming requests, which are received in a constant flow. The goal of the RS process is to efficiently identify the most promising requests for further analysis, development and implementation while filtering out non-valuable ones as early as possible. The objective of this study was to understand the challenges related to the RS process and develop solutions to address those challenges. A qualitative research approach was utilised to achieve the research goals. The overall research process follows an action research method, in which each action research cycle includes at least one individually defined and executed case study. Action research and case studies are research methods that are well suited to studying real-life phenomena in their natural settings. This research was carried out in two case companies in the information and communication technology domain. Data from 45 interviews were analysed for preparing publications I–V, which are included in this thesis. In addition, during the longitudinal action research study described in this thesis, data from 26 interviews and 132 workshops were utilised to develop solutions for the RS process, which is an industrial implementation of the VLSRE process. The conducted action research contributes to the field of software engineering, in which such research efforts are currently lacking. This research has identified a number of significant challenges that different stakeholders face related to requirements processing and decision making in the VLSRE context. Examples of these challenges are the great number of incoming requirements, the lack of information for decision making and the feasibility of utilised tools. To address the identified challenges, a requirements architecting method was developed. The method includes a dynamic requirement template, which gathers structured information content for eliciting requests, documenting and communicating requirements and forming features while considering the needs of different stakeholders. The method was piloted, validated and deployed in industry
Tiivistelmä Tutkimus toteutettiin laajamittaisen vaatimusmäärittelyprosessin kontekstissa keskittyen vaatimusten seulontaprosessiin. Vaatimusten seulontaprosessi määritellään tuotekehityksen alkuvaiheen prosessiksi, jossa käsitellään jatkuvana vuona tulevia kehityspyyntöjä. Vaatimusten seulontaprosessissa pyritään tunnistamaan tehokkaasti lupaavimmat pyynnöt jatkoanalyysiä, tuotekehitystä ja toteutusta ajatellen sekä suodattamaan pois niin aikaisessa vaiheessa, kun mahdollista ne pyynnöt, joilla ei ole arvontuotto-odotuksia. Tutkimuksen tavoite oli ymmärtää haasteita, jotka liittyvät vaatimusten seulontaprosessiin sekä kehittää ratkaisuja näihin haasteisiin. Tutkimuksessa käytettiin laadullisen tutkimuksen menetelmiä. Kokonaisuutena tutkimusprosessi noudattaa toimintatutkimuksen periaatteita siten, että jokainen sykli tai sen vaihe sisältää yhden tai useamman itsenäisesti määritellyn tapaustutkimuksen suunnittelun ja läpiviennin. Valitut tutkimusmenetelmät soveltuvat hyvin tilanteisiin, joissa tutkimuskohteina ovat reaalimaailman ilmiöt niiden luonnollisissa ympäristöissä havainnoituina. Tutkimusaineisto kerättiin kahdesta informaatio- ja kommunikaatioteknologia-alan kohdeorganisaatiosta. Väitöskirjaan sisällytettyihin julkaisuihin I-V on analysoitu 45 haastattelun aineisto. Näiden lisäksi väitöskirjassa kuvatun pitkäkestoisen toimintatutkimuksen aikana hyödynnettiin 26 haastattelun ja 132 työpajan aineistoa kehitettäessä ratkaisuja vaatimusten seulontaprosessin haasteisiin. Vaatimusten seulontaprosessi on laajamittaisen vaatimusmäärittelyprosessin teollinen toteutus. Tutkimuksessa tunnistettiin useita merkittäviä haasteta, joita eri sidosryhmillä on liittyen vaatimusten seulontaprosessiin ja päätöksentekoon laajamittaisessa vaatimusmäärittelyprosessissa. Vaatimusten suuri määrä, päätöksentekoon tarvittavan tiedon puute ja käytössä olevien työkalujen soveltumattomuus ovat esimerkkejä tunnistetuista haasteista. Ratkaisuna haasteisiin kehitettiin vaatimusten seulonta- ja analyysimenetelmä. Kehitetty menetelmä sisältää dynaamisen vaatimusdokumentin, jonka avulla voidaan kerätä kehityspyyntöjen tietosisältö jäsennellysti, dokumentoida ja kommunikoida vaatimukset sekä muodostaa niistä tuotteisiin toteutettavia ominaisuuksia ottaen huomioon eri sidosryhmien tarpeet. Kehitetty menetelmä on koestettu, validoitu ja soveltuvin osin otettu käyttöön teollisuudessa
APA, Harvard, Vancouver, ISO, and other styles
41

Qiu, Fengjing. "Analog very large scale integrated circuits design of two-phase and multi-phase voltage doublers with frequency regulation." Ohio : Ohio University, 1999. http://www.ohiolink.edu/etd/view.cgi?ohiou1175632756.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Scaffidi, Charles, and Richard Stafford. "Replacement of the Hubble Space Telescope (HST) Telemetry Front-End Using Very-Large-Scale Integration (VLSI)-Based Components." International Foundation for Telemetering, 1993. http://hdl.handle.net/10150/611857.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The Hubble Space Telescope (HST) Observatory Management System (HSTOMS), located at the Goddard Space Flight Center (GSFC), provides telemetry, command, analysis and mission planning functions in support of the HST spacecraft. The Telemetry and Command System (TAC) is an aging system that performs National Aeronautics and Space Administration (NASA) Communications (Nascom) block and telemetry processing functions. Future maintainability is of concern because of the criticality of this system element. HSTOMS has embarked on replacing the TAC by using functional elements developed by the Microelectronics Systems Branch of the GSFC. This project, known as the Transportable TAC (TTAC) because of its inherent flexibility, is addressing challenges that have resulted from applying recent technological advances into an existing operational environment. Besides presenting a brief overview of the original TAC and the new TTAC, this paper also describes the challenges faced and the approach to overcoming them.
APA, Harvard, Vancouver, ISO, and other styles
43

Bishop, Gregory Raymond H. ""On stochastic modelling of very large scale integrated circuits : an investigation into the timing behaviour of microelectronic systems" /." Title page, contents and abstract only, 1993. http://web4.library.adelaide.edu.au/theses/09PH/09phb6222.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Twigg, Christopher M. "Floating Gate Based Large-Scale Field-Programmable Analog Arrays for Analog Signal Processing." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11601.

Full text
Abstract:
Large-scale reconfigurable and programmable analog devices provide a new option for prototyping and synthesizing analog circuits for analog signal processing and beyond. Field-programmable analog arrays (FPAAs) built upon floating gate transistor technologies provide the analog reconfigurability and programmability density required for large-scale devices on a single integrated circuit (IC). A wide variety of synthesized circuits, such as OTA followers, band-pass filters, and capacitively coupled summation/difference circuits, were measured to demonstrate the flexibility of FPAAs. Three generations of devices were designed and tested to verify the viability of such floating gate based large-scale FPAAs. Various architectures and circuit topologies were also designed and tested to explore the trade-offs present in reconfigurable analog systems. In addition, large-scale FPAAs have been incorporated into class laboratory exercises, which provide students with a much broader range of circuit and IC design experiences than have been previously possible. By combining reconfigurable analog technologies with an equivalent large-scale digital device, such as a field-programmable gate array (FPGA), an extremely powerful and flexible mixed signal development system can be produced that will enable all of the benefits possible through cooperative analog/digital signal processing (CADSP).
APA, Harvard, Vancouver, ISO, and other styles
45

Kelley, Brian T. "VLSI computing architectures for high speed seismic migration." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/13919.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Hong, Seong-Kwan. "Performance driven analog layout compiler." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/15037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Bragg, Julian Alexander. "A biomorphic analog VLSI implementation of a mammalian motor unit." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/20693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Tan, Chong Guan. "Another approach to PLA folding." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66054.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

高雲龍 and Wan-lung Ko. "A new optimization model for VLSI placement." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B29812938.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Rasafar, Hamid 1954. "THE HIGH FREQUENCY AND TEMPERATURE DEPENDENCE OF DIELECTRIC PROPERTIES OF PRINTED CIRCUIT BOARD MATERIALS." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276509.

Full text
Abstract:
New VLSI and VHSIC devices require increased performance from electronic packages. The major challenge that must be met in materials/process development for high-complexity and high-speed integrated circuits is the processing of ever larger numbers of signals with low propagation delay. Hence, materials with a low dielectric constant and a low dissipation factor are being sought. In this investigation the dielectric properties of the most commonly used composite materials for printed circuit boards, Teflon-glass and Epoxy-glass, were measured in the frequency and temperature intervals of 100 Hz-1 GHz and 25-260°C, respectively. From the measured results, it is concluded that Teflon-glass is more suitable for the board-level packaging of high-performance circuits due to its lower dielectric constant and low dissipation factor.
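The connection between dielectric constant and speed that motivates this search is the standard propagation-delay relation for a signal trace (a textbook reference formula, not a result from the thesis): for a TEM-like line of length ℓ in a medium with effective relative permittivity ε_r,eff,

```latex
t_{pd} \;=\; \frac{\ell}{v} \;=\; \frac{\ell\,\sqrt{\varepsilon_{r,\mathrm{eff}}}}{c_0},
```

so a lower dielectric constant translates directly into a proportionally shorter delay per unit length, consistent with the conclusion above that Teflon-glass (relative permittivity typically near 2.2) outperforms Epoxy-glass (typically near 4.5) for high-speed boards.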
APA, Harvard, Vancouver, ISO, and other styles