
Dissertations on the topic "Sequential processing (Computer science)"

Browse the top 50 dissertations for research on the topic "Sequential processing (Computer science)".

Next to every work in the bibliography you will find the option "Add to bibliography". Use it, and the bibliographic reference for the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read the online annotation of the work, provided the relevant details are available in its metadata.

Browse dissertations from a variety of subject areas and compile a correctly formatted bibliography.

1

Parashkevov, Atanas. „Advances in space and time efficient model checking of finite state systems“. Title page, contents and abstract only, 2002. http://web4.library.adelaide.edu.au/theses/09PH/09php223.pdf.

Annotation:
Bibliography: leaves 211-220. This thesis examines automated formal verification techniques and their associated space and time implementation complexity when applied to finite state concurrent systems. The focus is on concurrent systems expressed in the Communicating Sequential Processes (CSP) framework. An approach to the compilation of CSP system descriptions into boolean formulae in the form of Ordered Binary Decision Diagrams (OBDD) is presented, further utilised by a basic algorithm that checks a refinement or equivalence relation between a pair of processes in any of the three CSP semantic models. The performance bottlenecks of the basic refinement checking algorithm are identified and addressed with the introduction of a number of novel techniques and algorithms. The algorithms described in this thesis are implemented in the Adelaide Refinement Checking Tool.
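The refinement check at the heart of this work can be sketched in miniature. The thesis encodes processes symbolically as OBDDs; the sketch below instead enumerates traces of small, hand-written explicit transition systems and performs a bounded traces-refinement check (process definitions and depth are invented for illustration):

```python
def traces(lts, start, depth):
    """All event traces, up to the given length, of a labelled transition
    system given as {state: [(event, next_state), ...]}."""
    result = {()}
    frontier = {((), start)}
    for _ in range(depth):
        step = set()
        for trace, state in frontier:
            for event, succ in lts.get(state, []):
                t = trace + (event,)
                result.add(t)
                step.add((t, succ))
        frontier = step
    return result

# Spec alternates 'a' and 'b' forever; Impl1 behaves identically,
# while Impl2 can also perform a stray initial 'b'.
spec  = {0: [("a", 1)], 1: [("b", 0)]}
impl1 = {0: [("a", 1)], 1: [("b", 0)]}
impl2 = {0: [("a", 1), ("b", 0)], 1: [("b", 0)]}

# Traces refinement (Spec ⊑T Impl): every trace of Impl is a trace of Spec.
assert traces(impl1, 0, 6) <= traces(spec, 0, 6)
assert not traces(impl2, 0, 6) <= traces(spec, 0, 6)
```

An explicit check like this enumerates traces one by one; the point of the OBDD encoding is to represent such sets symbolically so much larger systems stay tractable.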
2

Bari, Himanshu. „Design and implementation of a library to support the Common Component Architecture (CCA) over Legion“. Diss., Online access via UMI:, 2004. http://wwwlib.umi.com/dissertations/fullcit/1424173.

3

Zhang, Shujian. „Evaluation in built-in self-test“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ34293.pdf.

4

Moffat, Nicholas. „Identifying and exploiting symmetry for CSP refinement checking“. Thesis, University of Oxford, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.711620.

5

Pajic, Slobodan. „Sequential quadratic programming-based contingency constrained optimal power flow“. Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0430103-152758.

6

Simpson, Andrew C. „Safety through security“. Thesis, University of Oxford, 1996. http://ora.ox.ac.uk/objects/uuid:4a690347-46af-42a4-91fe-170e492a9dd1.

Annotation:
In this thesis, we investigate the applicability of the process algebraic formal method Communicating Sequential Processes (CSP) [Hoa85] to the development and analysis of safety-critical systems. We also investigate how these tasks might be aided by mechanical verification, which is provided in the form of the proof tool Failures-Divergences Refinement (FDR) [Ros94]. Initially, we build upon the work of [RWW94, Ros95], in which CSP treatments of the security property of non-interference are described. We use one such formulation to define a property called protection, which unifies our views of safety and security. As well as applying protection to the analysis of safety-critical systems, we develop a proof system for this property, which in conjunction with the opportunity for automated analysis provided by FDR, enables us to apply the approach to problems of sizable complexity. We then describe how FDR can be applied to the analysis of mutual exclusion, which is a specific form of non-interference. We investigate a number of well-known solutions to the problem, and illustrate how such mutual exclusion algorithms can be interpreted as CSP processes and verified with FDR. Furthermore, we develop a means of verifying the fault-tolerance of such algorithms in terms of protection. In turn, mutual exclusion is used to describe safety properties of geographic data associated with Solid State Interlocking (SSI) railway signalling systems. We show how FDR can be used to describe these properties and model interlocking databases. The CSP approach to compositionality allows us to decompose such models, thus reducing the complexity of analysing safety invariants of SSI geographic data. As such, we describe how the mechanical verification of Solid State Interlocking geographic data, which was previously considered to be an intractable problem for the current generation of mechanical verification tools, is computationally feasible using FDR.
Thus, the goals of this thesis are twofold. The first goal is to establish a formal encapsulation of a theory of safety-critical systems based upon the relationship which exists between safety and security. The second goal is to establish that CSP, together with FDR, can be applied to the modelling of Solid State Interlocking geographic databases. Furthermore, we shall attempt to demonstrate that such modelling can scale up to large-scale systems.
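The kind of mutual exclusion verification described above can be illustrated with a toy explicit-state check. The thesis models algorithms as CSP processes and checks them with FDR; this sketch instead hand-codes Peterson's algorithm for two processes and exhaustively explores its reachable state space:

```python
from collections import deque

def step(state, i):
    """Successor states for process i of Peterson's algorithm; a state is
    ((pc0, pc1), (flag0, flag1), turn)."""
    pc, flag, turn = list(state[0]), list(state[1]), state[2]
    j = 1 - i
    if pc[i] == 0:                      # request entry: raise own flag
        flag[i] = True; pc[i] = 1
    elif pc[i] == 1:                    # defer to the other process
        turn = j; pc[i] = 2
    elif pc[i] == 2:                    # busy-wait until it is safe to enter
        if not flag[j] or turn == i:
            pc[i] = 3                   # enter critical section
        else:
            return []                   # blocked: no successor for i
    else:                               # pc[i] == 3: leave critical section
        flag[i] = False; pc[i] = 0
    return [(tuple(pc), tuple(flag), turn)]

# Exhaustive breadth-first exploration of the reachable state space.
init = ((0, 0), (False, False), 0)
seen, queue = {init}, deque([init])
while queue:
    s = queue.popleft()
    # Safety invariant: the processes are never both in the critical section.
    assert not (s[0][0] == 3 and s[0][1] == 3), "mutual exclusion violated"
    for i in (0, 1):
        for t in step(s, i):
            if t not in seen:
                seen.add(t)
                queue.append(t)
```

A refinement checker does essentially this exploration, but against a CSP specification rather than a hand-written invariant.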
7

Koufogiannakis, Christos. „Approximation algorithms for covering problems“. Diss., [Riverside, Calif.] : University of California, Riverside, 2009. http://proquest.umi.com/pqdweb?index=0&did=1957320821&SrchMode=2&sid=1&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268338860&clientId=48051.

Annotation:
Thesis (Ph. D.)--University of California, Riverside, 2009.
Includes abstract. Title from first page of PDF file (viewed March 11, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 70-77). Also issued in print.
8

Bari, Wasimul. „Analyzing binary longitudinal data in adaptive clinical trials /“. Internet access available to MUN users only, 2003. http://collections.mun.ca/u?/theses,167453.

9

Thomas, Jonathan. „Asynchronous Validity Resolution in Sequentially Consistent Shared Virtual Memory“. Fogler Library, University of Maine, 2001. http://www.library.umaine.edu/theses/pdf/Thomas.pdf.

10

Shao, Yang. „Sequential organization in computational auditory scene analysis“. Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1190127412.

11

Van, Delden Sebastian Alexander. „Larger-first partial parsing“. Doctoral diss., University of Central Florida, 2003. http://digital.library.ucf.edu/cdm/ref/collection/RTD/id/2038.

Annotation:
University of Central Florida College of Engineering Thesis
Larger-first partial parsing is a primarily top-down approach to partial parsing that is opposite to current easy-first, or primarily bottom-up, strategies. A rich partial tree structure is captured by an algorithm that assigns a hierarchy of structural tags to each of the input tokens in a sentence. Part-of-speech tags are first assigned to the words in a sentence by a part-of-speech tagger. A cascade of Deterministic Finite State Automata then uses this part-of-speech information to identify syntactic relations primarily in a descending order of their size. The cascade is divided into four specialized sections: (1) a Comma Network, which identifies syntactic relations associated with commas; (2) a Conjunction Network, which partially disambiguates phrasal conjunctions and fully disambiguates clausal conjunctions; (3) a Clause Network, which identifies non-comma-delimited clauses; and (4) a Phrase Network, which identifies the remaining base phrases in the sentence. Each automaton is capable of adding one or more levels of structural tags to the tokens in a sentence. The larger-first approach is compared against a well-known easy-first approach. The results indicate that this larger-first approach is capable of (1) producing a more detailed partial parse than an easy-first approach; (2) providing better containment of attachment ambiguity; (3) handling overlapping syntactic relations; and (4) achieving a higher accuracy than the easy-first approach. The automata of each network were developed by an empirical analysis of several sources and are presented here in detail.
Ph.D.
Doctorate;
Department of Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering and Computer Science
215 p.
xiv, 212 leaves, bound : ill. ; 28 cm.
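The larger-first idea can be sketched with toy patterns standing in for the thesis's finite-state automata. Each stage below rewrites the current tag string, matching bigger constituents (clauses) before smaller ones (phrases); the tags and regular expressions are invented for illustration, not taken from the dissertation:

```python
import re

# Illustrative larger-first cascade: each stage scans the current tag
# string and replaces matched spans with a structural label, so later
# (smaller) stages work only on what remains.
CASCADE = [
    ("CL", re.compile(r"DT JJ NN VB DT NN")),   # whole clause, matched first
    ("NP", re.compile(r"DT (?:JJ )?NN")),       # noun phrases next
    ("VP", re.compile(r"VB(?: RB)?")),          # then verb phrases
]

def partial_parse(tags):
    """Return the successive (label, rewritten-string) levels produced
    by running the cascade over a part-of-speech tag sequence."""
    s = " ".join(tags)
    levels = []
    for label, pat in CASCADE:
        new = pat.sub(label, s)
        if new != s:
            levels.append((label, new))
        s = new
    return levels

# A full clause is captured at the top level before any phrase fires:
assert partial_parse(["DT", "JJ", "NN", "VB", "DT", "NN"]) == [("CL", "CL")]
# Without a clause match, the smaller phrase stages apply instead:
assert partial_parse(["DT", "NN", "VB", "RB"]) == [("NP", "NP VB RB"),
                                                   ("VP", "NP VP")]
```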
12

Jang, Geon-Ho. „Design and implementation of pulse sequences for application in MRI /“. free to MU campus, to others for purchase, 1999. http://wwwlib.umi.com/cr/mo/fullcit?p9953868.

13

Malkoc, Veysi. „Sequential alignment and position verification system for functional proton radiosurgery“. CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2535.

Annotation:
The purpose of this project is to improve the existing version of the Sequential Alignment and Position Verification System (SAPVS) for functional proton radiosurgery and to evaluate its performance after improvement.
14

Dourado, Camila da Silva 1982. „Mineração de dados climáticos para análise de eventos extremos de precipitação“. [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/256801.

Annotation:
Advisors: Stanley Robson de Medeiros Oliveira, Ana Maria Heuminski de Avila
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Agrícola
Abstract: The knowledge of climate conditions, identifying areas with the greatest risk of occurrence of extreme events that may impact the various socioeconomic and environmental sectors, has become a major challenge. In Brazil the largest occurrences of extreme events are related to hydrological phenomena. In particular, the state of Bahia presents high temporal and spatial climate variability, from areas considered arid or at risk of becoming arid (in the north) to regions with a humid climate along the coast. The state has been the target of different extreme rainfall events in recent years, with floods in some areas and severe droughts in others. In this context, the aim of this study was to use data mining techniques to analyze the frequency of occurrences of extreme precipitation events from 1981 to 2010 in the state of Bahia, in order to support decision making on preventive and mitigating actions against socioeconomic and environmental impacts. To accomplish that, precipitation data supplied by the Hydrological Information System of the National Water Agency were used. By applying the clustering task, by means of the k-means algorithm, the climate time series were grouped into five homogeneous rainfall zones. Subsequently, analyses were performed on different time scales (annual, monthly and daily), identifying through the quantile technique the upper and lower thresholds of rainfall intensity in each homogeneous region, for each time scale. At the monthly scale, sequential patterns of occurrences of extreme positive and negative events were identified over the thirty years. The results reinforce the potential of data mining to group homogeneous zones by rainfall similarity using the k-means algorithm. They also reveal, for all time scales used, high rainfall variability. The years with the most recorded extreme negative events are in the 1990s, and those with the most extreme positive events are in the 2000s.
Mestrado
Planejamento e Desenvolvimento Rural Sustentável
Mestra em Engenharia Agrícola
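The two analysis steps in the abstract, k-means zoning and quantile thresholds, can be sketched with synthetic data. The gamma-distributed rainfall values, station count, and zone count below are invented stand-ins for the study's 30 years of gauge records:

```python
import random
import statistics

random.seed(0)

# Hypothetical stations, each with 360 monthly rainfall totals (mm).
stations = {s: [random.gammavariate(2.0, 40.0) for _ in range(360)]
            for s in range(12)}

# 1) k-means (here 1-D, on station means) groups stations into
#    pluviometrically homogeneous zones.
def kmeans_1d(xs, k, iters=25):
    centers = sorted(random.sample(xs, k))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda i: abs(x - centers[i]))].append(x)
        centers = [statistics.mean(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

means = [statistics.mean(v) for v in stations.values()]
zones = kmeans_1d(means, k=3)

# 2) Quantile technique: months below the 10th or above the 90th
#    percentile of a series count as extreme negative/positive events.
rain = stations[0]
deciles = statistics.quantiles(rain, n=10)   # deciles[0]=P10, deciles[8]=P90
neg = [x for x in rain if x < deciles[0]]
pos = [x for x in rain if x > deciles[8]]
assert len(neg) < len(rain) * 0.2 and len(pos) < len(rain) * 0.2
```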
15

Uijt, de Haag Maarten. „An investigation into the application of block processing techniques for the Global Positioning System“. Ohio : Ohio University, 1999. http://www.ohiolink.edu/etd/view.cgi?ohiou1181171187.

16

Mazur, Tomasz Krzysztof. „Model Checking Systems with Replicated Components using CSP“. Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:6694fac7-00b4-4b25-b054-813d7a6a4cdb.

Annotation:
The Parameterised Model Checking Problem asks whether an implementation Impl(t) satisfies a specification Spec(t) for all instantiations of parameter t. In general, t can determine numerous entities: the number of processes used in a network, the type of data, the capacities of buffers, etc. The main theme of this thesis is automation of uniform verification of a subclass of PMCP with the parameter of the first kind, using techniques based on counter abstraction. Counter abstraction works by counting how many, rather than which, node processes are in a given state: for nodes with k local states, an abstract state (c(1), ..., c(k)) models a global state where c(i) processes are in the i-th state. We then use a threshold function z to cap the values of each counter. If for some i, counter c(i) reaches its threshold, z(i), then this is interpreted as there being z(i) or more nodes in the i-th state. The addition of thresholds makes abstract models independent of the instantiation of the parameter. We adapt standard counter abstraction techniques to concurrent reactive systems modelled using the CSP process algebra. We demonstrate how to produce abstract models of systems that do not use node identifiers (i.e. where all nodes are indistinguishable). Every such abstraction is, by construction, refined by all instantiations of the implementation. If the abstract model satisfies the specification, then a positive answer to the particular uniform verification problem can be deduced. We show that by adding node identifiers we make the uniform verification problem undecidable. We demonstrate a sound abstraction method that extends standard counter abstraction techniques to systems that make full use of node identifiers (in specifications and implementations). However, on its own, the method is not enough to give the answer to verification problems for all parameter instantiations.
This issue has led us to the development of a type reduction theory, which, for a given verification problem, establishes a function phi that maps all (sufficiently large) instantiations T of the parameter to some fixed type T̂ and allows us to deduce that if Spec(T̂) is refined by phi(Impl(T)), then Spec(T) is refined by Impl(T). We can then combine this with our extended counter abstraction techniques and conclude that if the abstract model satisfies Spec(T̂), then the answer to the uniform verification problem is positive. We develop a symbolic operational semantics for CSP processes that satisfy certain normality requirements and we provide a set of translation rules that allow us to concretise symbolic transition graphs. The type reduction theory relies heavily on these results. One of the main advantages of our symbolic operational semantics and the type reduction theory is their generality, which makes them applicable in other settings and allows the theory to be combined with abstraction methods other than those used in this thesis. Finally, we present TomCAT, a tool that automates the construction of counter abstraction models and we demonstrate how our results apply in practice.
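The counting idea behind counter abstraction fits in a few lines. This is a generic sketch, not the thesis's CSP construction: a global state of identical nodes, each in one of k local states, is abstracted to per-state counters capped at a threshold z.

```python
def abstract(global_state, k, z):
    """Counter abstraction of a list of local-state indices: count how
    many nodes are in each of the k local states, capping each counter
    at the threshold z ("z or more nodes")."""
    counters = [0] * k
    for local in global_state:
        counters[local] += 1
    return tuple(min(c, z) for c in counters)

# With z = 2, every instantiation with two or more nodes in local state 0
# collapses to the same abstract state, so the abstract model no longer
# depends on the parameter (the number of nodes).
assert abstract([0, 0, 1], k=2, z=2) == (2, 1)
assert abstract([0, 0, 0, 0, 1], k=2, z=2) == (2, 1)
```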
17

Costello, Roger Lee. „Responsive sequential processes /“. The Ohio State University, 1988. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487588249825353.

18

Nelson, Alexander J. „Software signature derivation from sequential digital forensic analysis“. Thesis, University of California, Santa Cruz, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10140317.

Annotation:

Hierarchical storage system namespaces are notorious for their immense size, which is a significant hindrance for any computer inspection. File systems for computers start with tens of thousands of files, and the Registries of Windows computers start with hundreds of thousands of cells. An analysis of a storage system, whether for digital forensics or locating old data, depends on being able to reduce the namespaces down to the features of interest. Typically, having such large volumes to analyze is seen as a challenge to identifying relevant content. However, if the origins of files can be identified—particularly dividing between software and human origins—large counts of files become a boon to profiling how a computer has been used. It becomes possible to identify software that has influenced the computer's state, which gives an important overview of storage system contents not available to date.

In this work, I apply document search to observed changes in a class of forensic artifact, cell names of the Windows Registry, to identify effects of software on storage systems. Using the search model, a system's Registry becomes a query for matching software signatures. To derive signatures, file system differential analysis is extended from between two storage system states to many sequences of states. The workflow that creates these signatures is an example of analytics on data lineage, from branching data histories. The signatures independently indicate past presence or usage of software, based on consistent creation of measurably distinct artifacts. A signature search engine is demonstrated against a machine with a selected set of applications installed and executed. The optimal search engine according to that machine is then turned against a separate corpus of machines with a set of present applications identified by several non-Registry forensic artifact sources, including the file systems, memory, and network captures. The signature search engine corroborates those findings, using only the Windows Registry.
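The signature-derivation step, differencing sequential storage states and keeping the names consistently created across runs, can be sketched as follows. The Registry cell names here are invented examples, not real paths:

```python
def new_names(before, after):
    """Differential analysis of two storage states: names created
    between the earlier and the later state."""
    return after - before

# Two recorded install runs of the same (hypothetical) application,
# each a (before, after) pair of observed cell-name sets:
runs = [
    ({"HKLM\\A"}, {"HKLM\\A", "HKLM\\Soft\\Foo\\Version", "HKLM\\Tmp\\1"}),
    ({"HKLM\\B"}, {"HKLM\\B", "HKLM\\Soft\\Foo\\Version", "HKLM\\Tmp\\9"}),
]

# Signature: names created in every run -- consistent, measurably
# distinct artifacts; run-specific noise (the Tmp entries) drops out.
sig = set.intersection(*(new_names(b, a) for b, a in runs))
assert sig == {"HKLM\\Soft\\Foo\\Version"}
```

A Registry under examination then acts as a query: matching a signature's names against it indicates past presence of the software.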

19

Hsieh, Wilson Cheng-Yi. „Extracting parallelism from sequential programs“. Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/14752.

20

King, Myron Decker. „An efficient sequential BTRS implementation“. Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/46603.

Annotation:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Includes bibliographical references (leaves 73-74).
This thesis describes the implementation of BTRS, a language based on guarded atomic actions (GAA). The input language to the compiler which forms the basis of this work is a hierarchical tree of modules containing state, interface methods, and rules which fire atomically to cause state transitions. Since a schedule need not be specified, the program description is inherently nondeterministic, though the BTRS language does allow the programmer to remove nondeterminism by specifying varying degrees of scheduling constraints. The compiler outputs a (sequential) single-threaded C implementation of the input description, choosing a static schedule which adheres to the input constraints. The resulting work is intended to be used as the starting point for research into efficient software synthesis from guarded atomic actions, and ultimately a hardware inspired programming methodology for writing parallel software. This compiler is currently being used to generate software for a heterogeneous system in which the software and hardware components are both specified in BTRS.
by Myron Decker King.
S.M.
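A guarded-atomic-action program run under a static schedule can be mimicked in a few lines. The rule names, guards, and state below are invented for illustration, and a real BTRS compiler emits C rather than interpreting rules:

```python
# Toy guarded-atomic-action interpreter with a fixed round-robin
# static schedule.
state = {"x": 0, "y": 0}

# Each rule is (name, guard, action); a rule may fire only when its
# guard holds, and its action then updates the state atomically.
rules = [
    ("inc_x", lambda s: s["x"] < 3,      lambda s: s.update(x=s["x"] + 1)),
    ("copy",  lambda s: s["x"] > s["y"], lambda s: s.update(y=s["x"])),
]

def run(state, rules, steps):
    for _ in range(steps):
        for name, guard, action in rules:   # static schedule: fixed order
            if guard(state):
                action(state)
    return state

# The chosen schedule resolves the nondeterminism; here it reaches the
# fixpoint where both counters saturate.
assert run(state, rules, 5) == {"x": 3, "y": 3}
```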
21

Xu, Zhi S. M. Massachusetts Institute of Technology. „Private sequential search and optimization“. Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112054.

Annotation:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 107-108).
We propose and analyze two models to study an intrinsic trade-off between privacy and query complexity in online settings: 1. Our first private optimization model involves an agent aiming to minimize an objective function expressed as a weighted sum of finitely many convex cost functions, where the weights capture the importance the agent assigns to each cost function. The agent possesses as her private information the weights, but does not know the cost functions, and must obtain information on them by sequentially querying an external data provider. The objective of the agent is to obtain an accurate estimate of the optimal solution, x*, while simultaneously ensuring privacy, by making x* difficult to infer for the data provider, who does not know the agent's private weights but only observes the agent's queries. 2. The second private search model we study is also about protecting privacy while searching for an object. It involves an agent attempting to determine a scalar true value, x*, based on querying an external database, whose response indicates whether the true value is larger than or less than the agent's submitted queries. The objective of the agent is again to obtain an accurate estimate of the true value, x*, while simultaneously hiding it from an adversary who observes the submitted queries but not the responses. The main results of this thesis provide tight upper and lower bounds on the agent's query complexity (i.e., number of queries) as a function of desired levels of accuracy and privacy, for both models. We also explicitly construct query strategies whose worst-case query complexity is optimal up to an additive constant.
by Zhi Xu.
S.M.
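The non-private baseline for the second model is ordinary bisection, which pins down x* to accuracy eps in about log2(1/eps) larger-than queries. The sketch below illustrates only that query-complexity yardstick; the thesis's privacy-preserving strategies deliberately spend additional queries to keep the adversary uncertain:

```python
import math

def bisect_search(is_larger, eps):
    """Estimate x* in [0, 1] to accuracy eps using an oracle that answers
    'is x* larger than this query?'.  Returns (estimate, query count)."""
    lo, hi, queries = 0.0, 1.0, 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        queries += 1
        if is_larger(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2, queries

x_star = 0.3
est, q = bisect_search(lambda v: x_star > v, eps=1e-3)
assert abs(est - x_star) <= 1e-3
assert q == math.ceil(math.log2(1 / 1e-3))   # interval halves per query
```

Note that the query sequence here converges straight onto x*, which is exactly what an observing adversary exploits and what the private strategies avoid.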
22

Hebb, Christopher Louis. „Website usability evaluation using sequential analysis“. [Bloomington, Ind.] : Indiana University, 2005. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3167801.

Annotation:
Thesis (Ph.D.)--Indiana University, Dept. of Instructional Systems Technology, 2005.
Source: Dissertation Abstracts International, Volume: 66-04, Section: A, page: 1328. Adviser: Theodore W. Frick. "Title from dissertation home page (viewed Nov. 13, 2006)."
23

Jin, Stone Qiaodan (Qiaodan Jordan). „An ARM-based sequential sampling oscilloscope“. Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/100591.

Annotation:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 141).
Sequential equivalent-time sampling allows a system to acquire repetitive waveforms with frequencies beyond the Nyquist rate. This thesis documents the prototype of a digital ARM-based sequential sampling oscilloscope with peripheral hardware and software. Discussed are the designs and obstacles of various analog circuits and signal processing methods. By means of sequential sampling, alongside analog and digital signal processing techniques, we are able to utilize a 3 MSPS ADC for a capture rate of 24 MSPS. For sinusoids between 6 and 12 MHz, acquired waveforms display at least 10 dB of SNR improvement for unfiltered signals and at least 60 dB of SNR improvement for aggressively filtered signals.
by Qiaodan (Jordan) Jin Stone.
M. Eng.
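The equivalent-time trick can be demonstrated numerically. The signal frequency, delay increment, and sample count below are invented and only loosely mirror the thesis's 3 MSPS / 24 MSPS figures:

```python
import math

# Sequential equivalent-time sampling: the repetitive signal is sampled
# once per trigger, and each successive trigger adds an extra 1 ns delay,
# so the samples walk across the waveform and reconstruct it at an
# effective 1 GSPS even though only one sample is taken per period.
f_sig  = 10e6                 # 10 MHz repetitive signal under test
delta  = 1e-9                 # per-trigger delay increment (1 ns)
period = 1 / f_sig

samples = [math.sin(2 * math.pi * f_sig * (n * period + n * delta))
           for n in range(100)]

# Because the signal repeats exactly every period, the capture equals a
# direct 1 GSPS acquisition of a single period:
direct = [math.sin(2 * math.pi * f_sig * n * delta) for n in range(100)]
assert all(abs(a - b) < 1e-9 for a, b in zip(samples, direct))
```

The scheme works only because the waveform repeats; a one-shot event cannot be captured this way.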
24

Macindoe, Owen. „Sidekick agents for sequential planning problems“. Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/84892.

Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 127-131).
Effective AI sidekicks must solve the interlinked problems of understanding what their human collaborator's intentions are and planning actions to support them. This thesis explores a range of approximate but tractable approaches to planning for AI sidekicks based on decision-theoretic methods that reason about how the sidekick's actions will affect their beliefs about unobservable states of the world, including their collaborator's intentions. In doing so we extend an existing body of work on decision-theoretic models of assistance to support information gathering and communication actions. We also apply Monte Carlo tree search methods for partially observable domains to the problem and introduce an ensemble-based parallelization strategy. These planning techniques are demonstrated across a range of video game domains.
by Owen Macindoe.
Ph.D.
25

Parvathala, Rajeev (Rajeev Krishna). „Representation learning for non-sequential data“. Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119581.

Annotation:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 85-90).
In this thesis, we design and implement new models to learn representations for sets and graphs. Typically, data collections in machine learning problems are structured as arrays or sequences, with sequential relationships between successive elements. Sets and graphs both break this common mold of data collections that have been extensively studied in the machine learning community. First, we formulate a new method for performing diverse subset selection using a neural set function approximation method. This method relies on the deep sets idea, which says that any set function s(X) has a universal approximator of the form f(∑_{x∈X} φ(x)). Second, we design a new variational autoencoding model for highly structured, sparse graphs, such as chemical molecules. This method uses the graphon, a probabilistic graphical model from mathematics, as inspiration for the decoder. Furthermore, an adversary is employed to force the distribution of vertex encodings to follow a target distribution, so that new graphs can be generated by sampling from this target distribution. Finally, we develop a new framework for performing encoding of graphs in a hierarchical manner. This approach partitions an input graph into multiple connected subgraphs, and creates a new graph where each node represents one such subgraph. This allows the model to learn a higher level representation for graphs, and increases robustness of graphical encoding to varying graph input sizes.
by Rajeev Parvathala.
M. Eng.
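The deep-sets form f(∑_{x∈X} φ(x)) can be illustrated with fixed maps standing in for the learned networks; phi and f below are arbitrary toy functions chosen only to show the structure:

```python
def phi(x):
    """Per-element embedding (a stand-in for a learned network)."""
    return (x, x * x)

def f(z):
    """Readout applied to the pooled embedding."""
    s, sq = z
    return sq - s

def set_fn(xs):
    # Sum-pool the element embeddings coordinate-wise, then read out.
    pooled = tuple(map(sum, zip(*(phi(x) for x in xs))))
    return f(pooled)

# Sum pooling makes the result invariant to element order, which is the
# defining property of a set function:
assert set_fn([1, 2, 3]) == set_fn([3, 1, 2])
```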
26

Arıkan, Erdal. „Sequential decoding for multiple access channels“. Thesis, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/15190.

Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1986.
MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.
Bibliography: leaves 111-112.
by Erdal Arikan.
Ph.D.
27

Koita, Rizwan R. (Rizwan Rahim). „Strategies for sequential design of experiments“. Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/35998.

28

Liando, Johnny 1964. „Enhancement and evaluation of SCIRTSS (sequential circuits test search system) on ISCAS'89 benchmark sequential circuits“. Thesis, The University of Arizona, 1990. http://hdl.handle.net/10150/278283.

Der volle Inhalt der Quelle
Annotation:
SCIRTSS, the automatic test pattern generation system for sequential circuits described in AHPL, has been improved to incorporate a correct and optimized version of the D-Algorithm. This improvement works together with the recent enhancement of the backward state justification search. SCIRTSS now has a complete set of procedures to generate tests for sequential circuits. The performance of SCIRTSS is evaluated using the recent ISCAS'89 sequential benchmark circuits. The overall concepts of how SCIRTSS generates tests, the improvements made to the D-Algorithm, and the benchmark results are presented in this thesis.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Wang, Jonathan M. Eng Massachusetts Institute of Technology. „Pentimento : non-sequential authoring of handwritten lectures“. Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100619.

Der volle Inhalt der Quelle
Annotation:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Pentimento is software developed under the supervision of Fredo Durand in the Computer Graphics Group at CSAIL that focuses on dramatically simplifying the creation of online educational video lectures such as those of Khan Academy. In these videos, the educator draws on a virtual whiteboard while speaking. Currently, the software that educators use is very rudimentary, offering only basic functionality such as screen and voice recording. A downside of this approach is that the educator must get everything right on the first take: there is no way to edit the content captured during a screen recording afterwards without resorting to unnecessarily complex video editing software. Even with video editing software, the user cannot access the original drawing content used to create the video. The overall goal of this project is to develop lecture recording software that uses a vector-based representation to keep track of the user's sketching, allowing the user to easily edit the original drawing content retroactively. The goal of my contribution to this project is to implement components for a web-based version of Pentimento, which will allow the application to reach a broader range of users. The aim is an HTML5- and JavaScript-based application that can run on many of the popular web browsers in use today. One of my main focuses in this project is the audio recording and editing component. This includes working on its user interface and integrating it with the rest of the software.
by Jonathan Wang.
M. Eng.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Dernoncourt, Franck. „Sequential short-text classification with neural networks“. Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111880.

Der volle Inhalt der Quelle
Annotation:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 69-79).
Medical practice too often fails to incorporate recent medical advances. The two main reasons are that over 25 million scholarly medical articles have been published, and medical practitioners do not have the time to perform literature reviews. Systematic reviews aim at summarizing published medical evidence, but writing them requires tremendous human effort. In this thesis, we propose several natural language processing methods based on artificial neural networks to facilitate the completion of systematic reviews. In particular, we focus on short-text classification to help authors of systematic reviews locate the desired information. We introduce several algorithms to perform sequential short-text classification, which outperform state-of-the-art algorithms. To facilitate the choice of hyperparameters, we present a method based on Gaussian processes. Lastly, we release PubMed 20k RCT, a new dataset for sequential sentence classification in randomized controlled trial abstracts.
by Franck Dernoncourt.
Ph. D.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Gu, Ronghui. „An Extensible Architecture for Building Certified Sequential and Concurrent OS Kernels“. Thesis, Yale University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10584948.

Der volle Inhalt der Quelle
Annotation:

Operating System (OS) kernels form the backbone of all system software. They have a significant impact on the resilience, extensibility, and security of today's computing hosts. However, modern OS kernels are complex and may consist of a multitude of sequential or concurrent abstraction layers; unfortunately, abstraction layers have almost never been formally specified or verified. This makes it difficult to establish strong correctness properties, and to scale program verification across multiple abstraction layers.

Recent efforts have demonstrated the feasibility of building large scale formal proofs of functional correctness for simple general-purpose kernels, but the cost of such verification is still prohibitive, and it is unclear how to use their verified kernels to reason about user-level programs and other kernel extensions. Furthermore, they have ignored the issues of concurrency, which include not just user- and I/O concurrency on a single core, but also multicore parallelism with fine-grained locking.

This thesis presents CertiKOS, an extensible architecture for building certified sequential and concurrent OS kernels. CertiKOS proposes a new compositional framework showing how to formally specify, program, verify, and compose concurrent abstraction layers. We present a novel language-based account of abstraction layers and show that they correspond to a strong form of abstraction over a particularly rich class of specifications that we call deep specifications. We show how to instantiate the formal layer-based framework in realistic programming languages such as C and assembly, and how to adapt the CompCert verified compiler to compile certified C layers such that they can be linked with assembly layers. We can then build and compose certified abstraction layers to construct various certified OS kernels, each of which guarantees a strong contextual refinement property for every kernel function, i.e., the implementation of each such function will behave like its specification under any kernel/user context with any valid interleaving.

To demonstrate the effectiveness of our new framework, we have successfully implemented and verified multiple practical sequential and concurrent OS kernels. The most realistic sequential hypervisor kernel is written in 6000 lines of C and x86 assembly, and can boot a version of Linux as a guest. The general-purpose concurrent OS kernel with fine-grained locking can boot on a quad-core machine. For all the certified kernels, their abstraction layers and (contextual) functional correctness properties are specified and verified in the Coq proof assistant.

APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Sundaresan, Tejas G. „Sequential modeling for mortality prediction in the ICU“. Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113105.

Der volle Inhalt der Quelle
Annotation:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 85-89).
Severity of illness scores are commonly used in critical care medicine to guide treatment decisions and benchmark the quality of medical care. These scores operate in part by predicting patient mortality in the ICU using physiological variables including lab values, vital signs, and admission information. However, existing evidence suggests that current mortality predictors are less performant on patients who have an especially high risk of mortality in the ICU. This thesis seeks to reconcile this difference by developing a custom high risk mortality predictor for high risk patients in a process termed sequential modeling. Starting with a base set of features derived from the APACHE IV score, this thesis details the engineering of more complex features tailored to the high risk prediction task and development of a logistic regression model trained on the Philips eICU-CRD dataset. This high risk model is shown to be more performant than a baseline severity of illness score, APACHE IV, on the high risk subpopulation. Moreover, a combination of the baseline severity of illness score and the high risk model is shown to be better calibrated and more performant on patients of all risk types. Lastly, I show that this secondary customization approach has useful applications not only in the general population, but in specific patient subpopulations as well. This thesis thus offers a new perspective and strategy for mortality prediction in the ICU, and when taken in context with the increasing digitization of patient medical records, offers a more personalized predictive model in the ICU.
by Tejas G. Sundaresan.
M. Eng.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Goldberg, Andrew Vladislav. „Efficient graph algorithms for sequential and parallel computers“. Thesis, Massachusetts Institute of Technology, 1987. http://hdl.handle.net/1721.1/14912.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1987.
MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.
Bibliography: p. 117-123.
by Andrew Vladislav Goldberg.
Ph.D.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Maurer, Patrick M. (Patrick Michael). „Sequential decoding of trellis codes through ISI channels“. Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38820.

Der volle Inhalt der Quelle
Annotation:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (leaves 56-57).
by Patrick M. Maurer.
M.S.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Alidina, Mazhar Murtaza. „Precomputation-based sequential logic optimization for low power“. Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/36454.

Der volle Inhalt der Quelle
Annotation:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (leaves 69-71).
by Mazhar Murtaza Alidina.
M.S.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Al-Hajri, Muhannad Khaled. „Object Tracking Sensor Networks using Sequential Patterns in an Energy Efficient Prediction Technique“. Thesis, University of Ottawa (Canada), 2010. http://hdl.handle.net/10393/28880.

Der volle Inhalt der Quelle
Annotation:
Wireless Sensor Networks applications are attracting more and more research, especially energy-saving techniques and architectures, which are the focal point of most researchers in this area. One of the most interesting applications of Wireless Sensor Networks is Object Tracking Sensor Networks (OTSN), which are used mainly to track certain objects in a monitored area and to report their locations to the application's users. This application is a major energy consumer among Wireless Sensor Networks applications. Many techniques assist in delivering the required data while maintaining lower energy consumption than the early approaches. Our approach revolves around the ability to predict objects' future movements in order to track them with the minimum number of sensor nodes, while keeping the other sensor nodes in the network in a sleep mode, thus achieving our goals while significantly reducing the network's energy consumption. The prediction technique used in our proposed solution is based on the inherent patterns of the objects' movements in the network, and it utilizes data mining techniques, such as Sequential Patterns, to predict which sensor node the moving object will head to next. We propose the Prediction-based Tracking technique using Sequential Patterns (PTSP), which is designed to achieve significant reductions in the energy dissipated by the OTSN while maintaining acceptable missing rates. PTSP is tested against basic tracking techniques to determine its appropriateness in various circumstances. We also test PTSP against several OTSN impacting factors, such as the number of tracked objects, object speed, sampling duration and sampling frequency. We also test three different missing-object recovery mechanisms implemented in PTSP to determine which is the most energy-conservative.
The experimental results show that PTSP outperforms all the other basic tracking techniques and contributes remarkable savings in the energy consumption of the entire network, even under different circumstances.
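The pattern-based prediction idea summarized in this abstract can be sketched in simplified form. The following is a hypothetical reduction of the approach (not the thesis's actual algorithm): mine length-2 sequential patterns (current node → next node) from past trajectories, then wake up only the sensor node that most frequently followed the current one:

```python
from collections import Counter, defaultdict

def mine_patterns(trajectories):
    # Count how often each sensor node was followed by each successor node.
    successors = defaultdict(Counter)
    for path in trajectories:
        for current, nxt in zip(path, path[1:]):
            successors[current][nxt] += 1
    return successors

def predict_next(successors, current):
    # Return the historically most frequent next sensor node, or None.
    if current not in successors:
        return None
    return successors[current].most_common(1)[0][0]

history = [["A", "B", "C"], ["A", "B", "D"], ["E", "B", "C"]]
model = mine_patterns(history)
nxt = predict_next(model, "B")   # "C" followed "B" twice vs. "D" once
```

A real deployment would mine longer sequential patterns and fall back to flooding nearby nodes when the prediction misses, which is where the recovery mechanisms compared in the thesis come in.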
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Mirikitani, Derrick Takeshi. „Sequential recurrent connectionist algorithms for time series modeling of nonlinear dynamical systems“. Thesis, Goldsmiths College (University of London), 2010. http://research.gold.ac.uk/3239/.

Der volle Inhalt der Quelle
Annotation:
This thesis deals with the methodology of building data-driven models of nonlinear systems through the framework of dynamic modeling. More specifically, this thesis focuses on sequential optimization of nonlinear dynamic models called recurrent neural networks (RNNs). In particular, the thesis considers fully connected recurrent neural networks with one hidden layer of neurons for modeling of nonlinear dynamical systems. The general objective is to improve sequential training of the RNN through sequential second-order methods and to improve generalization of the RNN by regularization. The contributions of the thesis can be summarized as follows: 1. First, a sequential Bayesian training and regularization strategy for recurrent neural networks based on an extension of the Evidence Framework is developed. 2. Second, an efficient ensemble method for Sequential Monte Carlo filtering is proposed. The methodology allows for efficient O(H²) sequential training of the RNN. 3. Last, the Expectation Maximization (EM) framework is proposed for training RNNs sequentially.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Huggins, Jonathan H. (Jonathan Hunter). „An information-theoretic analysis of resampling in Sequential Monte Carlo“. Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91033.

Der volle Inhalt der Quelle
Annotation:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (pages 56-57).
Sequential Monte Carlo (SMC) methods form a popular class of Bayesian inference algorithms. While originally applied primarily to state-space models, SMC is increasingly being used as a general-purpose Bayesian inference tool. Traditional analyses of SMC algorithms focus on their usage for approximating expectations with respect to the posterior of a Bayesian model. However, these algorithms can also be used to obtain approximate samples from the posterior distribution of interest. We investigate the asymptotic and non-asymptotic properties of SMC from this sampling viewpoint. Let P be a distribution of interest, such as a Bayesian posterior, and let P̂ be a random estimator of P generated by an SMC algorithm. We study the law of a sample drawn from P̂ as the number of particles tends to infinity. We give convergence rates of the Kullback-Leibler divergence KL(...), as well as necessary and sufficient conditions for the resampled version of P̂ to asymptotically dominate the non-resampled version from this KL divergence perspective. Versions of these results are given for both the full joint and the filtering settings. In the filtering case we also provide time-uniform bounds under a natural mixing condition. Our results open up the possibility of extending recent analyses of adaptive SMC algorithms for expectation approximation to the sampling setting.
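The resampling step analyzed in this thesis can be illustrated with a minimal sketch of multinomial resampling, the simplest standard scheme (the thesis analyzes resampling abstractly; this concrete scheme is shown only as a representative example): particles are redrawn in proportion to their normalized importance weights, after which all weights are reset to uniform.

```python
import numpy as np

def resample(particles, weights, rng):
    # Multinomial resampling: draw ancestor indices by normalized weight.
    weights = np.asarray(weights, dtype=float)
    probs = weights / weights.sum()
    n = len(particles)
    idx = rng.choice(n, size=n, p=probs)
    new_particles = [particles[i] for i in idx]
    # After resampling, every surviving particle carries equal weight.
    new_weights = np.full(n, 1.0 / n)
    return new_particles, new_weights

rng = np.random.default_rng(0)
particles = [0.0, 1.0, 2.0, 3.0]
weights = [0.7, 0.1, 0.1, 0.1]
ps, ws = resample(particles, weights, rng)
# Heavily weighted particles tend to be duplicated; weights become uniform.
```

Resampling combats weight degeneracy but injects extra Monte Carlo noise, which is exactly the trade-off the thesis quantifies from the KL divergence perspective.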
by Jonathan H. Huggins.
S.M.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Hammerton, James Alistair. „Exploiting holistic computation : an evaluation of the sequential RAAM“. Thesis, University of Birmingham, 1999. http://etheses.bham.ac.uk//id/eprint/4948/.

Der volle Inhalt der Quelle
Annotation:
In recent years it has been claimed that connectionist methods of representing compositional structures, such as lists and trees, support a new form of symbol processing known as holistic computation. In a holistic computation the constituents of an object are acted upon simultaneously, rather than on a one-by-one basis as is typical in traditional symbolic systems. This thesis presents firstly, a critical examination of the concept of holistic computation, as described in the literature, along with a revised definition of the concept that aims to clarify the issues involved. In particular it is argued that holistic representations are not necessary for holistic computation and that holistic computation is not restricted to connectionist systems. Secondly, an evaluation of the capacity of a particular connectionist representation, the Sequential RAAM, to generate representations that support holistic symbol processing is presented. It is concluded that the Sequential RAAM is not as effective a vehicle for holistic symbol processing as it initially appeared, but that there may be some scope for improving its performance.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Woodbeck, Kris. „On neural processing in the ventral and dorsal visual pathways using the programmable Graphics Processing Unit“. Thesis, University of Ottawa (Canada), 2007. http://hdl.handle.net/10393/27660.

Der volle Inhalt der Quelle
Annotation:
We describe a system of biological inspiration that represents both pathways of the primate visual cortex. Our model is applied to multi-class object recognition and the creation of disparity maps from stereo images. All processing is done using the programmable graphics processor; we show that the Graphics Processing Unit (GPU) is a very natural platform for modeling the highly parallel nature of the brain. Each visual processing area in our model is closely based on the properties of the associated area within the brain. Our model covers areas V1 and V2, area V3 of the dorsal pathway and V4 of the ventral pathway of the primate visual cortex. Our model is able to programmatically tune its parameters to select the optimal cells with which to process any visual field. We define a biological feature descriptor that is appropriate for both multi-class object recognition and stereo disparity. We demonstrate that this feature descriptor is also able to match well under changes to rotation, scale and object pose. Our model is tested on the Caltech 101 object dataset and the Middlebury stereo dataset, performing well in both cases. We show that a significant speedup is achieved by using the GPU for all neural computation. Our results strengthen the case for using both the GPU and biologically-motivated techniques in computer vision.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

ZHU, WEILI. „The Application of Monte Carlo Sampling to Sequential Auction Games with Incomplete Information:-An Empirical Study“. NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20010930-170049.

Der volle Inhalt der Quelle
Annotation:

Abstract: Weili, Zhu. The Application of Monte Carlo Sampling to Sequential Auction Games with Incomplete Information: An Empirical Study. (Under the direction of Peter Wurman.) In this thesis, I develop a sequential auction model and design a bidding agent for it. This agent uses Monte Carlo sampling to "learn" from a series of sampled games. I use a game theory research toolset called GAMBIT to implement the model and collect experimental data. The data show the effect of different factors that impact our agent's performance, such as the sample size, the depth of the game tree, etc. The data also show that our agent performs well compared with a myopic strategic agent. I also discuss possible relaxations of different aspects of our auction model, and future research directions.

APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Chalmers, Kevin. „Investigating communicating sequential processes for Java to support ubiquitous computing“. Thesis, Edinburgh Napier University, 2009. http://researchrepository.napier.ac.uk/Output/3507.

Der volle Inhalt der Quelle
Annotation:
Ubiquitous Computing promises to enrich our everyday lives by enabling the environment to be enhanced via computational elements. These elements are designed to augment and support our lives, thus allowing us to perform our tasks and goals. The main facet of Ubiquitous Computing is that computational devices are embedded in the environment, and interact with users and themselves to provide novel and unique applications. Ubiquitous Computing requires an underlying architecture that helps to promote and control the dynamic properties and structures that the applications require. In this thesis, the Networking package of Communicating Sequential Processes for Java (JCSP) is examined to analyse its suitability as the underlying architecture for Ubiquitous Computing. The reason to use JCSP Networking as a case study is that one of the proposed models for Ubiquitous Computing, the π-Calculus, has the potential to have its abstractions implemented within JCSP Networking. This thesis examines some of the underlying properties of JCSP Networking and examines them within the context of Ubiquitous Computing. There is also an examination into the possibility of implementing the mobility constructs of the π-Calculus and similar mobility models within JCSP Networking. It has been found that some of the inherent properties of Java and JCSP Networking do cause limitations, and hence a generalisation of the architecture has been made that should provide greater suitability of the ideas behind JCSP Networking to support Ubiquitous Computing. The generalisation has resulted in the creation of a verified communication protocol that can be applied to any Communicating Process Architecture.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Liu, Ying. „Query optimization for distributed stream processing“. [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3274258.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2007.
Source: Dissertation Abstracts International, Volume: 68-07, Section: B, page: 4597. Adviser: Beth Plale. Title from dissertation home page (viewed Apr. 21, 2008).
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Hudson, James. „Processing large point cloud data in computer graphics“. Connect to this title online, 2003. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1054233187.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph. D.)--Ohio State University, 2003.
Title from first page of PDF file. Document formatted into pages; contains xix, 169 p.; also includes graphics (some col.). Includes bibliographical references (p. 159-169). Available online via OhioLINK's ETD Center
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Isawhe, Boladale Modupe. „Sequential frame synchronization over binary symmetrical channel for unequally distributed data symbols“. Thesis, Kingston University, 2017. http://eprints.kingston.ac.uk/39287/.

Der volle Inhalt der Quelle
Annotation:
Frame synchronization is a critical task in digital communications receivers as it enables the accurate decoding and recovery of transmitted information. Information transmitted over a wireless channel is represented as bit stream. The bit stream is typically organized into groups of bits which can be of the same or variable length, known as frames, with frames being demarcated prior to transmission by a known bit sequence. The task of the frame synchronizer in the receiver is then to correctly determine frame boundaries, given that the received bit stream is a possibly corrupted version of the transmitted bit stream due to the error-prone nature of the wireless channel. Bearing in mind that the problem of frame synchronization has been studied extensively for wireless communications, where frames have a known, constant length, this thesis examines and seeks to make a contribution to the problem of frame synchronization where frames are of variable, unknown lengths. This is a common occurrence in the transmission of multimedia information and in packet or burst mode communications. Furthermore, a uniform distribution of data symbols is commonly assumed in frame synchronization works as this simplifies analysis. In many practical situations however, this assumption may not hold true. An example is in bit streams generated in video sequences encoded through discrete cosine transform (DCT) and also in more recent video coding standards (H.264). In this work, we therefore propose a novel, optimal frame synchronization metric for transmission over a binary symmetric channel (BSC) with a known, unequal source data symbol distribution, and where frames are of unknown, varying lengths. We thus extend prior studies carried out for the additive White Gaussian noise (AWGN) channel. We also provide a performance evaluation for the derived metric, using simulations and by exact mathematical analysis. 
In addition, we provide an exact analysis for the performance evaluation of the commonly used hard correlation (HC) metric in the case where data symbols have a known, unequal distribution, which hitherto has not been available in the literature. We thus compare the performance of our proposed metric with that of the HC metric. Finally, the results of our study are applied to the investigation of cross-layer frame synchronization in the transmission of H.264 video over a Worldwide Interoperability for Microwave Access (WiMAX) system. We thus demonstrate that prior knowledge of the source data distribution can be exploited to enhance frame synchronization performance in cases where hard decision decoding is desirable.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Lee, Li 1975. „Distributed signal processing“. Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86436.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

McCormick, Martin (Martin Steven). „Digital pulse processing“. Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/78468.

Der volle Inhalt der Quelle
Annotation:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 71-74).
This thesis develops an exact approach for processing pulse signals from an integrate-and-fire system directly in the time-domain. Processing is deterministic and built from simple asynchronous finite-state machines that can perform general piecewise-linear operations. The pulses can then be converted back into an analog or fixed-point digital representation through a filter-based reconstruction. Integrate-and-fire is shown to be equivalent to the first-order sigma-delta modulation used in oversampled noise-shaping converters. The encoder circuits are well known and have simple construction using both current and next-generation technologies. Processing in the pulse-domain provides many benefits including: lower area and power consumption, error tolerance, signal serialization and simple conversion for mixed-signal applications. To study these systems, discrete-event simulation software and an FPGA hardware platform are developed. Many applications of pulse-processing are explored including filtering and signal processing, solving differential equations, optimization, the minsum / Viterbi algorithm, and the decoding of low-density parity-check codes (LDPC). These applications often match the performance of ideal continuous-time analog systems but only require simple digital hardware. Keywords: time-encoding, spike processing, neuromorphic engineering, bit-stream, delta-sigma, sigma-delta converters, binary-valued continuous-time, relaxation-oscillators.
by Martin McCormick.
S.M.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Eldar, Yonina Chana 1973. „Quantum signal processing“. Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/16805.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2002.
Includes bibliographical references (p. 337-346).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Quantum signal processing (QSP) as formulated in this thesis, borrows from the formalism and principles of quantum mechanics and some of its interesting axioms and constraints, leading to a novel paradigm for signal processing with applications in areas ranging from frame theory, quantization and sampling methods to detection, parameter estimation, covariance shaping and multiuser wireless communication systems. The QSP framework is aimed at developing new or modifying existing signal processing algorithms by drawing a parallel between quantum mechanical measurements and signal processing algorithms, and by exploiting the rich mathematical structure of quantum mechanics, but not requiring a physical implementation based on quantum mechanics. This framework provides a unifying conceptual structure for a variety of traditional processing techniques, and a precise mathematical setting for developing generalizations and extensions of algorithms. Emulating the probabilistic nature of quantum mechanics in the QSP framework gives rise to probabilistic and randomized algorithms. As an example we introduce a probabilistic quantizer and derive its statistical properties. Exploiting the concept of generalized quantum measurements we develop frame-theoretical analogues of various quantum-mechanical concepts and results, as well as new classes of frames including oblique frame expansions, that are then applied to the development of a general framework for sampling in arbitrary spaces. Building upon the problem of optimal quantum measurement design, we develop and discuss applications of optimal methods that construct a set of vectors.
(cont.) We demonstrate that, even for problems without inherent inner product constraints, imposing such constraints in combination with least-squares inner product shaping leads to interesting processing techniques that often exhibit improved performance over traditional methods. In particular, we formulate a new viewpoint toward matched filter detection that leads to the notion of minimum mean-squared error covariance shaping. Using this concept we develop an effective linear estimator for the unknown parameters in a linear model, referred to as the covariance shaping least-squares estimator. Applying this estimator to a multiuser wireless setting, we derive an efficient covariance shaping multiuser receiver for suppressing interference in multiuser communication systems.
by Yonina Chana Eldar.
Ph.D.
APA, Harvard, Vancouver, ISO and other citation styles
49

Golab, Lukasz. „Sliding Window Query Processing over Data Streams“. Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2930.

The full content of the source
Annotation:
Database management systems (DBMSs) have been used successfully in traditional business applications that require persistent data storage and an efficient querying mechanism. Typically, it is assumed that the data are static, unless explicitly modified or deleted by a user or application. Database queries are executed when issued and their answers reflect the current state of the data. However, emerging applications, such as sensor networks, real-time Internet traffic analysis, and on-line financial trading, require support for processing of unbounded data streams. The fundamental assumption of a data stream management system (DSMS) is that new data are generated continually, making it infeasible to store a stream in its entirety. At best, a sliding window of recently arrived data may be maintained, meaning that old data must be removed as time goes on. Furthermore, as the contents of the sliding windows evolve over time, it makes sense for users to ask a query once and receive updated answers over time.

This dissertation begins with the observation that the two fundamental requirements of a DSMS are dealing with transient (time-evolving) rather than static data and answering persistent rather than transient queries. One implication of the first requirement is that data maintenance costs have a significant effect on the performance of a DSMS. Additionally, traditional query processing algorithms must be re-engineered for the sliding window model because queries may need to re-process expired data and "undo" previously generated results. The second requirement suggests that a DSMS may execute a large number of persistent queries at the same time; therefore, there exist opportunities for resource sharing among similar queries.

The purpose of this dissertation is to develop solutions for efficient query processing over sliding windows by focusing on these two fundamental properties. In terms of the transient nature of streaming data, this dissertation is based upon the following insight. Although the data keep changing over time as the windows slide forward, the changes are not random; on the contrary, the inputs and outputs of a DSMS exhibit patterns in the way the data are inserted and deleted. It will be shown that knowledge of these patterns leads to an understanding of the semantics of persistent queries, lower window maintenance costs, and novel query processing, query optimization, and concurrency control strategies. In the context of the persistent nature of DSMS queries, the insight behind the proposed solution is that various queries may need to be refreshed at different times; therefore, synchronizing the refresh schedules of similar queries creates more opportunities for resource sharing.
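The insert-and-"undo" pattern the abstract describes, where expired tuples are removed and the result is updated incrementally rather than recomputed from scratch, can be sketched with a simple time-based sliding-window aggregate (an illustrative example, not code from the dissertation; the class name and interface are invented for this sketch):

```python
from collections import deque

class SlidingWindowSum:
    """Maintain the sum of values whose timestamps fall within the
    last `width` time units. Insertion and expiration each adjust the
    running total incrementally, so no full recomputation is needed."""
    def __init__(self, width):
        self.width = width
        self.window = deque()   # (timestamp, value) pairs, arrival order
        self.total = 0.0

    def insert(self, timestamp, value):
        self.window.append((timestamp, value))
        self.total += value
        self._expire(timestamp)

    def _expire(self, now):
        # "Undo" the contribution of tuples that slid out of the window.
        while self.window and self.window[0][0] <= now - self.width:
            _, old_value = self.window.popleft()
            self.total -= old_value
```

Because arrivals and expirations follow the predictable first-in, first-out pattern noted in the abstract, a deque suffices and each tuple is touched exactly twice: once on insertion and once on expiration.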
APA, Harvard, Vancouver, ISO and other citation styles
50

MacDonald, Darren T. „Image segment processing for analysis and visualization“. Thesis, University of Ottawa (Canada), 2008. http://hdl.handle.net/10393/27641.

The full content of the source
Annotation:
This thesis is a study of the probabilistic relationship between objects in an image and image appearance. We give a hierarchical, probabilistic criterion for the Bayesian segmentation of photographic images. We validate the segmentation against the Berkeley Segmentation Data Set, where human subjects were asked to partition digital images into segments each representing a 'distinguished thing'. We show that there exists a strong dependency between the hierarchical segmentation criterion, based on our assumptions about the visual appearance of objects, and the distribution of ground truth data. That is, if two pixels have similar visual properties then they will often have the same ground truth state. Segmentation accuracy is quantified by measuring the information cross-entropy between the ground truth probability distribution and an estimate obtained from the segmentation. We consider the proposed method for estimating joint ground truth probability to be an important tool for future image analysis and visualization work.
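The cross-entropy measure used to quantify segmentation accuracy can be sketched as follows (a minimal illustration of the general formula, not code from the dissertation; the function name and bit-based units are assumptions):

```python
import numpy as np

def cross_entropy(p_true, p_est, eps=1e-12):
    """Cross-entropy H(p, q) = -sum_i p_i * log2(q_i), in bits.
    Lower values mean the estimated distribution q is closer to the
    ground-truth distribution p; q is clipped away from zero to keep
    the logarithm finite."""
    p_true = np.asarray(p_true, dtype=float)
    p_est = np.clip(np.asarray(p_est, dtype=float), eps, 1.0)
    return float(-np.sum(p_true * np.log2(p_est)))
```

When the estimate matches the ground truth exactly, the cross-entropy reduces to the entropy of the ground-truth distribution, which is its minimum attainable value.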
APA, Harvard, Vancouver, ISO and other citation styles
