Dissertations / Theses on the topic 'Sequential processing (Computer science)'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Sequential processing (Computer science).'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Parashkevov, Atanas. "Advances in space and time efficient model checking of finite state systems." Title page, contents and abstract only, 2002. http://web4.library.adelaide.edu.au/theses/09PH/09php223.pdf.
Full text
Bari, Himanshu. "Design and implementation of a library to support the Common Component Architecture (CCA) over Legion." Diss., Online access via UMI:, 2004. http://wwwlib.umi.com/dissertations/fullcit/1424173.
Full text
Zhang, Shujian. "Evaluation in built-in self-test." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ34293.pdf.
Full text
Moffat, Nicholas. "Identifying and exploiting symmetry for CSP refinement checking." Thesis, University of Oxford, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.711620.
Full text
Pajic, Slobodan. "Sequential quadratic programming-based contingency constrained optimal power flow." Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0430103-152758.
Full text
Simpson, Andrew C. "Safety through security." Thesis, University of Oxford, 1996. http://ora.ox.ac.uk/objects/uuid:4a690347-46af-42a4-91fe-170e492a9dd1.
Full text
Koufogiannakis, Christos. "Approximation algorithms for covering problems." Diss., [Riverside, Calif.] : University of California, Riverside, 2009. http://proquest.umi.com/pqdweb?index=0&did=1957320821&SrchMode=2&sid=1&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268338860&clientId=48051.
Full text
Includes abstract. Title from first page of PDF file (viewed March 11, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 70-77). Also issued in print.
Bari, Wasimul. "Analyzing binary longitudinal data in adaptive clinical trials /." Internet access available to MUN users only, 2003. http://collections.mun.ca/u?/theses,167453.
Full text
Thomas, Jonathan. "Asynchronous Validity Resolution in Sequentially Consistent Shared Virtual Memory." Fogler Library, University of Maine, 2001. http://www.library.umaine.edu/theses/pdf/Thomas.pdf.
Full text
Shao, Yang. "Sequential organization in computational auditory scene analysis." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1190127412.
Full text
Van Delden, Sebastian Alexander. "Larger-first partial parsing." Doctoral diss., University of Central Florida, 2003. http://digital.library.ucf.edu/cdm/ref/collection/RTD/id/2038.
Larger-first partial parsing is a primarily top-down approach to partial parsing, the opposite of current easy-first, or primarily bottom-up, strategies. A rich partial tree structure is captured by an algorithm that assigns a hierarchy of structural tags to each of the input tokens in a sentence. Part-of-speech tags are first assigned to the words in a sentence by a part-of-speech tagger. A cascade of Deterministic Finite State Automata then uses this part-of-speech information to identify syntactic relations, primarily in descending order of their size. The cascade is divided into four specialized sections: (1) a Comma Network, which identifies syntactic relations associated with commas; (2) a Conjunction Network, which partially disambiguates phrasal conjunctions and fully disambiguates clausal conjunctions; (3) a Clause Network, which identifies non-comma-delimited clauses; and (4) a Phrase Network, which identifies the remaining base phrases in the sentence. Each automaton is capable of adding one or more levels of structural tags to the tokens in a sentence. The larger-first approach is compared against a well-known easy-first approach. The results indicate that this larger-first approach is capable of (1) producing a more detailed partial parse than an easy-first approach; (2) providing better containment of attachment ambiguity; (3) handling overlapping syntactic relations; and (4) achieving higher accuracy than the easy-first approach. The automata of each network were developed by an empirical analysis of several sources and are presented here in detail.
Ph.D.
Doctorate;
Department of Electrical Engineering and Computer Science
215 p.
xiv, 212 leaves, bound : ill. ; 28 cm.
Jang, Geon-Ho. "Design and implementation of pulse sequences for application in MRI /." free to MU campus, to others for purchase, 1999. http://wwwlib.umi.com/cr/mo/fullcit?p9953868.
Full text
Malkoc, Veysi. "Sequential alignment and position verification system for functional proton radiosurgery." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2535.
Full textDourado, Camila da Silva 1982. "Mineração de dados climáticos para análise de eventos extremos de precipitação." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/256801.
Full text
Dissertation (Master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Agrícola
Resumo: Understanding climate conditions and identifying the regions at greatest risk of extreme events, which may impact the various socioeconomic and environmental sectors, has become a major challenge. In Brazil, most occurrences of extreme events are related to hydrological phenomena. In particular, the state of Bahia shows high temporal and spatial climate variability, ranging from areas considered arid, or at risk of aridization (in the North), to regions with a humid climate along the coast. In recent years the state has experienced different extreme rainfall events, with floods in some areas and severe droughts in others. In this context, the objective of this work was to use data mining techniques to analyze the frequency of occurrences of extreme precipitation events from 1981 to 2010 in the state of Bahia, in order to support decision making on preventive and mitigating actions against socioeconomic and environmental impacts. To this end, precipitation data provided by the Hydrological Information System of the Agência Nacional de Águas were used. By applying the clustering task, using the k-means algorithm, the historical series of climate data were grouped into five pluviometrically homogeneous zones. Subsequently, analyses were performed at different time scales (annual, monthly and daily), identifying, through the quantile technique, upper and lower thresholds of rainfall intensity in each homogeneous region, for each time scale. At the monthly scale, sequential patterns of occurrences of positive and negative extreme events were identified over the thirty years. The results reinforce the potential of the data mining technique, with the k-means algorithm, to group homogeneous zones by rainfall similarity.
They further reveal, for all time scales used, high rainfall variability. The years recorded with the most occurrences of extreme negative events are in the 1990s, and the years recorded with the most extreme positive events are in the 2000s
Abstract: The knowledge of climate conditions, identifying areas with the greatest risk of occurrence of extreme events that may impact the various socioeconomic and environmental sectors, has become a major challenge. In Brazil, the largest occurrences of extreme events are related to hydrological phenomena. In particular, the state of Bahia presents high temporal and spatial variability of climate, from areas considered arid, or at risk of becoming arid (in the North), to regions with a humid climate along the coast. The state has recently been the target of different extreme rainfall events, with floods in some areas and severe droughts in others. In this context, the aim of this study was to use data mining techniques to analyze the frequency of occurrences of extreme precipitation events during the period from 1981 to 2010 in the state of Bahia, in order to support decision making regarding preventive and mitigating actions against environmental and socioeconomic impacts. To accomplish that, precipitation data supplied by the Hydrological Information System of the National Water Agency were used. By applying the task of grouping (clustering), by means of the k-means algorithm, the time series of climate data were grouped into five homogeneous rainfall zones. Subsequently, analyses were performed on different time scales (annual, monthly and daily), identifying by quantile methods the upper and lower thresholds of rainfall intensity in each homogeneous region, for each time scale. At the monthly scale, sequential patterns of occurrences of extreme positive and negative events were identified over the thirty years. The results reinforce the potential of the data mining technique to group homogeneous zones by similarity of rainfall, using the k-means algorithm. They also reveal, for all time scales used, high rainfall variability. The years with the most recorded extreme negative events are in the 1990s, and those with the most extreme positive events are in the 2000s
Master's
Planejamento e Desenvolvimento Rural Sustentável (Sustainable Rural Planning and Development)
Master in Agricultural Engineering
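The clustering-and-thresholding workflow this abstract describes can be sketched in a few lines. This is a minimal illustration on synthetic station data: the two-cluster split, the gamma-distributed rainfall series, and the 5th/95th-percentile thresholds are assumptions made for the sketch (the thesis clusters real station series into five zones), not the thesis's code.

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Plain k-means with deterministic farthest-point initialization."""
    centroids = X[[0]]
    for _ in range(k - 1):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2).min(axis=1)
        centroids = np.vstack([centroids, X[d.argmax()]])
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Synthetic monthly rainfall climatologies (mm/month) for 20 stations:
# ten wetter coastal-like stations and ten drier semi-arid-like ones.
rng = np.random.default_rng(1)
stations = np.vstack([rng.gamma(8.0, 20.0, size=(10, 12)),
                      rng.gamma(2.0, 10.0, size=(10, 12))])

labels, _ = kmeans(stations, k=2)

# Quantile thresholds per homogeneous zone: above the 95th percentile is
# treated as an extreme positive event, below the 5th as extreme negative.
thresholds = {}
for zone in range(2):
    series = stations[labels == zone].ravel()
    thresholds[zone] = np.quantile(series, [0.05, 0.95])
    print(f"zone {zone}: lower={thresholds[zone][0]:.1f} mm, "
          f"upper={thresholds[zone][1]:.1f} mm")
```

With well-separated wet and dry regimes, the clustering recovers the two homogeneous zones and each zone gets its own extreme-event thresholds, mirroring the per-zone quantile analysis in the abstract.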
Uijt de Haag, Maarten. "An investigation into the application of block processing techniques for the Global Positioning System." Ohio : Ohio University, 1999. http://www.ohiolink.edu/etd/view.cgi?ohiou1181171187.
Full text
Mazur, Tomasz Krzysztof. "Model Checking Systems with Replicated Components using CSP." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:6694fac7-00b4-4b25-b054-813d7a6a4cdb.
Full text
Costello, Roger Lee. "Responsive sequential processes /." The Ohio State University, 1988. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487588249825353.
Full text
Nelson, Alexander J. "Software signature derivation from sequential digital forensic analysis." Thesis, University of California, Santa Cruz, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10140317.
Full text
Hierarchical storage system namespaces are notorious for their immense size, which is a significant hindrance to any computer inspection. File systems for computers start with tens of thousands of files, and the Registries of Windows computers start with hundreds of thousands of cells. An analysis of a storage system, whether for digital forensics or locating old data, depends on being able to reduce the namespaces down to the features of interest. Typically, having such large volumes to analyze is seen as a challenge to identifying relevant content. However, if the origins of files can be identified—particularly dividing between software and human origins—large counts of files become a boon to profiling how a computer has been used. It becomes possible to identify software that has influenced the computer's state, which gives an important overview of storage system contents not available to date.
In this work, I apply document search to observed changes in a class of forensic artifact, cell names of the Windows Registry, to identify effects of software on storage systems. Using the search model, a system's Registry becomes a query for matching software signatures. To derive signatures, file system differential analysis is extended from between two storage system states to many sequences of states. The workflow that creates these signatures is an example of analytics on data lineage, from branching data histories. The signatures independently indicate past presence or usage of software, based on consistent creation of measurably distinct artifacts. A signature search engine is demonstrated against a machine with a selected set of applications installed and executed. The optimal search engine according to that machine is then turned against a separate corpus of machines with a set of present applications identified by several non-Registry forensic artifact sources, including the file systems, memory, and network captures. The signature search engine corroborates those findings, using only the Windows Registry.
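The signature-derivation idea in the abstract above (diff a storage state before and after software runs, keep the artifacts created in every run, then match a system against that set) can be sketched as follows. All Registry cell names and the 0.8 match threshold are invented for illustration; the thesis's actual workflow uses a document-search model over real differential analyses.

```python
# Hypothetical sketch of sequence-based signature derivation: cell names
# created in every observed run of an application form its signature.
def created(before, after):
    """Cells present after a run but not before: the run's new artifacts."""
    return after - before

# Two observed runs of the same (invented) application "Foo":
runs = [
    ({"HKLM\\Sys\\A"},
     {"HKLM\\Sys\\A", "HKLM\\Soft\\Foo\\Ver", "HKLM\\Soft\\Foo\\Path", "HKCU\\Tmp\\1"}),
    ({"HKLM\\Sys\\B"},
     {"HKLM\\Sys\\B", "HKLM\\Soft\\Foo\\Ver", "HKLM\\Soft\\Foo\\Path", "HKCU\\Tmp\\9"}),
]

# Keep only artifacts created consistently across runs (drops the
# run-specific temp cells), yielding a measurably distinct signature.
signature = set.intersection(*(created(b, a) for b, a in runs))

def matches(system_cells, signature, threshold=0.8):
    """Treat a system's Registry as a query against the signature."""
    return len(signature & system_cells) / len(signature) >= threshold
```

The intersection step is the toy analogue of extending two-state differential analysis to many sequences of states: only artifacts that survive every observed history enter the signature.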
Hsieh, Wilson Cheng-Yi. "Extracting parallelism from sequential programs." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/14752.
Full text
King, Myron Decker. "An efficient sequential BTRS implementation." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/46603.
Full text
Includes bibliographical references (leaves 73-74).
This thesis describes the implementation of BTRS, a language based on guarded atomic actions (GAA). The input language to the compiler which forms the basis of this work is a hierarchical tree of modules containing state, interface methods, and rules which fire atomically to cause state transitions. Since a schedule need not be specified, the program description is inherently nondeterministic, though the BTRS language does allow the programmer to remove nondeterminism by specifying varying degrees of scheduling constraints. The compiler outputs a (sequential) single-threaded C implementation of the input description, choosing a static schedule which adheres to the input constraints. The resulting work is intended to be used as the starting point for research into efficient software synthesis from guarded atomic actions, and ultimately a hardware-inspired programming methodology for writing parallel software. This compiler is currently being used to generate software for a heterogeneous system in which the software and hardware components are both specified in BTRS.
by Myron Decker King.
S.M.
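As an illustration of the guarded-atomic-actions model the abstract describes (rules that fire atomically when their guards hold, executed under a static schedule), here is a minimal sketch. It is a toy interpreter in Python, not output of the BTRS compiler, and the GCD rules are invented for the example.

```python
# Each rule is a (name, guard, action) triple over a shared state dict.
rules = [
    ("swap",     lambda s: s["a"] < s["b"] and s["b"] != 0,
                 lambda s: s.update(a=s["b"], b=s["a"])),
    ("subtract", lambda s: s["a"] >= s["b"] and s["b"] != 0,
                 lambda s: s.update(a=s["a"] - s["b"])),
]

def run(state, rules):
    """Fire enabled rules under a fixed round-robin (static) schedule
    until no guard holds; each action executes atomically."""
    fired = True
    while fired:
        fired = False
        for _, guard, action in rules:
            if guard(state):
                action(state)
                fired = True
    return state

print(run({"a": 36, "b": 24}, rules)["a"])   # prints 12, i.e. gcd(36, 24)
```

Picking the round-robin order here plays the role of the compiler's static schedule: the rule system itself is nondeterministic, and any fair schedule reaches the same final state for this example.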
Xu, Zhi S. M. Massachusetts Institute of Technology. "Private sequential search and optimization." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112054.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 107-108).
We propose and analyze two models to study an intrinsic trade-off between privacy and query complexity in online settings: 1. Our first private optimization model involves an agent aiming to minimize an objective function expressed as a weighted sum of finitely many convex cost functions, where the weights capture the importance the agent assigns to each cost function. The agent possesses as her private information the weights, but does not know the cost functions, and must obtain information on them by sequentially querying an external data provider. The objective of the agent is to obtain an accurate estimate of the optimal solution, x*, while simultaneously ensuring privacy, by making x* difficult to infer for the data provider, who does not know the agent's private weights but only observes the agent's queries. 2. The second private search model we study is also about protecting privacy while searching for an object. It involves an agent attempting to determine a scalar true value, x*, based on querying an external database, whose response indicates whether the true value is larger than or less than the agent's submitted queries. The objective of the agent is again to obtain an accurate estimate of the true value, x*, while simultaneously hiding it from an adversary who observes the submitted queries but not the responses. The main results of this thesis provide tight upper and lower bounds on the agent's query complexity (i.e., number of queries) as a function of desired levels of accuracy and privacy, for both models. We also explicitly construct query strategies whose worst-case query complexity is optimal up to an additive constant.
by Zhi Xu.
S.M.
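For the second model above, the non-private baseline is ordinary bisection over threshold queries, whose ⌈log2(1/ε)⌉ query count is the floor that any privacy-preserving strategy must exceed. A sketch of that baseline only (the true value x* and the accuracy ε below are arbitrary choices for the example):

```python
import math

def bisect_search(oracle, eps):
    """Locate x* in [0, 1) to accuracy eps via queries "is x* > q?".
    Returns (estimate, number_of_queries)."""
    lo, hi, queries = 0.0, 1.0, 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        queries += 1
        if oracle(mid):          # response: True iff x* > mid
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2, queries

x_star = 0.377
est, q = bisect_search(lambda t: x_star > t, eps=1e-3)
```

Note how revealing this strategy is: the adversary who sees the query sequence can reproduce the same interval-halving and recover x* to the same accuracy, which is exactly the leakage the thesis's obfuscated strategies pay extra queries to avoid.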
Hebb, Christopher Louis. "Website usability evaluation using sequential analysis." [Bloomington, Ind.] : Indiana University, 2005. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3167801.
Full text
Source: Dissertation Abstracts International, Volume: 66-04, Section: A, page: 1328. Adviser: Theodore W. Frick. Title from dissertation home page (viewed Nov. 13, 2006).
Jin, Stone Qiaodan (Qiaodan Jordan). "An ARM-based sequential sampling oscilloscope." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/100591.
Full text
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 141).
Sequential equivalent-time sampling allows a system to acquire repetitive waveforms with frequencies beyond the Nyquist rate. This thesis documents the prototype of a digital ARM-based sequential sampling oscilloscope with peripheral hardware and software. The designs of, and obstacles encountered with, various analog circuits and signal processing methods are discussed. By means of sequential sampling, alongside analog and digital signal processing techniques, we are able to use a 3 MSPS ADC to achieve an equivalent capture rate of 24 MSPS. For sinusoids between 6 and 12 MHz, the acquired waveforms display at least 10 dB of SNR improvement for unfiltered signals and at least 60 dB of SNR improvement for aggressively filtered signals.
by Qiaodan (Jordan) Jin Stone.
M. Eng.
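The equivalent-time trick the abstract relies on can be simulated directly: trigger in step with the repetitive waveform, add a small extra delay to each successive sample, and the slow ADC's samples trace out the signal at a much higher equivalent rate. The hold-off and delay values below are assumptions for the sketch; only the 3 MSPS real / 24 MSPS equivalent pairing comes from the abstract.

```python
import math

f_sig = 10e6        # repetitive 10 MHz input, well above Nyquist for a 3 MSPS ADC
hold_off = 4        # sample once every 4 signal periods -> 2.5 MSPS real rate
delta = 1 / 24e6    # per-sample delay increment -> 24 MSPS equivalent rate
N = 48

# Each trigger fires synchronously with the waveform, so sample n taken at
# t_n = n * (hold_off / f_sig) + n * delta sees the same value as a direct
# sample at the "equivalent time" n * delta within one period.
acquired = [math.sin(2 * math.pi * f_sig * (n * hold_off / f_sig + n * delta))
            for n in range(N)]
equivalent = [math.sin(2 * math.pi * f_sig * n * delta) for n in range(N)]
```

Because the waveform repeats exactly between triggers, the slow acquisition and the fast equivalent-time reconstruction agree sample for sample; that is the entire basis for exceeding the ADC's native rate on repetitive signals.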
Macindoe, Owen. "Sidekick agents for sequential planning problems." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/84892.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 127-131).
Effective AI sidekicks must solve the interlinked problems of understanding their human collaborator's intentions and planning actions to support them. This thesis explores a range of approximate but tractable approaches to planning for AI sidekicks based on decision-theoretic methods that reason about how the sidekick's actions will affect its beliefs about unobservable states of the world, including its collaborator's intentions. In doing so we extend an existing body of work on decision-theoretic models of assistance to support information gathering and communication actions. We also apply Monte Carlo tree search methods for partially observable domains to the problem and introduce an ensemble-based parallelization strategy. These planning techniques are demonstrated across a range of video game domains.
by Owen Macindoe.
Ph.D.
Parvathala, Rajeev (Rajeev Krishna). "Representation learning for non-sequential data." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119581.
Full text
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 85-90).
In this thesis, we design and implement new models to learn representations for sets and graphs. Typically, data collections in machine learning problems are structured as arrays or sequences, with sequential relationships between successive elements. Sets and graphs both break this common mold of data collections that have been extensively studied in the machine learning community. First, we formulate a new method for performing diverse subset selection using a neural set function approximation method. This method relies on the deep sets idea, which says that any set function s(X) has a universal approximator of the form f(∑x∈X φ(x)). Second, we design a new variational autoencoding model for highly structured, sparse graphs, such as chemical molecules. This method uses the graphon, a probabilistic graphical model from mathematics, as inspiration for the decoder. Furthermore, an adversary is employed to force the distribution of vertex encodings to follow a target distribution, so that new graphs can be generated by sampling from this target distribution. Finally, we develop a new framework for performing encoding of graphs in a hierarchical manner. This approach partitions an input graph into multiple connected subgraphs, and creates a new graph where each node represents one such subgraph. This allows the model to learn a higher level representation for graphs, and increases robustness of graphical encoding to varying graph input sizes.
by Rajeev Parvathala.
M. Eng.
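The deep-sets form f(∑x∈X φ(x)) used in the first contribution is easy to check for permutation invariance, the property that makes it a set function rather than a sequence function. A minimal sketch with random, untrained weights (all dimensions here are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.normal(size=(3, 8))   # per-element embedding phi (random, untrained)
W_rho = rng.normal(size=(8, 1))   # outer function f applied to the pooled sum

def set_function(X):
    """Deep-sets form f(sum_{x in X} phi(x)) for a set of 3-d elements."""
    pooled = np.tanh(X @ W_phi).sum(axis=0)   # sum-pooling discards element order
    return np.tanh(pooled @ W_rho)

X = rng.normal(size=(5, 3))
print(np.allclose(set_function(X), set_function(X[::-1])))   # True
```

Because the only interaction between elements is the commutative sum, any reordering of the rows of X yields the same output; learning then amounts to training φ and f end to end.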
Arıkan, Erdal. "Sequential decoding for multiple access channels." Thesis, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/15190.
Full text
MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.
Bibliography: leaves 111-112.
by Erdal Arikan.
Ph.D.
Koita, Rizwan R. (Rizwan Rahim). "Strategies for sequential design of experiments." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/35998.
Full text
Liando, Johnny 1964. "Enhancement and evaluation of SCIRTSS (sequential circuits test search system) on ISCAS'89 benchmark sequential circuits." Thesis, The University of Arizona, 1990. http://hdl.handle.net/10150/278283.
Full text
Wang, Jonathan M. Eng Massachusetts Institute of Technology. "Pentimento : non-sequential authoring of handwritten lectures." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/100619.
Full text
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Pentimento is software developed under the supervision of Fredo Durand in the Computer Graphics Group at CSAIL that focuses on dramatically simplifying the creation of online educational video lectures such as those of Khan Academy. In these videos, the educator draws on a virtual whiteboard while speaking. Currently, the software educators use is rudimentary, offering only basic screen and voice recording. A downside of this approach is that the educator must get everything right on the first attempt: there is no way to edit the content captured during a screen recording afterwards without resorting to unnecessarily complex video editing software, and even then the user cannot access the original drawing content used to create the video. The overall goal of this project is to develop lecture recording software that uses a vector-based representation of the user's sketching, allowing the original drawing content to be edited retroactively. The goal for my contribution is to implement components for a web-based version of Pentimento, which will allow the application to reach a broader range of users: an HTML5- and JavaScript-based application that can run on many of the popular web browsers in use today. One of my main focuses in this project is the audio recording and editing component, including its user interface and its integration with the rest of the software.
by Jonathan Wang.
M. Eng.
Dernoncourt, Franck. "Sequential short-text classification with neural networks." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111880.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 69-79).
Medical practice too often fails to incorporate recent medical advances. The two main reasons are that over 25 million scholarly medical articles have been published, and medical practitioners do not have the time to perform literature reviews. Systematic reviews aim at summarizing published medical evidence, but writing them requires tremendous human efforts. In this thesis, we propose several natural language processing methods based on artificial neural networks to facilitate the completion of systematic reviews. In particular, we focus on short-text classification, to help authors of systematic reviews locate the desired information. We introduce several algorithms to perform sequential short-text classification, which outperform state-of-the-art algorithms. To facilitate the choice of hyperparameters, we present a method based on Gaussian processes. Lastly, we release PubMed 20k RCT, a new dataset for sequential sentence classification in randomized control trial abstracts.
by Franck Dernoncourt.
Ph. D.
Gu, Ronghui. "An Extensible Architecture for Building Certified Sequential and Concurrent OS Kernels." Thesis, Yale University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10584948.
Full text
Operating System (OS) kernels form the backbone of all system software. They have a significant impact on the resilience, extensibility, and security of today's computing hosts. However, modern OS kernels are complex and may consist of a multitude of sequential or concurrent abstraction layers; unfortunately, abstraction layers have almost never been formally specified or verified. This makes it difficult to establish strong correctness properties, and to scale program verification across multiple abstraction layers.
Recent efforts have demonstrated the feasibility of building large scale formal proofs of functional correctness for simple general-purpose kernels, but the cost of such verification is still prohibitive, and it is unclear how to use their verified kernels to reason about user-level programs and other kernel extensions. Furthermore, they have ignored the issues of concurrency, which include not just user- and I/O concurrency on a single core, but also multicore parallelism with fine-grained locking.
This thesis presents CertiKOS, an extensible architecture for building certified sequential and concurrent OS kernels. CertiKOS proposes a new compositional framework showing how to formally specify, program, verify, and compose concurrent abstraction layers. We present a novel language-based account of abstraction layers and show that they correspond to a strong form of abstraction over a particularly rich class of specifications that we call deep specifications. We show how to instantiate the formal layer-based framework in realistic programming languages such as C and assembly, and how to adapt the CompCert verified compiler to compile certified C layers such that they can be linked with assembly layers. We can then build and compose certified abstraction layers to construct various certified OS kernels, each of which guarantees a strong contextual refinement property for every kernel function, i.e., the implementation of each such function will behave like its specification under any kernel/user context with any valid interleaving.
To demonstrate the effectiveness of our new framework, we have successfully implemented and verified multiple practical sequential and concurrent OS kernels. The most realistic sequential hypervisor kernel is written in 6000 lines of C and x86 assembly, and can boot a version of Linux as a guest. The general-purpose concurrent OS kernel with fine-grained locking can boot on a quad-core machine. For all the certified kernels, their abstraction layers and (contextual) functional correctness properties are specified and verified in the Coq proof assistant.
Sundaresan, Tejas G. "Sequential modeling for mortality prediction in the ICU." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113105.
Full text
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 85-89).
Severity of illness scores are commonly used in critical care medicine to guide treatment decisions and benchmark the quality of medical care. These scores operate in part by predicting patient mortality in the ICU using physiological variables including lab values, vital signs, and admission information. However, existing evidence suggests that current mortality predictors are less performant on patients who have an especially high risk of mortality in the ICU. This thesis seeks to reconcile this difference by developing a custom mortality predictor for high-risk patients, in a process termed sequential modeling. Starting with a base set of features derived from the APACHE IV score, this thesis details the engineering of more complex features tailored to the high-risk prediction task and the development of a logistic regression model trained on the Philips eICU-CRD dataset. This high-risk model is shown to be more performant than a baseline severity of illness score, APACHE IV, on the high-risk subpopulation. Moreover, a combination of the baseline severity of illness score and the high-risk model is shown to be better calibrated and more performant on patients of all risk types. Lastly, I show that this secondary customization approach has useful applications not only in the general population, but in specific patient subpopulations as well. This thesis thus offers a new perspective and strategy for mortality prediction in the ICU, and, when taken in context with the increasing digitization of patient medical records, offers a more personalized predictive model in the ICU.
by Tejas G. Sundaresan.
M. Eng.
Goldberg, Andrew Vladislav. "Efficient graph algorithms for sequential and parallel computers." Thesis, Massachusetts Institute of Technology, 1987. http://hdl.handle.net/1721.1/14912.
Full text
MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.
Bibliography: p. 117-123.
by Andrew Vladislav Goldberg.
Ph.D.
Maurer, Patrick M. (Patrick Michael). "Sequential decoding of trellis codes through ISI channels." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38820.
Full text
Includes bibliographical references (leaves 56-57).
by Patrick M. Maurer.
M.S.
Alidina, Mazhar Murtaza. "Precomputation-based sequential logic optimization for low power." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/36454.
Full text
Includes bibliographical references (leaves 69-71).
by Mazhar Murtaza Alidina.
M.S.
Al-Hajri, Muhannad Khaled. "Object Tracking Sensor Networks using Sequential Patterns in an Energy Efficient Prediction Technique." Thesis, University of Ottawa (Canada), 2010. http://hdl.handle.net/10393/28880.
Full text
Mirikitani, Derrick Takeshi. "Sequential recurrent connectionist algorithms for time series modeling of nonlinear dynamical systems." Thesis, Goldsmiths College (University of London), 2010. http://research.gold.ac.uk/3239/.
Full text
Huggins, Jonathan H. (Jonathan Hunter). "An information-theoretic analysis of resampling in Sequential Monte Carlo." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91033.
Full text
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 56-57).
Sequential Monte Carlo (SMC) methods form a popular class of Bayesian inference algorithms. While originally applied primarily to state-space models, SMC is increasingly being used as a general-purpose Bayesian inference tool. Traditional analyses of SMC algorithms focus on their usage for approximating expectations with respect to the posterior of a Bayesian model. However, these algorithms can also be used to obtain approximate samples from the posterior distribution of interest. We investigate the asymptotic and non-asymptotic properties of SMC from this sampling viewpoint. Let P be a distribution of interest, such as a Bayesian posterior, and let P̂ be a random estimator of P generated by an SMC algorithm. We study ... i.e., the law of a sample drawn from P̂, as the number of particles tends to infinity. We give convergence rates of the Kullback-Leibler divergence KL ... as well as necessary and sufficient conditions for the resampled version of P̂ to asymptotically dominate the non-resampled version from this KL divergence perspective. Versions of these results are given for both the full joint and the filtering settings. In the filtering case we also provide time-uniform bounds under a natural mixing condition. Our results open up the possibility of extending recent analyses of adaptive SMC algorithms for expectation approximation to the sampling setting.
by Jonathan H. Huggins.
S.M.
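The resampling step analyzed in the Huggins thesis can be illustrated with a minimal multinomial-resampling sketch. This is a generic textbook construction, not code from the thesis; the Gaussian particles and weight function are invented example data:

```python
import numpy as np

def resample_multinomial(particles, log_weights, rng):
    """Draw N equally weighted particles from the weighted empirical distribution."""
    w = np.exp(log_weights - log_weights.max())  # stabilized normalization
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], np.zeros(len(particles))  # log-weights reset to uniform

rng = np.random.default_rng(0)
particles = rng.normal(size=1000)
# Skewed weights concentrate mass on a few particles (weight degeneracy),
# which is what resampling is meant to counteract.
log_w = -0.5 * (particles - 2.0) ** 2
new_particles, new_log_w = resample_multinomial(particles, log_w, rng)
```

After resampling, a uniformly drawn particle is an approximate sample from the (re)weighted target, which is the "sampling viewpoint" the abstract describes.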
Hammerton, James Alistair. "Exploiting holistic computation : an evaluation of the sequential RAAM." Thesis, University of Birmingham, 1999. http://etheses.bham.ac.uk//id/eprint/4948/.
Full text
Woodbeck, Kris. "On neural processing in the ventral and dorsal visual pathways using the programmable Graphics Processing Unit." Thesis, University of Ottawa (Canada), 2007. http://hdl.handle.net/10393/27660.
Full textZHU, WEILI. "The Application of Monte Carlo Sampling to Sequential Auction Games with Incomplete Information:-An Empirical Study." NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20010930-170049.
Full text
Abstract: Zhu, Weili. The Application of Monte Carlo Sampling to Sequential Auction Games with Incomplete Information: An Empirical Study. (Under the direction of Peter Wurman.) In this thesis, I develop a sequential auction model and design a bidding agent for it. This agent uses Monte Carlo sampling to "learn" from a series of sampled games. I use a game-theory research toolset called GAMBIT to implement the model and collect experimental data. The data show the effect of different factors on the agent's performance, such as the sample size and the depth of the game tree, and also show that the agent performs well compared with a myopic strategic agent. I also discuss possible relaxations of different aspects of the auction model, and future research directions.
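The Monte Carlo bidding idea can be sketched in a toy first-price auction with i.i.d. Uniform(0, 1) opponent values. This is a stand-in model, not the thesis's GAMBIT game-tree implementation; all names and parameters are assumptions:

```python
import random

def expected_payoff(bid, my_value, n_opponents, n_samples=5000, rng=None):
    """Monte Carlo estimate of the expected payoff of a bid: sample rival
    valuations repeatedly and average the realized profit."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    total = 0.0
    for _ in range(n_samples):
        highest_rival = max(rng.random() for _ in range(n_opponents))
        if bid > highest_rival:          # we win and pay our own bid
            total += my_value - bid
    return total / n_samples

# Compare candidate bids by sampled payoff instead of solving the game exactly.
best = max([0.2, 0.5, 0.8], key=lambda b: expected_payoff(b, 0.9, 2))
```

As in the thesis, the sample size trades estimation accuracy against computation: more sampled games tighten the payoff estimates that drive the bid choice.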
Chalmers, Kevin. "Investigating communicating sequential processes for Java to support ubiquitous computing." Thesis, Edinburgh Napier University, 2009. http://researchrepository.napier.ac.uk/Output/3507.
Full text
Liu, Ying. "Query optimization for distributed stream processing." [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3274258.
Full text
Source: Dissertation Abstracts International, Volume: 68-07, Section: B, page: 4597. Adviser: Beth Plale. Title from dissertation home page (viewed Apr. 21, 2008).
Hudson, James. "Processing large point cloud data in computer graphics." Connect to this title online, 2003. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1054233187.
Full text
Title from first page of PDF file. Document formatted into pages; contains xix, 169 p.; also includes graphics (some col.). Includes bibliographical references (p. 159-169). Available online via OhioLINK's ETD Center.
Isawhe, Boladale Modupe. "Sequential frame synchronization over binary symmetrical channel for unequally distributed data symbols." Thesis, Kingston University, 2017. http://eprints.kingston.ac.uk/39287/.
Full text
Lee, Li 1975. "Distributed signal processing." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86436.
Full text
McCormick, Martin (Martin Steven). "Digital pulse processing." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/78468.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 71-74).
This thesis develops an exact approach for processing pulse signals from an integrate-and-fire system directly in the time domain. Processing is deterministic and built from simple asynchronous finite-state machines that can perform general piecewise-linear operations. The pulses can then be converted back into an analog or fixed-point digital representation through a filter-based reconstruction. Integrate-and-fire is shown to be equivalent to the first-order sigma-delta modulation used in oversampled noise-shaping converters. The encoder circuits are well known and have simple construction using both current and next-generation technologies. Processing in the pulse domain provides many benefits, including lower area and power consumption, error tolerance, signal serialization, and simple conversion for mixed-signal applications. To study these systems, discrete-event simulation software and an FPGA hardware platform are developed. Many applications of pulse processing are explored, including filtering and signal processing, solving differential equations, optimization, the min-sum / Viterbi algorithm, and the decoding of low-density parity-check (LDPC) codes. These applications often match the performance of ideal continuous-time analog systems but require only simple digital hardware.
Keywords: time-encoding, spike processing, neuromorphic engineering, bit-stream, delta-sigma, sigma-delta converters, binary-valued continuous-time, relaxation oscillators.
by Martin McCormick.
S.M.
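The equivalence the abstract notes between integrate-and-fire encoding and first-order sigma-delta modulation can be sketched in discrete time. This is a minimal software illustration, not the thesis's asynchronous hardware; the threshold value and input are invented:

```python
def integrate_and_fire(samples, threshold=1.0):
    """First-order sigma-delta-style encoder: accumulate the input and emit
    a pulse (1) whenever the integrator crosses the threshold, subtracting
    the threshold on each firing so the residual error carries forward."""
    acc, pulses = 0.0, []
    for x in samples:
        acc += x
        if acc >= threshold:
            acc -= threshold
            pulses.append(1)
        else:
            pulses.append(0)
    return pulses

# A constant input of 0.25 fires once every four samples: the pulse
# density encodes the input amplitude, as in oversampled converters.
out = integrate_and_fire([0.25] * 8)
```

A lowpass (filter-based) reconstruction of such a pulse train recovers an approximation of the original amplitude, which is the conversion path the abstract describes.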
Eldar, Yonina Chana 1973. "Quantum signal processing." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/16805.
Full text
Includes bibliographical references (p. 337-346).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Quantum signal processing (QSP) as formulated in this thesis, borrows from the formalism and principles of quantum mechanics and some of its interesting axioms and constraints, leading to a novel paradigm for signal processing with applications in areas ranging from frame theory, quantization and sampling methods to detection, parameter estimation, covariance shaping and multiuser wireless communication systems. The QSP framework is aimed at developing new or modifying existing signal processing algorithms by drawing a parallel between quantum mechanical measurements and signal processing algorithms, and by exploiting the rich mathematical structure of quantum mechanics, but not requiring a physical implementation based on quantum mechanics. This framework provides a unifying conceptual structure for a variety of traditional processing techniques, and a precise mathematical setting for developing generalizations and extensions of algorithms. Emulating the probabilistic nature of quantum mechanics in the QSP framework gives rise to probabilistic and randomized algorithms. As an example we introduce a probabilistic quantizer and derive its statistical properties. Exploiting the concept of generalized quantum measurements we develop frame-theoretical analogues of various quantum-mechanical concepts and results, as well as new classes of frames including oblique frame expansions, that are then applied to the development of a general framework for sampling in arbitrary spaces. Building upon the problem of optimal quantum measurement design, we develop and discuss applications of optimal methods that construct a set of vectors.
(cont.) We demonstrate that, even for problems without inherent inner product constraints, imposing such constraints in combination with least-squares inner product shaping leads to interesting processing techniques that often exhibit improved performance over traditional methods. In particular, we formulate a new viewpoint toward matched filter detection that leads to the notion of minimum mean-squared error covariance shaping. Using this concept we develop an effective linear estimator for the unknown parameters in a linear model, referred to as the covariance shaping least-squares estimator. Applying this estimator to a multiuser wireless setting, we derive an efficient covariance shaping multiuser receiver for suppressing interference in multiuser communication systems.
by Yonina Chana Eldar.
Ph.D.
Golab, Lukasz. "Sliding Window Query Processing over Data Streams." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2930.
Full text
This dissertation begins with the observation that the two fundamental requirements of a DSMS are dealing with transient (time-evolving) rather than static data and answering persistent rather than transient queries. One implication of the first requirement is that data maintenance costs have a significant effect on the performance of a DSMS. Additionally, traditional query processing algorithms must be re-engineered for the sliding window model because queries may need to re-process expired data and "undo" previously generated results. The second requirement suggests that a DSMS may execute a large number of persistent queries at the same time; therefore, there exist opportunities for resource sharing among similar queries.
The purpose of this dissertation is to develop solutions for efficient query processing over sliding windows by focusing on these two fundamental properties. In terms of the transient nature of streaming data, this dissertation is based upon the following insight. Although the data keep changing over time as the windows slide forward, the changes are not random; on the contrary, the inputs and outputs of a DSMS exhibit patterns in the way the data are inserted and deleted. It will be shown that knowledge of these patterns leads to an understanding of the semantics of persistent queries, lower window maintenance costs, and novel query processing, query optimization, and concurrency control strategies. In the context of the persistent nature of DSMS queries, the insight behind the proposed solution is that various queries may need to be refreshed at different times; therefore, synchronizing the refresh schedules of similar queries creates more opportunities for resource sharing.
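The insert/expire pattern behind sliding-window maintenance can be sketched for a simple time-based windowed sum. This is a hypothetical illustration of the general technique; the class and parameter names are not from the dissertation:

```python
from collections import deque

class SlidingWindowSum:
    """Time-based sliding-window aggregate maintained incrementally:
    insertions add to the running sum and expirations subtract ("undo")
    old contributions, instead of recomputing over the whole window."""

    def __init__(self, window):
        self.window = window
        self.items = deque()   # (timestamp, value) in arrival order
        self.total = 0.0

    def insert(self, ts, value):
        self.items.append((ts, value))
        self.total += value
        self._expire(ts)

    def _expire(self, now):
        # Expirations follow the arrival pattern: oldest tuples leave first.
        while self.items and self.items[0][0] <= now - self.window:
            _, old = self.items.popleft()
            self.total -= old

w = SlidingWindowSum(window=10)
for ts, v in [(1, 5.0), (4, 2.0), (12, 3.0)]:
    w.insert(ts, v)
# At ts=12 the tuple from ts=1 has expired, so the window holds 2.0 and 3.0.
```

The deterministic insert-then-expire pattern is what makes the maintenance cost per tuple constant, echoing the dissertation's point that window updates are not random.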
MacDonald, Darren T. "Image segment processing for analysis and visualization." Thesis, University of Ottawa (Canada), 2008. http://hdl.handle.net/10393/27641.
Full text