Dissertations / Theses on the topic 'Processing Efficiency Theory'

Consult the top 34 dissertations / theses for your research on the topic 'Processing Efficiency Theory.'

You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

1

Chong, Joyce L. Y. "Anxiety and working memory : an investigation and reconceptualisation of the Processing Efficiency Theory." University of Western Australia. School of Psychology, 2003. http://theses.library.uwa.edu.au/adt-WU2004.0050.

Full text
Abstract:
A dominant theory in the anxiety-working memory literature is the Processing Efficiency Theory (PET; Eysenck & Calvo, 1992). According to this theory, worry - the cognitive component of state anxiety - pre-empts capacity in the central executive and phonological loop components within Baddeley and Hitch's (1974) fixed-capacity working memory system. Central to the Processing Efficiency Theory is the distinction between performance effectiveness (i.e. quality of performance) and processing efficiency (i.e. performance effectiveness divided by effort), with anxiety proposed to impair efficiency to a greater extent than it does effectiveness. The existing literature has provided support for this theory, although several factors complicate the findings, including the nature of the working memory tasks utilised, comorbid depression, and the distinction between trait and state anxiety. Clarification of these limiting factors in the anxiety-working memory literature was sought over a series of initial methodological studies. The first study was an initial step in addressing the issue of comorbid depression, identifying measures that maximised the distinction between anxiety and depression. The second study identified verbal and spatial span tasks suitable for examining the various working memory systems. The third study considered a possible role for somatic anxiety in the anxiety-working memory relationship, and additionally addressed the state/trait anxiety distinction. These three initial studies culminated in the fourth study, which formally addressed the predictions of the Processing Efficiency Theory and explored the cognitive/somatic anxiety distinction more fully. For the third and fourth studies, high and low trait anxious individuals underwent either cognitive (ego threat instruction) or somatic (anxious music) stress manipulations, and completed a series of span tasks assessing all components of the working memory system. Unexpectedly, the fourth study yielded a notable absence of robust effects in support of the Processing Efficiency Theory. A consideration of the research into the fractionation of central executive processes, together with an examination of tasks utilised in the existing literature, suggested that anxiety might not affect all central executive processes equally. Specifically, the tasks utilised in this programme of research predominantly invoke the process of updating, and it has recently been suggested that anxiety may not actually impair this process (Dutke & Stober, 2001). This raised the question of whether the current conceptualisation of the central executive as a unified working memory system within the PET was adequate, or whether greater specification of this component was necessary. One central executive process identified as possibly mediating the anxiety-working memory relationship is that of inhibition, and the focus of the fifth study thus shifted to clarifying this more complex relationship. In addition to one of the verbal span tasks utilised in the third and fourth studies, the reading span task (Daneman & Carpenter, 1980) and a grammatical reasoning task (MacLeod & Donnellan, 1993) were also included. Inhibitory processing was measured using the directed ignoring task (Hopko, Ashcraft, Gute, Ruggerio, & Lewis, 1998). This study established that inhibition was affected by a cognitive stress manipulation and that inhibition also played a part in the anxiety-working memory link.
However, other central executive processes were also implicated, suggesting a need for greater specification of the central executive component of working memory within the PET. A finding that also emerged from this study, and from the third and fourth studies, was that situational stress, rather than trait or state anxiety, was predominantly responsible for impairments in working memory. Finally, a theoretical analysis placing the anxiety-working memory relationship within a wider context was pursued, specifically examining how the Processing Efficiency Theory is nested within other accounts of the relationship between mood and working memory. In particular, similarities between the theoretical accounts of the relationships between anxiety and working memory, and depression and working memory, suggest the operation of similar mechanisms in the way each mood impacts on performance. Despite the similarities, potential distinctions between the impact each has on performance are identified, and recommendations for future research are made.
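For readers skimming this entry, the effectiveness/efficiency distinction at the heart of the theory can be written compactly; the notation below is a paraphrase of the definition given in the abstract, not the thesis's own formalism.

    \[
    \text{processing efficiency} \;=\; \frac{\text{performance effectiveness}}{\text{effort invested}}
    \]

On this reading, anxiety is predicted to leave effectiveness (the numerator) relatively intact while inflating the effort required to achieve it, so efficiency falls even when observable performance does not.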
APA, Harvard, Vancouver, ISO, and other styles
2

Man, Hong. "On efficiency and robustness of adaptive quantization for subband coding of images and video sequences." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/15003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Murray, Nicholas P. "An assessment of the efficiency and effectiveness of simulated auto racing performance : psychophysiological evidence for the processing efficiency theory as indexed through visual search characteristics and P300 reciprocity /." [Florida] : State University System of Florida, 2000. http://etd.fcla.edu/etd/uf/2000/ane5961/Dissertation%5FNicholas%5FMurray.pdf.

Full text
Abstract:
Thesis (Ph. D.)--University of Florida, 2000.
Title from first page of PDF file. Document formatted into pages; contains vi, 124 p.; also contains graphics. Vita. Includes bibliographical references (p. 106-117).
APA, Harvard, Vancouver, ISO, and other styles
4

Northern, Jebediah J. "Anxiety and Cognitive Performance: A Test of Predictions Made by Cognitive Interference Theory and Attentional Control Theory." Bowling Green State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1276557720.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Adams, Danielle. "Exploring the attentional processes of expert performers and the impact of priming on motor skill execution." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/5082.

Full text
Abstract:
It is widely acknowledged that under situations of heightened pressure, many expert athletes suffer from performance decrements. This phenomenon has been termed ‘choking under pressure’ and has been the subject of extensive research in sport psychology. Despite this attention, gaps in the literature remain leaving opportunities for further advancements in knowledge about the phenomenon, particularly in relation to its underlying processes and the development of appropriate interventions that can be adopted in order to alleviate, or even prevent choking. The present programme of research, in general terms, aimed to develop and test the efficacy of an intervention tool, based on priming, to alleviate choking under pressure. It was acknowledged that such a tool should be matched to the mechanisms that underlie the choking process and although an abundance of research has provided valuable information about these mechanisms, it was identified that there still remains a lack of consensus regarding the most appropriate explanatory theory. Therefore the initial study in this thesis aimed to provide further insight into the processes that govern choking by examining accounts from elite international swimmers of their experiences of performing under high levels of pressure. The results provided further support for the postulation that choking under pressure occurs as a result of a combination of conscious processing hypothesis (Masters, 1992) and processing efficiency theory (Eysenck & Calvo, 1992) and that an optimum level of skill-focused attention is beneficial to performance. The following studies utilised this information as well as that of the existent theories of choking, to develop and examine an effective priming based intervention tool (a scrambled sentence task). Specifically, Studies 2, 3 and 4 examined the amount of residual working memory available after activation of the prime, the optimisation of the priming task and the efficacy of the tool in promoting performance under high pressure respectively. Results revealed support for the efficacy of the tool in reducing online skill-focused attention and promoting performance under both low- and high-pressure conditions. Finally, the general themes that emerged throughout the whole programme of study are discussed, as well as the limitations and recommendations for future research. Implications for coaches, athletes and practitioners are also presented.
APA, Harvard, Vancouver, ISO, and other styles
6

Manglani, Heena R. "A neural network analysis of sedentary behavior and information processing speed in multiple sclerosis." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu15253688510945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Curtis, Cheryl Anne. "The relationship between anxiety, working memory and academic performance among secondary school pupils with social, emotional and behavioural difficulties : a test of Processing Efficiency Theory." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/142539/.

Full text
Abstract:
Research has shown that negative emotions, particularly anxiety, can play a role in learning and academic performance. The Processing Efficiency Theory (PET) and the more recent Attentional Control Theory (ACT) have been put forward to explain the relationship between anxiety and performance. The theories propose that worry (the cognitive component of anxiety) has a significant impact on performance and that the effect of anxiety on performance operates through working memory, and in particular the central executive. The literature review identified a number of key areas for development, including the application of the theories to younger populations and to targeted populations who underachieve in school. The empirical paper aimed to test the application of PET and ACT for pupils with social, emotional and behavioural difficulties (SEBD). It investigated whether the negative impact of anxiety on academic performance was mediated via working memory and whether this relationship was moderated by emotional regulation. Twenty-four pupils with SEBD aged 12 to 14 completed working memory tasks and self-report anxiety measures. Academic performance was also assessed. Heart rate variability and parent-rated measures of conduct problems and hyperactivity were used as indicators of emotional regulation. The results showed that, overall, there was a negative association between test anxiety and academic performance, and this association was clearer for the thoughts component of test anxiety. Visuospatial, but not verbal, working memory was found to mediate the relationship between test-anxious thoughts and academic performance on tasks where the central executive was involved. These findings are broadly consistent with PET and ACT. The mediation relationship was stronger for pupils identified as displaying higher levels of hyperactivity; no moderating effect was found for either heart rate variability or conduct problems. The results have implications for understanding the underachievement of children with SEBD and for considering interventions to promote attainment in school.
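As a loose illustration of the mediation logic tested in this study (anxiety -> working memory -> academic performance), the sketch below runs a Baron-and-Kenny-style set of regressions on simulated data. The variable names, the simulated effect sizes and the statsmodels-based workflow are assumptions made for the example; they do not reproduce the analysis reported in the thesis.

    # Illustrative sketch only: regression-based mediation check on toy data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 24                                   # sample size mentioned in the abstract
    anxiety = rng.normal(size=n)             # standardised test-anxiety score (simulated)
    wm = -0.6 * anxiety + rng.normal(scale=0.8, size=n)      # working memory span (simulated)
    perf = 0.7 * wm + rng.normal(scale=0.8, size=n)          # academic performance (simulated)

    def fit(y, X):
        return sm.OLS(y, sm.add_constant(X)).fit()

    total = fit(perf, anxiety)                           # path c: anxiety -> performance
    a = fit(wm, anxiety)                                 # path a: anxiety -> working memory
    direct = fit(perf, np.column_stack([anxiety, wm]))   # paths c' and b

    print("total effect c :", total.params[1])
    print("path a         :", a.params[1])
    print("path b         :", direct.params[2])
    print("direct effect c':", direct.params[1])
    # Mediation is suggested when a and b are non-zero and |c'| is clearly
    # smaller than |c| (in practice assessed with a bootstrap of a*b).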
APA, Harvard, Vancouver, ISO, and other styles
8

Cheng, James Sheung-Chak. "Efficient query processing on graph databases /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?CSED%202008%20CHENG.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lian, Xiang. "Efficient query processing over uncertain data /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?CSED%202009%20LIAN.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

He, Xin. "On efficient parallel algorithms for solving graph problems /." The Ohio State University, 1987. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487331541710947.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Wu, Xiaoxiao. "Efficient design and decoding of the rate-compatible low-density parity-check codes /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?ECED%202009%20WUXX.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Leung, Kwong-Keung, and 梁光強. "Fast and efficient video coding based on communication and computationscheduling on multiprocessors." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B29750945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Cardoze, David Enrique Fabrega. "Efficient algorithms for geometric pattern matching." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/8162.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Kei, Chun-Ling. "Efficient complexity reduction methods for short-frame iterative decoding /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202002%20KEI.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 86-91). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
15

Leung, Kwong-Keung. "Fast and efficient video coding based on communication and computation scheduling on multiprocessors." Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23272867.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Wong, Chi Wah. "Studying real-time rate control in perceptual, modeling and efficient aspects /." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202004%20WONGC.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 205-212). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
17

Ali, Shirook M. Nikolova Natalia K. "Efficient sensitivity analysis and optimization with full-wave EM solvers." *McMaster only, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
18

Cheeseman, Bevan. "The Adaptive Particle Representation (APR) for Simple and Efficient Adaptive Resolution Processing, Storage and Simulations." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-234245.

Full text
Abstract:
This thesis presents the Adaptive Particle Representation (APR), a novel adaptive data representation that can be used for general data processing, storage, and simulations. The APR is motivated, and designed, as a replacement representation for pixel images to address computational and memory bottlenecks in processing pipelines for studying spatiotemporal processes in biology using Light-sheet Fluorescence Microscopy (LSFM) data. The APR is an adaptive function representation that represents a function in a spatially adaptive way using a set of Particle Cells V and function values stored at particle collocation points P∗. The Particle Cells partition space, and implicitly define a piecewise constant Implied Resolution Function R∗(y) and particle sampling locations. As an adaptive data representation, the APR can be used to provide both computational and memory benefits by aligning the number of Particle Cells and particles with the spatial scales of the function. The APR allows reconstruction of a function value at any location y using any positive weighted combination of particles within a distance of R∗(y). The Particle Cells V are selected such that the error between the reconstruction and the original function, when weighted by a function σ(y), is below a user-set relative error threshold E. We call this the Reconstruction Condition and σ(y) the Local Intensity Scale. σ(y) is motivated by local gain controls in the human visual system, and for LSFM data can be used to account for contrast variations across an image. The APR is formed by satisfying an additional condition on R∗(y), which we call the Resolution Bound. The Resolution Bound relates R∗(y) to a local maximum of the absolute value of the function derivatives within a distance R∗(y) of y. Given restrictions on σ(y), satisfaction of the Resolution Bound also guarantees satisfaction of the Reconstruction Condition. In this thesis, we present algorithms and approaches that find the optimal Implied Resolution Function for general problems in the form of the Resolution Bound using Particle Cells, using an algorithm we call the Pulling Scheme. Here, optimal means the largest R∗(y) at each location. The Pulling Scheme has worst-case linear complexity in the number of pixels when used to represent images. The approach is general in that the same algorithm can be used for general (α,m)-Reconstruction Conditions, where α denotes the function derivative and m the minimum order of the reconstruction. Further, it can also be combined with anisotropic neighborhoods to provide adaptation in both space and time. The APR can be used with both noise-free and noisy data. For noisy data, the Reconstruction Condition can no longer be guaranteed, but numerical results show an optimal range of relative error E that provides a maximum increase in PSNR over the noisy input data. Further, if it is assumed the Implied Resolution Function satisfies the Resolution Bound, then the APR converges to a biased estimate (constant factor of E), at the optimal statistical rate. The APR continues a long tradition of adaptive data representations and represents a unique trade-off between the level of adaptation of the representation and simplicity, both regarding the APR's structure and its use for processing. Here, we numerically evaluate the adaptation and processing of the APR for use with LSFM data. This is done using both synthetic and LSFM exemplar data.
It is concluded from these results that the APR has the correct properties to provide a replacement of pixel images and address bottlenecks in processing for LSFM data. Removal of the bottleneck would be achieved by adapting to spatial, temporal and intensity scale variations in the data. Further, we propose the simple structure of the general APR could provide benefit in areas such as the numerical solution of differential equations, adaptive regression methods, and surface representation for computer graphics.
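One compact way to write the Reconstruction Condition described in this abstract (a restatement of the wording above, not the thesis's own notation) is that any admissible reconstruction built from particles within R∗(y) of y must satisfy

    \[
    \frac{\lvert \hat{f}(y) - f(y) \rvert}{\sigma(y)} \;\le\; E \qquad \text{for all locations } y,
    \]

where σ(y) is the Local Intensity Scale and E the user-set relative error threshold.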
APA, Harvard, Vancouver, ISO, and other styles
19

Mousumi, Fouzia Ashraf. "Exploiting the probability of observation for efficient Bayesian network inference." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Mathematics and Computer Science, 2013. http://hdl.handle.net/10133/3457.

Full text
Abstract:
It is well-known that the observation of a variable in a Bayesian network can affect the effective connectivity of the network, which in turn affects the efficiency of inference. Unfortunately, the observed variables may not be known until runtime, which limits the amount of compile-time optimization that can be done in this regard. This thesis considers how to improve inference when users know the likelihood of a variable being observed. It demonstrates how these probabilities of observation can be exploited to improve existing heuristics for choosing elimination orderings for inference. Empirical tests over a set of benchmark networks using the Variable Elimination algorithm show reductions of up to 50% and 70% in multiplications and summations, as well as runtime reductions of up to 55%. Similarly, tests using the Elimination Tree algorithm show reductions by as much as 64%, 55%, and 50% in recursive calls, total cache size, and runtime, respectively.
xi, 88 leaves : ill. ; 29 cm
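To make the idea in this abstract concrete, the sketch below biases a standard greedy elimination-ordering heuristic by the probability that each neighbouring variable will be observed, so that variables whose neighbours are likely to be evidence look cheaper to eliminate. The graph encoding, the (1 - p_obs) discount and the function names are assumptions for the illustration and are not the heuristics evaluated in the thesis.

    # Sketch: biasing a min-degree-style elimination ordering by observation probabilities.
    # Neighbours are discounted by (1 - p_obs), reflecting that observed variables
    # drop out of the factors at runtime and so contribute no extra width.

    def elimination_order(adj, p_obs):
        """adj: dict mapping variable -> set of neighbours (moralised graph).
        p_obs: dict mapping variable -> probability it will be observed at runtime."""
        graph = {v: set(nbrs) for v, nbrs in adj.items()}
        order = []
        while graph:
            def cost(v):
                # expected "effective degree" of v if eliminated now
                return sum(1.0 - p_obs.get(u, 0.0) for u in graph[v])
            v = min(graph, key=cost)
            order.append(v)
            nbrs = graph.pop(v)
            for u in nbrs:                        # connect v's neighbours (fill-in edges)
                graph[u].discard(v)
                graph[u] |= (nbrs - {u})
        return order

    # toy network: A - B - C - D with B also linked to D; B is very likely evidence
    adj = {"A": {"B"}, "B": {"A", "C", "D"}, "C": {"B", "D"}, "D": {"B", "C"}}
    p_obs = {"B": 0.9, "C": 0.1}
    print(elimination_order(adj, p_obs))

In a real engine such an ordering would then drive Variable Elimination or elimination-tree construction, which is where the reductions in multiplications, summations, cache size and runtime reported above would show up.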
APA, Harvard, Vancouver, ISO, and other styles
20

Shmachkov, Igor. "Efficient dispatch policy for SMT processors." Diss., Online access via UMI:, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
21

Nguyen, Kim Chi. "Efficient simulation of space-time coded and turbo coded systems." View thesis, 2007. http://handle.uws.edu.au:8081/1959.7/32467.

Full text
Abstract:
Thesis (Ph.D.)--University of Western Sydney, 2007.
A thesis submitted to the University of Western Sydney, College of Health and Science, School of Engineering in fulfilment of the requirements for the degree of Doctor of Philosophy in Engineering. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
22

Frank, Mario. "TEMPLAR : efficient determination of relevant axioms in big formula sets for theorem proving." Master's thesis, Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2014/7211/.

Full text
Abstract:
This document presents a formula selection system for classical first-order theorem proving based on the relevance of formulae for the proof of a conjecture. It is based on unifiability of predicates and is also able to use a linguistic approach for the selection. The scope of the technique is the reduction of the set of formulae and the increase of the number of provable conjectures in a given time. Since the technique generates a subset of the formula set, it can be used as a preprocessor for automated theorem proving. The document contains the conception, implementation and evaluation of both selection concepts. While the one concept generates a search graph over the negation normal forms or Skolem normal forms of the given formulae, the linguistic concept analyses the formulae, determines frequencies of lexemes and uses a tf-idf weighting algorithm to determine the relevance of the formulae. Though the concept is built for first-order logic, it is not limited to it. The concept can be used for higher-order and modal logic, too, with minimal adaptations. The system was also evaluated at the world championship of automated theorem provers (CADE ATP Systems Competition, CASC-24) in combination with the leanCoP theorem prover, and the evaluation of the CASC results together with benchmarks on the problems of the 2012 competition (CASC-J6) shows that the concept of the system has a positive impact on the performance of automated theorem provers. Also, the benchmarks with two different theorem provers which use different calculi have shown that the selection is independent of the calculus. Moreover, the concept of TEMPLAR has shown to be competitive to some extent with the concept of SinE and even helped one of the theorem provers to solve problems that were not solved (or were solved more slowly) with SinE selection in the CASC. Finally, the evaluation implies that the combination of the unification-based and linguistic selection yields further improved results even though no optimisation was done for the problems.
This document presents a system that selects, from a (large) given set of formulae of classical predicate logic, a subset that is relevant for proving a logical formula. The aim of the system is to make formulae provable within a fixed time bound or to speed up the proof search by means of the restricted formula set. The document describes the conception, implementation and evaluation of the system and discusses the two different selection approaches. While one concept performs a graph search on either the negation normal forms or the Skolem normal forms of the formulae, building paths from one formula to another through unification of predicates, the other concept analyses the frequencies of lexemes and derives a relevance value by applying the tf-idf measure known from computational linguistics. The results of the world championship of automated theorem provers (CADE ATP Systems Competition, CASC-24) are presented and the effect of the system on the proof search is analysed. Furthermore, the results of testing the system on the problems of the 2012 championship (CASC-J6) are presented, and on this basis it is evaluated to what extent the restrictions support the theorem provers in proving complex problems. Finally, it is shown that the system has positive effects for the theorem provers and is independent of the calculus they use. Furthermore, the approach is independent of the logic used and can in principle be applied to all orders of predicate logic and to propositional logic as well as modal logic. This aspect makes the approach universally usable in automated theorem proving. It turns out that the two selection approaches are suited to different formula sets. It is also shown that the combination of both approaches results in a significant increase in the number of provable formulae and that the selection produced by these approaches can keep up with the capabilities of another selection system.
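A minimal sketch of the linguistic half of the selection described above: each formula is treated as a bag of lexemes, weighted with tf-idf, and ranked by overlap with the conjecture. The crude tokenisation, the scoring details and the example formulae are assumptions made for illustration and do not reproduce TEMPLAR's implementation.

    # Sketch: tf-idf relevance ranking of axioms against a conjecture.
    import math, re
    from collections import Counter

    def lexemes(formula):
        # crude tokenisation of predicate/function symbols
        return [t.lower() for t in re.findall(r"[A-Za-z_]\w*", formula)]

    def rank_axioms(axioms, conjecture, top_k=2):
        docs = [Counter(lexemes(a)) for a in axioms]
        n = len(docs)
        idf = {t: math.log((1 + n) / (1 + sum(t in d for d in docs))) + 1.0
               for d in docs for t in d}
        goal = set(lexemes(conjecture))
        def score(d):
            total = sum(d.values())
            return sum(d[t] / total * idf[t] for t in goal if t in d)
        ranked = sorted(range(n), key=lambda i: score(docs[i]), reverse=True)
        return [axioms[i] for i in ranked[:top_k]]

    axioms = ["subset(X,Y) :- member(Z,X), member(Z,Y)",
              "union(X,Y,Z)", "prime(P), greater(P,two)"]
    print(rank_axioms(axioms, "member(a, b) , subset(b, c)"))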
APA, Harvard, Vancouver, ISO, and other styles
23

Mallangi, Siva Sai Reddy. "Low-Power Policies Based on DVFS for the MUSEIC v2 System-on-Chip." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229443.

Full text
Abstract:
Multi-functional health-monitoring wearable devices are quite prominent these days. Usually these devices are battery-operated and are consequently limited by their battery life (from a few hours to a few weeks depending on the application). It has recently been realized that these devices, which are currently operated at a fixed voltage and frequency, are capable of operating at multiple voltages and frequencies. By switching these voltages and frequencies to lower values based upon power requirements, these devices can achieve tremendous benefits in the form of energy savings. Dynamic Voltage and Frequency Scaling (DVFS) techniques have proven to be handy in this situation for an efficient trade-off between energy and timely behavior. Within imec, wearable devices make use of the indigenously developed MUSEIC v2 (Multi Sensor Integrated circuit version 2.0). This system is optimized for efficient and accurate collection, processing, and transfer of data from multiple (health) sensors. MUSEIC v2 has limited means of controlling the voltage and frequency dynamically. In this thesis we explore how traditional DVFS techniques can be applied to the MUSEIC v2. Experiments were conducted to find the optimum power modes for efficient operation and to scale the supply voltage and frequency up and down. Considering the overhead caused when switching voltage and frequency, a transition analysis was also done. Real-time and non-real-time benchmarks were implemented based on these techniques, and their performance results were obtained and analyzed. In this process, several state-of-the-art scheduling algorithms and scaling techniques were reviewed to identify a suitable technique. Using our proposed scaling technique implementation, we have achieved an 86.95% power reduction on average, in contrast to the conventional approach in which the MUSEIC v2 chip's processor operates at a fixed voltage and frequency. Techniques that include light sleep and deep sleep modes were also studied and implemented, which tested the system's capability to accommodate Dynamic Power Management (DPM) techniques that can achieve greater benefits. A novel approach for implementing the deep sleep mechanism was also proposed and was found to obtain up to 71.54% power savings compared to a traditional way of executing deep sleep mode.
Nowadays, multifunctional wearable health devices have taken on a significant role. These devices are usually battery powered and are therefore limited by their battery life (from a couple of hours to a couple of weeks depending on the application). Recently it has emerged that these devices, which are operated at a fixed voltage and frequency, can be operated at several voltages and frequencies. By switching to a lower voltage and frequency according to the power requirements, the devices can gain enormous benefits in terms of energy savings. Dynamic Voltage and Frequency Scaling (DVFS) techniques have proven useful in this context for an efficient trade-off between energy and timing behavior. At imec, wearable devices use the internally developed MUSEIC v2 (Multi Sensor Integrated circuit version 2.0). The system is optimized for efficient and accurate collection, processing and transfer of data from several (health) sensors. MUSEIC v2 has limited ability to control the voltage and frequency dynamically. In this thesis we investigate how traditional DVFS techniques can be applied to the MUSEIC v2. Experiments were carried out to determine the optimal power modes and to efficiently control, and also scale, the supply voltage and frequency. Since overhead is incurred when switching voltage and frequency, a transition analysis was also carried out. Real-time and non-real-time benchmarks were implemented based on these techniques, and the results were compiled and analyzed. In this process, several state-of-the-art scheduling algorithms and scaling techniques were reviewed in order to find a suitable technique. Using our proposed scaling technique implementation, we have achieved an 86.95% power reduction compared with the conventional approach in which the MUSEIC v2 chip's processor operates at a fixed voltage and frequency. Techniques involving light sleep and deep sleep modes were also studied and implemented, which tested the system's ability to accommodate Dynamic Power Management (DPM) techniques that can achieve even greater benefits. A new method for implementing the deep sleep mechanism was also proposed and, according to the results obtained, it can provide up to 71.54% lower energy consumption compared with the traditional way of implementing deep sleep mode.
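The sketch below shows the general shape of a threshold-based DVFS policy of the kind discussed in this thesis: the operating point is stepped down when measured utilisation is low and stepped up when it approaches saturation. The operating points, thresholds and function names are invented for the illustration; they are not the MUSEIC v2 power modes or the policy evaluated in the thesis.

    # Sketch of a simple interval-based DVFS governor (illustrative values only).
    # Each operating point is (frequency_MHz, voltage_V); lower points save power
    # at the cost of slower execution, as described in the abstract.
    OPERATING_POINTS = [(20, 0.7), (50, 0.9), (100, 1.1)]   # assumed, not MUSEIC v2's

    def next_level(level, utilisation, up=0.85, down=0.40):
        """Return the new operating-point index given utilisation in [0, 1]."""
        if utilisation > up and level < len(OPERATING_POINTS) - 1:
            return level + 1          # risk of missing deadlines: scale up
        if utilisation < down and level > 0:
            return level - 1          # plenty of slack: scale down to save power
        return level

    level = 2                          # start at the highest operating point
    for util in [0.9, 0.7, 0.3, 0.2, 0.5, 0.95]:
        level = next_level(level, util)
        freq, volt = OPERATING_POINTS[level]
        print(f"util={util:.2f} -> {freq} MHz @ {volt} V")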
APA, Harvard, Vancouver, ISO, and other styles
24

Teng, Sin Yong. "Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-433427.

Full text
Abstract:
As new technologies for energy-intensive industries continue to develop, existing plants gradually fall behind in efficiency and productivity. Tough market competition and environmental legislation force these traditional plants to cease operation and shut down. Process improvement and retrofit projects are essential for maintaining the operational performance of these plants. Current approaches to process improvement are mainly process integration, process optimisation and process intensification. In general, these areas rely on mathematical optimisation, the experience of the practitioner and operational heuristics. These approaches serve as the foundation for process improvement; however, their performance can be further enhanced by modern computational intelligence. The purpose of this work is therefore to apply advanced artificial intelligence and machine learning techniques to process improvement in energy-intensive industrial processes. This work takes an approach that addresses the problem by simulating industrial systems and makes the following contributions: (i) application of machine learning techniques, including one-shot learning and neuro-evolution, for data-driven modelling and optimisation of individual units; (ii) application of dimensionality reduction (e.g. principal component analysis, autoencoders) for multi-objective optimisation of multi-unit processes; (iii) design of a new tool, bottleneck tree analysis (BOTA), for analysing problematic parts of a system in order to remove them, together with a proposed extension of the tool that allows multi-dimensional problems to be solved using a data-driven approach; (iv) demonstration of the effectiveness of Monte-Carlo simulation, neural networks and decision trees for decision-making when integrating new process technology into existing processes; (v) comparison of the Hierarchical Temporal Memory (HTM) technique and dual optimisation with several predictive tools for supporting real-time operations management; (vi) implementation of an artificial neural network within an interface for the conventional process graph (P-graph); (vii) highlighting the future of artificial intelligence and process engineering in biosystems through a commercially based multi-omics paradigm.
APA, Harvard, Vancouver, ISO, and other styles
25

"High efficiency block coding techniques for image data." Chinese University of Hong Kong, 1992. http://library.cuhk.edu.hk/record=b5887007.

Full text
Abstract:
by Lo Kwok-tung.
Thesis (Ph.D.)--Chinese University of Hong Kong, 1992.
Includes bibliographical references.
Table of contents (condensed): Chapter 1, Introduction (overview of image compression: DPCM, sub-band coding, transform coding, vector quantization, block truncation coding); Chapter 2, Block-Based Image Coding Techniques; Chapter 3, Development of New Orthogonal Transforms (weighted cosine transform, simplified cosine transform, fast computational algorithms); Chapter 4, Pruning in Transform Coding of Images; Chapter 5, Efficient Encoding of DC Coefficient in Transform Coding Systems (minimum edge difference predictor); Chapter 6, Efficient Encoding Algorithms for Vector Quantization of Images (sub-codebook and predictive sub-codebook searching); Chapter 7, Predictive Classified Address Vector Quantization of Images; Chapter 8, Recapitulation and Topics for Future Investigation; References; Appendices.
APA, Harvard, Vancouver, ISO, and other styles
26

"Efficient and perceptual picture coding techniques." Thesis, 2009. http://library.cuhk.edu.hk/record=b6074972.

Full text
Abstract:
The objective of this thesis is to develop some efficient and perceptual image and video coding techniques. Two parts of the work are investigated in this thesis.
In the first part, some efficient algorithms are proposed to reduce the complexity of the H.264 encoder, which is the latest state-of-the-art video coding standard. Intra and Inter mode decision play a vital role in the H.264 encoder and can reduce spatial and temporal redundancy significantly, but the computational cost is also high. Here, a fast Intra mode decision algorithm and a fast Inter mode decision algorithm are proposed. Experimental results show that the proposed algorithms not only save a lot of computational cost, but also maintain coding performance quite well. Moreover, a real-time H.264 baseline codec is implemented on a mobile device. Based on our real-time H.264 codec, an H.264-based mobile video conferencing system is achieved.
The second part of this thesis investigates two kinds of perceptual picture coding techniques. One is just noticeable distortion (JND) based picture coding. Firstly, a DCT-based spatio-temporal JND model is proposed, which is an efficient model to represent the perceptual redundancies existing in images and is consistent with the characteristics of the human visual system (HVS). Secondly, the proposed JND model is incorporated into image and video coding to improve the perceptual quality. Based on the JND model, a transparent image coder and a perceptually optimized H.264 video coder are implemented. Another technique is an image compression scheme based on recent advances in texture synthesis. In this part, an image compression scheme is proposed with perceptual visual quality as the performance criterion instead of pixel-wise fidelity. As demonstrated in extensive experiments, the proposed techniques can improve the perceptual quality of picture coding significantly.
Wei Zhenyu.
Adviser: Ngan Ngi.
Source: Dissertation Abstracts International, Volume: 73-01, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2009.
Includes bibliographical references (leaves 148-154).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
APA, Harvard, Vancouver, ISO, and other styles
27

"Computational models for efficient reconstruction of gene regulatory network." Thesis, 2011. http://library.cuhk.edu.hk/record=b6075380.

Full text
Abstract:
Zhang, Qing.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references (leaves 129-148).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstract also in Chinese.
APA, Harvard, Vancouver, ISO, and other styles
28

Amberker, B. B. "Large-Scale Integer And Polynomial Computations : Efficient Implementation And Applications." Thesis, 1996. http://etd.iisc.ernet.in/handle/2005/1678.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Zheng, Jiabao. "Efficient spin-photon interface for solid-state-based spin systems for quantum information processing and enhanced metrology." Thesis, 2017. https://doi.org/10.7916/D8N87PB3.

Full text
Abstract:
The holy grail for quantum engineers and scientists is to build a quantum internet that spans the entire globe. This information infrastructure holds the promise of transmitting information securely, scaling up computing power exponentially and setting the standards for precision measurement at the ultimate limit. Solid-state-based spin systems have recently emerged as promising building blocks for the quantum internet. Among these candidates, the negatively charged nitrogen vacancy (NV) center in diamond has attracted much attention thanks to its optical addressability, long spin coherence times, and well-controlled electronic orbitals and spin states. However, the non-ideal optical properties of the NV pose a challenge to its implementation in quantum technologies. This calls for building photonic structures as efficient spin-photon interfaces for realizing strong interactions with photon modes or efficient out-coupling of its fluorescence. Such interfacing structures are also of great importance for other newly found optically active spin systems. In this dissertation, chirped dielectric cavities are designed for building the NV as a fast single-photon source via broadband Purcell enhancement, using an inverse simulation approach to maximize the broadband absorption of the atomically thin absorbers. Simulated NV-cavity coupling indicates a broadband Purcell factor of ≳ 100. Next, to realize coupled NV-cavity systems at large scale, a self-aligned nano-implantation technique is investigated using a lithographically defined hybrid mask for both precision pattern transfer and nitrogen implantation. Measured results show a single-NV-per-cavity yield of ∼26±1% and a 5-fold Purcell-induced intensity enhancement. Finally, chirped circular gratings are designed for efficient collection from the NV for remote entanglement and precision sensing. Simulated grating structures present near-unity collection efficiencies. These demonstrated techniques and structures are also applicable to other solid-state-based spin systems.
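For context on the Purcell figures quoted above, the textbook expression for the Purcell enhancement of an emitter resonant with a single cavity mode is

    \[
    F_P \;=\; \frac{3}{4\pi^2}\left(\frac{\lambda}{n}\right)^{3}\frac{Q}{V_{\text{mode}}},
    \]

where Q is the quality factor, V_mode the mode volume, λ the free-space wavelength and n the refractive index; the chirped structures described in this abstract target enhancement over a broad band rather than at a single resonance, which is why a broadband Purcell factor is reported.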
APA, Harvard, Vancouver, ISO, and other styles
30

Nguyen, Kim Chi, University of Western Sydney, College of Health and Science, and School of Engineering. "Efficient simulation of space-time coded and turbo coded systems." 2007. http://handle.uws.edu.au:8081/1959.7/32467.

Full text
Abstract:
The two main goals of this research are to study the implementation aspects of space-time turbo trellis codes (ST Turbo TC) and to develop efficient simulation methods for space-time and turbo coded systems using the importance sampling (IS) technique. The design of ST Turbo TC, which is based on the turbo structure, has been proposed in the literature for improving the bandwidth efficiency and reliability of wireless communication networks. To achieve memory savings and reduce the decoding delay, this thesis proposes a simplified ST Turbo TC decoder using a sliding window (SW) technique. Different window sizes are employed and investigated. Through computer simulation, the optimum window sizes are determined for various system configurations. The effect of finite word length representation on the performance of ST Turbo TC is then studied. Simulation results show that ST Turbo TC is feasible for finite word length representation without significant degradation in the frame error rate performance. The optimum word length configurations are defined for all quantities external and internal to the ST Turbo TC decoder. For complex communication systems such as space-time codes and turbo codes, computer simulation is in practice the most useful approach for obtaining estimated performance. To overcome the lengthy run-time requirements of the conventional Monte-Carlo (MC) method, this thesis introduces importance sampling simulation methods that accurately estimate the performance of turbo codes and space-time codes, including orthogonal space-time block codes (OSTBC) and concatenated OSTBC. It is demonstrated that the proposed methods require much smaller sample sizes to achieve the same accuracy as a conventional MC estimator.
Doctor of Philosophy (PhD)
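As a toy illustration of the importance sampling principle this thesis applies to coded systems, the sketch below estimates a small Gaussian tail probability (a stand-in for a rare error event) by shifting the sampling distribution towards the error region and re-weighting with the likelihood ratio. The mean-shift biasing and the example itself are generic textbook choices, not the estimators developed in the thesis.

    # Sketch: importance sampling vs. plain Monte Carlo for a rare "error" event
    # P(X > t) with X ~ N(0, 1). Mean-shifting the noise toward the threshold
    # lets far fewer samples reach the same accuracy, which is the core idea.
    import numpy as np

    rng = np.random.default_rng(1)
    t, n = 4.0, 20_000                       # threshold and sample budget

    # plain Monte Carlo: almost no samples land beyond t
    x = rng.normal(0.0, 1.0, n)
    p_mc = np.mean(x > t)

    # importance sampling: draw from N(t, 1) and weight by the likelihood ratio
    y = rng.normal(t, 1.0, n)
    w = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - t) ** 2)   # N(0,1) pdf / N(t,1) pdf
    p_is = np.mean((y > t) * w)

    print(f"plain MC estimate : {p_mc:.3e}")
    print(f"IS estimate       : {p_is:.3e}")   # close to the true value of about 3.17e-5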
APA, Harvard, Vancouver, ISO, and other styles
31

Cheeseman, Bevan. "The Adaptive Particle Representation (APR) for Simple and Efficient Adaptive Resolution Processing, Storage and Simulations." Doctoral thesis, 2017. https://tud.qucosa.de/id/qucosa%3A30873.

Full text
Abstract:
This thesis presents the Adaptive Particle Representation (APR), a novel adaptive data representation that can be used for general data processing, storage, and simulations. The APR is motivated, and designed, as a replacement representation for pixel images to address computational and memory bottlenecks in processing pipelines for studying spatiotemporal processes in biology using Light-sheet Fluo- rescence Microscopy (LSFM) data. The APR is an adaptive function representation that represents a function in a spatially adaptive way using a set of Particle Cells V and function values stored at particle collocation points P∗. The Particle Cells partition space, and implicitly define a piecewise constant Implied Resolution Function R∗(y) and particle sampling locations. As an adaptive data representation, the APR can be used to provide both computational and memory benefits by aligning the number of Particle Cells and particles with the spatial scales of the function. The APR allows reconstruction of a function value at any location y using any positive weighted combination of particles within a distance of R∗(y). The Particle Cells V are selected such that the error between the reconstruction and the original function, when weighted by a function σ(y), is below a user-set relative error threshold E. We call this the Reconstruction Condition and σ(y) the Local Intensity Scale. σ(y) is motivated by local gain controls in the human visual system, and for LSFM data can be used to account for contrast variations across an image. The APR is formed by satisfying an additional condition on R∗(y); we call the Resolution Bound. The Resolution Bound relates the R∗(y) to a local maximum of the absolute value function derivatives within a distance R∗(y) or y. Given restric- tions on σ(y), satisfaction of the Resolution Bound also guarantees satisfaction of the Reconstruction Condition. In this thesis, we present algorithms and approaches that find the optimal Implied Resolution Function to general problems in the form of the Resolution Bound using Particle Cells using an algorithm we call the Pulling Scheme. Here, optimal means the largest R∗(y) at each location. The Pulling Scheme has worst-case linear complexity in the number of pixels when used to rep- resent images. The approach is general in that the same algorithm can be used for general (α,m)-Reconstruction Conditions, where α denotes the function derivative and m the minimum order of the reconstruction. Further, it can also be combined with anisotropic neighborhoods to provide adaptation in both space and time. The APR can be used with both noise-free and noisy data. For noisy data, the Reconstruction Condition can no longer be guaranteed, but numerical results show an optimal range of relative error E that provides a maximum increase in PSNR over the noisy input data. Further, if it is assumed the Implied Resolution Func- tion satisfies the Resolution Bound, then the APR converges to a biased estimate (constant factor of E), at the optimal statistical rate. The APR continues a long tradition of adaptive data representations and rep- resents a unique trade off between the level of adaptation of the representation and simplicity. Both regarding the APRs structure and its use for processing. Here, we numerically evaluate the adaptation and processing of the APR for use with LSFM data. This is done using both synthetic and LSFM exemplar data. 
It is concluded from these results that the APR has the correct properties to provide a replacement of pixel images and address bottlenecks in processing for LSFM data. Removal of the bottleneck would be achieved by adapting to spatial, temporal and intensity scale variations in the data. Further, we propose the simple structure of the general APR could provide benefit in areas such as the numerical solution of differential equations, adaptive regression methods, and surface representation for computer graphics.
APA, Harvard, Vancouver, ISO, and other styles
32

Mishra, Ashirbad. "Efficient betweenness Centrality Computations on Hybrid CPU-GPU Systems." Thesis, 2016. http://hdl.handle.net/2005/2718.

Full text
Abstract:
Analysis of networks is quite interesting, because they can be interpreted for several purposes. Various features require different metrics to measure and interpret them. Measuring the relative importance of each vertex in a network is one of the most fundamental building blocks in network analysis. Betweenness Centrality (BC) is one such metric that plays a key role in many real-world applications. BC is an important graph analytics application for large-scale graphs. However, it is one of the most computationally intensive kernels to execute, and measuring centrality in billion-scale graphs is quite challenging. While there are several existing efforts towards parallelizing BC algorithms on multi-core CPUs and many-core GPUs, in this work we propose a novel fine-grained CPU-GPU hybrid algorithm that partitions a graph into two partitions, one each for the CPU and GPU. Our method performs BC computations for the graph on both the CPU and GPU resources simultaneously, resulting in a very small number of CPU-GPU synchronizations, hence taking less time for communications. The BC algorithm consists of two phases, the forward phase and the backward phase. In the forward phase, we initially find the paths that are needed by either partition, after which each partition is executed on each processor in an asynchronous manner. We initially compute border matrices for each partition, which store the relative distances between each pair of border vertices in a partition. The matrices are used in the forward-phase calculations of all the sources. In this way, our hybrid BC algorithm leverages the multi-source property inherent in the BC problem. We present a proof of correctness and the bounds for the number of iterations for each source. We also perform a novel hybrid and asynchronous backward phase, in which each partition communicates with the other only when there is a path that crosses the partition, hence it performs minimal CPU-GPU synchronizations. We use a variety of implementations for our work, like node-based and edge-based parallelism, which includes data-driven and topology-based techniques. In the implementation we show that our method also works using a variable partitioning technique. The technique partitions the graph into unequal parts accounting for the processing power of each processor. Our implementations achieve an almost equal percentage of utilization on both processors due to this technique. For large-scale graphs, the size of the border matrix also becomes large, hence to accommodate the matrix we present various techniques. The techniques use the properties inherent in the shortest path problem for reduction. We mention the drawbacks of performing shortest path computations on a large scale and also provide various solutions. Evaluations using a large number of graphs with different characteristics show that our hybrid approach, without variable partitioning and border matrix reduction, gives a 67% improvement in performance and 64-98.5% fewer CPU-GPU communications than the state-of-the-art hybrid algorithm based on the popular Bulk Synchronous Parallel (BSP) approach implemented in TOTEM. This shows our algorithm's strength, which reduces the need for larger synchronizations. Implementing variable partitioning, border matrix reduction and backward-phase optimizations on our hybrid algorithm provides up to 10x speedup.
We compare our optimized implementation with CPU and GPU standalone codes based on our forward-phase and backward-phase kernels, and show around 2-8x speedup over the CPU-only code, while accommodating large graphs that cannot be accommodated in the GPU-only code. We also show that our method's performance is competitive with state-of-the-art multi-core CPU implementations and performs 40-52% better than GPU implementations on large graphs. We show the drawbacks of CPU-only and GPU-only implementations and try to motivate the reader about the challenges that graph algorithms face in large-scale computing, suggesting that a hybrid or distributed way of approaching the problem is a better way of overcoming the hurdles.
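For reference, the per-source forward/backward structure that the hybrid algorithm partitions across the CPU and GPU is, at its core, Brandes' accumulation scheme. The sequential, unweighted sketch below is the textbook baseline only; it contains none of the border matrices, partitioning or CPU-GPU scheduling contributed by the thesis.

    # Sketch: sequential Brandes betweenness centrality on an unweighted graph.
    # Forward phase: BFS shortest-path counting from each source.
    # Backward phase: dependency accumulation in reverse BFS order.
    from collections import deque

    def betweenness(adj):
        bc = {v: 0.0 for v in adj}
        for s in adj:
            # forward phase
            sigma = {v: 0 for v in adj}; sigma[s] = 1      # shortest-path counts
            dist = {v: -1 for v in adj}; dist[s] = 0
            preds = {v: [] for v in adj}
            order, queue = [], deque([s])
            while queue:
                v = queue.popleft()
                order.append(v)
                for w in adj[v]:
                    if dist[w] < 0:
                        dist[w] = dist[v] + 1
                        queue.append(w)
                    if dist[w] == dist[v] + 1:
                        sigma[w] += sigma[v]
                        preds[w].append(v)
            # backward phase
            delta = {v: 0.0 for v in adj}
            for w in reversed(order):
                for v in preds[w]:
                    delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
                if w != s:
                    bc[w] += delta[w]
        return bc        # divide by 2 for undirected graphs

    adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
    print(betweenness(adj))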
APA, Harvard, Vancouver, ISO, and other styles
33

(9757565), Yo-Sing Yeh. "Efficient Knot Optimization for Accurate B-spline-based Data Approximation." Thesis, 2020.

Find full text
Abstract:
Many practical applications benefit from the reconstruction of a smooth multivariate function from discrete data for purposes such as reducing file size or improving analytic and visualization performance. Among the different reconstruction methods, the tensor product B-spline has a number of advantageous properties over alternative data representations. However, the problem of constructing a best-fit B-spline approximation effectively contains many roadblocks. Among the many free parameters in the B-spline model, the choice of the knot vectors, which defines the separation of each piecewise polynomial patch in a B-spline construction, has a major influence on the resulting reconstruction quality. Yet existing knot placement methods are still ineffective, computationally expensive, or impose limitations on the dataset format or the B-spline order. Moving beyond the 1D case (curves) and onto higher dimensional datasets (surfaces, volumes, hypervolumes) introduces additional computational challenges as well. Further complications also arise in the case of undersampled data points, where the approximation problem can become ill-posed and existing regularization proves unsatisfactory.

This dissertation is concerned with improving the efficiency and accuracy of the construction of a B-spline approximation on discrete data. Specifically, we present a novel B-splines knot placement approach for accurate reconstruction of discretely sampled data, first in 1D, then extended to higher dimensions for both structured and unstructured formats. Our knot placement methods take into account the feature or complexity of the input data by estimating its high-order derivatives such that the resulting approximation is highly accurate with a low number of control points. We demonstrate our method on various 1D to 3D structured and unstructured datasets, including synthetic, simulation, and captured data. We compare our method with state-of-the-art knot placement methods and show that our approach achieves higher accuracy while requiring fewer B-spline control points. We discuss a regression approach to the selection of the number of knots for multivariate data given a target error threshold. In the case of the reconstruction of irregularly sampled data, where the linear system often becomes ill-posed, we propose a locally varying regularization scheme to address cases for which a straightforward regularization fails to produce a satisfactory reconstruction.
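In the same spirit as the feature-aware placement described above, the sketch below concentrates interior knots where an estimated second derivative is large and then fits a least-squares cubic B-spline. The curvature-quantile rule, the toy data and the SciPy-based workflow are illustrative assumptions, not the dissertation's algorithm.

    # Sketch: placing B-spline knots where an estimated second derivative is large.
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    # toy data: a function with one sharp local feature
    x = np.linspace(0.0, 1.0, 400)
    y = np.exp(-200.0 * (x - 0.6) ** 2) + 0.1 * np.sin(6.0 * x)

    # estimate local complexity from a second-derivative proxy
    d2 = np.abs(np.gradient(np.gradient(y, x), x))
    density = d2 + 1e-3 * d2.max()          # floor so flat regions still get knots
    cdf = np.cumsum(density)
    cdf /= cdf[-1]

    # place interior knots at equal increments of the complexity CDF
    n_knots = 12
    targets = np.linspace(0.0, 1.0, n_knots + 2)[1:-1]
    knots = np.interp(targets, cdf, x)

    spline = LSQUnivariateSpline(x, y, knots, k=3)
    print("max abs error:", np.abs(spline(x) - y).max())

Letting a derivative estimate drive the knot density in this way is what allows an accurate fit with comparatively few control points, which is the trade-off the dissertation targets.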
APA, Harvard, Vancouver, ISO, and other styles
34

Haley, David. "Efficient architectures for error control using low-density parity-check codes." 2004. http://arrow.unisa.edu.au:8081/1959.8/46679.

Full text
Abstract:
Recent designs for low-density parity-check (LDPC) codes have exhibited capacity approaching performance for large block length, overtaking the performance of turbo codes. While theoretically impressive, LDPC codes present some challenges for practical implementation. In general, LDPC codes have higher encoding complexity than turbo codes both in terms of computational latency and architecture size. Decoder circuits for LDPC codes have a high routing complexity and thus demand large amounts of circuit area. There has been recent interest in developing analog circuit architectures suitable for decoding. These circuits offer a fast, low-power alternative to the digital approach. Analog decoders also have the potential to be significantly smaller than digital decoders. In this thesis we present a novel and efficient approach to LDPC encoder / decoder (codec) design. We propose a new algorithm which allows the parallel decoder architecture to be reused for iterative encoding. We present a new class of LDPC codes which are iteratively encodable, exhibit good empirical performance, and provide a flexible choice of code length and rate. Combining the analog decoding approach with this new encoding technique, we design a novel time-multiplexed LDPC codec, which switches between analog decode and digital encode modes. In order to achieve this behaviour from a single circuit we have developed mode-switching gates. These logic gates are able to switch between analog (soft) and digital (hard) computation, and represent a fundamental circuit design contribution. Mode-switching gates may also be applied to built-in self-test circuits for analog decoders. Only a small overhead in circuit area is required to transform the analog decoder into a full codec. The encode operation can be performed two orders of magnitude faster than the decode operation, making the circuit suitable for full-duplex applications. Throughput of the codec scales linearly with block size, for both encode and decode operations. The low power and small area requirements of the circuit make it an attractive option for small portable devices.
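As background to the encoder/decoder roles discussed above: an LDPC code is defined by a sparse parity-check matrix H, and a vector c is a codeword exactly when, over GF(2),

    \[
    H\,c^{\top} = \mathbf{0}.
    \]

The decoder iteratively drives a received word towards this condition, while the encoder must construct a c that satisfies it in the first place; making that construction iterative is what allows the thesis to reuse the parallel decoder architecture for encoding.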
APA, Harvard, Vancouver, ISO, and other styles