Dissertations / Theses on the topic 'Coding framework'

To see the other types of publications on this topic, follow the link: Coding framework.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 38 dissertations / theses for your research on the topic 'Coding framework.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

He, Shan. "A joint coding and embedding framework for multimedia fingerprinting." College Park, Md. : University of Maryland, 2007. http://hdl.handle.net/1903/7347.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
2

Pliushch, Iuliia [Verfasser]. "Self-deception within the predictive coding framework / Iuliia Pliushch." Mainz : Universitätsbibliothek Mainz, 2017. http://d-nb.info/1130424901/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Al-Najdawi, Ashraf. "A multi-objective performance optimisation framework for video coding." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/6446.

Full text
Abstract:
Digital video technologies have become an essential part of the way visual information is created, consumed and communicated. However, due to the unprecedented growth of digital video technologies, competition for bandwidth resources has become fierce, highlighting a critical need for optimising the performance of video encoders. This is a dual optimisation problem: the objective is to reduce buffer and memory requirements while maintaining the quality of the encoded video. Additionally, through the analysis of existing video compression techniques, it was found that the operation of video encoders requires the optimisation of numerous decision parameters to achieve the best trade-offs between factors that affect visual quality, given the resource limitations arising from operational constraints such as memory and complexity. The research in this thesis has focused on optimising the performance of the H.264/AVC video encoder, a process that involved finding solutions for multiple conflicting objectives. As part of this research, an automated tool was developed for optimising video compression to achieve an optimal trade-off between bit rate and visual quality, given maximum allowed memory and computational complexity constraints, within a diverse range of scene environments. The evaluation of this optimisation framework has highlighted the effectiveness of the developed solution.
APA, Harvard, Vancouver, ISO, and other styles
4

Yellapragada, Deepthi V. L. "A SNOMED annotator for UIMA framework." Morgantown, W. Va. : [West Virginia University Libraries], 2007. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=5402.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2007.
Title from document title page. Document formatted into pages; contains v, 47 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 47).
APA, Harvard, Vancouver, ISO, and other styles
5

Herath, H. M. A. C. "Statistical databases within a relational framework." Thesis, Keele University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386218.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chung, Wilson C. "Adaptive subband video coding in a rate-distortion-constrained framework." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/15459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cai, Xiaodong. "Object-based video : integrated segmentation framework and coding quality control." Thesis, University of Sussex, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.444364.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Harrison, Timothy David. "A connectionist framework for continuous speech recognition." Thesis, University of Cambridge, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.253820.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kim, Changick. "A framework for object-based video analysis /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/5823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mei, Liming (james.mei@ieee.org). "A DWT Based Perceptual Video Coding Framework - Concepts, Issues and Techniques." RMIT University. Electrical and Computer Engineering, 2009. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090506.103244.

Full text
Abstract:
The work in this thesis explores DWT-based video coding through the introduction of a novel DWT (Discrete Wavelet Transform) / MC (Motion Compensation) / DPCM (Differential Pulse Code Modulation) video coding framework, which adopts EBCOT as the coding engine for both the intra- and the inter-frame coder. An adaptive switching mechanism between frame/field coding modes is investigated for this framework. The Low-Band-Shift (LBS) is employed for MC in the DWT domain, and LBS-based MC is shown to provide consistent improvement in the Peak Signal-to-Noise Ratio (PSNR) of the coded video over simple Wavelet Tree (WT) based MC. Adaptive Arithmetic Coding (AAC) is adopted to code the motion information, and the context set of the Adaptive Binary Arithmetic Coding (ABAC) for inter-frame data is redesigned based on statistical analysis. To further improve perceived picture quality, a Perceptual Distortion Measure (PDM) based on a human vision model is used in the EBCOT of the intra-frame coder, and a visibility assessment of the quantization error of various subbands in the DWT domain is performed through subjective tests. In summary, these findings resolve the issues arising from the proposed perceptual video coding framework. They include: a working DWT/MC/DPCM video coding framework with superior coding efficiency on sequences with translational or head-and-shoulder motion; an adaptive switching mechanism between frame and field coding modes; an effective LBS-based MC scheme in the DWT domain; a methodology for context design in entropy coding of inter-frame data; a PDM that replaces the MSE inside the EBCOT coding engine for the intra-frame coder, improving the perceived quality of intra-frames; and a visibility assessment of the quantization errors in the DWT domain.
APA, Harvard, Vancouver, ISO, and other styles
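To make the transform at the heart of such DWT-based coders concrete, here is a toy one-level 2-D Haar decomposition in Python. It is a generic illustration of splitting an image into the LL/LH/HL/HH subbands that wavelet coders operate on, not the filter bank used in the thesis.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT of an even-sized image.

    Returns the four subbands (LL, LH, HL, HH) computed from the
    2x2 polyphase components; the /2 normalization keeps the
    transform orthonormal."""
    a = img[0::2, 0::2].astype(float)  # top-left of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    ll = (a + b + c + d) / 2   # low-pass in both directions
    lh = (a - b + c - d) / 2   # horizontal detail
    hl = (a + b - c - d) / 2   # vertical detail
    hh = (a - b - c + d) / 2   # diagonal detail
    return ll, lh, hl, hh
```

On a flat image, all detail subbands vanish and the energy collects in LL, which is the property subband coders like EBCOT exploit.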
11

Sprljan, Nikola. "A flexible scalable video coding framework with adaptive spatio-temporal decompositions." Thesis, Queen Mary, University of London, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434834.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Krishnan, Venkatesh. "A framework for low bit-rate speech coding in noisy environment." Diss., Available online, Georgia Institute of Technology, 2005, 2005. http://etd.gatech.edu/theses/available/etd-04042005-182043/unrestricted/krishnan%5Fvenkatesh%5F200505%5Fphd.pdf.

Full text
Abstract:
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2005.
Anderson, David, Committee Chair ; Barnwell-III, Thomas, Committee Member ; Clements, Mark, Committee Member ; Truong, Kwan, Committee Member ; Basu, Saugata, Committee Member. Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
13

Wong, Georges. "Improved speech hidden Markov modelling via an expectation-maximization framework." Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259544.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Li, Wei. "Reinforcement Learning in Keepaway Framework for RoboCup Simulation League." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-13412.

Full text
Abstract:
This thesis aims to apply reinforcement learning to soccer robots and to show its power for RoboCup. The first part briefly introduces the background of reinforcement learning and previous work on it, and identifies the difficulties in implementing reinforcement learning. The second section presents basic concepts in reinforcement learning, including its three fundamental elements (state, action and reward) and three classical approaches (dynamic programming, Monte Carlo methods and temporal-difference learning). The keepaway framework is then explained in more detail and combined with reinforcement learning. The Sarsa algorithm, with two function approximators (an artificial neural network and tile coding), is implemented successfully in the simulations. The results show that it significantly improves the performance of the soccer robot.
APA, Harvard, Vancouver, ISO, and other styles
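As a rough sketch of the technique named in the abstract above (Sarsa with tile-coding function approximation), the following fragment shows the core on-policy update. The 2-D state space, tiling counts, bounds and learning rate are illustrative assumptions, not values from the thesis.

```python
import numpy as np

class TileCoder:
    """Minimal tile coding: several offset grids over a 2-D state space."""
    def __init__(self, n_tilings=4, bins=8, low=(0.0, 0.0), high=(1.0, 1.0)):
        self.n_tilings, self.bins = n_tilings, bins
        self.low, self.high = np.array(low), np.array(high)

    def features(self, state):
        """Indices of the one active tile in each tiling."""
        scaled = (np.array(state) - self.low) / (self.high - self.low)
        idx = []
        for t in range(self.n_tilings):
            offset = t / (self.n_tilings * self.bins)  # shift each grid slightly
            coords = np.clip(((scaled + offset) * self.bins).astype(int),
                             0, self.bins - 1)
            idx.append(t * self.bins ** 2 + coords[0] * self.bins + coords[1])
        return idx

class SarsaAgent:
    """Linear Sarsa over tile-coded features."""
    def __init__(self, coder, n_actions, alpha=0.1, gamma=0.99):
        self.coder, self.alpha, self.gamma = coder, alpha, gamma
        self.w = np.zeros((n_actions, coder.n_tilings * coder.bins ** 2))

    def q(self, state, action):
        return sum(self.w[action][i] for i in self.coder.features(state))

    def update(self, s, a, r, s2, a2):
        # On-policy TD error: r + gamma * Q(s', a') - Q(s, a)
        td_error = r + self.gamma * self.q(s2, a2) - self.q(s, a)
        for i in self.coder.features(s):
            self.w[a][i] += self.alpha * td_error
```

A single rewarded transition nudges the value of the visited state-action pair upward, which is the mechanism the keepaway learners rely on.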
15

Strand, Mattias. "A Software Framework for Facial Modelling and Tracking." Thesis, Linköping University, Department of Electrical Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54563.

Full text
Abstract:

The WinCandide application, a platform for face tracking and model-based coding, had become out of date and needed to be upgraded. This report is based on the work of investigating open-source GUIs and computer vision toolkits that could replace the old, unsupported ones. Multi-platform GUIs are of special interest.

APA, Harvard, Vancouver, ISO, and other styles
16

Battistini, Ylenia. "Studio e Sviluppo Prototipale di un Framework su Piattaforma Snap! per attività di Apprendimento e Coding in Scuole Primarie e Secondarie." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19065/.

Full text
Abstract:
"Why program? It has become commonplace to label young people digital natives because of their apparent ease in interacting with digital technologies. But does being considered digital natives really make them fluent with new technologies? Even though they interact with digital media all the time, few are truly able to create their own games, animations or simulations. In effect, it is almost as if they could read without knowing how to write. What is missing, then, is the ability to use these technologies to design, create and invent new things, and for that, coding is necessary. [...] Programming greatly expands the range of what you can create with a computer and of what you can learn. In particular, coding supports computational thinking, through which one can learn important problem-solving and design strategies whose benefits reach well beyond the domains of computer science. Finally, since programming involves creating external representations of your problem-solving processes, it offers the opportunity to reflect on your own thinking, and even to reflect on the opinion we hold of ourselves." The computer can become a tool for schooling. Using the computer in teaching implies a change of mental attitude towards it, since it can become an environment that gives children new possibilities to learn, to think, and to improve their ability to approach problems by actively building their own knowledge. "It should be the child who programs the computer, and not the computer that programs the child": these are the words of Seymour Papert, creator of one of the best-known coding languages, LOGO. Papert considers the child as a vessel to be filled and "the computer as clay with which to build a sculpture". The paradox remains: "Why don't we teach children to learn, to think and to play?"
APA, Harvard, Vancouver, ISO, and other styles
17

Mazo, Lucille. "University Educators' Instructional Choices and Their Learning Styles Within a Lesson Framework." ScholarWorks, 2017. https://scholarworks.waldenu.edu/dissertations/3499.

Full text
Abstract:
Research on learning styles often focuses on the learning style of the student; however, the learning style of the educator may affect instructional choices and student learning. Few studies have addressed the lack of knowledge that exists in universities with respect to educators' learning styles and a lesson framework (development, delivery, and debriefing). This sequential mixed methods study explored university educators' conscious, reflective instructional choices as they related to learning styles application within a lesson. Two theoretical frameworks and one conceptual framework drew on Kolb's experiential learning theory; Bloom's, Reigeluth's, and Gagné's instructional design theories and models; and Fiddler and Marienau's events model of learning from experience. Research questions addressed learning styles, usage patterns, instructional choices, and reflections of university educators within a lesson framework. An online inventory recorded 38 university educators' instructional choices, learning styles, and learning styles patterns within the framework of a lesson. Interviews were conducted with 7 of the university educators to document their conscious reflections regarding their instructional choices. Results from the inventory identified that more than 56% of university educators applied the accommodation learning style during the stages of development and delivery of a lesson, and 34% applied the assimilation learning style during the debriefing stage; these findings were supported by detailed reflections about participants' instructional choices in relation to their learning styles. The knowledge acquired about learning styles applications within a lesson framework may benefit university educators' teaching, thereby providing a foundation for positive social change within academic and social communities.
APA, Harvard, Vancouver, ISO, and other styles
18

Lucero, Aldo. "Compressing scientific data with control and minimization of the L-infinity metric under the JPEG 2000 framework." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Kimbung, Stanley Mbandi. "A computational framework for transcriptome assembly and annotation in non-model organisms: the case of venturia inaequalis." Thesis, University of the Western Cape, 2014. http://hdl.handle.net/11394/4022.

Full text
Abstract:
Philosophiae Doctor - PhD
In this dissertation, three computational approaches are presented that enable the optimization of reference-free transcriptome reconstruction. The first addresses the selection of bona fide reconstructed transcribed fragments (transfrags) from de novo transcriptome assemblies and their annotation with a multiple-domain co-occurrence framework. We showed that the selected transfrags are functionally relevant and represented over 94% of the information derived from annotation by transference. The second approach relates to quality-score-based RNA-seq sub-sampling and the description of a novel sequence-similarity-derived metric for quality assessment of de novo transcriptome assemblies. A detailed systematic analysis of the side effects induced by quality-score-based trimming and/or filtering on artefact removal and transcriptome quality is described. Aggressive trimming produced incompletely reconstructed and missing transfrags. This approach was applied to generate an optimal transcriptome assembly for a South African isolate of V. inaequalis. The third approach deals with the computational partitioning of transfrags assembled from RNA-Seq of mixed host and pathogen reads. We used this strategy to correct a publicly available transcriptome assembly for V. inaequalis (Indian isolate), binning 50% of the latter to apple transfrags and identifying putative immunity transcript models. Comparative transcriptomic analysis between fungal transfrags from the Indian and South African isolates reveals effectors or transcripts that may be expressed in planta upon morphogenic differentiation. These studies have successfully identified V. inaequalis-specific transfrags that can facilitate gene discovery. Unique access to an in-house draft genome assembly allowed us to provide a preliminary description of genes implicated in pathogenesis. Gene prediction with bona fide transfrags produced 11,692 protein-coding genes.
We identified two hydrophobin-like genes and six accessory genes of the melanin biosynthetic pathway that are implicated in the invasive action of the appressorium. The cazyome reveals an impressive repertoire of carbohydrate-degrading enzymes and carbohydrate-binding modules, amongst which are six polysaccharide lyases and the largest number of carbohydrate esterases (twenty-eight) known in any fungus sequenced to date.
APA, Harvard, Vancouver, ISO, and other styles
20

Hautala, I. (Ilkka). "From dataflow models to energy efficient application specific processors." Doctoral thesis, Oulun yliopisto, 2019. http://urn.fi/urn:isbn:9789526223681.

Full text
Abstract:
The development of wireless networks has provided the necessary conditions for several new applications. With the emergence of virtual and augmented reality and the Internet of Things, and in the era of social media and streaming services, various functionality and performance demands have been placed on mobile and wearable devices. Meeting these demands is complicated by the minimal energy budgets characteristic of embedded devices. Lately, the energy efficiency of devices has been addressed by increasing parallelism and using application-specific hardware resources. This, in turn, has complicated both hardware and software development, because conventional development methods are based on low-level abstractions and sequential programming paradigms. On the other hand, the adoption of high-level design methods has been slowed by final solutions that are too heavily compromised in energy efficiency and performance. This doctoral thesis introduces a model-driven framework for the development of signal processing systems that facilitates hardware/software co-design. The design flow exploits an easily customizable, re-programmable and energy-efficient processor template, and enables tailoring multiple heterogeneous processing elements, and the connections between them, to the demands of an application. Application software is described using high-level dataflow models, which enable the automatic synthesis of parallel applications for different multicore hardware platforms and speed up design-space exploration. The suitability of the proposed design flow is demonstrated with three applications from different signal processing domains. The experiments showed that raising the level of abstraction has only a minor impact on performance. Video processing algorithms are the main application area of this thesis.
The thesis proposes tailored, reprogrammable, energy-efficient processing elements for video coding algorithms. The solutions are based on the use of multiple processing elements, exploiting the pipeline parallelism of the application, which is characteristic of many signal processing algorithms. Performance, power and area metrics for the designed solutions have been obtained using post-layout simulation models. In terms of energy efficiency, the proposed programmable processors form a new compromise between fixed hardware accelerators and conventional embedded processors for video coding.
APA, Harvard, Vancouver, ISO, and other styles
21

Ali, Khan Syed Irteza. "Classification using residual vector quantization." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50300.

Full text
Abstract:
Residual vector quantization (RVQ) is a 1-nearest-neighbor (1-NN) type of technique: a multi-stage implementation of regular vector quantization in which an input is successively quantized to the nearest codevector in each stage's codebook. In classification, nearest-neighbor techniques are very attractive because they model the ideal Bayes class boundaries very accurately. However, nearest-neighbor classification requires a large, representative dataset: since a test input is assigned a class membership only after an exhaustive search of the entire training set, even a reasonably large training set can make a nearest-neighbor classifier infeasibly costly to implement. Although the k-d tree structure offers a far more efficient 1-NN search, the cost of storing the data points can become prohibitive, especially in higher dimensionality. RVQ offers a cost-effective implementation of 1-NN-based classification: because of the direct-sum structure of the RVQ codebook, the memory and computational cost of a 1-NN-based system is greatly reduced. Although, compared to an equivalent 1-NN system, the multi-stage implementation of the RVQ codebook compromises the accuracy of the class boundaries, the classification error has been empirically shown to be within 3% to 4% of the performance of an equivalent 1-NN-based classifier.
APA, Harvard, Vancouver, ISO, and other styles
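The multi-stage quantization described in the abstract above can be sketched as follows. The codebooks here are random for illustration (a real system would train them); only the stage-wise 1-NN search and direct-sum reconstruction mirror the description.

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual VQ: quantize stage by stage; each stage's codebook
    encodes the residual left over by the previous stage."""
    residual = x.astype(float).copy()
    indices = []
    for cb in codebooks:
        # 1-NN search within this stage's codebook
        dists = np.linalg.norm(cb - residual, axis=1)
        j = int(np.argmin(dists))
        indices.append(j)
        residual = residual - cb[j]
    return indices

def rvq_decode(indices, codebooks):
    """Direct-sum reconstruction: add the chosen codevector per stage."""
    return sum(cb[j] for cb, j in zip(codebooks, indices))

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(16, 4)) for _ in range(3)]  # 3 stages of 16 codevectors
x = rng.normal(size=4)
idx = rvq_encode(x, codebooks)        # one index per stage
x_hat = rvq_decode(idx, codebooks)    # direct-sum approximation of x
```

Note the memory saving the abstract refers to: three codebooks of 16 vectors represent 16^3 = 4096 direct-sum codevectors.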
22

Li, Yihui. "Do All Asian Americans Feel Alike? Exploring Asian American College Students' Sense of Belonging on Campuses." Bowling Green State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1542046823067658.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Sandberg, Emil. "Creative Coding on the Web in p5.js : A Library Where JavaScript Meets Processing." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17941.

Full text
Abstract:
Creative coding is the practice of writing code primarily for an expressive purpose rather than a functional one. It is mostly used in creative arts contexts. One of the most popular tools in creative coding is Processing. Processing is a desktop application and in recent years a web-based alternative named p5.js has been developed. This thesis investigates the p5.js JavaScript library. It looks at what can be accomplished with it and in which cases it might be used. The main focus is on the pros and cons of using p5.js for web graphics. Another point of focus is on how the web can be used as a creative platform with tools like p5.js. The goals are to provide an overview of p5.js and an evaluation of the p5.js library as a tool for creating interactive graphics and animations on the web. The research focuses on comparing p5.js with plain JavaScript from usability and performance perspectives and making general comparisons with other web-based frameworks for creative coding. The methods are a survey and interviews with members of creative coding communities, as well as performing coding experiments in p5.js and plain JavaScript and comparing the results and the process. The results from the coding experiments show that compared to plain JavaScript p5.js is easier to get started with, it is more intuitive, and code created in p5.js is easier to read. On the other hand, p5.js performs worse, especially when continuously drawing large amounts of elements to the screen. This is further supported by the survey and the interviews, which show that p5.js is liked for its usability, but that its performance issues and lack of advanced features mean that it is usually not considered for professional projects. The primary use case for p5.js is creating quick, visual prototypes. At the same time, the interviews show that p5.js has been used in a variety of contexts, both creative and practical. 
p5.js is a good library for getting started with coding creatively in the browser and is an excellent choice for experimenting and creating prototypes quickly. Should project requirements be much more advanced than that, there might be other options that will work better.
APA, Harvard, Vancouver, ISO, and other styles
24

"A unified framework for linear network coding." 2008. http://library.cuhk.edu.hk/record=b5893688.

Full text
Abstract:
Tan, Min.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2008.
Includes bibliographical references (p. 35-36).
Abstracts in English and Chinese.
Contents: Abstract; Acknowledgement; Chapter 1, Introduction (1.1 Previous Work; 1.2 Motivation; 1.3 Contributions; 1.4 Thesis Organization); Chapter 2, Linear Network Coding Basics (2.1 Formulation and Example; 2.2 Some Notations); Chapter 3, A Unified Framework (3.1 Generic Network Codes Revisited; 3.2 A Unified Framework; 3.3 Simplified Proofs); Chapter 4, Conclusion; Bibliography.
APA, Harvard, Vancouver, ISO, and other styles
25

蕭哲民. "SoC architecture for MPEG Reconfigurable Video Coding Framework." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/31948018021751248763.

Full text
Abstract:
Master's thesis. National Chiao Tung University, Institute of Computer Science and Engineering, 2007.
Due to the variety of popular video coding standards, much effort has been put into the design of a single video decoder chip that supports multiple formats. In 2004, ISO/IEC MPEG started a new work item to facilitate multi-format video codec design and to enable more flexible usage of coding tools; this work item has become the MPEG Reconfigurable Video Coding (RVC) framework. The key concept of the RVC framework is to allow flexible reconfiguration of coding tools to create different codec solutions on the fly. In this thesis, a flexible SoC architecture is proposed to support the RVC framework, and analysis is conducted to show the extra costs required for this platform compared to hard-wired codec architectures. In conclusion, the RVC framework can be mapped to an SoC platform to provide flexibility and scalability for dynamic application environments at reasonable hardware design cost.
APA, Harvard, Vancouver, ISO, and other styles
26

HUANG, Long-wang, and 黃龍旺. "A Constant Quality Coding Framework for H.264/AVC." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/06896136144104909169.

Full text
Abstract:
Master's thesis. National Central University, Institute of Computer Science and Information Engineering, 2012.
Quality control is important in video coding: it dynamically adjusts encoder parameters to achieve a target distortion. In this thesis, we propose a quality control framework for constant quality coding in H.264/AVC. The proposed scheme assigns a suitable Quantization Parameter (QP) to each frame based on scene complexity. For intra-coded frames, we evaluate scene complexity from quality measurements of resized and singular-value-decomposition-processed frames; with the proposed model, we can adjust the QP to achieve the target distortion. Our proposed framework can use different quality measurements, such as Peak Signal-to-Noise Ratio and Structural Similarity. For inter-coded frames, we employ additional temporal information from simple motion estimation to improve prediction accuracy. We also propose a dynamic encoding mechanism for model adjustment: when the content has large variations, we may encode a frame twice; otherwise, we encode it only once. In addition, the effect of scene changes on the model update is considered, to reduce the quality deviation from the target. Experimental results show that our scheme performs well on various test videos.
APA, Harvard, Vancouver, ISO, and other styles
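The idea in the abstract above of steering QP toward a target quality can be illustrated with a generic feedback rule. This proportional controller, its gain of roughly 2 QP steps per dB, and the H.264/AVC QP range of 0-51 used for clamping are illustrative assumptions; the thesis derives QP from a scene-complexity model rather than pure feedback.

```python
def next_qp(prev_qp, measured_psnr, target_psnr, gain=2.0, qp_min=0, qp_max=51):
    """Proportional quality feedback (a generic sketch): if the last
    frame came out better than the target PSNR, raise QP to compress
    harder; if worse, lower QP. The result is clamped to the
    H.264/AVC QP range [0, 51]."""
    qp = prev_qp + gain * (measured_psnr - target_psnr)
    return int(min(qp_max, max(qp_min, round(qp))))
```

For example, a frame measuring 2 dB above a 38 dB target moves QP up by about 4 steps, trading the surplus quality for bit rate.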
27

Sun, Chen-Hsaing, and 孫振翔. "The Improved Distributed Video Codec under Multiple Description Coding Framework." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/71259798298453459481.

Full text
Abstract:
Master's thesis. National Taiwan University of Science and Technology, Department of Electrical Engineering, 2008.
Multiple Description Coding (MDC) is an effective solution for multimedia transmission over congested Internet connections and unstable wireless networks. MDC ensures the stability and reliability of multimedia communication through multiple transmission paths. We propose an improved MDC architecture that: (1) decomposes the original video into two sub-videos, each encoded with the MC-EZBC coder; (2) exploits the correlations between the sub-videos and encodes them with a Distributed Video Coder (DVC); (3) applies a predictive coding framework between the sub-videos and the DVC bit stream, exploiting inter-video correlations to enhance PSNR at the same bit rate; and (4) combines the encoded data of the two parts above to generate one description. The two generated descriptions are transmitted independently and protected by a Forward Error Correction (FEC) algorithm. The PSNR of the reconstructed video is improved by 0.5 to 1 dB compared to previous work. Experiments also show that effectively integrating residual DVC coding with MDC largely enhances stability and quality.
APA, Harvard, Vancouver, ISO, and other styles
28

Lai, Hung-Liang, and 賴宏亮. "Adaptive Leaky Prediction Technique under Multiple Description Video Coding Framework." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/22339505678368670002.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Electrical Engineering
93
In recent years, with the development of the internet, video streaming across packet-lossy networks has received much attention, but packet loss remains difficult to handle. Since video coding uses motion-compensated prediction, errors propagate through decoding whenever packets of previous frames are lost. The well-known approach to limiting error propagation is leaky prediction; however, finding an adaptive optimal leaky factor remains a challenging task. For this task, we propose a new way to choose the leaky factor under Base MDSQ. Generally speaking, the optimal leaky factor depends on the packet loss rate, but natural properties of the video, such as its complexity or amount of movement, also affect the choice. Moreover, the whole coder framework must be considered, because features such as multiple description coding or error concealment help reconstruct lost frames. We therefore propose a method that selects the optimal leaky factor based on the loss rate, the natural properties of the video, and the overall coder framework. Simulation results show that the proposed algorithm outperforms other choices of leaky factor and performs well on videos with different properties.
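The core effect of leaky prediction that this work builds on can be illustrated numerically: with a leaky factor below one, a decoder-side error injected by a lost packet is attenuated at every predicted frame instead of persisting indefinitely. A minimal sketch, with illustrative numbers not taken from the thesis:

```python
def drift(e0, alpha, frames):
    """Decoder drift after a loss: each motion-compensated frame
    carries over only alpha times the previous frame's error."""
    errors = [e0]
    for _ in range(frames):
        errors.append(alpha * errors[-1])
    return errors

persistent = drift(1.0, 1.0, 10)  # conventional prediction: error never decays
leaky = drift(1.0, 0.9, 10)       # leaky factor 0.9: error shrinks geometrically
print(persistent[-1], leaky[-1])
```

The trade-off the thesis optimizes follows directly: a smaller leaky factor kills drift faster but weakens the prediction, so the best factor depends on the loss rate and the video content.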
APA, Harvard, Vancouver, ISO, and other styles
29

Li, Chih Han, and 李致翰. "A Framework for EEG Compression with Compressive Sensing and Huffman Coding." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/28431597266494445081.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Electrical Engineering
103
Compressive sensing (CS) is an emerging data compression technique. In this thesis, it is used to compress electroencephalogram (EEG) signals. CS rests on two principles: sparsity and incoherence. However, EEG signals are not sparse enough, so CS can recover compressed EEG signals only at low compression ratios; at high compression ratios, recovery fails. The compression ratios at which EEG can be reconstructed with high quality are not high enough to make the system energy-efficient, which renders the compression pointless. We therefore seek a solution that makes CS practical for compressing EEG signals at high compression ratios. From a survey of the literature, approaches to improving CS performance fall into three classes: first, design a stronger reconstruction algorithm; second, find a dictionary in whose transform domain the EEG signals have a sparse representation; third, combine CS with other compression techniques. Here we take the first and third approaches. First, we propose a modified iterative pseudo-inverse multiplication (MIPIM) with complexity O(KMN), where M is the dimension of the measurements, N the dimension of the signal, and K the sparsity level. This complexity is lower than that of most existing algorithms. Next, we extend MIPIM into a multiple-measurement-vector (MMV) algorithm, called simultaneous MIPIM (SMIPIM), which recovers all channel signals at once and exploits inter-channel correlation to increase performance. SMIPIM reduces the normalized mean square error (NMSE) by 0.06 compared with classical CS algorithms.
For combining CS with other compression techniques, we adopt an existing framework that uses information from the server or receiver node to combine CS and Huffman coding efficiently. The framework was proposed to increase compression for EEG telemedicine, but we found a shortcoming: the algorithm that produces this information takes a long time to run, making instant telemedicine unavailable because the sensors cannot transmit data until the information is received. We therefore propose a replacement algorithm that lowers the complexity from O(L^5) to O(L^2), where L is the number of channels; in our experiments it is 10^5 times faster than the existing one. Finally, we simulated the entire system, using our proposed algorithm to compute the inter-channel correlation information and SMIPIM for reconstruction. At a compression ratio of 3:1, the NMSE is 0.0672, against 0.1554 for the original CS framework with Block Sparse Bayesian Learning Bound Optimization (BSBLBO). On the other hand, at the minimum acceptable NMSE for EEG signals, 0.09, we achieve a compression ratio of 0.31. Using this compression ratio, we estimate how many channels can be transmitted within a fixed bandwidth: the number of channels can increase by 16 with Bluetooth 2.0 and by 35 with ZigBee for wireless transmission.
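The Huffman half of such a CS-plus-Huffman pipeline can be sketched with a minimal, stdlib-only codec for quantized measurement symbols. This is an illustrative sketch, not the thesis's implementation; the sample data are made up.

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code (symbol -> bitstring) from symbol frequencies."""
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-symbol case
        return {next(iter(heap[0][2])): "0"}
    i = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], i, merged])
        i += 1
    return heap[0][2]

def encode(data, code):
    return "".join(code[s] for s in data)

def decode(bits, code):
    inv, out, cur = {v: k for k, v in code.items()}, [], ""
    for b in bits:  # the code is prefix-free, so greedy matching works
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return out

data = [0, 0, 0, 1, 1, 2]   # e.g. quantized CS measurements (hypothetical)
code = huffman_code(data)
bits = encode(data, code)
print(len(bits))            # fewer bits than 2-bit fixed-length coding
```

In the framework the thesis adopts, the code tables would be driven by the inter-channel correlation information computed at the server, which is exactly the part whose computation the proposed O(L^2) algorithm accelerates.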
APA, Harvard, Vancouver, ISO, and other styles
30

Liu, Jen-Chang, and 劉震昌. "The Stereo Audio Coding in the Framework of MPEG1 Layer I, II." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/16573177512286166460.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Computer Science and Information Engineering
84
The purpose of stereo audio coding is to reduce the required bit rate while maintaining signal quality after decoding. ISO MPEG1 is the most widely used audio compression standard in commercial applications, and among its modes the layer I and II coding processes are the most widely adopted. MPEG1 layer II can achieve transparent audio quality above 2x128 kbits/s by coding the left and right channels independently. With a joint stereo coding technique, such as MPEG1 intensity stereo coding, the decoded audio quality can be improved at bit rates below 2x128 kbits/s. In this thesis, we analyze the data redundancy of stereo audio signals, applying the Karhunen-Loeve (KL) transform and inter-channel prediction to exploit and analyze that redundancy within the framework of MPEG1 layers I and II. Using the KL transform, we propose two modified intensity stereo coding algorithms for MPEG1 layers I and II that further improve decoded stereo quality at bit rates below 2x128 kbits/s; subjective and objective measurements show that both algorithms deliver better stereo audio quality than the original MPEG1 method. For inter-channel prediction, we study the coding gains under various parameters such as prediction order, prediction delay, time-varying behaviour, and the required side information. The experimental results suggest applying inter-channel prediction in the low-frequency bands and transmitting the prediction coefficients once per longer frame to avoid side-information overhead.
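The 2x2 Karhunen-Loeve rotation underlying this kind of stereo redundancy analysis can be sketched as follows. The sketch is illustrative only: the sample values are made up, and a real coder would apply the transform per subband rather than to raw samples.

```python
import math

def kl_rotate(left, right):
    """Decorrelate a stereo pair with the 2x2 Karhunen-Loeve transform:
    rotate (L, R) onto the eigenvectors of their covariance matrix."""
    n = len(left)
    ml, mr = sum(left) / n, sum(right) / n
    cll = sum((l - ml) ** 2 for l in left) / n
    crr = sum((r - mr) ** 2 for r in right) / n
    clr = sum((l - ml) * (r - mr) for l, r in zip(left, right)) / n
    theta = 0.5 * math.atan2(2 * clr, cll - crr)  # principal-axis angle
    c, s = math.cos(theta), math.sin(theta)
    u = [c * l + s * r for l, r in zip(left, right)]
    v = [-s * l + c * r for l, r in zip(left, right)]
    return u, v

# Strongly correlated toy channels: most energy lands in one KL channel,
# so the other can be coded with very few bits.
L = [0.1, 0.4, -0.2, 0.5, -0.3, 0.2]
R = [0.12, 0.38, -0.22, 0.52, -0.28, 0.21]
u, v = kl_rotate(L, R)
print(sum(x * x for x in u), sum(x * x for x in v))
```

For nearly identical channels the rotation approaches 45 degrees, i.e. the familiar mid/side decomposition, which is why KL-based intensity coding pays off below 2x128 kbits/s.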
APA, Harvard, Vancouver, ISO, and other styles
31

Juang, Shyh-Yan, and 莊士賢. "The audio coding in the framework of AC-3:transform, coupling, and dithering." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/56495932938850908421.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Computer Science and Information Engineering
85
The purpose of audio coding is to reduce the required bit rate while maintaining signal quality after decoding. AC-3 is a widely used audio compression standard in commercial applications. This thesis considers audio coding within the AC-3 framework through three design issues: the time-domain aliasing cancellation (TDAC) transform, coupling, and dithering. AC-3 is a transform coder, and the TDAC transforms the signal from the time domain into the frequency domain. Because the computational complexity of the TDAC is high, developing fast algorithms for it is a major concern in real-time applications. The first part of this thesis develops such a fast algorithm in two steps. The first step converts the six formulae of the three forward-inverse transforms used in the TDAC into a unified formula, a discrete cosine transform, through data permutation of the input and output. For that discrete cosine transform, the second step develops a fast algorithm which, besides achieving the low complexity of traditional fast algorithms, takes special account of the stack size limitation of x86 CPUs. The second issue is the coupling strategy in AC-3. Nowadays, a CD-quality audio sequence can be compressed to 2x128 kbits/s by coding the left and right channels independently. Coupling strategies open a design space that exploits knowledge of human hearing, namely its low sensitivity to stereo imaging in high-frequency components. With coupling techniques, compressed audio quality can be improved at bit rates below 2x128 kbits/s. This thesis presents five algorithms applicable to coupling in AC-3 and compares them through theoretical analysis and objective and subjective quality measures.
Three of the five algorithms are newly developed, all based on the KL transform. Subjective and objective measurements show that the KL-transform-based coupling strategy provides better stereo audio quality than the others. The third issue is dithering. Dithering can avoid undesired quantization noise and yields better stereo signals. This thesis confirms both benefits through experiments and provides strategies for applying dithering within the AC-3 framework.
APA, Harvard, Vancouver, ISO, and other styles
32

Pai, Srikanth B. "Classical Binary Codes And Subspace Codes in a Lattice Framework." Thesis, 2015. http://etd.iisc.ernet.in/handle/2005/2708.

Full text
Abstract:
Classical binary error-correcting codes and subspace codes for error correction in random network coding are two different forms of error control coding. We identify common features between these two forms and study the relations between them with the aid of lattices. Lattices are partially ordered sets in which every pair of elements has a least upper bound and a greatest lower bound. We demonstrate that many questions connecting the two forms have a natural motivation from the lattice viewpoint, and that a lattice framework captures the notion of the Singleton bound, which bounds the size of a code as a function of its parameters. For the most part, we consider a special type of lattice with the geometric modular property. We use the lattice framework to combine the two forms, and then, to demonstrate the utility of this unifying view, derive a general version of the Singleton bound. The Singleton bounds behave differently in certain respects because the binary coding framework is associated with a lattice that is distributive; we demonstrate that the lack of distributivity gives rise to a weaker bound. We show that the Singleton bounds for classical binary codes, subspace codes, rank metric codes and Ferrers diagram rank metric codes can all be derived by a common technique. In the literature, Singleton bounds are derived for Ferrers diagram rank metric codes in the linear case; we introduce a generalized version of Ferrers diagram rank metric codes and obtain a Singleton bound for this version. Next, we prove a conjecture by Braun, Etzion and Vardy concerning the constraints on embedding a binary coding framework into a subspace framework, which states that any such embedding whose range contains the full space is constrained to have a particular size.
Our proof uses a theorem due to Lovasz, a subspace counting theorem for geometric modular lattices. We further demonstrate that any code achieving the conjectured size must be of a particular type, which turns out to be a natural distributive sub-lattice of a given geometric modular lattice.
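For context, the classical bound that the lattice framework generalizes can be stated in its standard textbook form (this is the standard statement, not the thesis's generalized lattice version):

```latex
\text{For a (not necessarily linear) } q\text{-ary code }
C \subseteq \mathbb{F}_q^{\,n}
\text{ with minimum Hamming distance } d:\qquad
|C| \le q^{\,n-d+1}.
```

The subspace and rank-metric analogues replace Hamming weight with subspace distance or rank, which is precisely the kind of substitution the lattice-theoretic derivation makes uniform.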
APA, Harvard, Vancouver, ISO, and other styles
33

Liang, Chia-Ming, and 梁家銘. "Applying Lattice Vector Quantization to Audio Coding in the Framework of MPEG Layer I." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/52678765025140122477.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Computer Science and Information Engineering
84
The lattice vector quantizer is a uniform quantizer, and its quantization error can be conformed to the masking threshold computed from psychoacoustic model analysis. In this thesis, we apply lattice vector quantization (VQ) to audio coding and investigate its potential for very-low-bit-rate coding. The encoding structure of MPEG layer I is chosen as the backbone of the proposed coder, because MPEG is an international audio coding standard and is therefore a persuasive benchmark. We demonstrate the procedure for designing an optimal lattice VQ based on a root lattice in a given dimension. The theoretical benefit of lattice VQ over scalar quantization is analyzed and verified through experiments. In addition to verifying the geometric compactness of lattice VQ, we adjust the quantizer to take advantage of the nonuniform distribution of the normalized input vectors. The experimental results show that at 32 kbps we obtain an average gain of 1.36 dB from the geometric compactness of lattice VQ and an additional 0.18 dB from the nonuniform distribution of normalized input vectors. It is therefore convincing that lattice VQ is applicable to very-low-bit-rate audio coding.
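The nearest-neighbour search that makes root-lattice quantizers practical is simple. As an example, for the D4 root lattice (integer 4-vectors with even coordinate sum) the standard Conway-Sloane rule can be sketched as follows; this is an illustration of root-lattice quantization in general, not necessarily the lattice or dimension chosen in the thesis.

```python
def quantize_d4(x):
    """Nearest point of the D4 lattice (integer points with even sum):
    round every coordinate; if the sum comes out odd, re-round the
    coordinate with the largest rounding error in the other direction."""
    r = [round(v) for v in x]
    if sum(r) % 2 != 0:
        err = [v - q for v, q in zip(x, r)]
        i = max(range(len(x)), key=lambda k: abs(err[k]))
        r[i] += 1 if err[i] > 0 else -1
    return r

p = quantize_d4([0.6, 0.4, -0.1, 0.2])
print(p)  # a D4 point: every output has an even coordinate sum
```

Because the search is a handful of roundings rather than a codebook lookup, lattice VQ keeps the uniform-quantizer cost profile that the thesis exploits inside the MPEG layer I structure.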
APA, Harvard, Vancouver, ISO, and other styles
34

Liang, Jia-Ming, and 梁家銘. "Applying Lattice Vector Quantization to Audio Coding in the Framework of MPEG Layer I." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/08122722642763898422.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Lee, Lien-Yu, and 李聯育. "Low complexity Subband/Wavelet Framework for Scalable Video Coding Based on the H.264/AVC." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/52375342680726556036.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Electrical Engineering (Master's and Doctoral Program)
97
Subband/wavelet coding (SBC) was adopted for the H.264/SVC standard. The SBC scheme uses the 9/7 wavelet transform on top of the DCT-based H.264/SVC, but the 9/7 DWT demands heavy computation and increases complexity. We propose a new SBC structure with low complexity that is easy to implement in hardware. The compression efficiency of the new SBC is higher than that of JSVM and lower than that of the original SBC, though the two SBC variants perform similarly.
APA, Harvard, Vancouver, ISO, and other styles
36

Rapaka, Krishnakanth. "A Novel Multi-Symbol Curve Fit based CABAC Framework for Hybrid Video Codec's with Improved Coding Efficiency and Throughput." Thesis, 2012. http://hdl.handle.net/10012/7032.

Full text
Abstract:
Video compression is an essential component of present-day applications and a decisive factor between the success and failure of a business model. There is an ever-increasing demand to transmit a larger number of superior-quality video channels within the available transmission bandwidth. Consumers are increasingly discerning about the quality and performance of video-based products, so companies have a strong incentive to improve video coding technology continuously to keep a market edge over their competitors. Even though processor speeds and network bandwidths continue to increase, better video compression still results in a more competitive product. This drive to improve video compression technology has led to a revolution in the last decade. This thesis addresses some of these data compression problems in practical multimedia systems that employ hybrid video coding schemes. Real-life video signals typically show non-stationary statistical behavior, and their statistics depend largely on the video content and the acquisition process. Hybrid video coding schemes like H.264/AVC exploit some of these non-stationary characteristics, but certainly not all of them; moreover, higher-order statistical dependencies at the syntax-element level are mostly neglected in existing video coding schemes. Designing a video coding scheme around these typically observed statistical properties therefore offers room for significant improvements in coding efficiency. In this thesis, a new frequency-domain curve-fitting compression framework is proposed as an extension to the H.264 Context Adaptive Binary Arithmetic Coder (CABAC) that achieves better compression efficiency at reduced complexity. The proposed curve-fitting extension to H.264 CABAC, henceforth called CF-CABAC, is designed modularly to fit conveniently into existing block-based H.264 hybrid video entropy coding algorithms.
Traditionally, many proposals in the literature have fused surface/curve fitting with block-based, region-based, or training-based (VQ, fractal) compression algorithms, primarily to exploit pixel-domain redundancies. Though their compression efficiency is expectedly better than that of DCT-transform-based compression, their main drawback is the high computational demand, which makes them non-competitive for real-time applications. The curve fitting techniques proposed so far have operated in the pixel domain, where video characteristics are highly non-stationary, making curve fitting inefficient in terms of video quality, compression ratio and complexity. In this thesis, we instead apply curve fitting to quantized frequency-domain coefficients and fuse this technique into H.264 CABAC entropy coding. Based on some predictable characteristics of quantized DCT coefficients, a computationally inexpensive curve fitting technique is explored that fits into the existing H.264 CABAC framework. Furthermore, due to the lossy nature of video compression and the strong demand for bandwidth and computational resources in a multimedia system, a key design issue for video coding is optimizing the trade-off among quality (distortion), compression (rate) and complexity. This thesis therefore also briefly studies the existing rate-distortion (RD) optimization approaches proposed for exploring the best RD performance of a video codec, and proposes a graph-based algorithm for rate-distortion optimization of quantized coefficient indices for the proposed CF-CABAC entropy coding.
APA, Harvard, Vancouver, ISO, and other styles
37

(8187867), Abubakr O. Alabbasi. "A QUANTITATIVE FRAMEWORK FOR CDN-BASED OVER-THE-TOP VIDEO STREAMING SYSTEMS." Thesis, 2020.

Find full text
Abstract:
The demand for global video has been burgeoning across industries. With the expansion and improvement of video-streaming services, cloud-based video is evolving into a necessary feature of any successful business for reaching internal and external audiences. Over-the-top (OTT) video streaming, e.g., Netflix and YouTube, has dominated global IP traffic in recent years, and more than 50% of OTT video traffic is now delivered through content distribution networks (CDNs). Even though multiple solutions have been proposed for relieving congestion in CDN systems, managing the ever-increasing traffic requires a fundamental understanding of the system and of the different design flexibilities (control knobs) needed to make the best use of the hardware limitations. In addition, there is no analytical understanding of the key quality of experience (QoE) attributes (stall duration, average quality, etc.) for video streaming over a CDN-based multi-tier infrastructure, which is the focus of this thesis. The key contribution of this thesis is a white-box analytical understanding of the key end-user QoE attributes in cloud storage systems, which can be used to systematically address choppy user experience and to produce optimized system designs. The first key design choice is the scheduling strategy, which selects the subset of CDN servers from which to obtain the content. The second is the quality of each video chunk. The third is deciding which contents to cache at the edge routers and which to store at the CDN. Towards solving these challenges, this dissertation is divided into three parts. Part 1 considers video streaming over distributed systems where the video segments are encoded with an erasure code for better reliability. Part 2 addresses optimizing the trade-off between quality and stalling of the streamed videos.
In Part 3, we consider caching partial contents of the videos at the CDN as well as at the edge routers to further optimize video streaming services. We present a model of a representative present-day multi-tier system architecture for video streaming applications, typically composed of a centralized origin server, several CDN sites and edge caches. Our model comprehensively considers the following factors: limited caching space at the CDN sites and edge routers, allocation of a CDN for a video request, choice of different ports from the CDN, and central storage and bandwidth allocation. With this model, we optimize different quality of experience (QoE) measures and present novel yet efficient algorithms to solve the formulated optimization problems. Our extensive simulation results demonstrate that the proposed algorithms significantly outperform state-of-the-art strategies. We go one step further and implement a small-scale video streaming system in a real cloud environment, managed by OpenStack, and validate our results.
APA, Harvard, Vancouver, ISO, and other styles
38

Zhang, Y. "Investigating collaboration in art and technology." Thesis, 2008. http://hdl.handle.net/10453/37565.

Full text
Abstract:
University of Technology, Sydney. Faculty of Information Technology.
With the rapid development of computer technology in recent years, the arrival of digital media and computational tools has opened up new possibilities for creative practice in art, where collaboration between digital art practitioners and computer technologists often happens. The study of interdisciplinary collaboration in art and technology offers great opportunities for investigating creativity and the role of new technology. This thesis presents an investigation into interdisciplinary collaboration between artists and technologists based on a series of case studies selected from actual art-technology projects. Two analysis techniques were used in this research: context analysis, which provides the breadth of the analysis, and protocol analysis, which provides its depth. During the analysis process, two coding schemes, the context analysis coding scheme and the protocol analysis coding scheme, were developed, evaluated and refined over a series of case studies. Using these coding schemes, the results of the analysis drawn from different cases are compared and the implications discussed. The findings provide insights into art-technology collaboration in the creative process, in particular the features of communication and the role of mediation tools. The outcomes of this thesis are: • The analysis framework, consisting of the context analysis coding scheme and the protocol analysis coding scheme, which has been developed and applied to a series of case studies and tested for effectiveness and reliability. • The findings, which, with the assistance of the analysis framework, provide a better understanding of the nature of the interaction between artists and technologists during a creative process.
This includes: o How communication behaviour is distributed between artists and technologists; o The role of computer tools during the creative process, and how these tools can affect artists' and technologists' communication behaviour; o How the collaborative creative process is facilitated by external mediation tools, such as computers, interactive artefacts and physical objects. There are two main contributions of the thesis: first, the analysis framework can serve as a powerful and robust analysis tool for future research in the field of art-technology collaboration and other related domains; second, the findings provide a better understanding of the collaborative process, in particular how mediation tools support creative practice between artists and technologists.
APA, Harvard, Vancouver, ISO, and other styles