Dissertations / Theses on the topic 'Coding'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Coding.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Zafar, Bilal. "Network Coding Employing Product Coding at Relay Stations." Thesis, KTH, Kommunikationssystem, CoS, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-48942.

Full text
Abstract:
Network coding is a useful tool to increase the multicast capacity of networks. The traditional approach to network coding, involving the XOR operation, has several limitations, such as low robustness, and it can support only two users/packets at a time per relay in the mixing process if optimal error performance is to be achieved. We propose the employment of product coding at the relay station instead of XOR and investigate such a system, where we use the relay to generate product codes by combining packets from different users. Our scheme uses relays to transmit only the redundancy of the product code instead of the whole product code. We seek to employ product coding so that more than two users/packets can be supported per relay per slot, while maintaining good error performance. Our scheme can accommodate as many users per relay as the constituent block code allows, thus reducing the number of relays required in the network. Product codes also offer increased robustness and flexibility as well as several other advantages, such as a structure suited to burst error correction without extra interleaving. We compare the performance of such a scheme to the conventional XOR scheme and see that our scheme not only reduces the number of relays required but also gives improved error performance. Another encouraging result is that our scheme starts to significantly outperform the conventional one once a gain is introduced at the relay.
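To illustrate the conventional two-user relay operation referred to above (a hypothetical sketch of plain XOR network coding, not the product-coding scheme proposed in the thesis): the relay broadcasts the XOR of the two users' packets, and each user recovers the other's packet by XOR-ing the broadcast with its own packet.

def xor_packets(a: bytes, b: bytes) -> bytes:
    # bitwise XOR of two equal-length packets
    return bytes(x ^ y for x, y in zip(a, b))

packet_user1 = b"\x12\x34\x56"
packet_user2 = b"\xab\xcd\xef"

relay_broadcast = xor_packets(packet_user1, packet_user2)   # sent once to both users

# User 1 already knows its own packet, so it can recover user 2's (and vice versa):
recovered = xor_packets(relay_broadcast, packet_user1)
assert recovered == packet_user2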
APA, Harvard, Vancouver, ISO, and other styles
2

Lehman, April Rasala 1977. "Network coding." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/30162.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 115-118).
In the network coding problem, there are k commodities, each with an associated message M_i, a set of sources that know M_i, and a set of sinks that request M_i. Each edge in the graph may transmit any function of the messages. These functions define a network coding solution. We explore three topics related to network coding. First, for a model in which the messages and the symbols transmitted on edges are all from the same alphabet Σ, we prove lower bounds on |Σ|. In one case, we prove |Σ| needs to be doubly-exponential in the size of the network. We also show that it is NP-hard to determine the smallest alphabet size admitting a solution. We then explore the types of functions that admit solutions. In a linear solution over a finite field F, the symbol transmitted over each edge is a linear combination of the messages. We show that determining if there exists a linear solution is NP-hard for many classes of network coding problems. As a corollary, we obtain a solvable instance of the network coding problem that does not admit a linear solution over any field F. We then define a model of network coding in which messages are chosen from one alphabet, Γ, and edges transmit symbols from another alphabet, Σ. In this model, we define the rate of a solution as log|Γ| / log|Σ|. We then explore techniques to upper bound the maximum achievable rate for instances defined on directed and undirected graphs. We present a network coding instance in an undirected graph in which the maximum achievable rate is strictly smaller than the sparsity of the graph.
by April Rasala Lehman.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
3

Mercier, Rachel Havens. "Coding AuthentiCity." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44334.

Full text
Abstract:
Thesis (M.C.P.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 2008.
Includes bibliographical references (p. 123-125).
This thesis analyzes the impact of form-based codes, focusing on two research questions: (1) What is the underlying motivation for adopting a form-based code? (2) Which motivations have the most significant impact on development outcomes? This thesis answers these two questions through an evaluation of the form-based code literature and an analysis of three recent form-based code case studies: the SmartCode for Taos, New Mexico, the Downtown Specific Plan for Ventura, and the SmartCode for Leander, Texas. For each case study, this thesis reviews the historical context of the community, the political process that brought about the form-based code, and the components of the coding document. After considering all three case studies, this thesis formulates conclusions about the range of motivations underlying the use of form-based codes as well as which motivations have the most significant impact on how form-based codes will shape the built environment. Form-based coding is a relatively new regulatory tool, and was recently standardized through the creation of the Form-Based Codes Institute (FBCI) in 2005. Using the FBCI's criteria for a form-based code, this thesis evaluates the components of each case study's coding document. Insight into each coding document is supplemented by personal interviews, site visits and background materials that paint a holistic picture of what each community is striving to achieve through a form-based code. The range of motivations for a form-based code reached in the conclusion of this thesis includes: 1. Preservation of Community Character 2. Creation of Community Character 3. Economic Development 4. Affordable Housing 5. Control of Sprawl.
This list does not represent a complete range of motivations for all form-based codes, but rather the motivations uncovered from the cases reviewed in this thesis. Based on these motivations, the author concludes that Preservation of Community Character has the most significant impact on the built environment. This conclusion is based on city form theory, which suggests that history provides security through the built form and is thus significant to the psychological and physical nourishment of a city's inhabitants. This psychological stability is more powerful than any other motivation and will have a lasting impact on how the city evolves into the future.
by Rachel Havens Mercier.
M.C.P.
APA, Harvard, Vancouver, ISO, and other styles
4

Мельник, Ю. "Coding theory." Thesis, Видавництво СумДУ, 2006. http://essuir.sumdu.edu.ua/handle/123456789/21790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Holt, Jim. "Coding Update." Digital Commons @ East Tennessee State University, 2009. https://dc.etsu.edu/etsu-works/6498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Holt, Jim. "Coding Pearls." Digital Commons @ East Tennessee State University, 2004. https://dc.etsu.edu/etsu-works/6502.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Andersson, Tomas. "On error-robust source coding with image coding applications." Licentiate thesis, Stockholm : Department of Signals, Sensors and Systems, Royal Institute of Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Liew, Tong Hooi. "Channel coding and space-time coding for wireless channels." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341591.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Adistambha, Kevin. "Embedded lossless audio coding using linear prediction and cascade coding." Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20060724.122433/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chaiyaboonthanit, Thanit. "Image coding using wavelet transform and adaptive block truncation coding /." Online version of thesis, 1991. http://hdl.handle.net/1850/10913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Dupe, Kai Ajala. "Coding While Black." Thesis, Pepperdine University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10640383.

Full text
Abstract:

The focus on the lack of diversity in technology has become a hot topic over the last several years, with technology companies coming under fire for not being more representative of the markets that they serve. Even The White House and President Obama have made the issue of technology diversity and recruiting more women and people of color a topic of discussion, hosting several events at The White House aimed at finding solutions. The issue has become so prevalent in the news recently that technology companies have been asked to publish report cards disclosing the demographic breakdown of their workforce. Most of the major technology companies in Silicon Valley have vowed to dedicate themselves to becoming more diverse and have instituted programs to do so. However, progress has been slow and the results have been disappointing. Although many attempts to fix this problem have been made over the decades, no panacea has emerged. Why are there so few minorities pursuing careers in technology? The answer to this question is, at the moment, unknown. Although many experts have offered theories, there is little in the way of agreement. As the numbers continue to dwindle and more women and people of color continue to pursue careers in other fields or depart from the technology industry, technology companies are challenged to increase the number of underrepresented minorities in their workforce and to come up with solutions to an issue that has become vital to the future economic growth of the United States.

Qualitative by design, this study examines the perspectives, insights, and understandings of African American software development engineers. Accordingly, participants in this research study provided key insights regarding strategies, best practices, and challenges experienced by African American software development engineers while developing and implementing application programs at American corporations. Participants’ perspectives provided an insightful understanding of the complexities of being an underrepresented minority in an American corporate information technology department.

APA, Harvard, Vancouver, ISO, and other styles
12

Nasiopoulos, Panagiotis. "Adaptive compression coding." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28508.

Full text
Abstract:
An adaptive image compression coding technique, ACC, is presented. This algorithm is shown to preserve edges and give better-quality decompressed pictures and better compression ratios than Absolute Moment Block Truncation Coding (AMBTC). Lookup tables are used to achieve better compression rates without affecting the visual quality of the reconstructed image. Regions with approximately uniform intensities are successfully detected by using the range, and these regions are approximated by their average. This procedure leads to a further reduction in the compression data rates. A method for preserving edges is introduced. It is shown that as more details are preserved around edges, the pictorial results improve dramatically. The ragged appearance of the edges in AMBTC is reduced or eliminated, leading to images far superior to those of AMBTC. For most of the images, ACC yields a Root Mean Square Error smaller than that obtained by AMBTC. Decompression time is shown to be comparable to that of AMBTC for low threshold values and becomes significantly lower as the compression rate becomes smaller. An adaptive filter is introduced which helps recover lost texture at very low compression rates (0.8 to 0.6 b/p, depending on the degree of texture in the image). This algorithm is easy to implement since no special hardware is needed.
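For context on the AMBTC baseline referred to above (a sketch of standard Absolute Moment Block Truncation Coding on a toy block, not the thesis's ACC algorithm), each block is represented by a one-bit-per-pixel bitmap plus two reconstruction levels, preserving the block mean and first absolute central moment:

import numpy as np

def ambtc_block(block):
    """Encode and decode one block with Absolute Moment Block Truncation Coding."""
    mean = block.mean()
    bitmap = block >= mean                                  # 1 bit per pixel
    high = block[bitmap].mean() if bitmap.any() else mean   # level for pixels at/above the mean
    low = block[~bitmap].mean() if (~bitmap).any() else mean
    recon = np.where(bitmap, high, low)                     # reconstructed block
    return bitmap, high, low, recon

block = np.array([[12, 200, 198, 15],
                  [10, 205, 201, 14],
                  [11, 199, 202, 13],
                  [12, 204, 203, 16]], dtype=float)
bitmap, high, low, recon = ambtc_block(block)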
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
13

Yang, Fan. "Integral Video Coding." Thesis, KTH, Kommunikationsteori, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-162922.

Full text
Abstract:
In recent years, 3D camera products and prototypes based on the Integral imaging (II) technique have gradually emerged and gained broad attention. II is a method that spatially samples the natural light (light field) of a scene, usually using a microlens array or a camera array, and records the light field using a high-resolution 2D image sensor. The large amount of data generated by II and the redundancy it contains together lead to the need for an efficient compression scheme. During recent years, the compression of 3D integral images has been widely researched. Nevertheless, not many approaches have been proposed regarding the compression of integral videos (IVs). The objective of the thesis is to investigate efficient coding methods for integral videos. The integral video frames used are captured by the first consumer light field camera, Lytro. One of the coding methods is to encode the video data directly with an H.265/HEVC encoder. In the other coding schemes, the integral video is first converted to an array of sub-videos with different view perspectives. The sub-videos are then encoded either independently or following a specific reference picture pattern using an MV-HEVC encoder. In this way the redundancy between the multi-view videos is utilized instead of that between the original elemental images. Moreover, by varying the pattern of the sub-video input array and the number of inter-layer reference pictures, the coding performance can be further improved. Considering the intrinsic properties of the input video sequences, a QP-per-layer scheme is also proposed in this thesis. Though more studies would be required regarding time and complexity constraints for real-time applications, as well as the dramatic increase in the number of views, the methods proposed in this thesis prove to provide efficient compression for integral videos.
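As an illustration of the conversion step described above, a hypothetical sketch of extracting sub-aperture (perspective) views from a lenslet image, assuming an idealized rectangular grid with p x p pixels behind each microlens and a grayscale sensor; real plenoptic captures (for example from a Lytro camera) require calibration, vignetting correction and resampling first:

import numpy as np

def extract_subviews(integral_img, p):
    """Rearrange an (H*p, W*p) lenslet image into a p x p array of sub-views.

    Sub-view (u, v) collects pixel (u, v) from every elemental image, i.e. one
    perspective of the scene; stacking the same sub-view over time yields the
    per-view sequences that could be fed to an HEVC / MV-HEVC encoder.
    """
    Hp, Wp = integral_img.shape
    H, W = Hp // p, Wp // p
    return np.array([[integral_img[u::p, v::p][:H, :W] for v in range(p)]
                     for u in range(p)])

lenslet = np.arange(36).reshape(6, 6)   # toy 6x6 sensor with 3x3 pixels per microlens
views = extract_subviews(lenslet, 3)    # shape (3, 3, 2, 2): 9 views of 2x2 pixels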
APA, Harvard, Vancouver, ISO, and other styles
14

Streit, Juergen Stefan. "Digital image coding." Thesis, University of Southampton, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.361092.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Akram, Muhammad. "Surveillance centric coding." Thesis, Queen Mary, University of London, 2011. http://qmro.qmul.ac.uk/xmlui/handle/123456789/2320.

Full text
Abstract:
The research work presented in this thesis focuses on the development of techniques specific to surveillance videos for efficient video compression with higher processing speed. Scalable Video Coding (SVC) techniques are explored to achieve higher compression efficiency. The framework of SVC is modified to support Surveillance Centric Coding (SCC). Motion estimation techniques specific to surveillance videos are proposed in order to speed up the compression process of the SCC. The main contributions of the research work presented in this thesis are divided into two groups: (i) efficient compression and (ii) efficient motion estimation. The paradigm of Surveillance Centric Coding (SCC) is introduced, in which coding aims to achieve bit-rate optimisation and adaptation of surveillance videos for storage and transmission purposes. In the proposed approach the SCC encoder communicates with the Video Content Analysis (VCA) module that detects events of interest in video captured by the CCTV. Bit-rate optimisation and adaptation are achieved by exploiting the scalability properties of the employed codec. Time segments containing events relevant to the surveillance application are encoded at high spatio-temporal resolution and quality, while the portions irrelevant from the surveillance standpoint are encoded at low spatio-temporal resolution and/or quality. Thanks to the scalability of the resulting compressed bit-stream, additional bit-rate adaptation is possible, for instance for transmission purposes. Experimental evaluation showed that a significant reduction in bit-rate can be achieved by the proposed approach without loss of information relevant to surveillance applications. In addition to the more optimal compression strategy, novel approaches to performing efficient motion estimation specific to surveillance videos are proposed and implemented, with experimental results. A real-time background subtractor is used to detect the presence of any motion activity in the sequence. Different approaches for selective motion estimation (GOP-based, frame-based and block-based) are implemented. In the first, motion estimation is performed for the whole group of pictures (GOP) only when a moving object is detected in any frame of the GOP, while in the frame-based approach each frame is tested for motion activity and consequently for selective motion estimation. The selective motion estimation approach is further explored at a lower level as block-based selective motion estimation. Experimental evaluation showed that a significant reduction in computational complexity can be achieved by applying the proposed strategy. In addition to selective motion estimation, tracker-based motion estimation and fast full search using multiple reference frames have been proposed for surveillance videos. Extensive testing on different surveillance videos shows the benefits of applying the proposed approaches to achieve the goals of the SCC.
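A minimal sketch of the frame-based selective motion estimation idea described above, using plain frame differencing against a background image as a stand-in for the real-time background subtractor used in the thesis; the thresholds and toy frames are assumptions for illustration only:

import numpy as np

def has_motion(frame, background, diff_threshold=25, pixel_fraction=0.01):
    """Flag a frame as active if enough pixels differ from a background model."""
    moving = np.abs(frame.astype(int) - background.astype(int)) > diff_threshold
    return moving.mean() > pixel_fraction

def select_frames_for_me(frames, background):
    """Return, per frame, whether full motion estimation should be run."""
    return [has_motion(f, background) for f in frames]

# Toy example: a static background and one frame containing a bright moving block.
background = np.zeros((64, 64), dtype=np.uint8)
frames = [background.copy() for _ in range(3)]
frames[1][10:30, 10:30] = 200
print(select_frames_for_me(frames, background))   # [False, True, False]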
APA, Harvard, Vancouver, ISO, and other styles
16

McLean, Patrick Campbell. "Structured video coding." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/27985.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Wee, Susie Jung-Ah. "Scalable video coding." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/11007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Balasubramaniyam, Balamuralii. "Stereoscopic video coding." Thesis, Loughborough University, 2006. https://dspace.lboro.ac.uk/2134/33973.

Full text
Abstract:
It is well known that stereoscopic images and video can be used to simulate the natural process of stereopsis within the Human Visual System (HVS), by providing the two stereo images/video-streams separately to the two eyes. However as compared to presenting traditional two-dimensional images/video to the HVS, providing stereoscopic information requires double the resources, in the form of transmission bandwidth and/or storage space. Thus to handle this excess data effectively, data compression techniques are required, which is the main focus of the research presented in this thesis. The thesis proposes two novel stereoscopic video CODECs, based on the latest video coding standard, H.264.
APA, Harvard, Vancouver, ISO, and other styles
19

Shevchenko, M., Нiна Володимирiвна Мальована, Нина Владимировна Малеванная, and Nina Volodymyrivna Malovana. "Plane coding device." Thesis, Sumy State University, 2020. https://essuir.sumdu.edu.ua/handle/123456789/77835.

Full text
Abstract:
While designing a fault protection device, one of the most important tasks is to ensure the high reliability of the transmitted data with the highest possible speed at the lowest possible cost. In order to accomplish this task, it is necessary to use codes capable of detecting and correcting an error. To achieve noise immunity, a combinatorial plane code is often used.
APA, Harvard, Vancouver, ISO, and other styles
20

Schmidt, Robert. "Hippocampal correlation coding." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät I, 2010. http://dx.doi.org/10.18452/16134.

Full text
Abstract:
Hippocampal correlation coding is a putative neural mechanism underlying episodic memory. Here, we look at two related phenomena: phase precession and reverse replay of sequences. Phase precession refers to the decrease of the firing phase of a place cell with respect to the local theta rhythm during the crossing of the place field. Reverse replay refers to reactivation of previously experienced place field sequences in reverse order during awake resting periods. First, we study properties of phase precession in single trials. Usually, phase precession is studied on the basis of data in which many place field traversals are pooled together. We find that single-trial and pooled-trial phase precession are different with respect to phase-position correlation, phase-time correlation, and phase range. We demonstrate that phase precession exhibits a large trial-to-trial variability and that pooling trials changes basic measures of phase precession. These findings indicate that single trials may be better suited for encoding temporally structured events than is suggested by the pooled data. Second, we examine the coordination of phase precession among subregions of the hippocampus. We find that the local theta rhythms in CA3 and CA1 are almost antiphasic. Still, phase precession in the two regions occurs with only a small phase shift, and CA3 cells tend to fire a few milliseconds before CA1 cells. These results suggest that phase precession in CA1 might be inherited from CA3. Finally, we present a model of reverse replay based on short-term facilitation. The model compresses temporal patterns from a behavioral time scale of seconds to shorter time scales relevant for synaptic plasticity. We demonstrate that the compressed patterns can be learned by the tempotron learning rule. The model provides testable predictions (synchronous activation of dentate gyrus during sharp wave-ripples) and functional interpretations of hippocampal activity (temporal pattern learning).
APA, Harvard, Vancouver, ISO, and other styles
21

Sundström, Stina. "Coding in Multiple Regression Analysis: A Review of Popular Coding Techniques." Thesis, Uppsala University, Department of Mathematics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-126614.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Ahlberg, Jörgen. "Model-based coding : extraction, coding, and evaluation of face model parameters /." Linköping : Univ, 2002. http://www.bibl.liu.se/liupubl/disp/disp2002/tek761s.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Xu, Yan. "Investigation of key non-coding and coding genes in cutaneous melanomagenesis." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/5958.

Full text
Abstract:
Cutaneous melanoma is associated with significant morbidity and mortality, representing the most significant cutaneous malignancy. As it is known that early diagnosis and treatment are the most efficient approaches to cure cutaneous melanoma, an improved understanding of the molecular pathogenesis of melanoma and exploration of more reliable molecular biomarkers are particularly essential. Two different types of molecular biomarker for melanoma have been investigated in this thesis. microRNAs (miRNAs) are single-stranded RNA molecules of 20-23 nucleotides in length that are found in both animal and plant cells. miRNAs are involved in the RNA interference (RNAi) machinery to regulate gene expression post-transcriptionally. miRNAs have important roles in cancer: by controlling the expression level of their target genes they can affect cell signalling pathways and have been shown to have both prognostic and therapeutic potential. Importantly for melanoma research, reproducible miRNA expression profiles from formalin-fixed paraffin-embedded (FFPE) tissues can be obtained that are comparable to those from fresh-frozen samples. The aims of the miRNA project were: first, to identify a melanoma-specific miRNA expression profile; second, to investigate roles of some of the melanoma-specific miRNAs identified in melanomagenesis. Using miRNA microarray on FFPE samples, I obtained a melanoma-specific miRNA expression profile. Nine of the miRNAs differentially expressed between benign naevi and melanomas (7 downregulated, 2 upregulated in malignancies) were verified by qRT-PCR, and the functions of four of these miRNAs were studied. Ectopic overexpression of miR-200c and miR-205 in A375 melanoma cells inhibited colony forming ability in methylcellulose, an in vitro surrogate assay for tumourigenicity. Moreover, elevation of miR-200c resulted in increased expression levels of E-cadherin through negative regulation of the zinc finger E-box-binding homeobox 2 (ZEB2) gene. Ectopic overexpression of miR-211 in A375 melanoma cells repressed both colony formation in methylcellulose and migratory ability in matrigel, an in vitro surrogate assay for invasiveness. These findings indicate that miR-200c, miR-205 and miR-211 act as tumour suppressors in melanomagenesis. The second biomarker investigated, mutated BRAF, has been seen in 50-70% of spontaneous cutaneous melanomas. The commonest mutation in melanoma is a glutamic acid for valine substitution at position 600 (V600E). Oncogenic BRAF controls many aspects of melanoma cell biology. The aim of this part of the work was: firstly, to study BRAF V600E mutation status in our melanoma tissue microarray (TMA) panel; secondly, to correlate this mutation to various clinicopathological features and evaluate its prognostic value through statistical analyses. BRAF V600E mutations were seen in 20% of the primary and 69% of the metastatic melanomas, respectively. More BRAF V600E mutations were seen in males relative to females. The mutation was also related to cell pigmentation, but not to age, ulceration or solar elastosis. Melanoma patients with the BRAF V600E mutation relapse earlier than patients without this mutation. However, no significant association was found between the BRAF V600E mutation and either overall survival or melanoma-specific survival.
APA, Harvard, Vancouver, ISO, and other styles
24

Luo, Qinghua, Yu Peng, Wei Wan, Tao Huang, YaNing Fan, and Xiyuan Peng. "Evaluation of FLDPC Coding Scheme for Adaptive Coding in Aeronautical Telemetry." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596396.

Full text
Abstract:
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV
The aeronautical telemetry channel is characterized by multipath interference, Doppler shift and rapid changes in channel behavior. In addition to transmission errors, aeronautical telemetry also suffers from transmission losses. In this paper, we investigate the correction of transmission errors and the handling of transmission losses in telemetry, and propose an adaptive coding scheme that organically combines a Fountain code with a low density parity check (LDPC) code. We call it fountain LDPC (FLDPC) coding. In this coding scheme, the LDPC code is used to perform transmission error correction, while the problem of transmission loss is addressed by the fountain code, so FLDPC is robust to both transmission losses and transmission errors. Moreover, without knowing any channel information, FLDPC can adapt to the data link and avoid interference by adjusting the transmission rate. Experimental results illustrate that a significant improvement in transmission reliability and transmission efficiency can be achieved by using FLDPC coding.
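As background for the fountain-code component mentioned above, a generic LT-style encoding sketch (with a toy uniform degree distribution rather than a proper robust-soliton one, and not the FLDPC construction evaluated in the paper): each transmitted packet is the XOR of a random subset of source packets, so the receiver can decode once it has collected enough packets, regardless of which ones were lost.

import random

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def fountain_encode(source_packets, seed):
    """Generate one rateless encoded packet: the XOR of a random subset of sources."""
    rng = random.Random(seed)
    degree = rng.randint(1, len(source_packets))            # toy degree distribution
    indices = rng.sample(range(len(source_packets)), degree)
    payload = source_packets[indices[0]]
    for i in indices[1:]:
        payload = xor_bytes(payload, source_packets[i])
    return indices, payload

source = [bytes([i] * 4) for i in range(6)]
# The sender keeps emitting packets (seed = 0, 1, 2, ...) until the receiver
# signals that it has collected enough of them to decode the whole block.
packets = [fountain_encode(source, seed) for seed in range(10)]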
APA, Harvard, Vancouver, ISO, and other styles
25

Li, Yun. "Coding of three-dimensional video content : Depth image coding by diffusion." Licentiate thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-19087.

Full text
Abstract:
Three-dimensional (3D) movies in theaters have become a massive commercial success during recent years, and it is likely that, with the advancement of display technologies and the production of 3D contents, TV broadcasting in 3D will play an important role in home entertainments in the not too distant future. 3D video contents contain at least two views from different perspectives for the left and the right eye of viewers. The amount of coded information is doubled if these views are encoded separately. Moreover, for multi-view displays (i.e. different perspectives of a scene in 3D are presented to the viewer at the same time through different angles), either video streams of all the required views must be transmitted to the receiver, or the displays must synthesize the missing views with a subset of the views. The latter approach has been widely proposed to reduce the amount of data being transmitted. The virtual views can be synthesized by the Depth Image Based Rendering (DIBR) approach from textures and associated depth images. However it is still the case that the amount of information for the textures plus the depths presents a significant challenge for the network transmission capacity. An efficient compression will, therefore, increase the availability of content access and provide a better video quality under the same network capacity constraints. In this thesis, the compression of depth images is addressed. These depth images can be assumed as being piece-wise smooth. Starting from the properties of depth images, a novel depth image model based on edges and sparse samples is presented, which may also be utilized for depth image post-processing. Based on this model, a depth image coding scheme that explicitly encodes the locations of depth edges is proposed, and the coding scheme has a scalable structure. Furthermore, a compression scheme for block-based 3D-HEVC is also devised, in which diffusion is used for intra prediction. In addition to the proposed schemes, the thesis illustrates several evaluation methodologies, especially, the subjective test of the stimulus-comparison method. It is suitable for evaluating the quality of two impaired images, as the objective metrics are inaccurate with respect to synthesized views. The MPEG test sequences were used for the evaluation. The results showed that virtual views synthesized from post-processed depth images by using the proposed model are better than those synthesized from original depth images. More importantly, the proposed coding schemes using such a model produced better synthesized views than the state of the art schemes. As a result, the outcome of the thesis can lead to a better quality of 3DTV experience.
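As a rough illustration of the diffusion idea behind the depth model above, a generic homogeneous-diffusion (Laplace/Jacobi) inpainting toy with made-up data, not the thesis's actual coding scheme: unknown depth pixels are filled by repeatedly averaging their neighbours, while the known samples, for example values retained along depth edges, are kept fixed.

import numpy as np

def diffuse_depth(samples, known_mask, iterations=500):
    """Fill unknown pixels by homogeneous diffusion from known samples."""
    depth = samples.astype(float)
    for _ in range(iterations):
        # 4-neighbour average (Jacobi step; periodic boundaries for brevity)
        avg = 0.25 * (np.roll(depth, 1, 0) + np.roll(depth, -1, 0) +
                      np.roll(depth, 1, 1) + np.roll(depth, -1, 1))
        depth = np.where(known_mask, samples, avg)   # keep known samples fixed
    return depth

# Toy example: two known depth rows, the rest reconstructed by diffusion.
samples = np.zeros((32, 32))
known = np.zeros((32, 32), dtype=bool)
samples[0, :], known[0, :] = 10.0, True
samples[-1, :], known[-1, :] = 50.0, True
reconstructed = diffuse_depth(samples, known)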
APA, Harvard, Vancouver, ISO, and other styles
26

Gowrisankar, Sivakumar. "Predicting Functional Impact of Coding and Non-Coding Single Nucleotide Polymorphisms." University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1225422057.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Kim, Hyungjin. "Joint coding and security secure arithmetic coding and secure MIMO communications /." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1680039771&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Carr, AnnaLisa Ward. "Coding Rupture Indicators in Couple Therapy (CRICT): An Observational Coding Scheme." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7533.

Full text
Abstract:
The therapeutic alliance, a construct representing agreement and collaboration on therapy goals, therapy tasks, and the emotional bond between client(s) and therapist, is a robust predictor of therapy outcomes in individual, couple, and family therapy. One way to track the therapeutic alliance is through ruptures and repairs. Ruptures are breaks, tensions, or tears in the therapeutic alliance. Ruptures and repairs influence the therapeutic alliance and consequently therapeutic outcomes. Currently, there is a lack of research addressing ruptures and repairs in couple therapy. The first step in researching alliance ruptures is to have a reliable way to assess alliance ruptures. This study will describe the development of the Coding Rupture Indicators in Couples Therapy (CRICT). The CRICT is an observational coding scheme that measures ruptures in couple therapy. The CRICT was developed through collaboration with researchers in marriage and family therapy, creation of items, adaptation of items from established coding schemes from individual therapy, and input and feedback as the CRICT was used and tested by undergraduates in a coding class. This paper will review foundational research of ruptures and repairs as well as the construction and use of the CRICT coding scheme.
APA, Harvard, Vancouver, ISO, and other styles
29

Li, Yun. "Coding of Three-dimensional Video Content : Diffusion-based Coding of Depth Images and Displacement Intra-Coding of Plenoptic Contents." Doctoral thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-25035.

Full text
Abstract:
In recent years, the three-dimensional (3D) movie industry has reaped massive commercial success in the theaters. With the advancement of display technologies, more experienced capturing and generation of 3D contents, TV broadcasting, movies, and games in 3D have entered home entertainment, and it is likely that 3D applications will play an important role in many aspects of people's life in a not distant future. 3D video contents contain at least two views from different perspectives for the left and the right eye of viewers. The amount of coded information is doubled if these views are encoded separately. Moreover, for multi-view displays (i.e. different perspectives of a scene in 3D are presented to the viewer at the same time through different angles), either video streams of all the required views must be transmitted to the receiver, or the displays must synthesize the missing views with a subset of the views. The latter approach has been widely proposed to reduce the amount of data being transmitted and make data adjustable to 3D-displays. The virtual views can be synthesized by the Depth Image Based Rendering (DIBR) approach from textures and associated depth images. However, it is still the case that the amount of information for the textures plus the depths presents a significant challenge for the network transmission capacity. Compression techniques are vital to facilitate the transmission. In addition to multi-view and multi-view plus depth for reproducing 3D, light field techniques have recently become a hot topic. The light field capturing aims at acquiring not only spatial but also angular information of a view, and an ideal light field rendering device should be such that the viewers would perceive it as looking through a window. Thus, the light field techniques are a step forward to provide us with a more authentic perception of 3D. Among many light field capturing approaches, focused plenoptic capturing is a solution that utilize microlens arrays. The plenoptic cameras are also portable and commercially available. Multi-view and refocusing can be obtained during post-production from these cameras. However, the captured plenoptic images are of a large size and contain significant amount of a redundant information. An efficient compression of the above mentioned contents will, therefore, increase the availability of content access and provide a better quality experience under the same network capacity constraints. In this thesis, the compression of depth images and of plenoptic contents captured by focused plenoptic cameras are addressed. The depth images can be assumed to be piece-wise smooth. Starting from the properties of depth images, a novel depth image model based on edges and sparse samples is presented, which may also be utilized for depth image post-processing. Based on this model, a depth image coding scheme that explicitly encodes the locations of depth edges is proposed, and the coding scheme has a scalable structure. Furthermore, a compression scheme for block-based 3D-HEVC is also devised, in which diffusion is used for intra prediction. In addition to the proposed schemes, the thesis illustrates several evaluation methodologies, especially the subjective test of the stimulus-comparison method. This is suitable for evaluating the quality of two impaired images, as the objective metrics are inaccurate with respect to synthesized views. 
For the compression of plenoptic contents, displacement intra prediction with more than one hypothesis is applied and implemented in HEVC for efficient prediction. In addition, a scalable coding approach utilizing a sparse set and disparities is introduced for the coding of focused plenoptic images. The MPEG test sequences were used for the evaluation of the proposed depth image compression, and publicly available plenoptic image and video contents were used for the assessment of the proposed plenoptic compression. For depth image coding, the results showed that virtual views synthesized from post-processed depth images by using the proposed model are better than those synthesized from original depth images. More importantly, the proposed coding schemes using such a model produced better synthesized views than the state-of-the-art schemes. For the plenoptic contents, the proposed scheme achieved efficient prediction and reduced the bit rate significantly while providing coding and rendering scalability. As a result, the outcome of the thesis can lead to improving the quality of the 3DTV experience and facilitate the development of 3D applications in general.
APA, Harvard, Vancouver, ISO, and other styles
30

Cheng, Szeming. "Coding with side information." Texas A&M University, 2004. http://hdl.handle.net/1969.1/2751.

Full text
Abstract:
Source coding and channel coding are two important problems in communications. Although side information exists in everyday scenarios, its effect is not taken into account in the conventional setups. In this thesis, we focus on the practical design of two interesting coding problems with side information: Wyner-Ziv coding (WZC; source coding with side information at the decoder) and Gel'fand-Pinsker coding (GPC; channel coding with side information at the encoder). For WZC, we split the design problem into two cases: when the distortion of the reconstructed source is zero and when it is not. We review that the first case, commonly called Slepian-Wolf coding (SWC), can be implemented using conventional channel coding. Then, we detail the SWC design using low-density parity-check (LDPC) codes. To facilitate SWC design, we justify a necessary requirement that the SWC performance should be independent of the input source. We show that a sufficient condition for this requirement is that the hypothetical channel between the source and the side information satisfies a symmetry condition dubbed dual symmetry. Furthermore, under that dual symmetry condition, the SWC design problem can simply be treated as LDPC code design over the hypothetical channel. When the distortion of the reconstructed source is non-zero, we propose a practical WZC paradigm called Slepian-Wolf coded quantization (SWCQ) by combining SWC and nested lattice quantization. We point out an interesting analogy between SWCQ and entropy-coded quantization in classic source coding. Furthermore, a practical scheme of SWCQ using 1-D nested lattice quantization and LDPC codes is implemented. For GPC, since the actual design procedure relies on the more precise setting of the problem, we choose to investigate the design of GPC in the form of a digital watermarking problem, as digital watermarking is the precise dual of WZC. We then introduce an enhanced version of the well-known spread spectrum watermarking technique. Two applications related to digital watermarking are presented.
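To make the Slepian-Wolf-via-channel-coding point above concrete, a small hypothetical sketch of syndrome-based binning with an arbitrary toy parity-check matrix; a real design would use a long LDPC code and iterative message-passing decoding rather than the exhaustive search used here.

from itertools import product
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],            # toy parity-check matrix, not an LDPC design
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome(H, x):
    """Slepian-Wolf encoder: transmit only the syndrome s = H x (mod 2)."""
    return H @ x % 2

x = np.array([1, 0, 1, 1, 0, 1])              # source bits known to the encoder
y = np.array([1, 0, 0, 1, 0, 1])              # correlated side information at the decoder
s = syndrome(H, x)

# The decoder searches its bin (all sequences with syndrome s) for the word
# closest to the side information y; exhaustive here, message passing in practice.
candidates = [np.array(c) for c in product([0, 1], repeat=6)
              if np.array_equal(syndrome(H, np.array(c)), s)]
x_hat = min(candidates, key=lambda c: int(np.sum(c != y)))
assert np.array_equal(x_hat, x)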
APA, Harvard, Vancouver, ISO, and other styles
31

Sun, Yong. "Source-channel coding for robust image transmission and for dirty-paper coding." Texas A&M University, 2005. http://hdl.handle.net/1969.1/4800.

Full text
Abstract:
In this dissertation, we studied two seemingly uncorrelated, but conceptually related, problems in terms of source-channel coding: 1) wireless image transmission and 2) Costa ("dirty-paper") code design. In the first part of the dissertation, we consider progressive image transmission over a wireless system employing space-time coded OFDM. The space-time coded OFDM system, based on a newly built broadband MIMO fading model, is theoretically evaluated by assuming perfect channel state information (CSI) at the receiver for coherent detection. Then an adaptive modulation scheme is proposed to pick the constellation size that offers the best reconstructed image quality for each average signal-to-noise ratio (SNR). A more practical scenario is also considered without the assumption of perfect CSI. We employ low-complexity decision-feedback decoding for differentially space-time coded OFDM systems to exploit transmitter diversity. For joint source-channel coding (JSCC), we adopt a product channel code structure that is proven to provide powerful error protection and bursty error correction. To further improve the system performance, we also apply powerful iterative (turbo) coding techniques and propose the iterative decoding of differentially space-time coded multiple descriptions of images. The second part of the dissertation deals with practical dirty-paper code designs. We first invoke an information-theoretical interpretation of algebraic binning and motivate the code design guidelines in terms of source-channel coding. Then two dirty-paper code designs are proposed. The first is a nested turbo construction based on soft-output trellis-coded quantization (SOTCQ) for source coding and turbo trellis-coded modulation (TTCM) for channel coding. A novel procedure is devised to balance the dimensionalities of the equivalent lattice codes corresponding to SOTCQ and TTCM. The second dirty-paper code design employs TCQ and IRA codes for near-capacity performance. This is done by synergistically combining TCQ with IRA codes so that they work together as well as they do individually. Our TCQ/IRA design approaches the dirty-paper capacity limit in the low-rate regime (e.g., < 1.0 bit/sample), while our nested SOTCQ/TTCM scheme provides the best performance so far at medium-to-high rates (e.g., >= 1.0 bit/sample). Thus the two proposed practical code designs are complementary to each other.
APA, Harvard, Vancouver, ISO, and other styles
32

Kamnoonwatana, Nawat. "Efficient and robust video coding : metadata-assisted and multiple description video coding." Thesis, University of Bristol, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.528096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Chandrasekaran, Balaji. "COMPARISON OF SPARSE CODING AND JPEG CODING SCHEMES FOR BLURRED RETINAL IMAGES." Master's thesis, University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2732.

Full text
Abstract:
Overcomplete representations are currently one of the most highly researched areas, especially in the field of signal processing, due to their strong potential to generate sparse representations of signals. Sparse representation implies that a given signal can be represented with components that are only rarely significantly active. It has been strongly argued that the mammalian visual system relies on sparse and overcomplete representations. The primary visual cortex has overcomplete responses in representing an input signal, which leads to the use of sparse neuronal activity for further processing. This work investigates sparse coding with an overcomplete basis set representation, which is believed to be the strategy employed by the mammalian visual system for efficient coding of natural images. This work analyzes the Sparse Code Learning algorithm, in which the given image is represented by means of a linear superposition of sparse, statistically independent events on a set of overcomplete basis functions. This algorithm trains and adapts the overcomplete basis functions so as to represent any given image in terms of sparse structures. The second part of the work analyzes an inhibition-based sparse coding model in which Gabor-based overcomplete representations are used to represent the image. It then applies an iterative inhibition algorithm, based on competition between neighboring transform coefficients, to select a subset of Gabor functions so as to represent the given image with a sparse set of coefficients. This work applies the developed models to image compression applications and tests the achievable levels of compression. The research in these areas so far shows that sparse coding algorithms are inefficient in representing high-frequency sharp image features, so this work analyzes the performance of these algorithms only on natural images, which do not have sharp features, and compares the compression results with current industrial-standard coding schemes such as JPEG and JPEG 2000. It also models the characteristics of an image falling on the retina after the distortion effects of the eye and then applies the developed algorithms to these images and tests the compression results.
M.S.E.E.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Electrical Engineering MSEE
APA, Harvard, Vancouver, ISO, and other styles
34

COLANTONI, ALESSIO. "Computational characterization of alternative splicing events in coding and non-coding genes." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2014. http://hdl.handle.net/2108/202013.

Full text
Abstract:
Alternative splicing (AS) represents an effective way to expand the proteome, and thus the biological complexity, without having to create and evolve new genes. Anecdotal evidence of the involvement of alternative splicing in the regulation of protein-protein interactions has been reported by several studies. AS events have been shown to occur significantly in regions where a protein interaction domain or a short linear motif is present. Several AS variants show partial or complete loss of interface residues, suggesting that AS can play a major role in interaction regulation by selectively targeting the protein binding sites. In the first part of my PhD work I performed a statistical analysis of the alternative splicing of a non-redundant data set of human protein-protein interfaces known at the molecular level, to determine the importance of this mode of modulation of protein-protein interactions through AS. I demonstrated that the alternative splicing-mediated partial removal of both heterodimeric and homodimeric binding sites occurs at lower frequencies than expected, and this holds true even if I consider only those isoforms whose sequence is less different from that of the canonical protein and which therefore allow selective regulation of functional regions of the protein. On the other hand, large removals of the binding site are not significantly prevented, possibly because they are associated with drastic structural changes of the protein. The observed protection of the binding sites from AS is not preferentially directed towards putative hot spot interface residues, and is widespread to all protein functional classes. Using the same procedure as that applied for protein-protein interactions, I also evaluated the importance of AS-mediated removal of protein-ligand binding sites, obtained from three-dimensional structures of human proteins. Again, I observed that AS tends to avoid partial removal of such sites, while being quite indifferent to complete or near-complete deletions. This tendency does not depend on the size of the binding site. The choice of the AS pattern of a gene is thus conditioned by constraints imposed by the three-dimensional structure of the protein products. Alternative splicing is not observed only in protein-coding genes: many long non-coding RNAs (lncRNAs) have two or more transcript isoforms. Since these transcripts are not translated, the differential usage of splice sites cannot be influenced by structural constraints, except for those related to the RNA folding. Even if AS occurs in about a quarter of lncRNA genes, little is known about its role in the regulation of lncRNA function and stability, mainly because few lncRNAs have been functionally characterized. The aim of the second half of my PhD work was to study the alternative splicing of lncRNA genes. First, I analyzed the evolutionary conservation of lncRNA alternatively spliced sequences (and their flanking regions) and I found that their pattern of conservation is similar to that shown in protein-coding genes; this suggests that AS of lncRNA genes is as important as that of protein-coding genes, at least from an evolutionary standpoint. To study the impact of AS on lncRNA functional sites, I assembled a data set of protein-RNA interaction sites by reanalysing published CLIP-Seq, RIP-Seq and RIP-Chip experiments. The results of this reanalysis work will be stored in a public database of protein-RNA interactions detected via high-throughput methods.
APA, Harvard, Vancouver, ISO, and other styles
35

De, Paolo Raffaella. "Zebrafish models in melanoma research: analysis of coding and non-coding BRAFV600E." Doctoral thesis, Università di Siena, 2022. http://hdl.handle.net/11365/1215235.

Full text
Abstract:
Malignant melanoma is one of the most aggressive types of cancer. While early-stage melanoma can be cured by surgical excision, late-stage melanoma remains a highly lethal disease. Current therapeutic strategies, including single agents or combined therapies, are hampered by low response rates and by diverse resistance mechanisms. The most frequent mutation in malignant melanoma is the V600E substitution in the BRAF oncogene. This mutation constitutively activates the MAPK pathway, promoting cell survival, proliferation, and motility. Among the impacting therapies, BRAFV600E inhibitors (BRAFi) are initially very effective, but, due to quick development of acquired resistance, they can be used for short periods of time (4-6 months). The development of current drug combinations just postpones the acquired resistance. With the final aim to identify new molecular factors involved in BRAFV600E-driven malignant transformation, hence, to improve the response to BRAFi, we are developing and characterizing new melanoma models in zebrafish. In this 3-year PhD project, using the Tol2 system, I generated melanoma-prone transgenic lines in which tumors are driven by BRAFV600E in its reference and X1 isoforms (BRAFV600E-ref and BRAFV600E-X1). While BRAFV600E-ref is the isoform commonly used for similar models, BRAFV600E-X1 is a poorly characterized isoform that, as we discovered in our laboratory, always coexists with the ref. The novelty of this project also lies in the study of the 3’UTR (three prime untranslated region) regulatory regions. These lines express either BRAFV600E ref/X1 coding sequence only or BRAFV600E ref/X1 coding sequence plus their respective 3’UTR. Our data in a mosaic condition show alterations in the pigmentation patterns and in the development of nevi, from which tumors originate, as well as a higher melanoma incidence in presence of BRAFV600E-ref compared to BRAFV600E-X1. Moreover, tumor development resulted to be faster in fish expressing each coding sequence with respect to the coding sequence + 3'UTR. Likewise, we are also generating stable lines. The determination of the location of the transgene is done through an innovative genotyping technique that combines CRISPR/Cas9-mediated DNA editing and Oxford Nanopore Technology (ONT) based sequencing. Stable lines will be studied for BRAFV600E variant-specific coding-(in)dependent activities and drug sensitivity. With respect to drug screening, we will exploit the neural crest signature, which is present in progenitor cells during the early stages of embryonic development and is aberrantly reactivated in melanoma cells. We will use crestin, a common marker in zebrafish, to generate a dual reporter zebrafish line expressing mCherry and Luciferase reporter genes under the control of the crestin promoter. Crossed with the BRAFV600E variant-specific melanoma prone lines, this crestin line can be used as a tool for high-throughput quantitative screening of novel BRAFi-focused drug combinations in zebrafish embryos. Preliminarily, our data confirm a higher expression of crestin in the embryos of the BRAFV600E transgenic line compared to the wild type line. Furthermore, we observed reduced expression of this marker when treating BRAFV600E embryos with anticancer drugs, including BRAFi.
APA, Harvard, Vancouver, ISO, and other styles
36

Bhumbra, Gardave Singh. "Coding in the hypothalamus." Thesis, University of Cambridge, 2004. https://www.repository.cam.ac.uk/handle/1810/251929.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Xu, Jin. "Adaptive block truncation coding." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ39162.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Provine, Joseph A. "3D model-based coding." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0015/NQ47926.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Chowdhury, Md Mahbubul Islam. "Image segmentation for coding." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0017/MQ55494.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Shamim, Md Ahsan. "Object based video coding." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0034/MQ62425.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Vafin, Renat. "Towards flexible audio coding /." Stockholm, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-71.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Denil, Misha. "Recklessly approximate sparse coding." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/43662.

Full text
Abstract:
The introduction of the so-called "K-means" or "triangle" features in Coates, Lee and Ng, 2011 caused significant discussion in the deep learning community. These simple features achieve state-of-the-art performance on standard image classification benchmarks, outperforming much more sophisticated methods including deep belief networks, convolutional nets, factored RBMs, mcRBMs, convolutional RBMs, sparse autoencoders and several others. Moreover, these features are extremely simple and easy to compute. Several intuitive arguments have been put forward to explain this remarkable performance, yet no mathematical justification has been offered. In Coates and Ng, 2011, the authors improve on the triangle features with "soft threshold" features, adding a hyperparameter to tune performance, and compare these features to sparse coding. Soft thresholding and sparse coding are found to often yield similar classification results, though soft threshold features are much faster to compute. The main result of this thesis is to show that the soft threshold features are realized as a single step of proximal gradient descent on a non-negative sparse coding objective. This result is important because it explains the success of the soft threshold features and shows that even very approximate solutions to the sparse coding problem are sufficient to build effective classifiers.
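The central claim above, that soft-threshold features amount to a single proximal gradient step on a non-negative sparse coding objective, can be illustrated numerically. The following Python sketch assumes a hypothetical unit-norm dictionary D and threshold alpha (placeholders, not values from the thesis) and checks that the feature map max(0, Dᵀx − α) coincides with one proximal gradient step taken from zero with step size 1:

```python
import numpy as np

def soft_threshold_features(X, D, alpha):
    """Soft-threshold feature map: max(0, D^T x - alpha) for each row x of X."""
    return np.maximum(0.0, X @ D - alpha)

def prox_grad_step_nn_sparse_coding(X, D, alpha, step=1.0, Z0=None):
    """One proximal gradient step on the non-negative sparse coding objective
       min_Z 0.5*||X - Z D^T||_F^2 + alpha*||Z||_1  s.t. Z >= 0,
       starting (by default) from Z = 0."""
    if Z0 is None:
        Z0 = np.zeros((X.shape[0], D.shape[1]))
    grad = (Z0 @ D.T - X) @ D                                  # gradient of the quadratic term
    return np.maximum(0.0, Z0 - step * grad - step * alpha)    # prox of alpha*||.||_1 + non-negativity

# Hypothetical data: 5 patches of dimension 16, a dictionary of 8 unit-norm atoms.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 16))
D = rng.standard_normal((16, 8))
D /= np.linalg.norm(D, axis=0)
alpha = 0.25

# With Z0 = 0 and step = 1 the two computations agree exactly.
assert np.allclose(soft_threshold_features(X, D, alpha),
                   prox_grad_step_nn_sparse_coding(X, D, alpha))
```

The proximal operator of α‖·‖₁ combined with the non-negativity constraint is a one-sided soft threshold, which is why the two computations coincide when the step size is 1.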
APA, Harvard, Vancouver, ISO, and other styles
43

Abboud, Karim. "Wideband CELP speech coding." Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=56805.

Full text
Abstract:
The purpose of this thesis is to study the coding of wideband speech and to improve on previous Code-Excited Linear Prediction (CELP) coders in terms of speech quality and bit rate. To accomplish this task, improved coding techniques are introduced and the operating bit rate is reduced while maintaining and even enhancing the speech quality.
The first approach considers the quantization of Linear Predictive Coding (LPC) parameters and uses three-way split vector quantization. Both scalar and vector quantization are initially studied; results show that, with adequate codebook training, the latter gives better results while using fewer bits. Nevertheless, vector quantizers remain highly complex in terms of memory and number of computations. A new quantization scheme, split vector quantization (split VQ), is investigated to overcome this complexity problem. Using a new weighted distance measure as the selection criterion for split VQ, the average spectral distortion is significantly reduced to match the results obtained with scalar quantizers.
The second approach introduces a new pitch predictor with increased temporal resolution for periodicity. This new technique has the advantage of maintaining the same quality obtained with conventional multiple-coefficient predictors at a reduced bit rate. Furthermore, the conventional CELP noise weighting filter is modified to allow more freedom and better accuracy in modeling both tilt and formant structures. Throughout this process, different noise weighting schemes are evaluated, and the results show that the new filter contributes greatly to solving the problem of high-frequency distortion.
The final wideband CELP coder operates at 11.7 kbit/s and delivers high perceptual quality in the reconstructed speech using the fractional pitch predictor and the new perceptual noise weighting filter.
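As an illustration of split VQ with a weighted distance measure as described above, here is a minimal Python sketch. The three-way split sizes (5+5+6), the 64-entry stand-in codebooks, and the inverse-spacing weighting are illustrative assumptions, not the coder's actual design; in practice the codebooks would be trained on speech data (e.g. with the LBG algorithm):

```python
import numpy as np

def split_vq(lsf, codebooks, weights):
    """Quantize an LSF vector by split VQ: each sub-vector is matched against its
    own codebook using a weighted squared-error distance."""
    out, idx, start = [], [], 0
    for cb in codebooks:                             # cb: (codebook_size, sub_dim)
        dim = cb.shape[1]
        sub = lsf[start:start + dim]
        w = weights[start:start + dim]
        d = np.sum(w * (cb - sub) ** 2, axis=1)      # weighted distance to every codevector
        best = int(np.argmin(d))
        idx.append(best)
        out.append(cb[best])
        start += dim
    return np.concatenate(out), idx

# Hypothetical setup: a 16th-order LSF vector split 5+5+6, random stand-in codebooks.
rng = np.random.default_rng(1)
lsf = np.sort(rng.uniform(0, np.pi, 16))                         # LSFs are ordered in (0, pi)
codebooks = [np.sort(rng.uniform(0, np.pi, (64, n)), axis=1) for n in (5, 5, 6)]
weights = 1.0 / (np.diff(np.concatenate(([0.0], lsf))) + 1e-3)   # weight closely spaced LSFs more
quantized, indices = split_vq(lsf, codebooks, weights)
print(indices, np.round(quantized - lsf, 3))
```

Splitting the vector keeps each codebook small enough to search and store, which is the complexity advantage the abstract refers to, at the cost of ignoring correlations across the splits.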
APA, Harvard, Vancouver, ISO, and other styles
44

Rajakaruna, R. M. Thilioni P. "Application-aware video coding." Thesis, University of Surrey, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.543906.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Sejdinovic, Dino. "Topics in Fountain Coding." Thesis, University of Bristol, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.520589.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Olsson, Sandgren Johannes. "Pixel-based video coding." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-220118.

Full text
Abstract:
This paper studies the possibilities of extending the pixel-based compression algorithm LOCO-I, used by the lossless and near-lossless image compression standard JPEG-LS introduced by the Joint Photographic Experts Group (JPEG) in 1999, to video sequences and very low bit rates. Bit rates below 1 bit per pixel are achieved by skipping signalling when the prediction of a pixel is sufficiently good. The pixels to be skipped are detected implicitly by the decoder, minimizing the overhead. Different methods of quantization are tested, and the possibility of using vector quantization is investigated by matching pixel sequences against a dynamically generated vector tree. Several different prediction schemes are evaluated, both linear and non-linear, with both static and adaptive weights. Maintaining the low computational complexity of LOCO-I has been a priority. The results are compared to different HEVC implementations with regard to compression speed and ratio.
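For background on the LOCO-I algorithm extended above, its pixel predictor is the median edge detector (MED) of JPEG-LS. A minimal Python sketch follows; the zero-padded border handling is a simplification of what the standard actually specifies:

```python
def med_predict(a, b, c):
    """Median edge detector (MED) predictor of LOCO-I / JPEG-LS.
    a = left neighbour, b = top neighbour, c = top-left neighbour."""
    if c >= max(a, b):
        return min(a, b)      # edge suspected above or to the left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c          # smooth region: planar prediction

def predict_frame(img):
    """Apply the MED predictor to every pixel of a 2-D frame (list of rows);
    borders use zero-padded neighbours in this sketch."""
    h, w = len(img), len(img[0])
    pred = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            a = img[y][x - 1] if x > 0 else 0
            b = img[y - 1][x] if y > 0 else 0
            c = img[y - 1][x - 1] if x > 0 and y > 0 else 0
            pred[y][x] = med_predict(a, b, c)
    return pred
```

The prediction residuals, rather than the pixels themselves, are then entropy coded; when the residual for a pixel is small enough, the scheme described in the abstract skips signalling it altogether.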
APA, Harvard, Vancouver, ISO, and other styles
47

Kilic, Suha. "Modification of Huffman Coding." Thesis, Monterey, California. Naval Postgraduate School, 1985. http://hdl.handle.net/10945/21449.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Garnham, Nigel William. "Motion compensated video coding." Thesis, University of Nottingham, 1995. http://eprints.nottingham.ac.uk/13447/.

Full text
Abstract:
The result of many years of international co-operation in video coding has been the development of algorithms that remove interframe redundancy, such that only changes in the image that occur over a given time are encoded for transmission to the recipient. The primary process used here is the derivation of pixel differences, encoded in a method referred to as Differential Pulse-Code Modulation (DPCM), and this has provided the basis of contemporary research into low bit-rate hybrid codec schemes. There are, however, instances when the DPCM technique cannot successfully code a segment of the image sequence, because motion is a major cause of interframe differences. Motion Compensation (MC) can be used to improve the efficiency of the predictive coding algorithm. This thesis examines current thinking in the area of motion-compensated video compression and contrasts the application of differing algorithms to the general requirements of interframe coding. A novel technique is proposed, where the constituent features in an image are segmented, classified and their motion tracked by a local search algorithm. Although originally intended to complement the DPCM method in a predictive hybrid codec, it is demonstrated that the evaluation of feature displacement can, in its own right, form the basis of a low bit-rate video codec of low complexity. After an extensive discussion of the issues involved, a description of laboratory simulations shows how the postulated technique is applied to standard test sequences. Measurements of image quality and compression efficiency are made and compared with a contemporary standard method of low bit-rate video coding.
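The conventional block-based motion compensation that the proposed feature-tracking technique is contrasted with can be sketched as exhaustive block matching under a sum-of-absolute-differences criterion. The block size, search range, and full-search strategy below are illustrative choices, not the thesis's method:

```python
import numpy as np

def full_search_block_match(cur, ref, block=8, search=4):
    """Exhaustive block-matching motion estimation: for each block of the current
    frame, find the displacement in the reference frame that minimises the sum of
    absolute differences (SAD). A sketch; practical codecs use faster search patterns."""
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(np.int32)
            best, best_sad = (0, 0), None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue                         # candidate block outside the frame
                    cand = ref[y:y + block, x:x + block].astype(np.int32)
                    sad = int(np.abs(target - cand).sum())
                    if best_sad is None or sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            vectors[(by, bx)] = best                     # motion vector for this block
    return vectors
```

The encoder then transmits the motion vectors plus the DPCM-coded prediction residual, rather than the raw frame differences.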
APA, Harvard, Vancouver, ISO, and other styles
49

Smyth, Stephen M. F. "High fidelity music coding." Thesis, Queen's University Belfast, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357456.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Redwood-Sawyerr, J. A. S. "Constant envelope modulation coding." Thesis, University of Essex, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.356049.

Full text
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography