Academic literature on the topic 'Encoder optimization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Encoder optimization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Encoder optimization":

1

Hassan, Hammad, Muhammad Nasir Khan, Syed Omer Gilani, Mohsin Jamil, Hasan Maqbool, Abdul Waheed Malik, and Ishtiaq Ahmad. "H.264 Encoder Parameter Optimization for Encoded Wireless Multimedia Transmissions." IEEE Access 6 (2018): 22046–53. http://dx.doi.org/10.1109/access.2018.2824835.

2

Hamza, Ahmed M., Mohamed Abdelazim, Abdelrahman Abdelazim, and Djamel Ait-Boudaoud. "HEVC Rate-Distortion Optimization with Source Modeling." Electronic Imaging 2021, no. 10 (January 18, 2021): 259–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.10.ipas-259.

Abstract:
The rate-distortion adaptive mechanisms of MPEG-HEVC (High Efficiency Video Coding) and its derivatives are an incremental improvement in the software reference encoder, providing a selective Lagrangian parameter choice that varies by encoding mode (intra or inter) and picture reference level. Since this weighting factor, and the balanced cost functions it impacts, are crucial to the RD optimization process, affecting several encoder decisions as well as both the coding efficiency and the quality of the encoded stream, we investigate an improvement based on modern reinforcement learning methods. We develop a neural agent that learns a real-valued control policy to maximize rate savings by input signal pattern, mapping pixel intensity values from the picture, at the coding tree unit level, to the appropriate weighting parameter. Our testing on the reference software yields coding-efficiency improvements across different video sequences, in multiple classes of video.
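The Lagrangian weighting the abstract refers to enters the encoder through the rate-distortion cost J = D + λR: every candidate mode is scored by that cost and the cheapest one wins. A minimal sketch of that mode decision follows; the candidate names and numbers are purely illustrative, not taken from the cited paper.

```python
def rd_cost(distortion, rate_bits, lam):
    """Lagrangian rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate_bits

def best_mode(candidates, lam):
    """candidates: list of (mode_name, distortion, rate_bits) tuples."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))

# Illustrative candidates for one coding tree unit.
candidates = [
    ("intra_planar", 1200.0, 96),   # low distortion, many bits to signal
    ("intra_dc",     1500.0, 40),   # higher distortion, cheaper signaling
    ("merge",        1400.0, 24),
]

# A larger lambda weights rate more heavily, shifting the decision
# toward cheaper-to-signal modes -- which is why the choice of lambda
# affects coding efficiency so strongly.
print(best_mode(candidates, lam=1.0)[0])    # intra_planar
print(best_mode(candidates, lam=20.0)[0])   # merge
```

This is the decision point the paper's learned policy tunes: instead of a fixed table, the agent picks λ per coding tree unit from the pixel content.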
3

Wang, Lei, Qimin Ren, Jingang Jiang, Hongxin Zhang, and Yongde Zhang. "Recent Patents on Magnetic Encoder and its use in Rotating Mechanism." Recent Patents on Engineering 13, no. 3 (September 19, 2019): 194–200. http://dx.doi.org/10.2174/1872212112666180628145856.

Abstract:
Background: The application of magnetic encoders relieves the problem of reliably applying servo systems in vibration-prone environments. The magnetic encoder raises the efficiency and reliability of the system; structurally, it is divided into two parts: signal conversion and structural support. Objective: To improve the accuracy of the magnetic encoder, its structure is constantly being improved. Accuracy is one criterion for evaluating a magnetic encoder, and the encoder's structure is one of the key factors determining that accuracy. The purpose of this paper is to study the accuracy of different magnetic encoder structures. Methods: This paper reviews representative patents related to magnetic encoders. Results: The differences between the types of magnetic encoders were compared and analyzed, and their characteristics were summarized. The main problems in their development were analyzed, the development trend was forecast, and the current and future directions of magnetic encoder patents were discussed. Conclusion: Optimizing the structure of the magnetic encoder improves its accuracy. In the future, for wide adoption of magnetic encoders, modularization, generalization, and reliability are the factors practitioners should pay attention to, and more magnetic encoder patents should be filed.
4

Lee, Yoon Jin, Dong In Bae, and Gwang Hoon Park. "HEVC Encoder Optimization using Depth Information." Journal of Broadcast Engineering 19, no. 5 (September 30, 2014): 640–55. http://dx.doi.org/10.5909/jbe.2014.19.5.640.

5

Wang, Shanshe, Falei Luo, Siwei Ma, Xiang Zhang, Shiqi Wang, Debin Zhao, and Wen Gao. "Low complexity encoder optimization for HEVC." Journal of Visual Communication and Image Representation 35 (February 2016): 120–31. http://dx.doi.org/10.1016/j.jvcir.2015.12.005.

6

Merel, Josh, Donald M. Pianto, John P. Cunningham, and Liam Paninski. "Encoder-Decoder Optimization for Brain-Computer Interfaces." PLOS Computational Biology 11, no. 6 (June 1, 2015): e1004288. http://dx.doi.org/10.1371/journal.pcbi.1004288.

7

Wang, Hanli, Ming-Yan Chan, S. Kwong, and Chi-Wah Kok. "Novel quantized DCT for video encoder optimization." IEEE Signal Processing Letters 13, no. 4 (April 2006): 205–8. http://dx.doi.org/10.1109/lsp.2005.863691.

8

Bariani, M., P. Lambruschini, and M. Raggio. "An Efficient Multi-Core SIMD Implementation for H.264/AVC Encoder." VLSI Design 2012 (May 29, 2012): 1–14. http://dx.doi.org/10.1155/2012/413747.

Abstract:
The optimization process of an H.264/AVC encoder on three different architectures is presented. The architectures are multi- and single-core, and their SIMD instruction sets have different vector register sizes. Code optimization is essential when addressing HD resolutions under real-time constraints. The encoder is subdivided into functional modules in order to better understand where optimization is a key factor and to evaluate the performance improvement in detail. Common issues both in partitioning a video encoder across parallel architectures and in SIMD optimization are described, and the authors' solutions are presented for all the architectures. Besides showing efficient video encoder implementations, one of the main purposes of this paper is to discuss how the characteristics of different architectures and different SIMD instruction sets impact the target application's performance. Results on the achieved speedup are provided in order to compare the different implementations and identify the most suitable solutions for present and next-generation video coding algorithms.
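The functional modules discussed above are dominated by element-wise pixel loops, and the motion-estimation sum of absolute differences (SAD) is the classic SIMD target: it maps directly onto vector instructions. This scalar sketch (illustrative only, not code from the paper) shows the operation a vectorized implementation would accelerate.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized pixel blocks.

    A SIMD implementation computes many of these absolute differences per
    instruction (e.g. 16 bytes at a time with 128-bit registers), which is
    why vector register size matters so much for encoder throughput.
    """
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

cur = [[10, 12], [14, 16]]   # current block (illustrative 2x2 values)
ref = [[11, 12], [13, 20]]   # reference-frame candidate block
print(sad(cur, ref))  # |10-11| + |12-12| + |14-13| + |16-20| = 6
```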
9

Cho, Jung-Hyun, Myung-Soo Lee, Han-Soo Jeong, Chang-Suk Kim, and Dae-Jea Cho. "Optimization of H.264 Encoder based on Hardware Implementation in Embedded System." Journal of the Korea Academia-Industrial cooperation Society 11, no. 8 (August 31, 2010): 3076–82. http://dx.doi.org/10.5762/kais.2010.11.8.3076.

10

Hou, Han, Guohua Cao, Hongchang Ding, and Kun Li. "Research on Particle Swarm Compensation Method for Subdivision Error Optimization of Photoelectric Encoder Based on Parallel Iteration." Sensors 22, no. 12 (June 12, 2022): 4456. http://dx.doi.org/10.3390/s22124456.

Abstract:
Photoelectric encoders are widely used in high-precision measurement fields such as industry and aerospace because of their high precision and reliability. To improve the subdivision accuracy of moiré grating signals, a particle swarm optimization compensation model for the grating subdivision error of a photoelectric encoder, based on parallel iteration, is proposed. The paper proposes an adaptive subdivision method for the particle swarm search domain based on a honeycomb structure, and establishes a grating signal subdivision error compensation model based on a multi-swarm particle swarm optimization algorithm with parallel iteration. The optimization algorithm effectively improves the convergence speed and system accuracy of traditional particle swarm optimization. Finally, according to the subdivision error compensation algorithm, the subdivision error of the grating system caused by sinusoidal errors is quickly corrected by taking advantage of the high-speed parallel processing of an FPGA pipeline architecture. The experiment uses a 25-bit photoelectric encoder to verify the subdivision error algorithm. The results show that the dynamic subdivision error can be reduced to half of its value before compensation, and the static subdivision error can be reduced from 1.264″ to 0.487″.
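As a rough illustration of the compensation approach described above, the sketch below uses a toy single-swarm particle swarm to recover the amplitude of an assumed sinusoidal subdivision-error term by minimizing a residual-error objective. The objective function, constants, and swarm parameters are all assumptions for illustration, not values from the paper (which uses multiple swarms, an adaptive honeycomb search domain, and FPGA parallelism).

```python
import random

def residual_error(amp, true_amp=0.35):
    """Residual after subtracting an assumed sinusoidal error of amplitude amp.
    Quadratic toy objective; minimized at amp == true_amp."""
    return (amp - true_amp) ** 2

def pso_minimize(objective, lo, hi, n_particles=20, iters=60, seed=1):
    """Plain 1-D particle swarm optimization with inertia 0.7 and
    cognitive/social weights 1.5 (common textbook settings)."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                      # each particle's personal best
    gbest = min(pos, key=objective)     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))   # clamp to bounds
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=objective)
    return gbest

best = pso_minimize(residual_error, 0.0, 1.0)
print(round(best, 3))   # close to the true amplitude 0.35
```

The paper's parallel-iteration variant runs several such swarms concurrently and subdivides the search domain adaptively, which is what speeds up convergence on the FPGA.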

Dissertations / Theses on the topic "Encoder optimization":

1

Mallikarachchi, Thanuja. "HEVC encoder optimization and decoding complexity-aware video encoding." Thesis, University of Surrey, 2017. http://epubs.surrey.ac.uk/841841/.

Abstract:
The increased demand for high quality video evidently elevates the bandwidth requirements of the communication channels being used, which in turn demands more efficient video coding algorithms within the media distribution tool chain. The High Efficiency Video Coding (HEVC) standard is a potential solution that demonstrates a significant coding efficiency improvement over its predecessors. HEVC constitutes an assortment of novel coding tools and features that contribute to its superior coding performance, yet at the same time demand more computational, processing and energy resources: a crucial bottleneck, especially for resource-constrained Consumer Electronic (CE) devices. In this context, the first contribution in this thesis presents a novel content-adaptive Coding Unit (CU) size prediction algorithm for HEVC-based low-delay video encoding. Two independent content-adaptive CU size selection models are introduced, together with a moving window-based feature selection process that keeps the framework robust and dynamically adaptive to varying video content. The experimental results demonstrate consistent average encoding time reductions of 55%-58% and 57%-61%, with average Bjøntegaard Delta Bit Rate (BDBR) increases of 1.93%-2.26% and 2.14%-2.33%, compared to the HM 16.0 reference software for the low-delay P and low-delay B configurations, respectively, across a wide range of content types and bit rates. Video decoding complexity and the associated energy consumption are tightly coupled with the complexity of the codec as well as the content being decoded. Hence, video content adaptation is widely considered as an application-layer solution for reducing decoding complexity and thereby the associated energy consumption.
In this context, the second contribution in this thesis introduces a decoding complexity-aware video encoding algorithm for HEVC using a novel decoding complexity-rate-distortion model. The proposed algorithm demonstrates, on average, 29.43% and 13.22% decoding complexity reductions for the same quality, with only a 6.47% BDBR increase, when using the HM 16.0 and openHEVC decoders, respectively. Moreover, decoder energy consumption analysis reveals an overall energy reduction of up to 20% for the same video quality. Adaptive video streaming is considered in the state of the art as a potential solution for coping with uncertain fluctuations in network bandwidth. Yet, simultaneously considering both bit rate and decoding complexity for content adaptation with minimal quality impact is extremely challenging due to the dynamics of the video content. In response, the final contribution in this thesis introduces a content-adaptive decoding complexity and rate controlled encoding framework for HEVC. The experimental results reveal that the proposed algorithm achieves stable rate and decoding complexity control, with average errors of only 0.4% and 1.78%, respectively. Moreover, the proposed algorithm is capable of generating HEVC bit streams that exhibit up to 20.03 %/dB decoding complexity reduction, which results in up to 7.02 %/dB decoder energy reduction per 1 dB of Peak Signal-to-Noise Ratio (PSNR) quality loss.
2

Syu, Eric. "Implementing rate-distortion optimization on a resource-limited H.264 encoder." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33365.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (leaves 57-59).
This thesis models the rate-distortion characteristics of an H.264 video compression encoder to improve its mode decision performance. First, it provides background on the fundamentals of video compression. Then it describes the problem of estimating the rate and distortion of a macroblock given limited computational resources, and derives the macroblock rate and distortion as functions of the residual SAD and the H.264 quantization parameter QP. From the resulting equations, the thesis implements and verifies rate-distortion optimization on a resource-limited H.264 encoder. Finally, it explores other avenues of improvement.
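The SAD/QP modeling this thesis describes builds on machinery like the empirical Lagrange multiplier used in the H.264 reference software, lambda = 0.85 * 2^((QP-12)/3), which grows with QP so that coarser quantization tolerates more distortion per bit saved. The sketch below shows that widely cited relation and a low-complexity cost that uses SAD as the distortion proxy; it is a generic illustration, not the thesis's exact derived model.

```python
def lambda_mode(qp):
    """Empirical H.264 reference-software Lagrange multiplier for mode decision."""
    return 0.85 * 2 ** ((qp - 12) / 3)

def mode_cost(sad, rate_bits, qp):
    """Low-complexity mode cost: residual SAD as the distortion proxy,
    plus the lambda-weighted estimated rate."""
    return sad + lambda_mode(qp) * rate_bits

print(round(lambda_mode(12), 2))  # 0.85 (lambda at QP = 12)
print(round(lambda_mode(27), 2))  # 27.2, i.e. 0.85 * 2**5
```

On a resource-limited encoder this kind of closed-form cost replaces the full encode-and-measure RD loop, which is exactly the trade-off the thesis investigates.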
3

Carriço, Nuno Filipe Marques. "Transformer approaches on hyper-parameter optimization and anomaly detection with applications in stream tuning." Master's thesis, Universidade de Évora, 2022. http://hdl.handle.net/10174/31068.

Abstract:
Hyper-parameter optimisation consists of finding the parameters that maximise a model's performance. However, this mainly concerns settings in which the model should not change over time. How, then, should an online model be optimised? We pose the following research question: how and when should the model be optimised? For the optimisation part, we explore the transformer architecture as a function mapping data statistics to model parameters, by means of graph attention layers together with reinforcement learning approaches, achieving state-of-the-art results. To detect when the model should be optimised, we use the transformer architecture to strengthen an existing anomaly detection method, the Variational Auto-Encoder. Finally, we join these methods in a framework capable of deciding when an optimisation should take place and how to perform it, aiding the stream tuning process.
4

Hägg, Ragnar. "Scalable High Efficiency Video Coding : Cross-layer optimization." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-257558.

Abstract:
In July 2014, the second version of the HEVC/H.265 video coding standard was announced, including the Scalable High Efficiency Video Coding (SHVC) extension. SHVC codes a video stream together with subset streams of the same video at lower quality, and it supports spatial, temporal and SNR scalability, among others. This enables easy adaptation of a video stream, by dropping or adding packets, for devices with different screen sizes, computing power and bandwidth. In this project, SHVC has been implemented in Ericsson's research encoder C65. Some cross-layer optimizations have also been implemented and evaluated. The main goal of these optimizations is to make better decisions when choosing the reference layer's motion parameters and QP, by doing multi-pass coding and using the coded enhancement-layer information from the first pass.
5

Sun, Hui [Verfasser], Ralph [Akademischer Betreuer] Kennel, Alexander W. [Gutachter] Koch, and Ralph [Gutachter] Kennel. "Optimization of Velocity and Displacement Measurement with Optical Encoder and Laser Self-Mixing Interferometry / Hui Sun ; Gutachter: Alexander W. Koch, Ralph Kennel ; Betreuer: Ralph Kennel." München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/1230552693/34.

6

Al-Hasani, Firas Ali Jawad. "Multiple Constant Multiplication Optimization Using Common Subexpression Elimination and Redundant Numbers." Thesis, University of Canterbury. Electrical and Computer Engineering, 2014. http://hdl.handle.net/10092/9054.

Abstract:
The multiple constant multiplication (MCM) operation is fundamental in digital signal processing (DSP) and digital image processing (DIP). Examples of the MCM arise in finite impulse response (FIR) and infinite impulse response (IIR) filters, matrix multiplication, and transforms. The aim of this work is to minimize the complexity of the MCM operation using the common subexpression elimination (CSE) technique and redundant number representations. The CSE technique searches for and eliminates common digit patterns (subexpressions) among MCM coefficients. More common subexpressions can be found by representing the MCM coefficients in redundant number representations. A CSE algorithm is proposed that works on a class of redundant numbers called the zero-dominant set (ZDS). The ZDS is an extension of the minimum-Hamming-weight (MHW) representations, which have the minimum number of non-zero digits. Using the ZDS improves CSE algorithms' performance compared with using MHW representations. The disadvantage of using the ZDS is that it increases the possibility of overlapping patterns (digit collisions), where one or more digits are shared between several patterns; eliminating one pattern then destroys the others, because the shared digits are removed. A pattern preservation algorithm (PPA) is developed to resolve overlapping patterns in the representations. Tree and graph encoders are proposed to generate a larger space of number representations. The algorithms generate redundant representations of a value for a given digit set, radix, and word length. The tree encoder is modified to search for common subexpressions simultaneously with generating the representation tree. A complexity measure is proposed to compare the subexpressions at each node; the algorithm stops generating the rest of the representation tree when it finds subexpressions with maximum sharing.
This reduces the search space while minimizing the hardware complexity. A combinatoric model of the MCM problem is also proposed. The model is obtained by enumerating all possible solutions of the MCM, which form a graph called the demand graph; arc routing on this graph yields solutions of the MCM problem. A similar arc routing is found in capacitated arc routing problems such as the winter salting problem. An ant colony optimization (ACO) meta-heuristic is proposed to traverse the demand graph. The ACO is simulated on a PC in the Python programming language to verify the correctness of the model and the operation of the ACO. A parallel simulation of the ACO is carried out on a multi-core supercomputer using the C++ Boost Graph Library.
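The redundant representations at the heart of this work can be illustrated with canonical signed digit (CSD) recoding, one minimum-Hamming-weight representation that the ZDS generalizes. The sketch below (an illustration of the principle, not the thesis's algorithm) recodes a constant into digits {-1, 0, +1}: fewer non-zero digits means fewer shift-and-add operations in a multiplierless MCM block.

```python
def to_csd(n):
    """Canonical signed digit recoding of a positive integer.

    Returns digits in {-1, 0, 1}, least significant first, such that
    sum(d * 2**i) == n and no two adjacent digits are both non-zero.
    """
    digits = []
    while n:
        if n & 1:
            d = 2 - (n % 4)   # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

def adder_cost(digits):
    """(Non-zero digits - 1): adders/subtractors needed for one constant
    before any cross-coefficient subexpression sharing."""
    return sum(1 for d in digits if d) - 1

# 7 = 111 in binary needs two adders, but CSD recodes it as 8 - 1:
csd7 = to_csd(7)
print(csd7)              # [-1, 0, 0, 1], i.e. -1 + 8
print(adder_cost(csd7))  # 1
```

CSE then goes one step further: identical digit patterns shared between several recoded coefficients are computed once and reused, which is where the ZDS's larger representation space pays off.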
7

Nasrallah, Anthony. "Novel compression techniques for next-generation video coding." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT043.

Abstract:
Video content now occupies about 82% of global internet traffic. This large percentage is due to the revolution in video content consumption. At the same time, the market increasingly demands videos with higher resolutions and qualities, which causes a significant increase in the amount of data to be transmitted. Hence the need to develop video coding algorithms even more efficient than existing ones, to limit the growth in transmitted video data and ensure a better quality of service. In addition, the massive consumption of multimedia content on electronic devices has an ecological impact. Therefore, finding a compromise between the complexity of algorithms and the efficiency of implementations is a new challenge. As a result, a collaborative team was created with the aim of developing a new video coding standard, Versatile Video Coding (VVC/H.266). Although VVC achieves a bit-rate reduction of more than 40% compared to HEVC, this does not mean that there is no longer a need to further improve coding efficiency; moreover, VVC adds remarkable complexity compared to HEVC. This thesis responds to these problems by proposing three new encoding methods. The contributions of this research are divided into two main axes. The first axis proposes and implements new compression tools in the new standard, capable of generating additional coding gains. Two methods have been proposed for this axis; both rely on deriving prediction information at the decoder side. Increasing the encoder's choices can improve the accuracy of predictions and yield lower-energy residuals, leading to a reduction in bit rate. Nevertheless, more prediction modes imply more signaling in the bitstream to inform the decoder of the choices made at the encoder, so the gains mentioned above are largely offset by the added signaling.
If the prediction information is instead derived at the decoder, the decoder is no longer passive but becomes active: this is the concept of the intelligent decoder. Signaling the information then becomes unnecessary, yielding a signaling gain. Each of the two methods offers a different technique for predicting the information at the decoder. The first constructs a histogram of gradients to deduce different intra prediction modes, which can then be combined by prediction fusion to obtain the final intra prediction for a given block. This fusion property makes it possible to predict areas with complex textures more accurately; in conventional coding schemes, such areas would instead require finer partitioning and/or transmission of high-energy residuals. The second technique gives VVC the ability to switch between different interpolation filters for inter prediction; the optimal filter selected by the encoder is deduced using convolutional neural networks. The second axis, unlike the first, does not add to the core VVC algorithm; it instead aims at an optimized use of the existing algorithm. The ultimate goal is to find the best possible compromise between the compression efficiency delivered and the complexity imposed by VVC tools. Thus, an optimization system is designed to adapt the activation of the coding tools to the content. The determination of these tools can be done either with artificial neural networks or without any artificial intelligence technique.
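The histogram-of-gradients idea in the first technique can be sketched as follows: accumulate gradient magnitudes into coarse orientation bins over the block's pixels, then take the dominant bin(s) as the intra prediction direction(s) to combine. The four-bin resolution and block values below are invented for illustration; the actual decoder-side tool uses much finer angular bins and operates on a reconstructed template around the block.

```python
import math

def grad_histogram(block, n_bins=4):
    """Accumulate gradient magnitude per coarse orientation bin (0..pi)."""
    h, w = len(block), len(block[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = block[y][x + 1] - block[y][x - 1]   # central differences
            gy = block[y + 1][x] - block[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            angle = math.atan2(gy, gx) % math.pi     # orientation in [0, pi)
            hist[int(angle / math.pi * n_bins) % n_bins] += mag
    return hist

# A block with a vertical edge produces horizontal gradients: bin 0 dominates,
# which the encoder/decoder would map to a (near-)horizontal-gradient mode.
block = [[0, 0, 9, 9]] * 4
hist = grad_histogram(block)
print(hist.index(max(hist)))  # 0
```

Fusing the predictions of the top two bins, weighted by their histogram amplitudes, is the combination step the abstract describes.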
8

Luo, Fangyi. "Post-Layout DFM optimization based on hybrid encoded topological layout." Diss., Digital Dissertations Database. Restricted to UC campuses, 2005. http://uclibs.org/PID/11984.

9

Zhang, Yuanzhi. "Algorithms and Hardware Co-Design of HEVC Intra Encoders." OpenSIUC, 2019. https://opensiuc.lib.siu.edu/dissertations/1769.

Abstract:
Digital video has become extremely important, and its importance has grown greatly in the last two decades. Due to the rapid development of information and communication technologies, demand for Ultra-High Definition (UHD) video applications is growing stronger. However, the most prevalent video compression standard, H.264/AVC, released in 2003, is inefficient for UHD videos. The desire for compression efficiency superior to H.264/AVC led to the standardization of High Efficiency Video Coding (HEVC). Compared with H.264/AVC, HEVC offers double the compression ratio at the same level of video quality, or a substantial improvement in video quality at the same bit rate. Yet, while HEVC/H.265 possesses superior compression efficiency, its complexity is several times that of H.264/AVC, impeding high-throughput implementation. Most researchers have focused merely on algorithm-level adaptations of the HEVC/H.265 standard to reduce computational intensity, without considering hardware feasibility; moreover, the exploration of efficient hardware architecture design is far from exhaustive, and only a few works have explored efficient hardware architectures for HEVC/H.265. In this dissertation, we investigate efficient algorithm adaptations and hardware architecture design for HEVC intra encoders, and we explore a deep learning approach to mode prediction. On the algorithm side, we propose three efficient hardware-oriented adaptations: mode reduction, fast coding unit (CU) cost estimation, and group-based CABAC (context-adaptive binary arithmetic coding) rate estimation. Mode reduction reduces the mode candidates of each prediction unit (PU) in the rate-distortion optimization (RDO) process, which is both computation-intensive and time-consuming.
Fast CU cost estimation reduces the complexity of the rate-distortion (RD) calculation for each CU. Group-based CABAC rate estimation parallelizes syntax element processing to greatly improve rate estimation throughput. On the hardware side, a fully parallel architecture for an HEVC intra encoder is developed to sustain UHD video compression at 4K@30fps. The fully parallel architecture introduces four prediction engines (PEs); each PE independently performs the full cycle of mode prediction, transform, quantization, inverse quantization, inverse transform, reconstruction, and rate-distortion estimation, and PU blocks of different sizes are processed by different PEs simultaneously. An efficient hardware implementation of the group-based CABAC rate estimator is incorporated into the proposed HEVC intra encoder for accurate, high-throughput rate estimation. To take advantage of deep learning, we also propose a fully connected layer based neural network (FCLNN) mode preselection scheme to reduce the number of RDO modes for luma prediction blocks. All angular prediction modes are classified into 7 prediction groups, each containing 3-5 prediction modes with similar prediction angles. A rough angle detection algorithm determines the prediction direction of the current block, and a small-scale FCLNN then refines the mode prediction.
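The 7-group mode preselection can be sketched as a partition of HEVC's 33 angular intra modes (indices 2-34) into direction groups: a cheap direction estimate picks one group, and only that group's modes enter full RDO. The even split below is an assumption for illustration; the dissertation's exact grouping may differ.

```python
def angular_groups(first=2, last=34, n_groups=7):
    """Partition the angular intra mode indices [first, last] into
    n_groups contiguous direction groups of near-equal size."""
    modes = list(range(first, last + 1))
    base, extra = divmod(len(modes), n_groups)
    groups, i = [], 0
    for g in range(n_groups):
        size = base + (1 if g < extra else 0)   # spread the remainder
        groups.append(modes[i:i + size])
        i += size
    return groups

groups = angular_groups()
print([len(g) for g in groups])   # group sizes between 4 and 5

# With a rough direction estimate selecting one group, RDO evaluates
# 4-5 angular candidates instead of all 33.
```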
10

Nguyen, Ngoc-Mai. "Stratégies d'optimisation de la consommation pour un système sur puce encodeur H.264." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT049/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The power consumption of systems-on-chip imposes strong constraints on their design. It affects system reliability, the cost of cooling the platform, and battery lifetime when the circuit is battery-powered. Indeed, as semiconductor technology nodes shrink, power optimization has become a major challenge, on a par with silicon-area cost and performance optimization, particularly for mobile applications. Dedicated video codec chips are used in various applications such as video conferencing, security and surveillance systems, and entertainment. To meet the performance and energy constraints of mobile applications, the video codec is generally implemented in hardware rather than software, which guarantees energy efficiency and real-time processing. One of the most efficient standards for video applications today is H.264 Advanced Video Coding (H.264/AVC), which offers better video quality at a lower bitrate than previous standards. To integrate this standard into commercial products, especially mobile devices, designers of the hardware video codec must apply dedicated low-power circuit-design approaches and implement power-management mechanisms. This PhD thesis was conducted in the context of the design of an H.264 hardware encoder called the VENGME platform. The platform is divided into several modules, and the EC-NAL module was developed during the thesis, taking into account various solutions from the literature to minimize its power consumption.
Simulation results show that the EC-NAL module consumes less power than its competitors in the literature. The architecture of the VENGME platform was then analyzed, and RTL-level simulations were carried out to evaluate its overall power consumption. This revealed an opportunity to reduce the platform's consumption further by controlling the clock frequency of certain modules. The approach was applied to the EC-NAL module, which contains an internal FIFO whose fill level can be controlled by adjusting the clock frequency on the NAL sub-module side. This led to the implementation of an automatic frequency-adaptation scheme driven by the FIFO occupancy level. The controller was implemented in hardware and the stability of the closed-loop system was studied. Simulation results demonstrate the value of the adopted approach, which should be extended to the whole platform.
Power consumption for Systems-on-Chip induces strong constraints on their design. Power consumption affects system reliability, cooling cost, and battery lifetime for battery-powered Systems-on-Chip. With the scaling of semiconductor technology, power optimization has become a tremendously challenging issue, together with silicon-area and performance optimization, especially for mobile applications. Video codec chips are used in various applications ranging from video conferencing and security and monitoring systems to entertainment. To meet the performance and power-consumption constraints of mobile applications, video codecs are preferably implemented in hardware rather than in software; a hardware implementation offers better power efficiency and meets real-time requirements. Nowadays, one of the most efficient standards for video applications is H.264 Advanced Video Coding (H.264/AVC), which provides better video quality at a lower bit-rate than previous standards. To bring the standard into commercial products, especially hand-held devices, designers need to apply design approaches dedicated to low-power circuits. They also need to implement mechanisms to control the circuit's power consumption. This PhD thesis was conducted in the framework of the VENGME H.264/AVC hardware encoder design. The platform is split into several modules, and the VENGME Entropy Coder and bytestream Network Abstraction Layer data packer (EC-NAL) module was designed during this PhD thesis, taking into account and combining several state-of-the-art solutions to minimize power consumption. Simulation results show that the EC-NAL module presents better power figures than previously published solutions. The VENGME H.264 encoder architecture was then analyzed, and power estimations at RTL level were performed to extract the platform's power figures.
From these power figures, it was decided to implement power control on the EC-NAL module. The latter contains a FIFO whose level can be controlled via an appropriate scaling of the clock frequency on the NAL side, which leads to the implementation of a Dynamic Frequency Scaling (DFS) approach based on the control of the FIFO occupancy level. The control law has been implemented in hardware (full-custom) and the closed-loop system stability has been studied. Simulation results show the effectiveness of the proposed DFS strategy, which should be extended to the whole H.264 encoder platform.
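A minimal sketch of such a FIFO-level-driven frequency controller follows. The frequency range, setpoint, gain, and proportional control law are assumed values for illustration, not the thesis implementation:

```python
F_MIN, F_MAX = 50e6, 200e6      # assumed NAL-side clock range (Hz)

def nal_clock_freq(fill_ratio, setpoint=0.5, gain=2.0):
    """Proportional law: drain faster when the FIFO fills up,
    clamped to the available clock range."""
    f_mid = (F_MIN + F_MAX) / 2
    f = f_mid + gain * (fill_ratio - setpoint) * (F_MAX - F_MIN)
    return max(F_MIN, min(F_MAX, f))

def simulate(steps=200, in_rate=0.6, capacity=64.0):
    """Toy discrete-time FIFO: constant inflow from the entropy coder,
    drain proportional to the controlled clock frequency."""
    level = 0.0
    for _ in range(steps):
        drain = nal_clock_freq(level / capacity) / F_MAX  # items/step
        level = max(0.0, min(capacity, level + in_rate - drain))
    return level / capacity
```

In this toy model the occupancy settles near the point where the frequency-scaled drain matches the inflow, illustrating why the closed-loop stability analysis mentioned in the abstract matters.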

Books on the topic "Encoder optimization":

1

Caplan, Stephanie Tanya. Optimization of binary gabor zone plate encoded holography techniques with visible wavelengths. Birmingham: University of Birmingham, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Encoder optimization":

1

Shu, Ruo, Shibao Li, and Xin Pan. "An Optimization Scheme for SVAC Audio Encoder." In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 221–29. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-662-44980-6_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Qinglei, Meng, Yao Chunlian, and Li Bo. "Video Encoder Optimization Implementation on Embedded Platform." In Lecture Notes in Computer Science, 870–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11881223_111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Guanghao, Dongshun Cui, Shangbo Mao, and Guang-Bin Huang. "Sparse Bayesian Learning for Extreme Learning Machine Auto-encoder." In Proceedings in Adaptation, Learning and Optimization, 319–27. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-23307-5_34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kumar, Saurav, Satvik Gupta, Vishvender Singh, Mohit Khokhar, and Prashant Singh Rana. "Parameter Optimization for H.265/HEVC Encoder Using NSGA II." In Advances in Intelligent Systems and Computing, 105–18. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-3325-4_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Hui, Hang-cheng Zeng, and Bu Pu. "Implementation and Optimization of H.264 Encoder Based on TMS320DM6467." In Lecture Notes in Electrical Engineering, 465–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-26001-8_61.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hsu, Chi-Yuan, and Antonio Ortega. "Joint Encoder and VBR Channel Optimization with Buffer and Leaky Bucket Constraints." In Multimedia Communications and Video Coding, 409–17. Boston, MA: Springer US, 1996. http://dx.doi.org/10.1007/978-1-4613-0403-6_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cao, Li, Xiaoyun Zhang, and Zhiyong Gao. "An Efficient Optimization of Real-Time AVS+ Encoder in Low Bitrate Condition." In Communications in Computer and Information Science, 265–75. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-4211-9_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tran, Duc Hoa, Michel Meunier, and Farida Cheriet. "Deep Image Clustering Using Self-learning Optimization in a Variational Auto-Encoder." In Pattern Recognition. ICPR International Workshops and Challenges, 736–49. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68790-8_56.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Brahmane, A. V., and B. Chaitanya Krishna. "Chaotic Biogeography Based Optimization Using Deep Stacked Auto Encoder for Big Data Classification." In Evolutionary Artificial Intelligence, 379–89. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-8438-1_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mai, Zhi-Yi, Chun-Ling Yang, Lai-Man Po, and Sheng-Li Xie. "A New Rate-Distortion Optimization Using Structural Information in H.264 I-Frame Encoder." In Advanced Concepts for Intelligent Vision Systems, 435–41. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11558484_55.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Encoder optimization":

1

Gomes, Diullei M., and Isah A. Lawal. "Drainage Strategy Optimization Using Machine Learning Methods." In SPE Nigeria Annual International Conference and Exhibition. SPE, 2023. http://dx.doi.org/10.2118/217092-ms.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Oil production optimization is a crucial issue in the oil industry. Simulating different production scenarios effectively and quickly enables companies to automate and optimize production systems. This paper presents a study on developing intelligent agents to aid reservoir engineers in optimizing oil production. We propose a machine learning model that optimizes oil production over time by adjusting the pressure in oil reservoirs. The proposed architecture uses three encoders (a field-values encoder, a well-values encoder, and a 3D-grid-values encoder) to process the input data. From the encoder outputs, a dense neural network generates a policy function that determines, via a probability distribution, how much pressure adjustment is required for each well in the oil field. We evaluate the proposed approach through experimentation. It is worth mentioning that, in our experiments, we had to discretize the reservoir well-pressure adjustments to make them computable. Nevertheless, the results show that our model learns to optimize the reservoir well pressure, reaching an Elo rating of 349.40 points after training over eleven generations. The results also show that the optimization process increases oil production by 1074.5% on a simulated test reservoir with two producer wells and one injector well. Although our experimental results reflect only a simulated reservoir environment, the implementation shows considerable potential for real oil reservoir fields.
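The encoder-fusion and discretized-policy idea described in this abstract can be sketched as follows. The feature sizes, the adjustment grid, and concatenation-based fusion are illustrative assumptions, not the paper's exact design:

```python
import math

# Assumed discretized per-well pressure adjustments (arbitrary units)
ADJUSTMENTS = [-0.2, -0.1, 0.0, 0.1, 0.2]

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def policy(field_feat, well_feat, grid_feat, weights, biases):
    """One dense layer over the concatenated encoder outputs, yielding a
    probability distribution over the discretized adjustments."""
    x = field_feat + well_feat + grid_feat          # feature fusion
    logits = [sum(w * xi for w, xi in zip(row, x)) + b
              for row, b in zip(weights, biases)]
    return softmax(logits)
```

An agent would sample (or take the argmax of) this distribution per well at each control step; the discretization mirrors the abstract's remark that continuous pressure adjustments had to be made computable.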
2

Luis Bustamante, Alvaro, José M. Molina López, and Miguel A. Patricio. "Video encoder optimization via evolutionary multiobjective optimization algorithms." In Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation (GECCO). New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1569901.1570189.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hu, Qiang, Xiaoyun Zhang, Zhiyong Gao, and Jun Sun. "Analysis and optimization of x265 encoder." In 2014 Visual Communications and Image Processing (VCIP). IEEE, 2014. http://dx.doi.org/10.1109/vcip.2014.7051616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Merkle, Philipp, Jordi Bayo Singla, Karsten Müller, and Thomas Wiegand. "Stereo video encoder optimization for mobile applications." In 2011 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON 2011). IEEE, 2011. http://dx.doi.org/10.1109/3dtv.2011.5877217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Johnson, N., K. J. Mohan, K. E. Janson, and J. Jose. "Optimization of incremental optical encoder pulse processing." In 2013 International Multi-Conference on Automation, Computing, Communication, Control and Compressed Sensing (iMac4s). IEEE, 2013. http://dx.doi.org/10.1109/imac4s.2013.6526510.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Xiang, Yangxia, Huimin Zhang, Xiaoxuan Xiang, Dazhi Chen, and Ling Xiong. "Optimization of H.264 encoder based on SSE2." In 2010 International Conference on Progress in Informatics and Computing (PIC). IEEE, 2010. http://dx.doi.org/10.1109/pic.2010.5688015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tsai, Chia-Ming, Yuwen He, Jie Dong, Yan Ye, Xiaoyu Xiu, and Yong He. "Joint-layer encoder optimization for HEVC scalable extensions." In SPIE Optical Engineering + Applications, edited by Andrew G. Tescher. SPIE, 2014. http://dx.doi.org/10.1117/12.2063470.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hu, Zhiqiang, Lun-hui Deng, and Rui Lv. "Cache optimization for real time MPEG-4 encoder." In 2009 ISECS International Colloquium on Computing, Communication, Control, and Management (CCCM). IEEE, 2009. http://dx.doi.org/10.1109/cccm.2009.5267937.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Qin, Han, Zeyu Jiang, Yonghua Wang, Hongwei Guo, Ce Zhu, Dandan Ding, and Zoe Liu. "Temporally Dependent Rate-Distortion Optimization for AV1 Encoder." In 2022 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB). IEEE, 2022. http://dx.doi.org/10.1109/bmsb55706.2022.9828747.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Ting-Feng, Yung-Jhe Yan, Hou-Chi Chiang, Tsan Lin Chen, and Mang Ou-Yang. "Coding optimization for the absolute optical rotary encoder." In 2018 International Automatic Control Conference (CACS). IEEE, 2018. http://dx.doi.org/10.1109/cacs.2018.8606741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
