Doctoral dissertations on the topic "Encoder optimization"
Create a correct citation in APA, MLA, Chicago, Harvard, and many other styles
Browse the 29 best doctoral dissertations on the topic "Encoder optimization".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic citation to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, when these are available in the metadata.
Browse doctoral dissertations from many different fields and build your bibliography accordingly.
Mallikarachchi, Thanuja. "HEVC encoder optimization and decoding complexity-aware video encoding". Thesis, University of Surrey, 2017. http://epubs.surrey.ac.uk/841841/.
Syu, Eric. "Implementing rate-distortion optimization on a resource-limited H.264 encoder". Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33365.
Includes bibliographical references (leaves 57-59).
This thesis models the rate-distortion characteristics of an H.264 video compression encoder to improve its mode decision performance. First, it provides a background to the fundamentals of video compression. Then it describes the problem of estimating rate and distortion of a macroblock given limited computational resources. It derives the macroblock rate and distortion as a function of the residual SAD and H.264 quantization parameter QP. From the resulting equations, this thesis implements and verifies rate-distortion optimization on a resource-limited H.264 encoder. Finally, it explores other avenues of improvement.
by Eric Syu.
M.Eng.
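The rate-distortion optimization described in this abstract can be illustrated with a small sketch: each candidate macroblock mode is scored with the Lagrangian cost J = D + λR, and λ grows with QP. The λ formula below is the mode-decision multiplier used in the JM reference software; the candidate modes and their distortion/rate numbers are invented toy values, not data from the thesis.

```python
def mode_decision_lambda(qp):
    # Mode-decision Lagrange multiplier used in the JM reference encoder
    return 0.85 * 2.0 ** ((qp - 12) / 3.0)

def best_mode(candidates, qp):
    # candidates: (mode name, SSD distortion, rate in bits); minimise J = D + lambda*R
    lam = mode_decision_lambda(qp)
    return min(candidates, key=lambda c: c[1] + lam * c[2])

modes = [
    ("SKIP",       2400.0,   2),   # almost free to signal, poor prediction
    ("Inter16x16",  900.0,  38),
    ("Inter8x8",    650.0,  96),
    ("Intra4x4",    700.0, 120),
]
print(best_mode(modes, qp=28)[0])   # mid QP favours Inter16x16
print(best_mode(modes, qp=40)[0])   # high QP (large lambda) favours SKIP
```

Note how the same candidate set resolves differently as QP rises: a larger λ penalizes rate more heavily, pushing the decision toward cheaply signaled modes such as SKIP.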
Carriço, Nuno Filipe Marques. "Transformer approaches on hyper-parameter optimization and anomaly detection with applications in stream tuning". Master's thesis, Universidade de Évora, 2022. http://hdl.handle.net/10174/31068.
Hägg, Ragnar. "Scalable High Efficiency Video Coding : Cross-layer optimization". Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-257558.
Sun, Hui [Verfasser], Ralph [Akademischer Betreuer] Kennel, Alexander W. [Gutachter] Koch and Ralph [Gutachter] Kennel. "Optimization of Velocity and Displacement Measurement with Optical Encoder and Laser Self-Mixing Interferometry / Hui Sun ; Gutachter: Alexander W. Koch, Ralph Kennel ; Betreuer: Ralph Kennel". München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/1230552693/34.
Al-Hasani, Firas Ali Jawad. "Multiple Constant Multiplication Optimization Using Common Subexpression Elimination and Redundant Numbers". Thesis, University of Canterbury. Electrical and Computer Engineering, 2014. http://hdl.handle.net/10092/9054.
Pełny tekst źródłaNasrallah, Anthony. "Novel compression techniques for next-generation video coding". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT043.
Video content now accounts for about 82% of global internet traffic, a share driven by the revolution in video consumption. At the same time, the market demands ever higher resolutions and qualities, which significantly increases the amount of data to be transmitted. Hence the need for video coding algorithms even more efficient than existing ones, to limit the growth in transmission rate and ensure a better quality of service. In addition, the massive consumption of multimedia content on electronic devices has an ecological impact, so finding a compromise between algorithmic complexity and implementation efficiency is a new challenge. Against this background, a collaborative team was created to develop a new video coding standard, Versatile Video Coding (VVC/H.266). Although VVC achieves a bit-rate reduction of more than 40% compared to HEVC, this does not mean there is no longer a need to improve coding efficiency further; moreover, VVC adds remarkable complexity compared to HEVC. This thesis responds to these problems by proposing three new encoding methods. The contributions are organized along two main axes. The first axis proposes and implements new compression tools in the new standard, capable of generating additional coding gains. Two methods are proposed, both relying on deriving prediction information at the decoder side. Increasing the encoder's choices can improve prediction accuracy and yield lower-energy residuals, reducing the bit rate; however, more prediction modes also mean more signaling in the bitstream to inform the decoder of the choices made at the encoder, and the gains mentioned above can be more than offset by the added signaling.
If the prediction information is instead derived at the decoder, the decoder is no longer passive but becomes active, hence the concept of an intelligent decoder: signaling the information becomes unnecessary, saving signaling bits. Each of the two methods offers a different technique for deriving prediction information at the decoder. The first constructs a histogram of gradients to deduce intra-prediction modes, which can then be combined by prediction fusion to obtain the final intra prediction for a given block. This fusion property makes it possible to predict areas with complex textures more accurately, which in conventional coding schemes would instead require finer partitioning and/or the transmission of high-energy residuals. The second technique gives VVC the ability to switch between different interpolation filters for inter prediction, with the optimal filter selected by the encoder deduced through convolutional neural networks. The second axis, unlike the first, does not add to the VVC algorithm itself; it instead aims at an optimized use of the already existing algorithm. The ultimate goal is to find the best possible compromise between the compression efficiency delivered and the complexity imposed by the VVC tools. To this end, an optimization system is designed to determine an effective policy for activating the new coding tools, either using artificial neural networks or without any artificial-intelligence technique.
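The histogram-of-gradients derivation mentioned in this abstract can be sketched as follows: gradients over already-reconstructed pixels vote, weighted by magnitude, for orientation bins, and the two strongest bins are fused with amplitude-proportional weights. This is a toy illustration of the principle only; the actual VVC-style tool works on the reconstructed template around the block and maps orientations onto angular intra modes.

```python
import math

def gradient_histogram(pixels, bins=8):
    # Accumulate magnitude-weighted orientation votes over interior pixels
    h, w = len(pixels), len(pixels[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = pixels[y][x + 1] - pixels[y][x - 1]
            gy = pixels[y + 1][x] - pixels[y - 1][x]
            mag = abs(gx) + abs(gy)
            if mag == 0:
                continue
            ang = math.atan2(gy, gx) % math.pi          # orientation in [0, pi)
            hist[min(int(ang / math.pi * bins), bins - 1)] += mag
    return hist

def derive_fusion(hist):
    # Two dominant orientations plus an amplitude-proportional fusion weight
    order = sorted(range(len(hist)), key=lambda b: hist[b], reverse=True)
    m1, m2 = order[0], order[1]
    total = hist[m1] + hist[m2]
    w1 = hist[m1] / total if total else 1.0
    return m1, m2, w1

# A purely vertical edge produces horizontal gradients -> orientation bin 0
hist = gradient_histogram([[0, 0, 9, 9]] * 4)
print(derive_fusion(hist))
```

For the vertical-edge test block, all votes land in bin 0, so the first mode dominates with weight 1.0; on natural textures the two strongest bins would share the weight.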
Luo, Fangyi. "Post-Layout DFM optimization based on hybrid encoded topological layout /". Diss., Digital Dissertations Database. Restricted to UC campuses, 2005. http://uclibs.org/PID/11984.
Zhang, Yuanzhi. "Algorithms and Hardware Co-Design of HEVC Intra Encoders". OpenSIUC, 2019. https://opensiuc.lib.siu.edu/dissertations/1769.
Nguyen, Ngoc-Mai. "Stratégies d'optimisation de la consommation pour un système sur puce encodeur H.264". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT049/document.
Power consumption induces strong constraints on the design of Systems-on-Chip: it affects system reliability, cooling cost, and battery lifetime for battery-powered systems. With the pace of semiconductor technology, power optimization has become a tremendously challenging issue, together with silicon area and/or performance optimization, especially for mobile applications. Video codec chips are used in applications ranging from video conferencing, security, and monitoring systems to entertainment. To meet the performance and power-consumption constraints encountered in mobile applications, video codecs are preferably implemented in hardware rather than in software; hardware implementation leads to better power efficiency and meets real-time requirements. Nowadays, one of the most efficient standards for video applications is H.264 Advanced Video Coding (H.264/AVC), which provides better video quality at a lower bit rate than previous standards. To bring the standard into commercial products, especially hand-held devices, designers need to apply design approaches dedicated to low-power circuits and to implement mechanisms that control the circuit's power consumption. This PhD thesis was conducted in the framework of the VENGME H.264/AVC hardware encoder design. The platform is split into several modules; the VENGME Entropy Coder and bytestream Network Abstraction Layer data packer (EC-NAL) module was designed during this PhD thesis, combining several state-of-the-art solutions to minimize power consumption. Simulation results show that the EC-NAL module presents better power figures than previously published solutions. The VENGME H.264 encoder architecture was then analyzed, and power estimations at RTL level were performed to extract the platform's power figures.
From these power figures, it was decided to implement power control in the EC-NAL module. The module contains a FIFO whose fill level can be controlled via an appropriate scaling of the clock frequency on the NAL side, which leads to a Dynamic Frequency Scaling (DFS) approach based on control of the FIFO occupancy level. The control law was implemented in hardware (full-custom) and the closed-loop system's stability was studied. Simulation results show the effectiveness of the proposed DFS strategy, which should be extended to the whole H.264 encoder platform.
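The FIFO-occupancy-driven frequency scaling described in this abstract can be illustrated with a toy discrete-time simulation: the drain-side clock is set proportional to the FIFO fill level, so the drain rate settles exactly at the producer rate and the occupancy converges to a stable operating point. All constants and the proportional control law are illustrative assumptions, not the thesis's actual hardware controller.

```python
def simulate_dfs(producer_rate, steps=200, depth=64,
                 f_min=1.0, f_max=16.0, gain=0.25):
    # One iteration = one control period: write, re-scale the clock, drain.
    level, freq = 0.0, f_min
    for _ in range(steps):
        level = min(depth, level + producer_rate)    # encoder side writes
        freq = min(f_max, max(f_min, gain * level))  # clock proportional to fill
        level -= min(level, freq)                    # NAL side drains at `freq`
    return level, freq

level, freq = simulate_dfs(producer_rate=8.0)
print(round(level, 3), round(freq, 3))
```

With these constants the loop is a contraction (post-drain level follows l' = 0.75·(l + rate)), so the frequency converges to the producer rate without the sustained oscillation a naive integral controller could exhibit.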
Howard, Dawne E. "The Finding Aid Container List Optimization Survey: Recommendations for Web Usability". Thesis, School of Information and Library Science, 2006. http://hdl.handle.net/1901/340.
DAHLQVIST, ANTON, and Victor Karlsson. "Design and optimization of a signal converter for incremental encoders : A study about maximizing the boundary limits of quadrature pulse conversion". Thesis, KTH, Maskinkonstruktion (Inst.), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-192301.
This project was carried out during the spring of 2016 in cooperation with Bosch Rexroth in Mellansel. The purpose was to investigate the possibility of implementing a converter able to steplessly rescale encoder signals with different pulse counts and convert them to a 4-20 mA current signal. An investigation into how the converter's fail-safety could be improved was carried out, and the results were implemented. The software and hardware were revised and optimized so that the unit could be potted to meet the industry's tough encapsulation requirements while remaining reprogrammable. A background study was conducted over eight weeks to gain sufficient knowledge of conversion algorithms, leading to two candidates for implementation: pulse multiplication and extrapolation of pulse times. The pre-study also covered how to write efficient C code, since high speed and high resolution demand heavy computation from the processor, as well as fail-safety in electronic systems. Test cases were designed to probe the desired properties and boundary limits of the conversions, and were executed on a test rig with a magnetic encoder connected to dSPACE ControlDesk. A converter of a kind that could not be found in the background study was designed and implemented. The test results showed that the most successful algorithm in these tests was multiplication with adaptive resolution, which reduces the number of measurement points on the input signal at higher speeds. Although extrapolation in this case caused both more static error and more noise on the signal, it remains the algorithm with the most room for further development and improvement. A fail-safety function was implemented that prevents the converter from emitting invalid pulses on faulty input signals, and that also restarts the unit if something has gone wrong, as an attempt to correct it.
Whether this made the converter fail-safe or not is hard to say, since the term is rather broad and plays out differently from case to case. The conclusion, however, is that the implemented function made the converter more fail-safe in its expected functions. The software and hardware were optimized and designed so that future potting of the converter would be possible; this feature could not be tested, however, as the development platform did not give access to the necessary processor inputs.
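The pulse-time extrapolation idea evaluated above can be sketched in a few lines: the measured period between consecutive input edges is subdivided to place `factor` times as many output edges. A real converter must do this causally (extrapolating from the previously measured period) and in fixed-point C on the microcontroller; this offline Python version only illustrates the arithmetic.

```python
def extrapolate_pulses(edge_times, factor):
    """Return synthetic edge times at `factor` x the input pulse resolution."""
    out = []
    for t0, t1 in zip(edge_times, edge_times[1:]):
        period = (t1 - t0) / factor          # subdivide the measured period
        out.extend(t0 + k * period for k in range(factor))
    out.append(edge_times[-1])               # keep the final input edge
    return out

edges = [0.0, 1.0, 2.0, 3.0]                 # uniform input pulses
print(extrapolate_pulses(edges, 4))
```

When the shaft accelerates, the period measured over the last interval lags reality, which is one source of the static error and noise the thesis observed for extrapolation.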
Le, Thu Anh. "An Exploration of the Word2vec Algorithm: Creating a Vector Representation of a Language Vocabulary that Encodes Meaning and Usage Patterns in the Vector Space Structure". Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc849728/.
Chang, Chih-Yang, and 張志揚. "Encoder Optimization for Desktop Sharing Using Screen Video Codec 2". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/b2jkxf.
National Taipei University of Technology
Department of Electrical Engineering
102
Big Blue Button (BBB) is open-source video-conferencing software that provides sharing of the desktop, video, audio, PPT, and PDF, among others. For desktop sharing, BBB adopts an encoder tailored to Adobe's Screen Video Codec 2 (SVC2) format. The goal of this work is to improve the encoder performance of BBB desktop sharing while keeping the output video stream conforming to SVC2. This work improves BBB in many respects, including detection of frame changes, the frame-replenishment decision, color quantization, reduction of frame-scaling complexity, a pipelined screen-capture structure, and type conversion of the captured frames. Experimental results show that the frame rate and bandwidth are improved by the proposed method.
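The frame-change detection with block replenishment at the core of such desktop-sharing encoders can be sketched as below. The 4-pixel block size and exact pixel comparison are illustrative; real screen codecs typically use 16x16 tiles and often compare block checksums instead of raw pixels.

```python
def changed_blocks(prev, curr, block=4):
    """Return top-left (x, y) of every block that differs between frames."""
    h, w = len(curr), len(curr[0])
    changed = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            if any(prev[y][x] != curr[y][x]
                   for y in range(by, min(by + block, h))
                   for x in range(bx, min(bx + block, w))):
                changed.append((bx, by))
    return changed                       # only these blocks are re-encoded

prev = [[0] * 8 for _ in range(8)]
curr = [row[:] for row in prev]
curr[6][5] = 1                           # a single pixel changes
print(changed_blocks(prev, curr))
```

Since a typical desktop is static most of the time, skipping unchanged blocks is where the frame-rate and bandwidth gains reported above come from.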
Hung, Chi-che, and 洪啟哲. "Optimization of H.264 Video Compression Encoder Based on DSP Platform". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/16136650177538161792.
Tatung University
Department of Electrical Engineering
99
With the advancement of digital signal processing, real-time video transmission has become an essential element of daily life. This thesis presents an implementation and optimization scheme for an H.264/MPEG-4 encoder based on the TMS320C6416 DSP. For the H.264 encoder, the open-source JM reference code is used as the basis for building a DSP-executable program. We choose the Baseline Profile of the H.264 encoder architecture as the focus of our research; this profile offers intra prediction and inter prediction, and its entropy coding adopts CAVLC. The hardware platform is the TI TMS320C6416 DSK, whose specially designed hardware modules target digital signal processing. The TMS320C6416 DSP operates at 1 GHz with eight functional units, reaching up to 8000 MIPS. The procedure of code migration, optimizing the algorithm with TI CCS, using TI intrinsic functions, and writing linear assembly code to optimize the system are discussed in the thesis. Furthermore, we use several DSP code-acceleration techniques, including memory management and TI DSP intrinsic functions. Through these code modifications, we reduce the computation by 4-11%.
Yang, Chung-Yu, and 楊中瑜. "The Optimization and Complexity Reduction of H.264/AVC Baseline Encoder Using TI DM642 Digital Signal Processor". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/18943057802547551809.
National Taipei University
Graduate Institute of Communication Engineering
96
H.264/AVC is the most advanced compression technology, offering a better compression ratio and lower distortion than previous video compression standards such as MPEG-4 and MPEG-2. However, its computational cost is very high. Among H.264-based implementations, x264 has the best performance/cost-time ratio. The main objective of this thesis is to realize x264 on the TI DM642 DSP framework. In this thesis, the complexity is reduced by algorithm optimization and restructuring of the program, and parts of the encoder are optimized with assembly code and a new memory arrangement. The proposed and updated codec achieves 22.6 FPS for VGA (640×480) video and real-time speed (more than 40 FPS) for CIF (352×288) video sequences.
Lai, Yi-Lun, and 賴逸倫. "Speed Optimization of H.264 Encoder on General-Purpose Processors with Media Instructions". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/25112098542380400700.
National Chung Cheng University
Graduate Institute of Computer Science and Information Engineering
92
H.264 is the newest and most coding-efficient standard developed by the JVT. With many advanced coding techniques, it achieves significantly higher coding performance than the existing coding standards. However, the improved coding performance comes with a great amount of time and space complexity, making H.264 impractical for real-time encoding applications; speed optimization of the codec thus becomes a crucial issue. Utilizing the media instructions embedded in modern general-purpose processors is an efficient means of optimizing the H.264 codec, since it fully exploits code-level parallelism without sacrificing performance. This is the central focus of this thesis. This work proposes an optimized H.264 encoder with joint algorithm- and code-level optimization techniques. We first modify a state-of-the-art fast motion estimation scheme for the algorithm-level optimization. We then use a commercial profiling tool to identify the most time-consuming modules suitable for SIMD implementation or other software optimization techniques. Several code-level optimization techniques, including frame-memory rearrangement, SIMD implementations based on the Intel MMX/SSE2 instruction sets, search-mode reordering, and early termination for variable-block-size motion estimation, are then applied to these time-critical modules. Simulation results show that, without sacrificing much coding efficiency, the proposed encoder achieves a speed-up of 10-12 times over the original JM7.3 encoder when all coding modes are applied.
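The early-termination technique listed above can be sketched as follows: SAD accumulation for a motion-search candidate aborts as soon as the partial sum already exceeds the best cost found so far. In the actual encoder the same idea is combined with MMX/SSE2 SIMD kernels; this scalar version only shows the control flow.

```python
def sad_early_exit(cur, ref, best_so_far):
    """Row-wise SAD that aborts once it cannot beat best_so_far."""
    total = 0
    for row_c, row_r in zip(cur, ref):
        total += sum(abs(a - b) for a, b in zip(row_c, row_r))
        if total >= best_so_far:
            return best_so_far        # candidate rejected early
    return total

cur = [[1, 2], [3, 4]]
ref = [[9, 9], [9, 9]]
print(sad_early_exit(cur, cur, 999))  # perfect match
print(sad_early_exit(cur, ref, 10))   # rejected after the first row
```

Because most candidates in a motion search are far worse than the running best, the abort usually fires within the first rows, which is where the bulk of the saved cycles comes from.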
Lai, Yen-Wen, and 賴彥汶. "Study of MPEG-4 Advance Simple Profile Video Encoder--Optimization Transcoding and Streaming". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/60085214040104777006.
National Cheng Kung University
Department of Computer Science and Information Engineering (MS/PhD Program)
92
Due to the dynamically changing bandwidth of the network environment, rate transcoding of a compressed video is necessary to fit the estimated network bandwidth. In this thesis, a rate-control algorithm and a real-time network bandwidth estimator are proposed for this purpose, so only one copy of the compressed video source is necessary, which greatly saves storage space on the server. We also implement the corresponding transcoding server, and the bitstream can be accepted and viewed with the popular QuickTime player; the result is quite satisfactory, and a real-time demo station is available. The second part of the thesis deals with determining the I, P, or B compression type of video frames for an MPEG-4 Advanced Simple Profile encoder. Most papers collect information over a whole GOP and then determine the distribution of I, P, and B types within it. We propose a dynamic scheme to determine the compression type that needs much less computation and memory, at the expected expense of somewhat reduced coding efficiency.
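A dynamic picture-type decision of the kind described can be sketched as a simple per-frame rule driven by an inter-frame difference score: a large change suggests a scene cut (I-frame), a tiny change suggests a bidirectionally predictable frame (B), and everything in between defaults to P. The metric and thresholds here are invented for illustration; the thesis's actual criterion is not reproduced.

```python
def picture_type(frame_diff, i_thresh=50.0, b_thresh=5.0):
    """Pick a coding type from a (hypothetical) frame-difference score."""
    if frame_diff > i_thresh:
        return "I"          # scene change: restart prediction
    if frame_diff < b_thresh:
        return "B"          # nearly static: cheap bidirectional prediction
    return "P"

print([picture_type(d) for d in (80.0, 2.0, 20.0)])
```

The appeal of such a per-frame rule, as the abstract notes, is that it needs no look-ahead buffer over a whole GOP, trading a little coding efficiency for much less memory and computation.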
CHEN, LUNG-CHENG, and 陳蘢檉. "Rate Allocation Techniques for 3D HEVC Video Encoder Based on 3D Quality Optimization". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/37337338826316984997.
National Chung Cheng University
Graduate Institute of Electrical Engineering
104
In this thesis, we propose a new rate-allocation and color+depth joint rate-control algorithm based on 3D-HEVC. Our joint rate control provides a smaller bit-rate error than the standard SHVC coding tool. For rate allocation, we have two methods to distribute bits. The first distributes bits among the joint (color+depth) LCUs of one joint frame and is called "3D Quality Contribution": motion vectors and edge matching are used to judge whether each LCU is important to the human visual system. The second method is color/depth LCU rate allocation via an SVR model. We use features from training sequences at 9 different target bit rates to build 9 SVR models; once the models are established, the features of test sequences are fed to them to obtain the suggested best distribution of bits between color and depth LCUs. In our experiments, the joint rate-control system reduces bit-rate error by about 0.8% compared with standard SHM 6.0, and over 15 sequences with different target bit rates, we obtain the best 3D quality performance in 12 of them.
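The importance-weighted allocation step can be sketched as a proportional split of the frame budget over per-LCU contribution scores. The scores below are arbitrary numbers standing in for the motion- and edge-based "3D Quality Contribution" the thesis computes.

```python
def allocate_bits(frame_budget, scores):
    """Split frame_budget across LCUs proportionally to importance scores."""
    total = sum(scores)
    return [frame_budget * s / total for s in scores]

# Three LCUs: the first is judged most visually important
bits = allocate_bits(12000, [3.0, 1.0, 2.0])
print(bits)
```

The split preserves the frame budget exactly, so the frame-level rate controller's target is untouched; only the distribution among LCUs shifts toward regions that matter to the human visual system.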
Lin, Yin-Ling, and 林映伶. "MPEG-2/4 Low Complexity AAC Encoder Optimization and Implementation on a StrongARM Platform". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/43493370133617970596.
National Chiao Tung University
Department of Electrical and Control Engineering
93
In this thesis, we present an optimized AAC encoding scheme and propose a data-embedding method integrated into the AAC encoding system. Both are realized on a 32-bit fixed-point processor, the StrongARM SA-1110. Experimental results show that at least 1× (real-time) encoding speed is achieved. In the AAC encoding algorithm, we propose several approaches, including removal of block switching, fast MDCT, simplified TNS, simplified M/S stereo coding, mathematical-function optimization, and fast quantization. To compensate for the error caused by fixed-point conversion, bandwidth control and a dynamic-data-precision MDCT are applied. Finally, a data-embedding method is implemented to further increase the system's utility.
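Porting an encoder to an integer-only core like the SA-1110 hinges on fixed-point arithmetic of the kind the abstract's error-compensation techniques address. A minimal Q15 sketch (illustrative only; the thesis's exact precision scheme is not reproduced):

```python
Q = 15                              # Q15: 1 sign bit, 15 fractional bits

def to_q15(x: float) -> int:
    """Convert a float in (-1, 1) to a Q15 integer."""
    return int(round(x * (1 << Q)))

def q15_mul(a: int, b: int) -> int:
    """Multiply two Q15 values; the shift restores the Q15 scale."""
    return (a * b) >> Q

half = to_q15(0.5)
quarter = q15_mul(half, half)
print(half, quarter)                # 0.5 and 0.25 in Q15
```

Each such multiply discards 15 fractional bits, and it is the accumulation of exactly this rounding error through long transforms like the MDCT that motivates the dynamic-data-precision approach mentioned above.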
Jing-Xin Wang, and 王景新. "Parallel H.264/AVC Rate-Distortion Optimization Baseline Profile Encoder on Distributed Shared Memory System". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/51038557556548773314.
National Cheng Kung University
Department of Computer Science and Information Engineering (MS/PhD Program)
98
The H.264/AVC video coding standard incorporates many coding tools to improve its compression performance. In an H.264/AVC rate-distortion-optimized (RDO) encoder, computation time is primarily spent calculating the rate-distortion (RD) cost to choose the best coding mode. Parallel computation is one way to speed up the encoder; however, calculating the RD cost requires a large amount of reference data obtained from already-coded adjacent macroblocks, which is unfavorable for any parallel computing strategy, especially on a distributed shared memory (DSM) system. In a cluster computing system, DSM provides a virtual shared-memory scheme that makes parallel programs easier to write, but the amount and frequency of data transfers between computers affect the speedup. To gain more speedup, this thesis proposes a parallel H.264/AVC RDO encoder architecture that reduces the frequency of transferring reference data. Based on this architecture, three parallel computing schemes are proposed: the Parallel Slice Scheme (PSS), the Parallel Multiple Reference Frames Scheme (PMRFS), and the Parallel Block Mode Scheme (PBM). PSS outperforms the other two schemes on a DSM system; however, video quality decreases under the proposed parallel architecture with PSS. To improve video quality, this thesis also proposes a modified parallel architecture and a modified PSS (PSS_M) based on PSS. PSS_M runs on a DSM system consisting of 5 PCs (one master node with four slave processing nodes), each with two dual-core processors. The difference in PSNR between PSS_M and the non-parallel H.264/AVC RDO encoder is slight for slow-motion sequences such as Akiyo. The maximum speedup of PSS_M is 4.22 with n=5/p=1 (five computers, each using only one core). In addition, PSS_M combined with a wavefront-order scheme (PSS_MW) was evaluated with n=5/p=4.
The maximum improvement in speedup at p=4 is 2.61. The video quality and speedup of the three proposed schemes are reported in this thesis. Although PSS_M obtains higher coding efficiency than the other methods, it is possible to combine the three schemes to obtain better video quality and speedup when more computers are used. This thesis provides a good reference for implementing such a combined scheme.
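The core idea behind the Parallel Slice Scheme is that slices carry no cross-slice prediction references, so each can be encoded by an independent worker. A thread-based sketch of that dispatch (the thesis targets a DSM cluster, not Python threads, and `encode_slice` here is a trivial stand-in for the real slice encoder):

```python
from concurrent.futures import ThreadPoolExecutor

def encode_slice(rows):
    # Stand-in for a real slice encoder: returns a per-slice "bit count"
    return sum(len(r) for r in rows)

def parallel_slice_encode(frame, n_slices=4):
    # Split the frame into independent row bands, one per worker
    step = (len(frame) + n_slices - 1) // n_slices
    slices = [frame[i:i + step] for i in range(0, len(frame), step)]
    with ThreadPoolExecutor(max_workers=n_slices) as ex:
        return list(ex.map(encode_slice, slices))

frame = [[0] * 16 for _ in range(16)]
print(parallel_slice_encode(frame))
```

The quality penalty the thesis reports follows directly from this structure: cutting the frame into slices forbids prediction across slice boundaries, so some macroblocks lose their best predictors.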
Huang, Yi-Hsin, and 黃翊鑫. "Integrated Fast Mode Decision Algorithm and SSIM-Based Rate-Distortion Optimization for H.264 Encoder". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/53152076155231006602.
National Taiwan University
Graduate Institute of Communication Engineering
97
The success of H.264 standardization implies that the video coding tools of the next-generation standard, for example H.265, will become more complicated and require extensive computation for high-quality video. To satisfy the real-time requirements of many consumer-electronics and multimedia-communication applications, it is absolutely necessary to enhance the computational efficiency of such advanced coding tools. On the other hand, because video quality is ultimately judged by human eyes, we strongly believe that the characteristics of the human visual system must be taken into account in the design of the next-generation video coding system. Motivated by these requirements, this thesis targets the development of algorithms for 1) integrated fast mode decision and 2) structural-similarity-based rate-distortion optimization. In the first part, three fast intra mode decision algorithms for different stages in the mode-decision hierarchy of H.264 are proposed: variance-based MB mode decision, improved filter-based prediction-mode decision, and R-D-characteristic-based selective intra mode decision. Their integration is also investigated, and we propose integrated fast algorithms for intra-frame coding and inter-frame coding, respectively. The integrated algorithms achieve high complexity reduction without introducing noticeable R-D performance loss; experimental results show the superiority of the proposed algorithms. In the second part, we develop a rate-distortion optimization framework based on structural similarity for the mode-decision process in H.264 and propose a predictive Lagrangian-multiplier selection method for the proposed framework. To estimate the Lagrangian multiplier, approaches with different computational overhead are presented to meet the requirements of different target applications.
The proposed method achieves about 5%-10% bit-rate reduction at the same quality in terms of the SSIM index. In subjective evaluation, the proposed method preserves more detail and introduces fewer block artifacts than the MSE-based H.264 encoder under the same bit-rate constraint.
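The SSIM index underlying such a perceptual distortion term can be computed, in its single-window global form, as below (following Wang et al.'s formula with the usual constants; encoders apply it over local windows and average, and use 1 − SSIM as the distortion in the RD cost):

```python
def ssim_global(a, b, L=255):
    """Single-window SSIM between two equal-length pixel lists."""
    n = len(a)
    mu_a, mu_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((y - mu_b) ** 2 for y in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # stabilising constants
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

x = [10, 20, 30, 40]
y = [12, 18, 33, 39]
print(ssim_global(x, x))   # identical signals score exactly 1.0
print(ssim_global(x, y))   # mild distortion scores just below 1.0
```

Unlike MSE, the score factors luminance, contrast, and structure separately, which is why optimizing for it tends to preserve texture detail at the same bit rate.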
Lin, Chih-Yuan, and 林志遠. "Research on the performance optimization of DSP program, with a case study of the H.264 Encoder". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/83556140717006270372.
Ming Chi University of Technology
Graduate Institute of Electrical Engineering
98
This thesis focuses on performance optimization for digital signal processors, using the team's own H.264 video encoder as the example for improving efficiency. The platform used in the research is the DM6437 DSP system development board made by Texas Instruments, and the officially released H.264 test video files are used to test our encoding system. The system was developed with reference to the original code of JM8.0, the reference software released by the H.264 committee. However, since the original JM8.0 was written for x86 CPUs, in order to understand its efficiency on the DM6437 we also developed a port of JM8.0 for the DM6437 DSP platform. The video compression rates of the JM8.0 port, the team-designed version, and our enhanced version are 0.16, 1.31, and 6.73 frames per second, respectively. This research carries out the optimization in two ways. The first is to use TI Code Composer Studio (CCS) 3.3 for compiler optimization of the system code, which offers four levels (o0 to o3), with o0 minimally optimized and o3 the most optimized. With the o3 option, CCS activates software pipelining and parallel-processing functions; at this point the DM6437 platform requires a large amount of memory for compilation. The second is to exploit the cache memory of the DM6437 so that frequently used code and data remain in the cache as long as possible. Using the cache achieves optimization because it decreases the number of times the DSP must access main memory. This research analyzed the overall efficiency of compiler optimization and cache optimization, using the complex H.264 encoder as the example for this efficiency study.
If one takes further steps and overcomes the remaining obstacles by rewriting the system encoder in assembly language and applying EDMA for further optimization, the efficiency of the H.264 encoder could be improved and promoted further in the future.
Yang, Chung-Yu. "The Optimization and Complexity Reduction of H.264/AVC Baseline Encoder Using TI DM642 Digital Signal Processor". 2008. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0023-2608200810274100.
Liao, Irene M. J., and 廖美貞. "A Carry-Select-Adder Optimization Technique for High-Performance Booth-Encoded Wallace-Tree Multipliers". Thesis, 2001. http://ndltd.ncl.edu.tw/handle/45267729737681347070.
National Tsing Hua University
Department of Computer Science
89
In this thesis, we present two carry-select-adder partitioning algorithms for high-performance Booth-encoded Wallace-tree multipliers. Taking the various data arrival times into account, we propose a branch-and-bound algorithm and a heuristic algorithm that partition an n-bit carry-select adder into a number of adder blocks such that the overall delay of the design is minimized. The experimental results show that our proposed algorithms achieve an average 9.1% delay reduction with less than 1% area overhead on 15 multipliers ranging from 16×16-bit to 64×64-bit.
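A behavioural sketch of the carry-select adder being partitioned: every block speculatively computes its sum for carry-in 0 and carry-in 1, and a multiplexer selects one result once the real carry arrives. The block widths passed in are exactly the quantity the two partitioning algorithms optimize; this Python model is for illustration, and the widths used below are arbitrary.

```python
def carry_select_add(a, b, widths):
    """Behavioural model of a carry-select adder with given block widths."""
    result, shift, carry = 0, 0, 0
    for w in widths:
        mask = (1 << w) - 1
        ablk = (a >> shift) & mask
        bblk = (b >> shift) & mask
        s0 = ablk + bblk            # speculative sum, carry-in = 0
        s1 = ablk + bblk + 1        # speculative sum, carry-in = 1
        s = s1 if carry else s0     # mux driven by the incoming carry
        result |= (s & mask) << shift
        carry = s >> w              # carry ripples to the next block
        shift += w
    return result | (carry << shift)

print(hex(carry_select_add(0xBEEF, 0x1234, [4, 4, 4, 4])))
```

Since both speculative sums are computed in parallel, each block's latency is hidden behind the carry ripple of the blocks before it, which is why uneven block widths tuned to the actual carry and data arrival times minimize the overall delay.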
Lammers, Christoph. "A method for the genetically encoded incorporation of FRET pairs into proteins". Doctoral thesis, 2014. http://hdl.handle.net/11858/00-1735-0000-0022-5F65-3.
Alain, Guillaume. "Auto-Encoders, Distributed Training and Information Representation in Deep Neural Networks". Thèse, 2018. http://hdl.handle.net/1866/22572.
Almeida, Inês Ferreira de. "Optimization of in vivo electroporation and comparison to microinjection as delivery methods for transgenesis in zebrafish (Danio rerio). Generation of a new neuronal zebrafish line". Master's thesis, 2022. http://hdl.handle.net/10362/132850.
Transgenic zebrafish are important models for biomedical research. Several technologies are available for generating transgenics and editing the genome; however, the methods for delivering exogenous components remain limited. In zebrafish, the most widely used method is microinjection, which requires sophisticated technical skills and shows a low integration rate for large constructs. Alternatively, some studies have reported the use of electroporation as a delivery method for generating transgenic zebrafish, but those protocols have limitations that reduce their general applicability. Building on recently published work reporting the electroporation of zebrafish embryos, optimizations were implemented to increase the number of electroporated embryos, the efficiency of plasmid-DNA delivery, and its germline integration. Electroporation cycles of 30 one-cell-stage zebrafish embryos with 300 ng/uL of plasmid DNA in PBS, using a 35 V pore-forming pulse and a 5 V transfer pulse, gave the highest survival rate and efficiency. Compared with microinjection, the optimized electroporation protocol achieved similar fluorescence intensity and expression pattern, paving the way for it to become a practical and efficient alternative to microinjection. In parallel, a new pan-neuronal transgenic zebrafish line, elalv3:GCaMP6fEF05, was generated by microinjection of one-cell-stage embryos followed by three rounds of fish crossing, screening, selection, and raising. Optimizing delivery methods such as electroporation expands the generation of new zebrafish lines for the study of molecular and developmental biology, ultimately enabling the exploration of new therapeutic avenues for humans.
Dauphin, Yann. "Advances in scaling deep learning algorithms". Thèse, 2015. http://hdl.handle.net/1866/13710.