Dissertations and theses on the topic "Encoder optimization"


Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles.


Consult the top 29 dissertations (master's and doctoral theses) for your research on the topic "Encoder optimization".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is included in the metadata.

Browse theses from many scientific fields and compile an accurate bibliography.

1

Mallikarachchi, Thanuja. "HEVC encoder optimization and decoding complexity-aware video encoding". Thesis, University of Surrey, 2017. http://epubs.surrey.ac.uk/841841/.

Abstract:
The increased demand for high quality video evidently elevates the bandwidth requirements of the communication channels being used, which in turn demands more efficient video coding algorithms within the media distribution tool chain. As such, the High Efficiency Video Coding (HEVC) standard is a potential solution that demonstrates a significant coding efficiency improvement over its predecessors. HEVC constitutes an assortment of novel coding tools and features that contribute towards its superior coding performance, yet at the same time demand more computational, processing and energy resources; a crucial bottleneck, especially in the case of resource constrained Consumer Electronic (CE) devices. In this context, the first contribution in this thesis presents a novel content adaptive Coding Unit (CU) size prediction algorithm for HEVC-based low-delay video encoding. In this case, two independent content adaptive CU size selection models are introduced while adopting a moving window-based feature selection process to ensure that the framework remains robust and dynamically adapts to any varying video content. The experimental results demonstrate a consistent average encoding time reduction ranging from 55%-58% and 57%-61% with average Bjøntegaard Delta Bit Rate (BDBR) increases of 1.93%-2.26% and 2.14%-2.33% compared to the HEVC 16.0 reference software for the low delay P and low delay B configurations, respectively, across a wide range of content types and bit rates. The video decoding complexity and the associated energy consumption are tightly coupled with the complexity of the codec as well as the content being decoded. Hence, video content adaptation is extensively considered as an application-layer solution to reduce the decoding complexity and thereby the associated energy consumption. In this context, the second contribution in this thesis introduces a decoding complexity-aware video encoding algorithm for HEVC using a novel decoding complexity-rate-distortion model. The proposed algorithm demonstrates average decoding complexity reductions of 29.43% and 13.22% for the same quality with only a 6.47% BDBR increase when using the HM 16.0 and openHEVC decoders, respectively. Moreover, decoder energy consumption analysis reveals an overall energy reduction of up to 20% for the same video quality. Adaptive video streaming is considered in the state of the art as a potential solution to cope with the uncertain fluctuations in the network bandwidth. Yet, the simultaneous consideration of both bit rate and decoding complexity for content adaptation with minimal quality impact is extremely challenging due to the dynamics of the video content. In response, the final contribution in this thesis introduces a content adaptive decoding complexity and rate controlled encoding framework for HEVC. The experimental results reveal that the proposed algorithm achieves a stable rate and decoding complexity controlling performance with an average error of only 0.4% and 1.78%, respectively. Moreover, the proposed algorithm is capable of generating HEVC bit streams that exhibit up to 20.03%/dB decoding complexity reduction, which results in up to 7.02%/dB decoder energy reduction per 1 dB Peak Signal-to-Noise Ratio (PSNR) quality loss.
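A minimal illustration of the kind of decision this abstract describes: a decoding-complexity-aware mode decision can be sketched as a Lagrangian cost that augments the usual rate-distortion cost with a complexity term. The cost form, the weights, and the per-mode numbers below are assumptions for illustration, not the model from the thesis.

```python
# Hypothetical sketch of a decoding-complexity-aware mode decision:
# J = D + lambda_r * R + lambda_c * C, where C estimates decoding complexity.
# All numbers and candidate modes below are illustrative, not from the thesis.

def mode_cost(distortion, rate_bits, decode_cycles, lambda_r, lambda_c):
    """Lagrangian cost combining distortion, rate and decoding complexity."""
    return distortion + lambda_r * rate_bits + lambda_c * decode_cycles

def choose_mode(candidates, lambda_r=0.85, lambda_c=0.01):
    """Pick the candidate mode with the lowest joint cost.

    candidates: list of dicts with keys 'name', 'D', 'R', 'C'.
    """
    best = min(candidates,
               key=lambda m: mode_cost(m["D"], m["R"], m["C"], lambda_r, lambda_c))
    return best["name"]

if __name__ == "__main__":
    # Illustrative per-mode measurements for one coding unit.
    cu_modes = [
        {"name": "intra_32x32", "D": 1200.0, "R": 180, "C": 5200},
        {"name": "inter_32x32", "D": 900.0,  "R": 260, "C": 9400},
        {"name": "skip",        "D": 1500.0, "R": 20,  "C": 1100},
    ]
    print(choose_mode(cu_modes))
```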
2

Syu, Eric. "Implementing rate-distortion optimization on a resource-limited H.264 encoder". Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33365.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (leaves 57-59).
This thesis models the rate-distortion characteristics of an H.264 video compression encoder to improve its mode decision performance. First, it provides a background to the fundamentals of video compression. Then it describes the problem of estimating rate and distortion of a macroblock given limited computational resources. It derives the macroblock rate and distortion as a function of the residual SAD and H.264 quantization parameter QP. From the resulting equations, this thesis implements and verifies rate-distortion optimization on a resource-limited H.264 encoder. Finally, it explores other avenues of improvement.
by Eric Syu.
M.Eng.
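A minimal sketch of the kind of model this abstract describes, estimating macroblock rate and distortion from the residual SAD and QP and feeding them into a mode decision. The rate and distortion models (coefficients a, b, c) are placeholder assumptions; only the lambda formula 0.85 * 2^((QP-12)/3) is the commonly cited H.264 reference-encoder choice.

```python
# Hedged sketch: estimate macroblock rate/distortion from residual SAD and QP,
# then perform a rate-distortion-optimized mode decision.
# The rate/distortion models (a, b, c coefficients) are illustrative placeholders.

def qstep(qp):
    """H.264 quantization step size, doubling every 6 QP values."""
    return 0.625 * 2 ** (qp / 6.0)

def lambda_mode(qp):
    """Commonly used H.264 mode-decision Lagrange multiplier."""
    return 0.85 * 2 ** ((qp - 12) / 3.0)

def estimate_rate(sad, qp, a=0.3, b=16.0):
    """Toy model: header bits plus residual bits that grow with SAD / Qstep."""
    return b + a * sad / qstep(qp)

def estimate_distortion(sad, qp, c=0.6):
    """Toy model: distortion grows with SAD and with the quantization step."""
    return c * sad * qstep(qp) ** 0.5

def rd_cost(sad, qp):
    return estimate_distortion(sad, qp) + lambda_mode(qp) * estimate_rate(sad, qp)

if __name__ == "__main__":
    qp = 28
    # SAD of the prediction residual for two hypothetical candidate modes.
    for name, sad in [("16x16 inter", 1450.0), ("4x4 intra", 980.0)]:
        print(name, round(rd_cost(sad, qp), 1))
```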
3

Carriço, Nuno Filipe Marques. "Transformer approaches on hyper-parameter optimization and anomaly detection with applications in stream tuning". Master's thesis, Universidade de Évora, 2022. http://hdl.handle.net/10174/31068.

Abstract:
Hyper-parameter optimisation consists of finding the parameters that maximise a model's performance. However, this mainly concerns processes in which the model shouldn't change over time. Hence, how should an online model be optimised? For this, we pose the following research question: how and when should the model be optimised? For the optimisation part, we explore the transformer architecture as a function mapping data statistics into model parameters, by means of graph attention layers, together with reinforcement learning approaches, achieving state-of-the-art results. On the other hand, in order to detect when the model should be optimised, we use the transformer architecture to empower already existing anomaly detection methods, in this case the Variational Auto Encoder. Finally, we join these developed methods in a framework capable of deciding when an optimisation should take place and how to do it, aiding the stream tuning process.
4

Hägg, Ragnar. "Scalable High Efficiency Video Coding : Cross-layer optimization". Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-257558.

Abstract:
In July 2014, the second version of the HEVC/H.265 video coding standard was announced, and it included the Scalable High Efficiency Video Coding (SHVC) extension. SHVC is used for coding a video stream together with subset streams of the same video at lower quality, and it supports spatial, temporal and SNR scalability, among others. This is used to enable easy adaptation of a video stream, by dropping or adding packets, to devices with different screen sizes, computing power and bandwidth. In this project SHVC has been implemented in Ericsson's research encoder C65. Some cross-layer optimizations have also been implemented and evaluated. The main goal of these optimizations is to make better decisions when choosing the reference layer's motion parameters and QP, by doing multi-pass coding and using the coded enhancement layer information from the first pass.
5

Sun, Hui [Verfasser], Ralph [Akademischer Betreuer] Kennel, Alexander W. [Gutachter] Koch and Ralph [Gutachter] Kennel. "Optimization of Velocity and Displacement Measurement with Optical Encoder and Laser Self-Mixing Interferometry / Hui Sun ; Gutachter: Alexander W. Koch, Ralph Kennel ; Betreuer: Ralph Kennel". München : Universitätsbibliothek der TU München, 2020. http://d-nb.info/1230552693/34.

6

Al-Hasani, Firas Ali Jawad. "Multiple Constant Multiplication Optimization Using Common Subexpression Elimination and Redundant Numbers". Thesis, University of Canterbury. Electrical and Computer Engineering, 2014. http://hdl.handle.net/10092/9054.

Abstract:
The multiple constant multiplication (MCM) operation is a fundamental operation in digital signal processing (DSP) and digital image processing (DIP). Examples of the MCM are in finite impulse response (FIR) and infinite impulse response (IIR) filters, matrix multiplication, and transforms. The aim of this work is to minimize the complexity of the MCM operation using the common subexpression elimination (CSE) technique and redundant number representations. The CSE technique searches for and eliminates common digit patterns (subexpressions) among MCM coefficients. More common subexpressions can be found by representing the MCM coefficients using redundant number representations. A CSE algorithm is proposed that works on a type of redundant numbers called the zero-dominant set (ZDS). The ZDS is an extension of the minimum Hamming weight (MHW) representations, which use the minimum number of non-zero digits. Using the ZDS improves the performance of CSE algorithms compared with using the MHW representations. The disadvantage of using the ZDS is that it increases the possibility of overlapping patterns (digit collisions), where one or more digits are shared between a number of patterns; eliminating one pattern then destroys other patterns because the common digits are removed. A pattern preservation algorithm (PPA) is developed to resolve the overlapping patterns in the representations. Tree and graph encoders are proposed to generate a larger space of number representations. The algorithms generate redundant representations of a value for a given digit set, radix, and wordlength. The tree encoder is modified to search for common subexpressions simultaneously with the generation of the representation tree. A complexity measure is proposed to compare the subexpressions at each node. The algorithm stops generating the rest of the representation tree when it finds subexpressions with maximum sharing, which reduces the search space while minimizing the hardware complexity. A combinatoric model of the MCM problem is also proposed in this work. The model is obtained by enumerating all the possible solutions of the MCM in a graph called the demand graph. Arc routing on this graph gives the solutions of the MCM problem; a similar arc routing is found in capacitated arc routing problems such as the winter salting problem. An ant colony optimization (ACO) meta-heuristic is proposed to traverse the demand graph. The ACO is simulated on a PC using the Python programming language to verify the correctness of the model and of the ACO. A parallel simulation of the ACO is carried out on a multi-core supercomputer using the C++ Boost Graph Library.
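For illustration only, the following sketch shows the core CSE idea described above: represent coefficients in a redundant signed-digit form and count shared two-nonzero-digit patterns. Canonical signed digit (CSD) is used here as a stand-in for the zero-dominant-set representations studied in the thesis, and the coefficient values are hypothetical.

```python
# Illustrative common subexpression elimination (CSE) step for multiple constant
# multiplication: convert constants to canonical signed digit (CSD) form and count
# the most frequent two-nonzero-digit pattern. CSD is used here as a stand-in for
# the zero-dominant-set representations studied in the thesis.
from collections import Counter

def to_csd(n):
    """Return CSD digits of a positive integer, least significant digit first."""
    digits = []
    while n:
        if n & 1:
            d = 2 - (n & 3)       # +1 if n ends in ...01, -1 if it ends in ...11
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

def two_digit_patterns(digits):
    """All (sign pair, distance) subexpressions formed by two nonzero digits."""
    nz = [(i, d) for i, d in enumerate(digits) if d]
    for a in range(len(nz)):
        for b in range(a + 1, len(nz)):
            (i, di), (j, dj) = nz[a], nz[b]
            yield (di, dj, j - i)   # e.g. (1, -1, 2) means x - 4x, up to a shift

if __name__ == "__main__":
    coefficients = [105, 89, 53, 171]          # hypothetical FIR coefficients
    counts = Counter()
    for c in coefficients:
        counts.update(two_digit_patterns(to_csd(c)))
    pattern, occurrences = counts.most_common(1)[0]
    print("most shared subexpression:", pattern, "occurs", occurrences, "times")
```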
7

Nasrallah, Anthony. "Novel compression techniques for next-generation video coding". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT043.

Abstract:
Video content now occupies about 82% of global internet traffic. This large percentage is due to the revolution in video content consumption. On the other hand, the market is increasingly demanding videos with higher resolutions and qualities. This causes a significant increase in the amount of data to be transmitted, hence the need to develop video coding algorithms even more efficient than existing ones in order to limit the increase in the rate of data transmission and ensure a better quality of service. In addition, the impressive consumption of multimedia content in electronic products has an ecological impact. Therefore, finding a compromise between the complexity of algorithms and the efficiency of implementations is a new challenge. As a result, a collaborative team was created with the aim of developing a new video coding standard, Versatile Video Coding – VVC/H.266. Although VVC was able to achieve a bit-rate reduction of more than 40% compared to HEVC, this does not mean at all that there is no longer a need to further improve coding efficiency. In addition, VVC adds remarkable complexity compared to HEVC. This thesis responds to these problems by proposing three new encoding methods. The contributions of this research are divided into two main axes. The first axis is to propose and implement new compression tools in the new standard, capable of generating additional coding gains. Two methods have been proposed for this first axis. These two methods rely on the derivation of prediction information at the decoder side. This is because increasing the encoder's choices can improve the accuracy of predictions and yield lower-energy residuals, leading to a reduction in bit rate. Nevertheless, more prediction modes involve more signaling to be sent in the bitstream to inform the decoder of the choices that have been made at the encoder. The gains mentioned above can therefore be largely offset by the added signaling. If the prediction information is instead derived at the decoder, the latter is no longer passive but becomes active, hence the concept of an intelligent decoder. The information then no longer needs to be signalled, which saves signaling bits. Each of the two methods uses a different technique to derive this information at the decoder. The first technique constructs a histogram of gradients to deduce different intra prediction modes that can then be combined by means of prediction fusion, to obtain the final intra prediction mode for a given block. This fusion property makes it possible to more accurately predict areas with complex textures, which, in conventional coding schemes, would rather require partitioning and/or finer transmission of high-energy residues. The second technique gives VVC the ability to switch between different interpolation filters for inter prediction. The deduction of the optimal filter selected by the encoder is achieved through convolutional neural networks. The second axis, unlike the first, does not seek to add a contribution to the VVC algorithm; it rather aims at an optimized use of the already existing algorithm. The ultimate goal is to find the best possible compromise between the compression efficiency delivered and the complexity imposed by the VVC tools. Thus, an optimization system is designed to determine an effective technique for adapting the activation of the coding tools to the content. The determination of these tools can be done either using artificial neural networks or without any artificial intelligence technique.
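A hedged sketch of the first idea mentioned in the abstract: deriving candidate intra prediction modes at the decoder from a histogram of gradients over already-reconstructed neighbouring pixels. The eight-direction mode set, the template size, and the angle-to-mode mapping are simplifications assumed for illustration, not the thesis or VVC definitions.

```python
# Hedged sketch of decoder-side intra mode derivation: build a histogram of
# gradients (HoG) over already-reconstructed neighbouring pixels and keep the
# dominant orientations as candidate intra modes. The 8-direction mode set and
# the template size are simplifications, not the VVC/thesis definitions.
import numpy as np

def hog_intra_candidates(template, n_modes=8, n_candidates=2):
    """template: 2-D array of reconstructed pixels above/left of the block."""
    gy, gx = np.gradient(template.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    # Edge orientation is perpendicular to the gradient; fold into [0, pi).
    angle = (np.arctan2(gy, gx) + np.pi / 2.0) % np.pi
    bins = np.minimum((angle / np.pi * n_modes).astype(int), n_modes - 1)
    hist = np.bincount(bins.ravel(), weights=magnitude.ravel(), minlength=n_modes)
    # The strongest orientations become the derived prediction modes; they could
    # then be blended (prediction fusion) instead of signalling a single mode.
    return list(np.argsort(hist)[::-1][:n_candidates])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic reconstructed template with a roughly vertical structure.
    template = np.tile(np.linspace(0, 255, 16), (4, 1)) + rng.normal(0, 2, (4, 16))
    print("derived mode candidates:", hog_intra_candidates(template))
```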
8

Luo, Fangyi. "Post-Layout DFM optimization based on hybrid encoded topological layout". Diss., Digital Dissertations Database. Restricted to UC campuses, 2005. http://uclibs.org/PID/11984.

9

Zhang, Yuanzhi. "Algorithms and Hardware Co-Design of HEVC Intra Encoders". OpenSIUC, 2019. https://opensiuc.lib.siu.edu/dissertations/1769.

Abstract:
Digital video is becoming extremely important nowadays and its importance has greatly increased in the last two decades. Due to the rapid development of information and communication technologies, the demand for Ultra-High Definition (UHD) video applications is becoming stronger. However, the most prevalent video compression standard, H.264/AVC, released in 2003, is inefficient when it comes to UHD videos. The increasing desire for compression efficiency superior to H.264/AVC led to the standardization of High Efficiency Video Coding (HEVC). Compared with the H.264/AVC standard, HEVC offers a double compression ratio at the same level of video quality or a substantial improvement of video quality at the same video bitrate. Although HEVC/H.265 possesses superior compression efficiency, its complexity is several times that of H.264/AVC, impeding high-throughput implementations. Currently, most researchers have focused merely on algorithm-level adaptations of the HEVC/H.265 standard to reduce computational intensity without considering hardware feasibility. Moreover, the exploration of efficient hardware architecture design is far from exhaustive; only a few research works have explored efficient hardware architectures for the HEVC/H.265 standard. In this dissertation, we investigate efficient algorithm adaptations and hardware architecture design of HEVC intra encoders. We also explore a deep learning approach to mode prediction. From the algorithm point of view, we propose three efficient hardware-oriented algorithm adaptations, including mode reduction, fast coding unit (CU) cost estimation, and group-based CABAC (context-adaptive binary arithmetic coding) rate estimation. Mode reduction aims to reduce the mode candidates of each prediction unit (PU) in the rate-distortion optimization (RDO) process, which is both computation-intensive and time-consuming. Fast CU cost estimation is applied to reduce the complexity of the rate-distortion (RD) calculation of each CU. Group-based CABAC rate estimation is proposed to parallelize syntax element processing and greatly improve rate estimation throughput. From the hardware design perspective, a fully parallel hardware architecture of an HEVC intra encoder is developed to sustain UHD video compression at 4K@30fps. The fully parallel architecture introduces four prediction engines (PE), and each PE independently performs the full cycle of mode prediction, transform, quantization, inverse quantization, inverse transform, reconstruction, and rate-distortion estimation. PU blocks with different PU sizes are processed by the different prediction engines simultaneously. Also, an efficient hardware implementation of a group-based CABAC rate estimator is incorporated into the proposed HEVC intra encoder for accurate and high-throughput rate estimation. To take advantage of the deep learning approach, we also propose a fully connected layer based neural network (FCLNN) mode preselection scheme to reduce the number of RDO modes of luma prediction blocks. All angular prediction modes are classified into 7 prediction groups. Each group contains 3-5 prediction modes that exhibit a similar prediction angle. A rough angle detection algorithm is designed to determine the prediction direction of the current block, then a small-scale FCLNN is exploited to refine the mode prediction.
10

Nguyen, Ngoc-Mai. "Stratégies d'optimisation de la consommation pour un système sur puce encodeur H.264". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT049/document.

Abstract:
Power consumption for Systems-on-Chip induces strong constraints on their design. Power consumption affects system reliability, cooling cost, and battery lifetime for Systems-on-Chips powered by batteries. With the pace of semiconductor technology, power optimization has become a tremendously challenging issue, together with silicon area and/or performance optimization, especially for mobile applications. Video codec chips are used in various applications ranging from video conferencing to security and monitoring systems, but also entertainment applications. To meet the performance and power consumption constraints encountered in mobile applications, video codecs are preferably implemented in hardware rather than in software. A hardware implementation leads to better power efficiency and satisfies real-time requirements. Nowadays, one of the most efficient standards for video applications is H.264 Advanced Video Coding (H.264/AVC), which provides better video quality at a lower bit-rate than previous standards. To bring the standard into commercial products, especially for hand-held devices, designers need to apply design approaches dedicated to low-power circuits. They also need to implement mechanisms to control the circuit power consumption. This PhD thesis is conducted in the framework of the VENGME H.264/AVC hardware encoder design. The platform is split into several modules, and the VENGME Entropy Coder and bytestream Network Abstraction Layer data packer (EC-NAL) module has been designed during this PhD thesis, taking into account and combining several state-of-the-art solutions to minimize the power consumption. Simulation results show that the EC-NAL module presents better power figures than the already published solutions. The VENGME H.264 encoder architecture has then been analyzed, and power estimations at RTL level have been performed to extract the platform power figures. From these power figures, it has been decided to implement power control on the EC-NAL module. The latter contains a FIFO whose level can be controlled via an appropriate scaling of the clock frequency on the NAL side, which leads to the implementation of a Dynamic Frequency Scaling (DFS) approach based on the control of the FIFO occupancy level. The control law has been implemented in hardware (full-custom) and the closed-loop system stability has been studied. Simulation results show the effectiveness of the proposed DFS strategy, which should be extended to the whole H.264 encoder platform.
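As a rough illustration of the DFS idea described above, the following sketch drives the NAL-side clock from the fill level of the FIFO between the entropy coder and the NAL packer, using a simple hysteresis rule. The FIFO depth, frequency table, and thresholds are hypothetical values, not those of the VENGME EC-NAL module.

```python
# Illustrative dynamic frequency scaling (DFS) loop driven by FIFO occupancy:
# when the FIFO between the entropy coder and the NAL packer fills up, raise the
# NAL-side clock; when it drains, lower it. Frequencies, thresholds and the FIFO
# depth are hypothetical values, not those of the VENGME EC-NAL module.
import random

FIFO_DEPTH = 64
FREQS_MHZ = [50, 100, 150, 200]          # available NAL-side clock frequencies

def next_frequency(level, freq_idx, low=0.25, high=0.75):
    """Simple hysteresis controller on the FIFO fill ratio."""
    fill = level / FIFO_DEPTH
    if fill > high and freq_idx < len(FREQS_MHZ) - 1:
        return freq_idx + 1          # FIFO filling up: speed up the consumer
    if fill < low and freq_idx > 0:
        return freq_idx - 1          # FIFO nearly empty: slow down, save power
    return freq_idx

if __name__ == "__main__":
    random.seed(1)
    level, freq_idx = 0, 0
    for cycle in range(20):
        produced = random.randint(0, 12)              # words written by the EC side
        consumed = FREQS_MHZ[freq_idx] // 25          # words drained per control period
        level = max(0, min(FIFO_DEPTH, level + produced - consumed))
        freq_idx = next_frequency(level, freq_idx)
        print(f"cycle {cycle:2d}  fifo {level:2d}/{FIFO_DEPTH}  "
              f"NAL clock {FREQS_MHZ[freq_idx]} MHz")
```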
11

Howard, Dawne E. "The Finding Aid Container List Optimization Survey: Recommendations for Web Usability". Thesis, School of Information and Library Science, 2006. http://hdl.handle.net/1901/340.

Abstract:
This paper examines the results of a usability study for finding aids from the Special Collections Research Center at North Carolina State University. In 2005, the Special Collections Research Center reformatted its finding aids so that the container information, typically located on the left-hand side of the document, moved to the right-hand side of the document. The study tested the effectiveness of this change, and determined that traditional finding aids performed better. The analysis of the study’s results is followed by a discussion about Web usability guidelines for online finding aids.
12

Dahlqvist, Anton, and Victor Karlsson. "Design and optimization of a signal converter for incremental encoders : A study about maximizing the boundary limits of quadrature pulse conversion". Thesis, KTH, Maskinkonstruktion (Inst.), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-192301.

Abstract:
This project was carried out during spring 2016 in collaboration with Bosch Rexroth, Mellansel, and aimed to investigate the possibility of implementing a converter with the ability to scale incremental pulse resolution and also convert the velocity to a 4-20 mA current loop reference. An investigation of how a fault tolerance mechanism could be implemented to increase the dependability of the encoder was carried out and the resulting findings were implemented. The software and hardware were revised and optimized to facilitate a future encapsulation, while still keeping the converter reprogrammable. A background study was performed during 8 continuous weeks to acquire enough knowledge about possible conversion techniques, and to finally narrow these down to two for further testing. The final conversion algorithms were multiplication and extrapolation techniques, which would be utilized to scale the pulse signal. The background study also involved writing efficient C code and a general study of fault tolerance. With the information from the background study, the two algorithms were implemented on a specially designed hardware platform. Tests were designed from the requirements given by Bosch and these were performed on a test rig with a magnetic ring encoder connected to dSPACE Control Desk. A converter that met the criteria was designed and implemented. The test results showed that the most successful algorithm implemented was the multiplication algorithm, optimized with adaptive resolution, which decreases the input update rate with increasing speed. Although extrapolation caused more noise and also a static error on the signal, it is the approach that leaves the most room for future optimization. Dependability mechanisms were implemented that stop the converter from outputting erroneous pulses and reboot the software in case of invalid inputs. Whether this made the converter fail-safe or not is difficult to tell, since fail-safe is a vague term and applies differently in each situation. It was nevertheless concluded that the implemented fault tolerance mechanism worked. The software and hardware were designed so that reprogramming is possible even though the component is encapsulated. This particular function was not tested since the development board did not provide access to the required pins.
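A small sketch of the multiplication approach discussed above: measure the period between incoming encoder edges and emit a configurable number of evenly spaced output edges per measured interval. The timing values and the scaling factor are illustrative assumptions, not the Bosch Rexroth converter's parameters.

```python
# Hedged sketch of the "multiplication" approach to scaling incremental encoder
# resolution: subdivide each measured input period into N evenly spaced output
# edges. A real converter has to predict the next period from previous ones,
# which is where latency and static errors can appear; values are illustrative.

def multiply_pulses(input_edges, factor):
    """Yield interpolated output edge times between consecutive input edges."""
    out = []
    for prev, cur in zip(input_edges, input_edges[1:]):
        period = cur - prev
        step = period / factor
        out.extend(prev + k * step for k in range(factor))
    out.append(input_edges[-1])
    return out

if __name__ == "__main__":
    # 1 kHz input pulse train (e.g. a low-resolution incremental encoder).
    edges_in = [k * 1e-3 for k in range(6)]
    edges_out = multiply_pulses(edges_in, factor=4)
    print(len(edges_in), "input edges ->", len(edges_out), "output edges")
    print([round(t * 1e3, 3) for t in edges_out][:9], "ms ...")
```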
13

Le, Thu Anh. "An Exploration of the Word2vec Algorithm: Creating a Vector Representation of a Language Vocabulary that Encodes Meaning and Usage Patterns in the Vector Space Structure". Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc849728/.

Abstract:
This thesis is an exploration and exposition of a highly efficient shallow neural network algorithm called word2vec, which was developed by T. Mikolov et al. in order to create vector representations of a language vocabulary such that information about the meaning and usage of the vocabulary words is encoded in the vector space structure. Chapter 1 introduces natural language processing, vector representations of language vocabularies, and the word2vec algorithm. Chapter 2 reviews the basic mathematical theory of deterministic convex optimization. Chapter 3 provides background on some concepts from computer science that are used in the word2vec algorithm: Huffman trees, neural networks, and binary cross-entropy. Chapter 4 provides a detailed discussion of the word2vec algorithm itself and includes a discussion of continuous bag of words, skip-gram, hierarchical softmax, and negative sampling. Finally, Chapter 5 explores some applications of vector representations: word categorization, analogy completion, and language translation assistance.
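As a concrete illustration of the skip-gram-with-negative-sampling variant discussed in Chapter 4, the following minimal numpy sketch performs single SGD updates of the word vectors. The toy vocabulary, the fixed (center, context) pair, and the hyperparameters are assumptions; Mikolov's full implementation additionally uses subsampling, a unigram^0.75 negative-sampling table, hierarchical softmax, and more.

```python
# Minimal numpy sketch of one skip-gram-with-negative-sampling update.
# Vocabulary, corpus and hyperparameters are toy values for illustration only.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_step(W_in, W_out, center, context, negatives, lr=0.05):
    """One SGD step on -log s(u_o.v_c) - sum_k log s(-u_k.v_c)."""
    v = W_in[center]
    grad_v = np.zeros_like(v)
    for word, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = W_out[word]
        score = sigmoid(v @ u)
        g = score - label                 # gradient of the logistic loss
        grad_v += g * u
        W_out[word] -= lr * g * v
    W_in[center] -= lr * grad_v
    return W_in, W_out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab_size, dim = 12, 8
    W_in = rng.normal(0, 0.1, (vocab_size, dim))   # input ("center") vectors
    W_out = rng.normal(0, 0.1, (vocab_size, dim))  # output ("context") vectors
    for _ in range(200):
        center, context = 3, 7                     # a recurring (center, context) pair
        negatives = [n for n in rng.integers(0, vocab_size, size=5) if n != context]
        W_in, W_out = sgns_step(W_in, W_out, center, context, negatives)
    print("similarity of trained pair:", float(W_in[3] @ W_out[7]))
```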
14

Chang, Chih-Yang, and 張志揚. "Encoder Optimization for Desktop Sharing Using Screen Video Codec 2". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/b2jkxf.

Abstract:
Master's thesis
National Taipei University of Technology
Department of Electrical Engineering
Academic year 102
Big Blue Button (BBB) is open-source software for video conferencing, which provides sharing functions for desktop, video, audio, PPT and PDF, etc. For desktop sharing, BBB adopts an encoder tailored to Screen Video Codec 2 (SVC2), an Adobe standard. The goal of this work is to improve the encoder performance of BBB desktop sharing, while keeping the output video stream conforming to SVC2. This work improves BBB in many aspects, including detection of frame changes, decision of frame replenishment, quantization of color, reduction of frame scaling complexity, the pipeline structure of screen capture, and type conversion of the captured frames. Experimental results show that the frame rate and bandwidth are improved by the proposed method.
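A hedged sketch of one of the listed improvements, frame-change detection: split each captured frame into blocks and mark as dirty only the blocks that differ from the previous frame, so that only those need re-encoding. The 16x16 block size and zero threshold are illustrative, not the values used in BBB or SVC2.

```python
# Hedged sketch of block-wise frame-change detection for desktop sharing:
# mark a block dirty when it differs from the previous frame, so only changed
# blocks need to be re-encoded. Block size and threshold are illustrative.
import numpy as np

def dirty_blocks(prev, cur, block=16, threshold=0):
    """Return (row, col) indices of blocks whose content changed."""
    h, w = cur.shape[:2]
    changed = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            a = prev[by:by + block, bx:bx + block]
            b = cur[by:by + block, bx:bx + block]
            if np.abs(a.astype(np.int32) - b.astype(np.int32)).max() > threshold:
                changed.append((by // block, bx // block))
    return changed

if __name__ == "__main__":
    prev = np.zeros((64, 64), dtype=np.uint8)
    cur = prev.copy()
    cur[20:30, 40:50] = 255          # simulate a cursor / window update
    print("blocks to re-encode:", dirty_blocks(prev, cur))
```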
15

Hung, Chi-che, and 洪啟哲. "Optimization of H.264 Video Compression Encoder Based on DSP Platform". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/16136650177538161792.

Abstract:
Master's thesis
Tatung University
Department of Electrical Engineering
Academic year 99
With the advancement of digital signal processing, real-time video transmission has become an essential element of our daily life. In this thesis, an implementation and optimization scheme of an H.264/MPEG-4 encoder based on the TMS320C6416 DSP is presented. For the H.264 encoder, the open source code JM is used as the basis to build a DSP-executable program. We choose the Baseline Profile of the H.264 encoder architecture as our main research target; this profile offers intra prediction and inter prediction, and its entropy coding adopts CAVLC. The hardware platform used is the TI TMS320C6416 DSK, which is designed mainly for digital signal processing with dedicated hardware modules. The TMS320C6416 DSP operates at 1 GHz with eight functional units, reaching up to 8000 MIPS. The procedure of code migration, how to optimize the algorithm using TI CCS, the use of TI intrinsic functions, and the writing of linear assembly code to optimize the system are discussed. Furthermore, we use several DSP code acceleration techniques including memory management, TI DSP intrinsic functions and others. Through these code modifications, we can reduce the computation by 4-11%.
16

Yang, Chung-Yu, and 楊中瑜. "The Optimization and Complexity Reduction of H.264/AVC Baseline Encoder Using TI DM642 Digital Signal Processor". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/18943057802547551809.

Abstract:
Master's thesis
National Taipei University
Graduate Institute of Communication Engineering
Academic year 96
H.264/AVC is the most advanced compression technology, offering a better compression ratio and lower distortion than previous video compression standards such as MPEG-4 and MPEG-2. However, its computational cost is very high. x264 has the best performance/time-cost ratio among H.264-based implementations. The main objective of this thesis is to realize x264 on the TI DM642 DSP framework. In this thesis, the complexity is reduced by using algorithm optimization and programming structure rearrangement. We also optimized part of the encoder with assembly code and a new memory arrangement. The proposed and updated codec can achieve a speed of 22.6 FPS for VGA (640×480) size and real-time performance (more than 40 FPS) for CIF (352×288) size video sequences.
17

Lai, Yi-Lun, and 賴逸倫. "Speed Optimization of H.264 Encoder on General-Purpose Processors with Media Instructions". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/25112098542380400700.

Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Computer Science and Information Engineering
Academic year 92
H.264 is the newest and most coding-efficient standard developed by the JVT. With many advanced coding techniques, it can achieve significantly higher coding performance compared to existing coding standards. However, the improved coding performance of H.264 comes with a great amount of time and space complexity, making it impractical for real-time encoding applications. The speed optimization of the H.264 codec thus becomes a crucial issue. The utilization of media instructions embedded in modern general-purpose processors is considered an efficient means of optimizing the H.264 codec, since it can fully exploit code-level parallelism without sacrificing performance. This is the central focus of this thesis work. This work proposes an optimized H.264 encoder with joint algorithm- and code-level optimization techniques. We first modify one state-of-the-art fast motion estimation scheme for the algorithm-level optimization. We then use a commercial profiling tool to identify the most time-consuming modules which are suitable for SIMD implementations or other software optimization techniques. Several code-level optimization techniques, including frame-memory rearrangement, SIMD implementations based on the Intel MMX/SSE2 instruction sets, search mode reordering and early termination for variable block-size motion estimation, are then applied to these time-critical modules. Simulation results show that, without sacrificing too much coding efficiency, the proposed encoder achieves a speed-up of up to 10-12 times over the original JM7.3 encoder when all coding modes are applied.
18

Lai, Yen-Wen, and 賴彥汶. "Study of MPEG-4 Advance Simple Profile Video Encoder--Optimization Transcoding and Streaming". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/60085214040104777006.

Abstract:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering (MS/PhD program)
Academic year 92
Due to the dynamically changing bandwidth of the network environment, rate transcoding of a compressed video is necessary to fit the estimated network bandwidth. In this thesis, a rate control algorithm and a real-time network bandwidth estimator are proposed for this purpose. As a result, only one copy of a compressed video source is necessary, which greatly saves storage space on the server. We also implement the corresponding transcoding server, and the bitstream can be accepted and viewed using the popular QuickTime player. The results are quite satisfactory, and a real-time demo station is available. The second part of the thesis deals with determining the I, P, or B compression type of video frames for the MPEG-4 Advanced Simple Profile encoder. Most papers collect information over a whole GOP and then determine the distribution of I, P and B compression types in the GOP. We propose a dynamic scheme to determine the compression type, which needs much less computation and memory at the expected expense of somewhat reduced coding efficiency.
19

Chen, Lung-Cheng, and 陳蘢檉. "Rate Allocation Techniques for 3D HEVC Video Encoder Based on 3D Quality Optimization". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/37337338826316984997.

Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Electrical Engineering
Academic year 104
In this thesis, we propose a new algorithm for rate allocation and color+depth joint rate control based on 3D-HEVC. In our algorithm, joint rate control can provide a smaller bitrate error than the standard SHVC coding tool. For rate allocation, we have two methods to distribute bits. First, we propose a method to allocate bits among the different joint (color+depth) LCUs in one joint (color+depth) frame, called "3D Quality Contribution." In the 3D quality contribution, we use motion vectors and edge matching to judge whether each LCU is important to the human visual system. The second method for distributing target bits is "color/depth LCU rate allocation via an SVR model." We use different features from training sequences and 9 different target bitrates to build 9 SVR models. After the SVR models are established, we can use the features of the test sequences and the SVR models to obtain a suggested color/depth bit distribution for the LCUs. In our experiments, our joint rate control system reduces the bitrate error by about 0.8% compared with the standard SHM 6.0. We tested 15 different sequences with different target bitrates and obtained the best 3D quality performance in 12 of them.
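For illustration, the sketch below trains a single SVR (for one hypothetical target bitrate) that maps simple LCU features to a suggested colour/depth bit split, in the spirit of the SVR-based rate allocation described above. The features, training data, and clipping range are synthetic assumptions, and scikit-learn is assumed to be available.

```python
# Illustrative sketch of SVR-based colour/depth bit allocation: train one SVR
# that maps simple LCU features to the fraction of bits suggested for the colour
# (texture) component, the rest going to depth. Data and features are synthetic.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic training set: [mean motion-vector magnitude, edge density in depth map]
X_train = rng.uniform(0, 1, size=(200, 2))
# Hypothetical ground truth: busier texture motion -> more bits to colour,
# stronger depth edges -> more bits to depth.
y_train = 0.7 + 0.2 * X_train[:, 0] - 0.25 * X_train[:, 1]
y_train += rng.normal(0, 0.02, size=200)

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_train, y_train)

def allocate_bits(lcu_features, lcu_target_bits):
    """Split an LCU's bit budget between colour and depth using the SVR model."""
    colour_fraction = float(np.clip(model.predict([lcu_features])[0], 0.5, 0.95))
    colour_bits = int(round(lcu_target_bits * colour_fraction))
    return colour_bits, lcu_target_bits - colour_bits

if __name__ == "__main__":
    print(allocate_bits([0.8, 0.1], 2400))   # motion-heavy, few depth edges
    print(allocate_bits([0.2, 0.9], 2400))   # static texture, strong depth edges
```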
20

Lin, Yin-Ling, and 林映伶. "MPEG-2/4 Low Complexity AAC Encoder Optimization and Implementation on a StrongARM Platform". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/43493370133617970596.

Abstract:
Master's thesis
National Chiao Tung University
Department of Electrical and Control Engineering
Academic year 93
In this thesis, we present an optimized AAC encoding scheme and also propose a data-embedding method integrated into the AAC encoding system. Both are finally realized on a 32-bit fixed-point processor, the StrongARM SA-1110. Experimental results show that at least 1× encoding speed is achieved. In the AAC encoding algorithm, we propose several approaches including the removal of block switching, fast MDCT, simplified TNS, simplified M/S stereo coding, mathematical function optimization and fast quantization. To compensate for the error caused by fixed-point conversion, bandwidth control and a dynamic-data-precision MDCT are applied. Finally, a data-embedding method is implemented to further increase the encoder's utility.
21

Wang, Jing-Xin, and 王景新. "Parallel H.264/AVC Rate-Distortion Optimization Baseline Profile Encoder on Distributed Shared Memory System". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/51038557556548773314.

Abstract:
Doctoral dissertation
National Cheng Kung University
Department of Computer Science and Information Engineering (MS/PhD program)
Academic year 98
The H.264/AVC video coding standard incorporates many coding tools into its design to improve its compression performance. In an H.264/AVC rate-distortion optimization (RDO) encoder, computation time is primarily spent on calculating the rate-distortion (RD) cost for choosing the best coding mode. Parallel computation is one of the methods to speed up the encoder. However, calculating the rate-distortion cost requires a lot of reference data obtained from coded adjacent macroblocks. This is not a good property for any parallel computing strategy, especially for a distributed shared memory (DSM) system. In a cluster computing system, DSM provides a virtual shared memory scheme that makes parallel programs easier to write, but the amount and frequency of data transfers on each computer affect the speedup. To gain more speedup, this thesis proposes a parallel H.264/AVC RDO encoder architecture that reduces the frequency of transferring reference data. Based on this architecture, three parallel computing schemes are proposed: the Parallel Slice Scheme (PSS), the Parallel Multiple Reference Frames Scheme (PMRFS) and the Parallel Block Mode Scheme (PBM). The Parallel Slice Scheme outperforms the other two schemes on a DSM system. However, video quality is decreased in the proposed parallel architecture with PSS. To further improve video quality, this thesis also proposes a modified parallel architecture and a modified PSS (PSS_M) based on PSS. PSS_M is run over a DSM system consisting of 5 PC computers (one master node with four slave processing nodes), each with two dual-core processors. The difference in PSNR curves between PSS_M and the H.264/AVC RDO encoder without parallelism is slight for slow-motion sequences such as Akiyo. The maximum speedup of PSS_M is 4.22 for n=5/p=1 (five computers are used and each computer uses only one core). In addition, PSS_M combined with a wavefront order scheme (PSS_MW) for n=5/p=4 was also evaluated in this thesis; the maximum improvement in speedup at p=4 is 2.61. The video quality and speedup of the three proposed schemes are reported in this thesis. Although PSS_M obtains more coding efficiency than the other methods, it is possible to combine the three schemes to obtain higher video quality and speedup when more computers are used. This thesis provides a good reference for implementing such a combined scheme.
22

Huang, Yi-Hsin, and 黃翊鑫. "Integrated Fast Mode Decision Algorithm and SSIM-Based Rate-Distortion Optimization for H.264 Encoder". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/53152076155231006602.

Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
Academic year 97
The success of H.264 standardization implies that the video coding tools of the next-generation video coding standard, for example H.265, will become more complicated and require extensive computations for high quality video. To satisfy the real-time requirements of many consumer electronic and multimedia communication applications, it is absolutely necessary to enhance the computational efficiency of such advanced coding tools. On the other hand, because video quality is ultimately judged by human eyes, we strongly believe that the characteristics of the human visual system must be taken into account in the design of the next-generation video coding system. Motivated by these requirements of next-generation video coding, this thesis targets the development of algorithms for 1) integrated fast mode decision and 2) structural-similarity-based rate-distortion optimization. In the first part, three fast intra mode decision algorithms for different stages in the mode decision hierarchy of H.264 are proposed: variance-based MB mode decision, improved filter-based prediction mode decision, and an R-D characteristic based selective intra mode decision. Their integration is also investigated, and we propose integrated fast algorithms for intra-frame coding and inter-frame coding, respectively. The integrated algorithms achieve a high complexity reduction without introducing noticeable R-D performance loss. Experimental results are provided to show the superiority of the proposed algorithms. In the second part, we develop a rate-distortion optimization framework based on structural similarity for the mode decision process in H.264, and propose a predictive Lagrangian multiplier selection method for the proposed framework. To estimate the Lagrangian multiplier, approaches with different computational overhead are presented to meet the requirements of different target applications. The proposed method achieves about 5%-10% bit rate reduction at the same quality in terms of the SSIM index. From the subjective evaluation, the proposed method preserves more detail and introduces fewer blocking artifacts than the MSE-based H.264 encoder under the same bit-rate constraint.
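A minimal sketch of the SSIM-based rate-distortion optimization idea: replace the MSE distortion term with (1 - SSIM) in the Lagrangian mode-decision cost and pick the cheapest mode. The lambda value and the candidate reconstructions are illustrative assumptions; scikit-image is assumed to provide the SSIM computation.

```python
# Hedged sketch of SSIM-based rate-distortion-optimized mode decision: the usual
# distortion term is replaced by (1 - SSIM) between the original and each mode's
# reconstruction, and the mode minimising (1 - SSIM) + lambda * R is chosen.
# The lambda value and candidate reconstructions are illustrative only.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def ssim_rd_cost(original, reconstruction, rate_bits, lam):
    distortion = 1.0 - ssim(original, reconstruction, data_range=255)
    return distortion + lam * rate_bits

def choose_mode(original, candidates, lam=5e-4):
    """candidates: list of (mode_name, reconstructed_block, rate_bits)."""
    return min(candidates, key=lambda c: ssim_rd_cost(original, c[1], c[2], lam))[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(16, 16)).astype(np.uint8)
    smooth = np.full_like(block, int(block.mean()))            # cheap, blurry mode
    noisy = np.clip(block.astype(int) + rng.integers(-3, 4, block.shape),
                    0, 255).astype(np.uint8)                   # detailed, costly mode
    print(choose_mode(block, [("dc_like", smooth, 40), ("detailed", noisy, 160)]))
```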
23

Lin, Chih-Yuan, and 林志遠. "Research on the performance optimization of DSP program, with a case study of the H.264 Encoder". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/83556140717006270372.

Abstract:
Master's thesis
Ming Chi University of Technology
Graduate Institute of Electrical Engineering
Academic year 98
This thesis focuses on performance optimization inside a digital signal processor, using the H.264 video encoder developed by our team as an example of improving efficiency. The platform used in the research is the DM6437 DSP system development board made by Texas Instruments. To test our encoding system, we use officially released H.264 video test sequences. The system was developed by referring to the original code of the reference software JM8.0 released by the H.264 standardization body. However, since the original JM8.0 was written for the x86 CPU platform, in order to understand the efficiency of JM8.0 on the DM6437, we also developed a ported version of JM8.0 on the DM6437 DSP platform. The video compression rates of the ported JM8.0 version, the team-designed version, and our enhanced version are 0.16 frames per second, 1.31 frames per second, and 6.73 frames per second, respectively. This research pursues optimization in two ways. The first is to utilize TI Code Composer Studio (CCS) 3.3 for the optimization of the system code, which is divided into 4 levels (o0 to o3), with o0 being minimally optimized and o3 being the most optimized level. When using the o3 option, CCS activates pipelining and parallel processing; at this point, the DM6437 platform requires a large amount of memory for compiling. The second is to utilize the cache memory inside the DM6437 so that frequently used code and data can remain in the cache as long as possible. Using the cache achieves optimization because it decreases the number of times the DSP needs to access main memory. This research analyzed the overall efficiency of compiler optimization and cache optimization, using the complex H.264 encoder as an example. With further work, such as rewriting critical encoder routines in assembly language and applying EDMA, the efficiency of the H.264 encoder could be improved further.
24

Yang, Chung-Yu. "The Optimization and Complexity Reduction of H.264/AVC Baseline Encoder Using TI DM642 Digital Signal Processor". 2008. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0023-2608200810274100.

25

Liao, Irene M. J., and 廖美貞. "A Carry-Select-Adder Optimization Technique for High-Performance Booth-Encoded Wallace-Tree Multipliers". Thesis, 2001. http://ndltd.ncl.edu.tw/handle/45267729737681347070.

Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
Academic year 89
In this thesis, we present two carry-select adder partitioning algorithms for high-performance Booth-encoded Wallace-tree multipliers. By taking various data arrival times into account, we propose a branch-and-bound algorithm and a heuristic algorithm to partition an n-bit carry-select adder into a number of adder blocks such that the overall delay of the design is minimized. The experimental results show that our proposed algorithm achieves on average a 9.1% delay reduction with less than 1% area overhead on 15 multipliers ranging from 16×16-bit to 64×64-bit.
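As an illustration of partitioning a carry-select adder under non-uniform data arrival times, the sketch below uses a simple dynamic program with a toy delay model (per-bit ripple delay, mux delay, and an arrival profile peaking in the middle columns). The thesis itself uses a branch-and-bound algorithm and a heuristic; this is only a hedged stand-in for the idea.

```python
# Hedged sketch of carry-select adder partitioning by dynamic programming.
# The delay model (per-bit ripple delay, mux delay, arrival profile) is a toy one.
T_RIPPLE = 1.0   # delay per bit of the ripple adder inside a block
T_MUX = 0.8      # delay of the carry-select multiplexer

def partition_csa(arrival):
    """Return (total_delay, block_boundaries) minimising the adder delay.

    arrival[k] is the time at which input bit k arrives (e.g. from the
    Booth-encoded Wallace tree, where the middle columns arrive latest).
    """
    n = len(arrival)
    best = [(0.0, [0])] + [(float("inf"), None)] * n   # best[i]: carry ready after bit i
    for i in range(1, n + 1):
        for j in range(i):                              # previous block boundary
            carry_in, cuts = best[j]
            block_ready = max(arrival[j:i]) + (i - j) * T_RIPPLE
            carry_out = max(block_ready, carry_in) + T_MUX
            if carry_out < best[i][0]:
                best[i] = (carry_out, cuts + [i])
    return best[n]

if __name__ == "__main__":
    # Toy arrival profile: middle columns of the multiplier arrive last.
    arrival = [0, 0, 1, 2, 3, 4, 4, 3, 2, 1, 0, 0]
    delay, cuts = partition_csa(arrival)
    blocks = list(zip(cuts, cuts[1:]))
    print("delay:", delay, "blocks:", blocks)
```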
26

Lammers, Christoph. "A method for the genetically encoded incorporation of FRET pairs into proteins". Doctoral thesis, 2014. http://hdl.handle.net/11858/00-1735-0000-0022-5F65-3.

27

Alain, Guillaume. "Auto-Encoders, Distributed Training and Information Representation in Deep Neural Networks". Thèse, 2018. http://hdl.handle.net/1866/22572.

28

Almeida, Inês Ferreira de. "Optimization of in vivo electroporation and comparison to microinjection as delivery methods for transgenesis in zebrafish (Danio rerio). Generation of a new neuronal zebrafish line". Master's thesis, 2022. http://hdl.handle.net/10362/132850.

Abstract:
Transgenic zebrafish are important models for biomedical research. Several technologies are available for the generation of transgenics and for genome editing; however, methods for the delivery of exogenous components remain limited. In zebrafish, the most used method is microinjection, which requires sophisticated technical skills and presents a low integration rate for large constructs. Alternatively, a few studies have reported the use of electroporation as a delivery method for the generation of transgenic zebrafish; however, these protocols contain some limitations that reduce their widespread applicability. To overcome this, we built on the most recently published work reporting electroporation in zebrafish embryos to implement optimizations that increase the number of embryos electroporated, the efficiency of plasmid DNA delivery and its integration into the germline. Electroporation rounds of 30 one-cell stage zebrafish embryos with 300 ng/uL of plasmid DNA in PBS, using a 35 V poring pulse and a 5 V transfer pulse, yielded the highest survival and efficiency. Compared to microinjection, the optimized electroporation protocol achieved similar fluorescence intensity and expression pattern, opening the way to becoming a practical and efficient alternative to microinjection. In parallel, a new calcium-indicator pan-neuronal transgenic zebrafish line, elalv3:GCaMP6fEF05, was generated through microinjection into one-cell stage zebrafish embryos, followed by 3 rounds of fish crosses, screens, selection and raising. The improvement of delivery methods, such as electroporation, will expand the generation of new zebrafish lines for the study of developmental and molecular biology, ultimately allowing the exploration of new therapeutic avenues for humans.
29

Dauphin, Yann. "Advances in scaling deep learning algorithms". Thèse, 2015. http://hdl.handle.net/1866/13710.

