Doctoral dissertations on the topic "AVC scheme"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Browse the top 50 academic doctoral dissertations on "AVC scheme".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, if the relevant parameters are available in the metadata.
Browse doctoral dissertations from a wide range of disciplines and compile accurate bibliographies.
Tan, Keyu. "Error Resilience Scheme in H.264/AVC and UEP Application in DVB-H Link Layer". Thesis, Queen Mary, University of London, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.531472.
Digby, G. "Harmonic analysis of A.C. traction schemes". Thesis, Swansea University, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233938.
Huang, Jia Heng. "A study on arc and jet schemes". Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3950591.
Linder, Martin, and Tobias Nylin. "Pricing of radar data". Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-104020.
Barcenas, Everardo. "Raisonnement automatisé sur les arbres avec des contraintes de cardinalité". Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00578972.
Xu, Ping. "Evaluation of Repeated Biomarkers: Non-parametric Comparison of Areas under the Receiver Operating Curve Between Correlated Groups Using an Optimal Weighting Scheme". Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4261.
Groza, Mayya. "Modélisation et discrétisation des écoulements diphasiques en milieux poreux avec réseaux de fractures discrètes". Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4093/document.
This thesis presents work on the modelling and discretisation of two-phase flows in fractured porous media. These models couple the flow in the fractures, represented as surfaces of codimension one, with the flow in the surrounding matrix. The discretisation is carried out in the framework of gradient schemes, which account for a large family of conforming and nonconforming discretisations. The test cases are motivated by the target application of the thesis: gas recovery by hydraulic fracturing in low-permeability reservoirs.
Exposito, Victor. "Réseaux de multidiffusion avec coopération interactive entre récepteurs". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC006.
The present thesis concentrates on downlink communications. To tackle one part of this challenging problem, we focus on the multicast channel, in which one transmitter broadcasts a common message intended for a whole group of users. To ensure that the transmission rate is not limited by the weakest user in terms of channel quality, different solutions using massive multiple-input multiple-output or multirate strategies have been proposed in the literature. However, if all users wish to obtain the same content quality, the weakest user would set the rate and/or require a disproportionate amount of resources, and thus impact the whole group. With the recent study of device-to-device mechanisms, cooperation between users in close proximity becomes possible and would benefit all users by ensuring the same content quality while maintaining a low cost in terms of resources and energy. Consequently, this thesis is centered on the multicast network with receiver cooperation. Information-theoretic tools formalize the study of the network considered and provide general bounds on the achievable transmission rate. The proposed cooperation scheme is based on an appropriate superposition of compress-forward (CF) and decode-forward (DF) operations, and provably outperforms non-interactive schemes in the two-receiver scenario. Properties of the interactive cooperation emerge from the asymmetric construction of the scheme, which makes it possible to adapt the order of CFs and DFs according to the channel conditions. The core idea of the interaction, some insights on key construction points, and numerical results are given for small networks. System-level simulations illustrate the potential gain of receiver cooperation for larger networks.
Jou, Kyong-Bok. "Etude du schème d'expansion dans la structuration et l'élaboration du sens avec application au français écrit". Paris 5, 1986. http://www.theses.fr/1986PA054090.
Dubois, Joanne. "Modélisation, approximation numérique et couplage du transfert radiatif avec l'hydrodynamique". Thesis, Bordeaux 1, 2009. http://www.theses.fr/2009BOR13962/document.
The present work is dedicated to the numerical approximation of solutions of the M1 moment model for radiative transfer. The objective is to develop efficient and accurate numerical solvers, able to provide precise and robust computations of flows where radiative transfer effects are important. With this aim, several numerical methods have been considered in order to derive numerical schemes based on Godunov-type solvers. Particular attention has been paid to solvers preserving the stationary contact waves; namely, a relaxation scheme and an HLLC solver are presented in this thesis. The robustness of each of these solvers has been established (positivity of the radiative energy and limitation of the radiative flux). Several numerical experiments in one and two space dimensions validate the developed methods and highlight their interest.
Preux, Anthony. "Transport optimal et équations des gaz sans pression avec contrainte de densité maximale". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS435/document.
In this thesis, we consider the pressureless Euler equations with a congestion constraint. This system still raises many open questions, and aside from its one-dimensional version, very little is known. The strategy that we propose relies on previous work on crowd motion models with congestion in the framework of the Wasserstein space, and on a microscopic granular model with inelastic collisions. It consists of the study of a time-splitting scheme. The first step projects the current velocity field onto a set of admissible velocities, ensuring that trajectories do not cross during the time step. The scheme then moves the density with the new velocity field. This intermediate density may violate the congestion constraint, so the third step projects it onto the set of admissible densities. Finally, the velocity field is updated taking into account the positions of the physical particles during the scheme. In the one-dimensional case, solutions computed by the algorithm match the ones known for these equations. In the two-dimensional case, computed solutions respect some properties that the solutions to these equations can be expected to satisfy. In addition, we notice some similarities between solutions computed by the scheme and those of the granular model with inelastic collisions. This scheme is then discretized with respect to the space variable for the purpose of numerical computation of solutions. The resulting algorithm uses a new method to discretize the Wasserstein cost. This method, called the Transverse Sweeping Method, consists in expressing the cost using the mass flow from each cell across the hyperplanes defined by the interfaces between cells.
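In one dimension with equal-mass particles, the projection step of a splitting scheme of this kind reduces to an isotonic-regression problem, which makes a short sketch possible. The following toy illustration is not the scheme of the thesis: the density is represented by particles, the congestion constraint becomes a minimum inter-particle gap `d`, the projection is computed with the pool-adjacent-violators algorithm, and the velocity update shown (displacement over `dt`) is just one possible choice.

```python
def isotonic_projection(z):
    """Euclidean projection of the sequence z onto nondecreasing sequences
    (pool-adjacent-violators algorithm)."""
    blocks = []  # each block holds [sum, count] of pooled values
    for val in z:
        blocks.append([val, 1])
        # merge while the previous block's mean exceeds the last block's mean
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out


def splitting_step(x, v, dt, d):
    """One step of the toy splitting scheme: free transport, projection of the
    positions onto the congested set {x[i+1] - x[i] >= d}, velocity update."""
    x_free = [xi + vi * dt for xi, vi in zip(x, v)]      # transport step
    z = [xi - i * d for i, xi in enumerate(x_free)]      # subtract the gaps
    x_new = [zi + i * d for i, zi in enumerate(isotonic_projection(z))]
    v_new = [(xn - xo) / dt for xn, xo in zip(x_new, x)]  # effective velocity
    return x_new, v_new
```

For two crossing particles (`x = [0, 0.5]`, `v = [1, -1]`, `d = 0.1`, `dt = 1`) the free trajectories would cross; the projection pools them into a congested block spaced exactly `d` apart, mimicking an inelastic collision.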
Marriere, Nicolas. "Cryptanalyse de chiffrements par blocs avec la méthode des variances". Thesis, Cergy-Pontoise, 2017. http://www.theses.fr/2017CERG0922/document.
The first part of the thesis is the cryptanalysis of generalized Feistel networks with the use of the variance method. This method improves existing attacks in two ways: data complexity or the number of rounds. To that end, we have developed a tool which computes the exact values of the expectations and variances involved; it provides a better analysis of the attacks. In the second part, we have studied the EGFN, a new family of generalized Feistel networks. We have used the variance method and our tool to build differential attacks. Simulations were made to confirm the theoretical study. In the last part, we have studied LILLIPUT, a concrete cipher based on the EGFN. We have provided a differential analysis and built differential attacks with unusual conditions. These attacks were found empirically by a tool that automatically looks for differential attacks. In particular, we have highlighted some improbable differential attacks.
Thériault, Nathalie. "Analyse de sensibilité et amélioration des simulations d’albédo de surfaces enneigées dans les zones subarctiques et continentales humides à l’est du Canada avec le schéma de surface CLASS". Mémoire, Université de Sherbrooke, 2015. http://hdl.handle.net/11143/6946.
Abstract: The surface energy balance of northern regions is closely linked to variations in surface albedo (the fraction of solar radiation reflected by a surface). These variations are strongly influenced by the presence, depth and physical properties of the snowpack. Climate change significantly affects snow cover evolution and decreases surface albedo and snow albedo, with positive feedback to climate. Despite the importance of the albedo, many models compute it empirically, which can induce significant biases with respect to albedo observations depending on the surfaces studied. The Canadian Land Surface Scheme, CLASS (used in Canada in the Canadian Regional Climate Model and the Global Climate Model), simulates the spatial and temporal evolution of snow state variables, including the albedo. The albedo is computed according to the depth of snow on the ground as well as the accumulation of snow in trees. The seasonal evolution of the albedo of snow on the ground is estimated in CLASS from an empirical aging expression in time and temperature, and a "refresh" based on a threshold of snowfall depth. The seasonal evolution of snow on the canopy is estimated from an interception expression depending on tree type and snowfall density, and an empirical expression for the unloading rate with time. The objectives of this project are to analyse albedo behaviour (simulated and measured) and to improve CLASS simulations in winter for Eastern Canada. To do so, sensitivity tests were performed on prescribed parameters (parameters used in the CLASS computation whose values are fixed and determined empirically). Also, albedo evolution with time and meteorological conditions was analysed for grass and coniferous terrain. Finally, we tried to improve the simulations by optimizing sensitive prescribed parameters for grass and coniferous terrain, and by modifying the refresh albedo value for grass terrain. First, we analysed albedo evolution and modelling biases.
Grass terrain showed strong sensitivity to the precipitation rate threshold (above which the albedo refreshes to its maximum value) and to the value of the albedo refresh; both are affected by the input precipitation rate and phase. Modifying the precipitation rate threshold causes the daily surface albedo to vary mainly (75 % of data in winter) between 0.62 and 0.8, a greater fluctuation than for a normal simulation over winter. Modifying the albedo refresh value causes the surface albedo to vary mainly (75 %) between 0.66 and 0.79, but with extreme values (25 % of data) from 0.48 to 0.9. Coniferous areas showed little sensitivity to the studied prescribed parameters. Comparisons were also made between simulated and measured mean albedo during winter. CLASS underestimates the albedo by -0.032 (4.3 %) at SIRENE (grass in Southern Quebec), by -0.027 (3.4 %) at Goose Bay (grass at an arctic site) and by -0.075 (27.1 %) at James Bay (boreal forest) (or -0.011 (5.2 %) compared to MODIS (MODerate resolution Imaging Spectroradiometer) data). A modelling issue on grass terrain is the small and steady maximum albedo value (0.84) compared to measured data in arctic conditions (0.896 with variations of the order of 0.09 at Goose Bay, or 0.826 at SIRENE with warmer temperatures). In forested areas, a modelling issue is the small albedo increase (+0.17 in the visible range, +0.04 in NIR) for the part of the vegetation covered by snow (the total surface albedo reaches a maximum of 0.22) compared to events of high surface albedo (0.4). Another bias comes from the albedo value of the snow trapped on the canopy, which does not decrease with time, in contrast to the observed surface albedo, which is lower at the end of winter and suggests that snow metamorphism occurred. Secondly, we tried to improve the simulations by optimizing prescribed parameters and by modifying the computation of the albedo's maximum value.
Optimizations were made on sensitive prescribed parameters and on those that seemed unsuited. No significant RMSE (Root Mean Square Error) improvement was obtained from these optimizations in either the grass or the coniferous area. We also tried to improve the albedo simulations by adjusting the maximum value (normally fixed) with temperature and precipitation rate on grass terrain. Results show that these modifications did not significantly improve the simulations' RMSE; nevertheless, the latter modification improved the correlation between simulated and measured albedo. These statistics were computed over the whole dataset, which can dilute the impact of the modifications (they mainly affect the albedo during precipitation events), but it gives an overview of the new model's performance. The modifications also added variability to the maximum values (closer to the observed albedo) and increased our knowledge of surface albedo behaviour (simulated and measured). The methodology is also replicable for other studies aiming to analyse and improve the simulations of a surface model.
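The aging/refresh structure described in this abstract (exponential aging with time and temperature, full refresh above a snowfall threshold) can be illustrated with a small sketch. This is not the actual CLASS formulation: only the 0.84 maximum albedo comes from the text above; the decay time constants, minimum albedos and the snowfall refresh threshold are invented placeholder values.

```python
import math

def update_snow_albedo(albedo, dt_h, air_temp_c, snowfall_mm,
                       alpha_max=0.84, alpha_min_dry=0.70, alpha_min_melt=0.50,
                       tau_dry_h=100.0, tau_melt_h=24.0, refresh_mm=1.0):
    """One time step of an empirical snow-albedo aging/refresh rule.

    Structure mirrors the CLASS approach described above; all constants
    except alpha_max = 0.84 are illustrative placeholders, and real CLASS
    uses its own empirical expressions.
    """
    if snowfall_mm >= refresh_mm:
        return alpha_max                      # fresh snowfall resets the albedo
    if air_temp_c < 0.0:                      # cold snow ages slowly
        tau, alpha_min = tau_dry_h, alpha_min_dry
    else:                                     # melting snow ages faster, darker floor
        tau, alpha_min = tau_melt_h, alpha_min_melt
    # exponential decay of the albedo toward its minimum value
    return alpha_min + (albedo - alpha_min) * math.exp(-dt_h / tau)
```

A rule of this shape reproduces the qualitative behaviour discussed above: a fixed maximum after each qualifying snowfall, and a decay whose speed depends on temperature.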
Deymier, Nicolas. "Étude d’une méthode d’éléments finis d’ordre élevé et de son hybridation avec d’autres méthodes numériques pour la simulation électromagnétique instationnaire dans un contexte industriel". Thesis, Toulouse, ISAE, 2016. http://www.theses.fr/2016ESAE0038/document.
In this thesis, we study improvements of the Yee scheme to treat efficiently and in a relevant way the industrial issues we are facing nowadays. For that, we first of all try to reduce the numerical dispersion error and then to improve the modelling of curved surfaces and of harness networks. To answer these needs, a solution based on a Discontinuous Galerkin (DG) method was first considered. However, using such a method on the entire modelling volume is quite costly; moreover, wires are not taken into account in this method. That is the reason why, with an industrial tool as the objective and after a large bibliographic study, we turned to the study of a finite element scheme (FEM) on a Cartesian mesh which has all the good properties of the Yee scheme. In particular, this scheme reduces exactly to the Yee scheme when the spatial order of approximation is set to zero. For higher orders, this new scheme greatly reduces the numerical dispersion error. In the frame of this thesis and for this scheme, we give a theoretical stability criterion, study its theoretical convergence and perform an analysis of the dispersion error. To take into account the possibility of variable spatial orders of approximation in each direction, we put in place a strategy of order assignment according to the given mesh. This strategy yields an optimal time step for a given selected precision while reducing the cost of the calculations. Once this new scheme had been adapted to large industrial computing means, different EMC, antenna, NEMP or lightning problems were treated to demonstrate the advantages and the potential of this scheme. As a conclusion of these numerical simulations, we show that this method is limited by a lack of precision when taking curved geometries into account. To improve the treatment of curved surfaces, we propose a hybridization between this scheme and the DG scheme.
This hybridization can also be applied to other methods such as Finite Differences (FDTD) or Finite Volumes (FVTD). We show that the proposed hybridization technique conserves energy and is stable under a condition that we study theoretically. Some examples are presented for validation. Finally, to take the cables into account, a thin-wire model with a high spatial order of approximation is proposed. Unfortunately, this model does not cover all the industrial cases. To solve this issue, we propose a hybridization with a transmission line method. The advantage of this hybridization is demonstrated on different cases which would not have been feasible with a simpler thin-wire method.
Chauveheid, Daniel. "Ecoulements multi-matériaux et multi-physiques : solveur volumes finis eulérien co-localisé avec capture d’interfaces, analyse et simulations". Thesis, Cachan, Ecole normale supérieure, 2012. http://www.theses.fr/2012DENS0032/document.
This work is devoted to the extension of an Eulerian cell-centered finite volume scheme with interface capturing to the simulation of multimaterial fluid flows. Our purpose is to develop a simulation tool able to handle multi-physics problems in the following sense. We address the case of radiating flows, modelled by a two-temperature system of equations where the hydrodynamics is coupled to radiation transport. We address a numerical scheme for taking surface tension forces into account. An implicit scheme is proposed to handle low-Mach-number fluid flows by means of a renormalization of the numerical diffusion. Finally, the scheme is extended to three-dimensional flows and to multimaterial flows, that is, with an arbitrary number of materials. At each step, numerical simulations validate our schemes.
Carlier, Julien. "Schémas aux résidus distribués et méthodes à propagation des ondes pour la simulation d’écoulements compressibles diphasiques avec transfert de chaleur et de masse". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLY008/document.
The topic of this thesis is the numerical simulation of two-phase flows in an industrial framework. Two-phase flow modelling is a challenging domain to explore, mainly because of the complex phenomena involved, such as cavitation and other transfer processes between phases. Furthermore, these flows generally occur in complex geometries, which makes the development of efficient resolution methods difficult. The models that we consider belong to the class of diffuse interface models, and they allow easy modelling of transfers between phases. The considered class of models includes a hierarchy of sub-models, which take into account different levels of interaction between phases. To pursue our studies, we first compared the so-called four-equation and six-equation two-phase flow models, including the effects of mass transfer processes. We then chose to focus on the four-equation model. One of the main objectives of our work has been to extend residual distribution schemes to this model. In the context of numerical solution methods, it is common to use the conservative form of the balance law; indeed, solving the equations in a non-conservative form may lead to a wrong solution to the problem. Nonetheless, solving the equations in non-conservative form may be more interesting from an industrial point of view. To this aim, we employ a recent approach which allows us to ensure conservation while solving a non-conservative system, on the condition of knowing its conservative form. We then validate our method and apply it to problems with complex geometry. Finally, the last part of our work is dedicated to evaluating the validity of the considered diffuse interface model for applications to real industrial problems. Using uncertainty quantification methods, the objective is to identify the parameters that make our simulations the most plausible, and to target possible extensions that can make our simulations more realistic.
Karimou, Gazibo Mohamed. "Etudes mathématiques et numériques des problèmes paraboliques avec des conditions aux limites". Phd thesis, Université de Franche-Comté, 2013. http://tel.archives-ouvertes.fr/tel-00950759.
Halftermeyer, Pierre. "Connexité dans les Réseaux et Schémas d'Étiquetage Compact d'Urgence". Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0140/document.
We aim at assigning each vertex x of an n-vertex graph G a compact O(log n)-bit label L(x) in order to:
1. construct, from the labels of the vertices of a forbidden set X ⊂ V(G), a data structure S(X);
2. decide, from S(X), L(u) and L(v), whether two vertices u and v are connected in G \ X.
We give a solution to this problem for the family of 3-connected graphs with bounded genus:
— we obtain O(g log n)-bit labels;
— S(X) is computed in O(Sort(|X|, n)) time;
— connection between vertices is decided in O(log log n) optimal time.
We finally extend this result to H-minor-free graphs. This scheme requires O(polylog n)-bit labels.
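The query such a labeling scheme answers (connectivity in G \ X) can be stated as a naive, non-compact baseline. The BFS below stores the whole adjacency list rather than O(g log n)-bit labels, and answers in linear rather than O(log log n) time; it only illustrates the semantics of the query, not the compact scheme itself.

```python
from collections import deque

def connected_avoiding(adj, u, v, forbidden):
    """Naive baseline for the labeling-scheme query: is u connected to v
    in G \\ X, where X = forbidden? adj maps each vertex to its neighbours."""
    if u in forbidden or v in forbidden:
        return False
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return True
        for y in adj[x]:
            # never traverse a forbidden (failed) vertex
            if y not in seen and y not in forbidden:
                seen.add(y)
                queue.append(y)
    return False
```

On a 4-cycle 0-1-2-3-0, removing vertex 1 still leaves 0 and 2 connected through 3, while removing both 1 and 3 disconnects them.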
Chiarello, Felisia Angela. "Lois de conservation avec flux non-local pour la modélisation du trafic routier". Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4076.
In this thesis, we provide mathematical traffic flow models with non-local fluxes and adapted numerical schemes to compute approximate solutions to such equations. More precisely, we consider flux functions depending on an integral evaluation of the conserved variables through a convolution product. First of all, we prove the well-posedness of entropy weak solutions for a class of scalar conservation laws with non-local flux arising in traffic modelling. This model is intended to describe the reaction of drivers who adapt their velocity with respect to what happens in front of them. Here, the support of the convolution kernel is proportional to the look-ahead distance of the drivers. We approximate the problem by a Lax-Friedrichs scheme and we provide some estimates for the sequence of approximate solutions. Stability with respect to the initial data is obtained through the doubling of variables technique. We also study the limit model as the kernel support tends to infinity. After that, we prove the stability of entropy weak solutions of a class of scalar conservation laws with non-local flux under higher regularity assumptions. We obtain an estimate of the dependence of the solution on the kernel function, the speed and the initial datum. We also prove the existence, for small times, of weak solutions for non-local systems in one space dimension, given by a non-local multi-class model intended to describe the behaviour of different groups of drivers or vehicles. We approximate the problem by a Godunov-type numerical scheme and we provide uniform L∞ and BV estimates for the sequence of approximate solutions, locally in time. We present some numerical simulations illustrating the behaviour of different classes of vehicles and we analyse two cost functionals measuring the dependence of congestion on traffic composition.
Furthermore, we propose alternative simple schemes to numerically integrate non-local multi-class systems in one space dimension. We obtain these schemes by splitting the non-local conservation laws into two different equations, namely the Lagrangian and the remap steps. We provide some estimates recovered by approximating the problem with the Lagrangian-Antidiffusive Remap (L-AR) schemes, and we prove the convergence to weak solutions in the scalar case. We then show some numerical simulations illustrating the efficiency of the L-AR schemes in comparison with classical first- and second-order numerical schemes. Moreover, we recover the numerical approximation of the proposed non-local multi-class traffic flow model, presenting the multi-class version of the Finite Volume WENO (FV-WENO) schemes, in order to obtain higher order of accuracy. Simulations using FV-WENO schemes for a multi-class model for autonomous and human-driven traffic flow are presented. Finally, we introduce a traffic model for a class of non-local conservation laws at road junctions. Instead of a single velocity function for the whole road, we consider two different road segments, which may differ in their speed law and number of lanes. We use an upwind-type numerical scheme to construct a sequence of approximate solutions and we provide uniform L∞ and BV estimates. Using a Lax-Wendroff type argument, we prove the well-posedness of the proposed model. Some numerical simulations are compared with the corresponding (discontinuous) local model.
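A minimal version of a Lax-Friedrichs discretization for a scalar non-local traffic model of this kind can be sketched as follows. The Greenshields-type speed law v(r) = 1 - r, the uniform downstream kernel and the boundary treatment are illustrative choices, not those of the thesis; the flux in each cell uses the convolution of the density with a kernel supported on the look-ahead distance `eta`.

```python
import numpy as np

def nonlocal_lax_friedrichs_step(rho, dx, dt, eta):
    """One Lax-Friedrichs step for rho_t + (rho * v(rho * omega))_x = 0,
    with a uniform downstream kernel of support eta and v(r) = 1 - r."""
    K = max(1, int(round(eta / dx)))
    w = np.full(K, 1.0 / K)                           # normalized uniform kernel
    pad = np.concatenate([rho, np.full(K, rho[-1])])  # extend downstream state
    # discrete convolution: weighted average of rho over the look-ahead cells
    conv = np.array([w @ pad[i:i + K] for i in range(rho.size)])
    f = rho * (1.0 - conv)                            # non-local flux rho * v
    new = rho.copy()                                  # boundary cells kept fixed
    new[1:-1] = 0.5 * (rho[2:] + rho[:-2]) - dt / (2.0 * dx) * (f[2:] - f[:-2])
    return new
```

Under a CFL restriction of the usual type (dt sufficiently small with respect to dx), constant states are preserved exactly and the computed densities stay within the physical interval [0, 1], in line with the maximum-principle estimates discussed above.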
Magni, Adrien. "Méthodes particulaires avec remaillage : analyse numérique nouveaux schémas et applications pour la simulation d'équations de transport". Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00623128.
Turpault, Rodolphe. "Modelisation, approximation numerique et applications du transfert radiatif en desequilibre spectral couple avec l'hydrodynamique". Phd thesis, Université Sciences et Technologies - Bordeaux I, 2003. http://tel.archives-ouvertes.fr/tel-00004620.
Pierron, Luc. "La protection sociale des fonctionnaires : étude critique d'un régime spécial". Thesis, Paris 2, 2016. http://www.theses.fr/2016PA020042.
Special social security schemes for civil servants belong to French mythology. Mentioning them usually draws general agreement: they are spoken of as long-acquired habits, categorical privileges or remnants of the past. Legally qualifying the social protection of civil servants as a special social security scheme, however, raises questions. The concept of a "scheme" implies a relative overall consistency. Integration into "social security" means respecting the same principles and operating modes as the rest of the institution. The adjective "special" suggests that the scheme has the same relationship to the general scheme as special law has to ordinary law. All three points are questionable. The social protection of civil servants is an iterative construction, spread over more than a century, where each benefit and guarantee rests on its own logic. A large part of this social protection consists of administrative cover provided by public employers. The general scheme is not the ordinary law of social security. That being said, another study of social security in general, and of the social protection of civil servants in particular, can begin, and it reveals an identity crisis: through its integration into social security, the social protection of civil servants succeeds in expressing its uniqueness; but it is because this social protection tends to be equated with social security that its relativity may be deduced from it.
Saïd, Cirine. "L'activation des schémas cognitifs dans la douleur, la représentation du corps, la périnatalité et en lien avec le contrôle du moi et le coping". Thesis, Toulouse 2, 2013. http://www.theses.fr/2013TOU20061.
Early maladaptive schemas (EMS) are associated with different forms of psychopathology, being not only a source of its manifestation but also a means of maintaining maladaptive behaviour. It is likely that EMS are not the only element linked to psychopathology; coping, irrational beliefs and ego control are likely linked to both schemas and maladaptive behaviour. The objective of this work was to better understand the activation of EMS as well as the influence of cognitive and psychopathological factors in the general population, in individuals suffering from chronic debilitating pain, in future parents, and in those suffering from eating disorders. The first study addresses EMS activation and its relationship to coping strategies and ego control in the general population. The second study explores the expression of EMS in future parents and identifies the relationships between EMS, coping and ego control. The third study concerns EMS activation in relation to body image and the manifestation of eating disorders. The fourth studies EMS in relation to chronic pain in a Tunisian sample. Significant and meaningful correlations were found between coping strategies, ego control and EMS. Gender differences were also identified and explored.
Lebreton, Matthieu. "Développement d’un schéma de calcul déterministe APOLLO3® à 3 dimensions en transport et en évolution avec description fine des hétérogénéités pour le cœur du réacteur Jules Horowitz". Thesis, Aix-Marseille, 2020. http://www.theses.fr/2020AIXM0249.
The Jules Horowitz Reactor (JHR) is a material testing reactor. As the JHR core is highly heterogeneous and has no simple repetitive pattern, the classical two-step modelling used to solve the neutron transport equation reaches its limits. A new neutronic scheme has been set up to describe the core heterogeneity explicitly. This reference scheme is designed with the APOLLO3 code. It is based on a two-step methodology, improved in order to better predict the environment of subcritical sub-assemblies and by using an explicit representation of some heterogeneities at the core stage. This scheme has been validated against standard Monte Carlo calculations with the TRIPOLI4 code and by quantifying the approximations with non-standard options of APOLLO3 such as MOC-3D calculations. A precise self-shielding calculation taking into account the physical specificities of the fuel sub-assemblies is used at the lattice step. During this step, flux calculations are performed with the method of characteristics (MOC-2D), while exact collision probabilities are used for cross sections described with probability tables. The depletion core calculation of the JHR is carried out by solving the transport equation with the MINARET solver, which uses the discontinuous Galerkin finite element method. This method is naturally suitable for unstructured geometries defined with planes and without symmetry. Finally, a 3D calculation of the JHR core can preserve heterogeneities such as experimental devices, fuel plates and other core structures. It allows depleted reaction rates to be determined as precisely as possible on an exact geometry.
Morán, Cañón Mario. "Étude schématique du schéma des arcs". Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S079.
The arc scheme associated with an algebraic variety defined over a field parameterizes the formal germs of curves lying on the considered variety. We study some local schematic properties of the arc scheme of a variety. Given an affine plane curve singularity defined by a reduced homogeneous or weighted homogeneous polynomial, we compute, mainly using arguments from differential algebra, presentations of the ideal defining the Zariski closure of the smooth locus of the tangent space, which is always an irreducible component of this space. In particular, we obtain a Groebner basis of this ideal, which gives a complete description of the functions of the tangent space of the variety which are nilpotent in the arc scheme. On the other hand, we study the formal neighbourhood, in the arc scheme of a normal toric variety, of certain arcs belonging to the Nash set associated with a divisorial toric valuation. We establish a comparison theorem, in the arc scheme, between the formal neighbourhood of the generic point of the Nash set and that of a sufficiently generic rational arc in the same Nash set.
Gutierrez, Romero Mario Fernando. "L'argumentation sur des questions socio-scientifiques : l'influence des contextes culturels dans la prise de décisions". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE2075/document.
Pełny tekst źródła
This thesis has the objective of analyzing the argumentative competence of Colombian students when they must make decisions pertaining to agricultural mining projects, and the impact that their ethnic origin and educational level have on their way of thinking about these topics. In particular, we analyze the argumentation on socio-scientific issues sustained by Colombian high school and university students who belong to two different social and cultural contexts. The first part of this thesis aims to give a panoramic view of the theoretical and empirical advances in the study of argumentation and to state our theoretical positions. The theoretical background of the empirical work was examined by way of diverse investigations. Lastly, psychological and linguistic models were considered to frame the analysis of emotions in argumentation. In the second part a detailed characterization of the Colombian indigenous population was made, in particular of the Nasa-Kiwe (Páez) community: their predominant characteristics, their geographic location and a presentation of the Páez language, Nasa Yuwe. The history of Colombia was described to offer an understanding of the socio-scientific issues used in this investigation. We then describe the participants in the experiment and the characteristics relevant to their origin and background, as well as the task presented to them, which forms the body of this investigation, together with the statistics and graphics that give a more global view of the findings. The third part of the investigation contains the analytical chapters. The interactional analysis of the discourse of the indigenous community highlights a collaborative discourse in which rational logic, agentivity, moral analysis and cosmic references are evoked to justify arguments. A set of demands relating to indigenous rights was also found; these demands were defended by emotional arguments referring to aggressions experienced historically by the indigenous communities in Latin America. This recognition of aggression is shared across the ethnic community, although not independently of the educational level. In the majority of the urban population, no specific cosmogony or religion was found. The analysis was principally carried out through the argumentation of consequences, which was used to reflect on the risks to the environment and to the indigenous culture, specifically the exploitation and use of natural resources. All of the subjects placed the indigenous population as defenseless in the face of aggressions against the environment. The final objective of many of the arguments was the protection of Mother Earth (from an indigenous perspective) or the preservation of the environment (from an urban point of view), given that its disappearance looms in the face of actors motivated by economic exploitation, in contrast with the perspective maintained by the indigenous communities.
Odry, Nans. "Méthode de décomposition de domaine avec parallélisme hybride et accélération non linéaire pour la résolution de l'équation du transport Sn en géométrie non-structurée". Thesis, Aix-Marseille, 2016. http://www.theses.fr/2016AIXM4058/document.
Pełny tekst źródła
Deterministic calculation schemes are devised to numerically solve the neutron transport equation in nuclear reactors. Dealing with core-sized problems is very challenging for computers, so much so that dedicated core codes have no choice but to rely on simplifying assumptions (assembly-scale then core-scale steps…). This PhD work aims to correct some of these 'standard' approximations in order to get closer to reference calculations: thanks to important increases in calculation capacities (HPC), one can nowadays solve 3D core-sized problems using both a high mesh refinement and the transport operator. Developments were performed inside the Sn core solver Minaret, from the new CEA neutronics platform APOLLO3®, for fast neutron reactors of the CFV kind. This work focuses on a Domain Decomposition Method in space. The fundamental idea involves splitting a core-sized problem into smaller, 'independent' subproblems. Angular flux is exchanged between adjacent subdomains. In doing so, the combined subproblems converge to the global solution at the outcome of an iterative process. Domain decomposition is well suited to massive parallelism, allowing much more ambitious computations in terms of both memory requirements and calculation time. A hybrid MPI/OpenMP parallelism is chosen to match supercomputer architectures. A Coarse Mesh Rebalance acceleration technique is added to offset the convergence penalty observed with domain decomposition. The potential of the new calculation scheme is demonstrated on a 3D core of the CFV kind, using a heterogeneous description of the absorber rods.
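The core idea of the spatial domain decomposition described in this abstract (solve each subdomain separately and exchange interface values until the pieces agree) can be illustrated on a much simpler model problem. The sketch below applies an alternating Schwarz iteration to a 1D Poisson equation split into two overlapping subdomains; the model problem, grid, and function names are my own illustration, not from the thesis.

```python
import math

def solve_dirichlet(f, a, b, ua, ub, n):
    """Solve -u'' = f on [a, b], u(a)=ua, u(b)=ub, at n interior points
    (second-order finite differences, tridiagonal Thomas algorithm)."""
    h = (b - a) / (n + 1)
    d = [f(a + (i + 1) * h) * h * h for i in range(n)]
    d[0] += ua
    d[n - 1] += ub
    cp, dp = [0.0] * n, [0.0] * n          # forward elimination
    cp[0], dp[0] = -0.5, d[0] / 2.0
    for i in range(1, n):
        m = 2.0 + cp[i - 1]
        cp[i] = -1.0 / m
        dp[i] = (d[i] + dp[i - 1]) / m
    u = [0.0] * n                          # back substitution
    u[n - 1] = dp[n - 1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

def schwarz(f, sweeps=50):
    """Alternating Schwarz on [0, 1] with overlapping subdomains [0, 0.6]
    and [0.4, 1] on a 21-point grid: each subdomain solve takes its
    interface value from the other subdomain's latest iterate."""
    N = 20
    u = [0.0] * (N + 1)                    # u[0] = u[N] = 0 fixed
    for _ in range(sweeps):
        u[1:12] = solve_dirichlet(f, 0.0, 0.6, 0.0, u[12], 11)   # nodes 1..11
        u[9:20] = solve_dirichlet(f, 0.4, 1.0, u[8], 0.0, 11)    # nodes 9..19
    return u
```

For -u'' = π² sin(πx) the iterates converge to the global finite-difference solution; the values traded at nodes 8 and 12 play the role of the angular flux exchanged between subdomains in the thesis.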
Vivion, Léo. "Particules classiques et quantiques en interaction avec leur environnement : analyse de stabilité et problèmes asymptotiques". Thesis, Université Côte d'Azur, 2020. https://tel.archives-ouvertes.fr/tel-03135254.
Pełny tekst źródła
At the beginning of the 2000s, inspired by the pioneering works of A.O. Caldeira and A.J. Leggett, L. Bruneau and S. De Bièvre introduced a Hamiltonian model describing exchanges of energy between a classical particle and its environment, in such a way that these exchanges lead to a friction effect on the particle. On the one hand, this model has been extended to the case of several particles and, when the number of particles is large, a kinetic model has also been derived; hereafter this model is referred to as the Vlasov-Wave system. On the other hand, since the model is Hamiltonian, it is possible to consider its quantum version, which we call the Schrödinger-Wave system. The aim of this thesis is to study the asymptotics of particular dynamics of the Vlasov-Wave and Schrödinger-Wave systems. In the kinetic case there exist stationary solutions such that the particle density in phase space is spatially homogeneous. By analogy with the Vlasov-Poisson system, we consider the question of the existence of a Landau damping effect for small perturbations of these particular solutions. We obtain a new linear stability criterion which then allows us, by adapting the works of J. Bedrossian, N. Masmoudi, C. Mouhot and C. Villani, to prove nonlinear Landau damping in the free-space and torus cases. In particular we exhibit new constraints (due to the interactions with the environment) on damping rates. We also exhibit a link between stable equilibria of the Vlasov-Wave system and those of the Vlasov-Poisson system, and we highlight the similarity between a parameter of the system and the Jeans length in the attractive Vlasov-Poisson case. This study led to a numerical one which reinforces our understanding of the role of the system's parameters, more precisely of their influence on the dynamics of solutions. In the Schrödinger-Wave case we investigated the possibility of highlighting a friction effect on the quantum particle coming from the environment. As a first step we justify the existence of solitary waves (solutions in which the dispersion of the Schrödinger equation is exactly compensated by an attractive effect) and the orbital stability of ground states (solitary waves minimizing the energy under a mass constraint). This orbital stability result ensures that a small perturbation of a ground state stays, up to the equation's invariances (here translation and change of phase), close to it uniformly in time. A ground state might nevertheless move, and we study the existence of a friction effect through this possible displacement. While in the Schrödinger-Newton case the Galilean invariance allows one to construct a solution which is a ground state moving on a straight line at constant momentum, the Schrödinger-Wave system is not Galilean invariant, and the analogy with the classical case suggests that the momentum of a moving ground state converges to zero. This conjecture has been studied and confirmed numerically. The numerical investigations required the development of a time discretization of the considered equations which takes into account the expression of the interactions between particles and the environment, in order to ensure that the energy exchanges at the numerical level are consistent with those at the continuous level.
Faucher, Florian. "Contributions à l'imagerie sismique par inversion des formes d’onde pour les équations d'onde harmoniques : Estimation de stabilité, analyse de convergence, expériences numériques avec algorithmes d'optimisation à grande échelle". Thesis, Pau, 2017. http://www.theses.fr/2017PAUU3024/document.
Pełny tekst źródła
In this project, we investigate the recovery of subsurface Earth parameters. We consider seismic imaging as a large scale iterative minimization problem, and deploy the Full Waveform Inversion (FWI) method, for which several aspects must be treated. The reconstruction is based on the wave equations because the characteristics of the measurements indicate the nature of the medium in which the waves propagate. First, the natural heterogeneity and anisotropy of the Earth require numerical methods that are adapted and efficient to solve the wave propagation problem. In this study, we have decided to work with the harmonic formulation, i.e., in the frequency domain. Therefore, we detail the mathematical equations involved and the numerical discretization used to solve the wave equations in large scale situations. The inverse problem is then established in order to frame the seismic imaging. It is a nonlinear and ill-posed inverse problem by nature, due to the limited available data and the complexity of the subsurface characterization. However, we obtain a conditional Lipschitz-type stability in the case of piecewise constant model representation. We derive the lower and upper bounds for the underlying stability constant, which allows us to quantify the stability with frequency and scale. It is of great use for the underlying optimization algorithm involved to solve the seismic problem. We review the foundations of iterative optimization techniques and provide the different methods that we have used in this project. The Newton method, due to the numerical cost of inverting the Hessian, may not always be accessible. We propose some comparisons to identify the benefits of using the Hessian, in order to study what would be an appropriate procedure regarding accuracy and time. We study the convergence of the iterative minimization method, depending on different aspects such as the geometry of the subsurface, the frequency, and the parametrization. In particular, we quantify the frequency progression, from the point of view of optimization, by showing how the size of the basin of attraction evolves with frequency. Following the convergence and stability analysis of the problem, the iterative minimization algorithm is conducted via a multi-level scheme where frequency and scale progress simultaneously. We perform a collection of experiments, including acoustic and elastic media, in two and three dimensions. The perspectives of attenuation and anisotropic reconstructions are also introduced. Finally, we study the case of Cauchy data, motivated by the dual-sensor devices that are developed in the geophysical industry. We derive a novel cost function, which arises from the stability analysis of the problem. It allows elegant perspectives where no prior information on the acquisition set is required.
Le, Minh Hoang. "Modélisation multi-échelle et simulation numérique de l’érosion des sols de la parcelle au bassin versant". Thesis, Orléans, 2012. http://www.theses.fr/2012ORLE2059/document.
Pełny tekst źródła
The overall objective of this thesis is to study multiscale modelling and to develop a suitable method for the numerical simulation of soil erosion at the catchment scale. After reviewing the various existing models, we derive an analytical solution for the non-trivial coupled system modelling bedload transport. Next, we study the hyperbolicity of the system with different sedimentation laws found in the literature. Concerning the numerical method, we present the validity domain of the time-splitting method, which consists in solving the Shallow-Water system (modelling the flow routing) over a first time step for a fixed bed and then updating the topography in a second step using the Exner equation. On the modelling of transport in suspension at the plot scale, we present a system coupling the mechanisms of infiltration, runoff and transport of several classes of sediment. Numerical implementation and validation tests of a high-order well-balanced finite volume scheme are also presented. Then, we discuss the model application and calibration using experimental data on ten 1 m² plots of crusted soil in Niger. In order to perform simulations at the catchment scale, we develop a multiscale model in which we integrate the inundation ratio into the evolution equations to take into account the small-scale effect of the microtopography. On the numerical side, we study two well-balanced schemes: the first is a path-conservative Roe scheme, and the second uses a generalized hydrostatic reconstruction. Finally, we present a first model application with experimental data from the Ganspoel catchment, where the use of parallel computing is also motivated.
Cisternino, Marco. "A parallel second order Cartesian method for elliptic interface problems and its application to tumor growth model". Phd thesis, Université Sciences et Technologies - Bordeaux I, 2012. http://tel.archives-ouvertes.fr/tel-00690743.
Pełny tekst źródłaBoujelben, Abir. "Géante éolienne offshore (GEOF) : analyse dynamique des pales flexibles en grandes transformations". Thesis, Compiègne, 2018. http://www.theses.fr/2018COMP2442.
Pełny tekst źródła
In this work, a numerical model of fluid-structure interaction is developed for the dynamic analysis of giant wind turbines with flexible blades that can deflect significantly under wind loading. The model is based on an efficient partitioned FSI approach for incompressible and inviscid flow interacting with a flexible structure undergoing large transformations. It seeks to provide the best estimate of the true design aerodynamic load and the associated dynamic response of such a system (blades, tower, attachments, cables). To model the structure, we developed a 3D solid element to analyze the geometrically nonlinear statics and dynamics of wind turbine blades undergoing large displacements and rotations. The bending behavior of the 3D solid is improved by introducing rotational degrees of freedom and enriching the approximation of the displacement field in order to describe the flexibility of the blades more accurately. This solid is capable of representing high-frequency modes, which should be kept under control. Thus, we propose a regularized form of the mass matrix and robust time-stepping schemes based on energy conservation and dissipation. Aerodynamic loads are modeled using the 3D vortex panel method. Such a boundary method is relatively fast for calculating the pressure distribution compared to CFD and provides sufficient precision. The aerodynamic and structural parts interact with each other via a partitioned coupling scheme with an iterative procedure, where special considerations are taken into account for large overall motion. In an effort to introduce a fatigue indicator within the proposed framework, pre-stressed cables are added to the wind turbine, connecting the tower to the support and providing more stability. Therefore, a novel complementary force-based finite element formulation is constructed for the dynamic analysis of elasto-viscoplastic cables. Each of the proposed methods is first validated with different test examples. Then, several numerical simulations of full-scale wind turbines are performed in order to better understand their dynamic behavior and to eventually optimize their operation.
Coady, Allison Marie. "Examining the role of preventive diplomacy in South Africa’s foreign policy towards Zimbabwe, 2000-2009". Diss., University of Pretoria, 2012. http://hdl.handle.net/2263/25681.
Pełny tekst źródła
Dissertation (MA)--University of Pretoria, 2012.
Political Sciences
unrestricted
Hakala, Tim. "Settling-Time Improvements in Positioning Machines Subject to Nonlinear Friction Using Adaptive Impulse Control". BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/1061.
Pełny tekst źródłaLien, Chien-Hsiang, i 連健翔. "Study of the H.264/AVC Video Encryption Scheme". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/96995702986645384827.
Pełny tekst źródła
國防大學理工學院
電子工程碩士班
99
Because the Internet is an unprotected space, if important data (e.g., military intelligence) is illegally accessed while being transmitted over the net, and possibly further modified and used, the damage to the personnel, enterprise, or country involved is incalculable. Therefore, how to protect the transmission security of video data has become a hot research topic. H.264/AVC is the newest video compression standard, developed to improve the compression rate and strengthen fault tolerance in various applications. Because of its high compression rate and good visual quality, H.264/AVC has become a widely used video compression technique. Generally speaking, a good video encryption technique needs to meet four basic requirements: compressibility equivalence, complexity equivalence, format compliance, and perceptual security. Based on the requirements of video protection, several H.264/AVC-dedicated encryption schemes have been proposed. Unfortunately, most encryption schemes focus on explaining how well they work while ignoring systematic and objective discussions of the basic requirements. As a result, individuals or enterprises cannot get complete references from the existing H.264/AVC encryption techniques. This thesis aims to propose a high-performance H.264/AVC selective encryption scheme. We first identify the major and auxiliary data which are generated during the H.264/AVC compression process and influence the visual quality of the compressed video. These two parts of the data are encrypted in the proposed encryption scheme to achieve a high level of visual security. Experimental results show that the proposed encryption scheme can provide a higher level of visual security than other H.264/AVC selective encryption methods under the constraints of compressibility equivalence, complexity equivalence, and format compliance.
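As a toy illustration of the "format compliant" and "compressibility equivalent" requirements mentioned above (not the encryption scheme of this thesis), one can flip the signs of nonzero residual coefficients with a key-derived bit stream: the number and position of nonzero coefficients are untouched, so the bitstream still parses and its size is essentially unchanged, yet the reconstructed picture is visually scrambled. The keystream construction below is a simplistic stand-in, not a vetted cipher.

```python
import hashlib

def keystream(key: bytes, nbits: int) -> list:
    """Counter-mode bit stream derived from SHA-256 (illustration only;
    a real deployment would use a proper stream cipher)."""
    bits, counter = [], 0
    while len(bits) < nbits:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        for byte in block:
            for k in range(8):
                bits.append((byte >> k) & 1)
        counter += 1
    return bits[:nbits]

def flip_signs(coeffs, key):
    """Flip the sign of each nonzero coefficient where the keystream bit
    is 1.  The zero/nonzero pattern and the magnitudes are unchanged, so
    stream length and parsability are preserved; applying the same
    function with the same key decrypts."""
    ks = keystream(key, len(coeffs))
    return [-c if (b and c != 0) else c for c, b in zip(coeffs, ks)]
```

Because sign flipping is its own inverse, encryption and decryption are the same operation, which keeps the scheme's complexity overhead near zero.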
Shiau, Shau-yu, i 蕭少宇. "A Privacy Protection Scheme in H.264/AVC by Information Hiding". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/80224462309459751734.
Pełny tekst źródła
國立中央大學
資訊工程研究所
100
Protecting personal privacy in digital images and videos is important these days. In this research, we present a privacy protection mechanism for H.264/AVC videos. The private visual information on video frames is scrambled by processing the data in the compressed bitstream directly, so that the private region is not visible to regular users. Nevertheless, the scrambled region can be restored to the original content by authorized users. Basically, the scrambling is applied by extracting and removing some data from the H.264/AVC bitstream. These data are then embedded into the bitstream so that recovery can be applied successfully by placing them back. In other words, the de-scrambling is achieved via the methodology of information hiding. Since the H.264/AVC encoder makes use of spatial and temporal dependencies for reducing the data size, careless partial scrambling of the H.264/AVC compressed bitstream will result in drift errors. To solve this problem, restricted H.264/AVC encoding is employed to prevent the modified data from affecting the subsequent video content. Experimental results show that our method can effectively scramble the privacy region, which can be recovered by using the hidden information. In addition, the size of the partially scrambled video is kept under good control.
Kun-Chin, Han. "A Novel Data-Embedding H.264/AVC Coding Scheme for Error Resilience". 2004. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0009-0112200611363769.
Pełny tekst źródłaHan, Kun-Chin, i 韓昆瑾. "A Novel Data-Embedding H.264/AVC Coding Scheme for Error Resilience". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/60398832849773793516.
Pełny tekst źródła元智大學
電機工程學系
93
Video compression is good for reducing the amount of transmitted data but is very vulnerable to transmission noise and errors such as packet loss. In this thesis, a novel data-embedding H.264/AVC coding scheme for error resilience is proposed to conceal and protect against various transmission errors. This scheme uses two complementary methods to tackle and reduce the effect of transmission errors. First of all, the encoder uses a data embedding technique to embed important information into the intra-coded (I) and inter-coded (P) frames. The important data include the intra coding type and the motion vectors, which are extracted from the intra-coded and inter-coded frames, respectively. For transmission, the important data are embedded into the AC coefficients of the H.264/AVC bitstream. Once the video stream is transmitted, the decoder immediately extracts the embedded data from the stream for error recovery if a transmission error is detected. If the embedded data cannot be extracted correctly, the decoder uses the second method to conceal the effect of the error as much as possible. The second method takes advantage of the BNM (best neighborhood matching) scheme to find an optimal motion vector for video decoding. Since a smaller window mask is used and different weights are incorporated to balance the contributions of neighboring windows, the correspondence search can be performed very efficiently and effectively. Even when the corrupted video stream has a high packet loss rate (up to 20%), a high-quality H.264/AVC video sequence can still be recovered and maintained. Experimental results have proved the superiority of the proposed method in the efficiency, effectiveness, and robustness of error concealment and resilience for video coding.
Cheng-HongJiang i 江承鴻. "Rate Control Scheme Based on Fixed Lagrange Multipliers for H.264/AVC". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/41350229467372976262.
Pełny tekst źródła
國立成功大學
電機工程學系碩博士班
98
In general video coding, the process of rate-distortion optimization (RDO) usually utilizes a rate-distortion (R-D) cost function involving a Lagrange multiplier to weight the trade-off between bit rate and distortion in order to find the most suitable encoding mode. However, the quantization parameter (QP) has to be decided in advance, and it is often not the best one. In this thesis, we first propose a two-pass rate control (RC) mechanism based on a fixed Lagrange multiplier. An accurate lambda value is estimated after the first-pass coding, so that it can be adopted as a fixed parameter in the second-pass coding. After optimization, we propose a one-pass RC scheme that not only keeps the video quality but also reduces the computational complexity. Experimental results show that, compared to the JVT-G012 method implemented in the H.264/AVC reference software JM13.2, both the proposed two-pass and one-pass methods significantly improve the video quality. Besides, an optimization procedure is also presented that reduces the computational complexity and accurately controls the bit rate.
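For context, the R-D cost function referred to above weighs distortion against rate as J = D + λ·R, and the H.264/AVC reference software ties λ to QP via λ_mode = 0.85·2^((QP−12)/3). A minimal sketch of Lagrangian mode selection (the candidate mode names and their D/R numbers are made up for illustration):

```python
def lambda_mode(qp):
    """Lambda-QP relation used in the H.264/AVC reference software."""
    return 0.85 * 2 ** ((qp - 12) / 3)

def best_mode(candidates, lam):
    """Pick the mode minimising the Lagrangian cost J = D + lambda * R.
    candidates maps a mode name to its (distortion, rate) pair."""
    return min(candidates, key=lambda m: candidates[m][0] + lam * candidates[m][1])
```

At low QP, λ is small and the encoder spends bits to reduce distortion; at high QP, λ is large and cheap modes win. This is exactly why a λ estimated in a first pass can steer the mode decisions of the second pass.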
Chen, Ing-fan, i 陳穎凡. "The H.264/AVC Video Content Authentication Scheme by Using Digital Watermarking". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/71500103656326121115.
Pełny tekst źródła
國立中央大學
資訊工程研究所
97
Digitization of videos brings a lot of convenience to the transmission and archiving of visual data. However, the ease of manipulation of digital videos gives rise to some concerns about their authenticity, especially when digital videos are employed in the applications of surveillance. In this research, we try to tackle this problem by using the digital watermarking techniques. A practical digital video watermarking scheme for authenticating the H.264/AVC compressed videos is proposed to ensure their correct content order. The watermark signals, which represent the serial numbers of video segments, are embedded into nonzero quantization indices of frames to achieve both the effectiveness of watermarking and the compact data size. The human visual characteristics are taken into account to guarantee the imperceptibility of watermark signals and to attain an efficient implementation in H.264/AVC. The issues of synchronized watermark detections are settled by selecting the shot-change frames for calculating the distortion-resilient hash, which helps to determine the watermark sequence. The experimental results demonstrate the feasibility of the proposed scheme as the embedded watermarks can survive the allowed transcoding processes while the edited segments in the tampered video can be located.
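The idea of writing the watermark into nonzero quantization indices can be sketched as follows: bits go into the LSBs of nonzero indices only, so zero runs (and hence the entropy-coded structure) stay intact. This is a hypothetical minimal illustration of that general idea, not the thesis' actual embedding rule:

```python
def embed_bits(coeffs, bits):
    """Embed watermark bits into the LSBs of nonzero quantization
    indices, skipping zeros so run-length structure is preserved."""
    out = list(coeffs)
    it = iter(bits)
    for i, c in enumerate(out):
        if c == 0:
            continue
        try:
            b = next(it)
        except StopIteration:
            break                      # all bits embedded
        mag = (abs(c) & ~1) | b
        if mag == 0:                   # never turn a nonzero index into zero
            mag = 2                    # LSB still encodes b == 0
        out[i] = mag if c > 0 else -mag
    return out

def extract_bits(coeffs, n):
    """Read back n watermark bits from the LSBs of nonzero indices."""
    bits = []
    for c in coeffs:
        if c != 0 and len(bits) < n:
            bits.append(abs(c) & 1)
    return bits
```

Keeping magnitudes within ±1 of the original is what bounds the perceptual impact, and skipping zeros keeps the data size compact, as the abstract requires.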
Golikeri, Adarsh. "An improved scalar quantization-based digital video watermarking scheme for H.264/AVC". Thesis, 2005. http://hdl.handle.net/2429/16749.
Pełny tekst źródła
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
Fan, Kuan-Wei, i 樊冠緯. "Cost-Effective Rate Control Scheme for H.264/AVC by System-Level Design". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/55706662936608466639.
Pełny tekst źródła
國立成功大學
電機工程學系碩博士班
96
This thesis presents a low-cost, low-complexity rate control scheme based on system-level design. Conventional rate control algorithms focus only on rate-distortion performance and do not consider their performance and complexity when implemented on an embedded system. Our proposed rate control scheme jointly considers the performance and the related architecture for hardware/software co-design. The proposed cost-effective macroblock (MB) layer rate control module works in hardware, while the typical frame-layer rate control is executed on the system CPU. It is less complex than the rate control module adopted by the H.264 JM13.2 reference software and is more suitable for SoC implementation. The experimental results show that the rate control ability of the proposed scheme is better than that of the H.264 frame-layer rate control. Even compared with the MB-layer rate control in H.264, the performance is comparable.
Lee, Chun-Jung, i 李俊融. "A Video Layered Quality Segmentation Scheme Included Advertisements Based on H.264/AVC". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/71696639292321644450.
Pełny tekst źródła
逢甲大學
資訊工程所
100
In recent years, internet bandwidth has become larger and larger, and video on demand is more popular than ever before. This thesis proposes a video layered quality segmentation scheme with embedded advertisements based on H.264. In our scheme a video is divided into two parts of different quality. Users who have not paid can watch the lower-quality part, which includes high-definition advertisements. Users who have paid can watch the higher-quality part without any advertisements. Unlike other current video websites, where users need to re-download the whole high-definition video, in this scheme paying users only need to download part of the high-definition video, upgrade the standard-definition video to high definition, and remove the advertisements.
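The two-quality split can be pictured as a base layer plus an enhancement layer: the base is a coarsely quantized version everyone downloads, and paying users fetch only the small residual needed to restore full quality. A schematic sketch on raw samples (H.264 operates on transform coefficients, but the layering arithmetic is the same):

```python
def split_layers(samples, step):
    """Base layer = coarse quantization to multiples of `step`;
    enhancement layer = residual.  base + enhancement reconstructs
    the original exactly."""
    base = [(s // step) * step for s in samples]
    enh = [s - b for s, b in zip(samples, base)]
    return base, enh
```

Free users decode only `base` (error bounded by the quantization step); paying users add `enh` and recover the original samples bit-exactly, without re-downloading the base.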
Chan, Meng-Hsuan, i 詹孟軒. "Compromization scheme of memory and complexity reduction for reference frames of H.264/AVC". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/71698649369556888716.
Pełny tekst źródła
國立中正大學
電機工程所
97
This work develops a compromise scheme for reducing the memory and complexity associated with H.264/AVC, in which multiple reference frames are used in motion estimation/compensation for inter-predictive coding. While multiple reference frames are adopted to increase the coding efficiency of inter-predictive coding, they also consume a lot of memory resources. If the memory space is insufficient, a video sequence cannot be completely and correctly decoded, resulting in quality degradation. This work presents a novel scheme that stores a portion of the macroblocks as compressed data according to the constrained memory space. Additionally, the limited memory space is used effectively to minimize the complexity of decoding. The simulation results demonstrate that the proposed method uses less memory than conventional H.264/AVC schemes, at the cost of slightly increased computational complexity. Therefore, the proposed method can be widely employed in various devices with insufficient memory to successfully decode H.264/AVC video sequences.
Tsai, Hui-Hsien, i 蔡慧嫺. "Integrated 2-Dimensional Transform Designs with Design-for-Testability Scheme in H.264/AVC". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/29425669807769848847.
Pełny tekst źródła
國立成功大學
電機工程學系碩博士班
95
This thesis focuses on the efficient implementation of transform coding with Design-for-Testability capability in video coding systems, and consists of two parts. In the first part, we aim at a flexible inverse transform structure for the high profile of H.264/AVC. This architecture efficiently combines all inverse transforms and is suitable for all profiles of the H.264/AVC decoder. The proposed structure is synthesized with 0.18 μm CMOS technology, and the synthesized flexible transform architecture achieves a 125 MHz clock frequency. In the second part, we first design a 2-D 4×4 transform architecture with a Design-for-Testability scheme for H.264/AVC. The architecture is implemented with C-testability, which fits regular circuits with a constant test set regardless of circuit size. The proposed architecture needs only eight test patterns for the single stuck-at fault model to achieve 100% fault coverage.
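For reference, the 2-D 4×4 forward transform that such an architecture implements is the H.264/AVC integer transform Y = C_f·X·C_f^T, with the per-coefficient scaling folded into the quantization stage. A plain-Python rendering of the arithmetic (no hardware concerns):

```python
# Forward transform matrix Cf of the H.264/AVC 4x4 integer transform
CF = [[1,  1,  1,  1],
      [2,  1, -1, -2],
      [1, -1, -1,  1],
      [1, -2,  2, -1]]

def matmul(A, B):
    """4x4 integer matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def forward_transform(X):
    """2-D 4x4 forward integer transform: Y = Cf * X * Cf^T.
    All operations are exact integer adds/shifts, which is what makes
    the hardware datapath (and its test patterns) so regular."""
    return matmul(matmul(CF, X), transpose(CF))
```

Because every entry of C_f is in {±1, ±2}, the datapath reduces to adders and one-bit shifts, which is precisely the regularity that C-testability exploits.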
Chen, Sheng-Shiung, i 陳勝雄. "Effective Memory Reduction Scheme Used for Reference Frames in H.264/AVC Video Codec". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/19309141552404255136.
Pełny tekst źródłaChen, Hong-kai, i 陳泓愷. "A fast trellis-based coding scheme for consistent quality control in H.264/AVC". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/81594361544723719886.
Pełny tekst źródła
國立臺灣科技大學
資訊工程系
99
Traditional video coding techniques have difficulty achieving consistent quality for video with high motion or frequent scene changes, which significantly reduces the viewing quality. Recently, Huang and Hang proposed a trellis-based algorithm for consistent quality control in H.264/AVC which can improve the visual quality of the encoded video sequences significantly. Converting the quality consistency problem into one of finding an optimal path in a tree, the trellis-based algorithm encodes each frame at a consistent quality. However, the trellis-based algorithm is time-consuming, since it spends a lot of computation time in the tree construction process. In this thesis, we develop an improved distortion model to reduce the computational complexity of the trellis-based consistent quality control algorithm. Based on the proposed distortion model, we can predict the distortion of the current frame caused by a specific quantization parameter. The predicted distortions for different quantization parameters can be used to remove unnecessary encoding procedures, which leads to a significant computation-saving effect. Experimental results demonstrate that the proposed fast consistent quality control algorithm achieves a 47% execution-time improvement and results in less than 0.1 dB PSNR degradation compared to Huang and Hang's algorithm. As for visual quality, our fast algorithm matches that of Huang and Hang's algorithm.
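The trellis idea can be sketched as a Viterbi-style dynamic program: states are (frame, QP) pairs, and each transition pays the bit cost of the chosen QP plus a penalty on the quality jump between consecutive frames. The rate and distortion models here are invented toy functions, not the thesis' model:

```python
def plan_qps(n_frames, qps, rate, dist, smooth_w):
    """Viterbi search over per-frame QP choices.  Path cost accumulates
    rate(t, q) plus smooth_w * |quality jump| between consecutive
    frames, so a large smooth_w forces a consistent-quality path."""
    cost = {q: rate(0, q) for q in qps}      # best cost ending in state q
    back = []                                # per-frame backpointers
    for t in range(1, n_frames):
        new_cost, choice = {}, {}
        for q in qps:
            trans = lambda p: cost[p] + smooth_w * abs(dist(t, q) - dist(t - 1, p))
            p = min(qps, key=trans)
            new_cost[q] = trans(p) + rate(t, q)
            choice[q] = p
        cost = new_cost
        back.append(choice)
    q = min(qps, key=lambda s: cost[s])      # best terminal state
    path = [q]
    for choice in reversed(back):            # walk the backpointers
        path.append(choice[path[-1]])
    path.reverse()
    return path
```

With the smoothing weight at zero each frame greedily picks its cheapest QP; with a large weight the optimal path holds quality constant even when that costs bits, which is the consistency the thesis targets. Pruning transitions whose predicted distortion is clearly dominated is exactly where a distortion model saves computation.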
Gowtham, Srinivas R. B. "An advection velocity correction scheme for interface tracking using the level-set method". Thesis, 2018. https://etd.iisc.ac.in/handle/2005/5377.
Full text source
Chen, Chih-Chang, and 陳志昌. "Fast Intra/Inter Mode Decision for H.264/AVC Using the Spatial-Temporal Prediction Scheme". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/10490873756153132527.
Full text source
Chung Hua University
Department of Computer Science and Information Engineering
96
In the H.264/AVC coding standard, great flexibility is provided to obtain the optimal R-D cost: multiple motion estimation modes, multiple reference frames, intra prediction modes for I-frames, motion estimation refinement, entropy coding, etc. In particular, seven motion estimation modes from 4×4 to 16×16 are used to find the minimum motion compensation error for each macroblock. For intra prediction, four prediction modes are used to encode the 8×8 chroma and 16×16 luma blocks, while eight directional prediction modes plus one DC prediction are used to encode the intra 4×4 luma blocks. The high computation cost of the full search in the reference software JM-14.0 therefore makes the encoding process inefficient. Methods based on SAD (sum of absolute differences), homogeneous region analysis, and edge detection have been developed to determine the optimal motion estimation mode, but the additional image processing they require reduces the efficiency of the motion compensation process. In this paper, the spatial-temporal correlations between the current frame and the reference frame are analyzed to develop a fast mode decision method for inter/intra frame encoding that requires no extra image processing. Furthermore, the concept of drift compensation is adopted to avoid error accumulation during the mode decision process. Experimental results show that the total computation cost is reduced by about 70%, the total bit rate increases by less than 3.7%, and the average PSNR drops by only about 0.08 dB.
Huang, Jhih-Yu, and 黃誌宇. "An H.264/AVC Error Resilient Scheme using Reversible Data Embedding in the Frequency Domain". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/24733475212647563706.
Full text source
National Dong Hwa University
Department of Computer Science and Information Engineering
96
Abstract: H.264/AVC is a video coding standard that provides better coding efficiency than previous video coding standards. As digital video codec technology grows rapidly, many multimedia applications, such as broadcast TV and Internet video, are built on H.264. When a transmission error occurs, however, the bit stream cannot be successfully decoded and the video quality is damaged by erroneous blocks. Error resilience and error concealment are therefore important research issues. In traditional error resilient schemes, information embedding damages the original image: even when transmission is error-free, the resulting quality differs from the original. Reversible data embedding using difference expansion is a simple and effective algorithm that avoids this loss. In this thesis, we propose an error resilient scheme that applies reversible data embedding in the frequency domain. Simulation results demonstrate that the proposed method makes the coded video sequence more robust.
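Difference expansion itself is compact enough to show in full. The sketch below embeds one bit into an integer pair and recovers both the bit and the exact original values; it operates on a pixel pair for simplicity (the thesis applies the idea to frequency-domain coefficients), and overflow checks that keep values in a legal range are omitted.

```python
def embed(x, y, bit):
    """Difference expansion: hide `bit` in the integer pair (x, y).
    The pair's integer average is preserved; the difference is doubled
    and the bit placed in its new low-order position.
    Range/overflow checks are omitted for brevity."""
    l = (x + y) // 2          # integer average, unchanged by embedding
    h = x - y                 # difference, expanded to carry one bit
    h2 = 2 * h + bit
    return l + (h2 + 1) // 2, l - h2 // 2

def extract(x2, y2):
    """Recover the hidden bit and the exact original pair (reversible)."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit, h = h2 & 1, h2 >> 1  # low bit is the payload; halving undoes expansion
    return (l + (h + 1) // 2, l - h // 2), bit
```

Because the original pair is restored exactly once the payload is extracted, an error-free decode yields the same quality as the unmarked video, which is precisely the advantage over irreversible embedding schemes.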