A ready-made bibliography on the topic "POSTERIORI ALGORITHM"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Choose a source type:

Browse the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "POSTERIORI ALGORITHM".

Next to every work in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read the online abstract of the work, if the corresponding parameters are available in the metadata.

Journal articles on the topic "POSTERIORI ALGORITHM"

1

Utsugi, Akio, and Toru Kumagai. "Bayesian Analysis of Mixtures of Factor Analyzers". Neural Computation 13, no. 5 (1.05.2001): 993–1002. http://dx.doi.org/10.1162/08997660151134299.

Abstract:
For Bayesian inference on the mixture of factor analyzers, natural conjugate priors on the parameters are introduced, and then a Gibbs sampler that generates parameter samples following the posterior is constructed. In addition, a deterministic estimation algorithm is derived by taking modes instead of samples from the conditional posteriors used in the Gibbs sampler. This is regarded as a maximum a posteriori estimation algorithm with hyperparameter search. The behaviors of the Gibbs sampler and the deterministic algorithm are compared on a simulation experiment.
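The contrast described here between sampling from the conditional posteriors (a Gibbs sampler) and taking their modes (a deterministic, maximum a posteriori-style update) can be illustrated on a much simpler conjugate model. The Python sketch below uses a Normal likelihood with a Normal prior on the mean and an inverse-gamma prior on the variance; the model, priors, and numbers are assumptions for illustration and do not reproduce the paper's mixture of factor analyzers.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.5, size=200)          # synthetic data
n, xbar = x.size, x.mean()

# Conjugate-style priors (assumed for illustration):
mu0, tau0_sq = 0.0, 10.0                     # mu ~ N(mu0, tau0_sq)
a0, b0 = 2.0, 2.0                            # sigma^2 ~ Inv-Gamma(a0, b0)

def mu_conditional(sigma_sq):
    """Parameters of p(mu | sigma^2, x), which is Gaussian."""
    prec = 1.0 / tau0_sq + n / sigma_sq
    mean = (mu0 / tau0_sq + n * xbar / sigma_sq) / prec
    return mean, 1.0 / prec

def sigma_sq_conditional(mu):
    """Parameters of p(sigma^2 | mu, x), which is inverse-gamma."""
    a = a0 + 0.5 * n
    b = b0 + 0.5 * np.sum((x - mu) ** 2)
    return a, b

# Gibbs sampler: draw from each conditional posterior in turn.
mu_s, s2_s = 0.0, 1.0
gibbs_draws = []
for _ in range(2000):
    m, v = mu_conditional(s2_s)
    mu_s = rng.normal(m, np.sqrt(v))
    a, b = sigma_sq_conditional(mu_s)
    s2_s = 1.0 / rng.gamma(a, 1.0 / b)       # Inv-Gamma(a, b) draw
    gibbs_draws.append((mu_s, s2_s))

# Deterministic variant: replace each draw by the conditional mode.
mu_m, s2_m = 0.0, 1.0
for _ in range(100):
    mu_m, _ = mu_conditional(s2_m)           # mode of a Gaussian = its mean
    a, b = sigma_sq_conditional(mu_m)
    s2_m = b / (a + 1.0)                     # mode of Inv-Gamma(a, b)

post_mean = np.mean(gibbs_draws[500:], axis=0)
print("Gibbs posterior mean (mu, sigma^2):", post_mean)
print("Conditional-mode estimate (mu, sigma^2):", (mu_m, s2_m))
```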
2

Lee, Jun, and Jaejin Lee. "Modified maximum a posteriori decoding algorithm". Electronics Letters 37, no. 11 (2001): 698. http://dx.doi.org/10.1049/el:20010486.

3

Xia, Meidong, Chengyou Wang, and Wenhan Ge. "Weights-Based Image Demosaicking Using Posteriori Gradients and the Correlation of R–B Channels in High Frequency". Symmetry 11, no. 5 (26.04.2019): 600. http://dx.doi.org/10.3390/sym11050600.

Abstract:
In this paper, we propose a weights-based image demosaicking algorithm which is based on the Bayer pattern color filter array (CFA). When reconstructing the missing G components, the proposed algorithm uses weights based on posteriori gradients to mitigate color artifacts and distortions. Furthermore, the proposed algorithm makes full use of the correlation of R–B channels in high frequency when interpolating R/B values at B/R positions. Experimental results show that the proposed algorithm is superior to previous similar algorithms in composite peak signal-to-noise ratio (CPSNR) and subjective visual effect. The biggest advantage of the proposed algorithm is the use of posteriori gradients and the correlation of R–B channels in high frequency.
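As a rough illustration of weighting interpolation directions by gradients when recovering a missing G sample in a Bayer CFA, the sketch below uses a generic inverse-gradient weighting; the paper's specific posteriori-gradient weights and its R–B high-frequency correlation step are not reproduced, and the weight formula here is an assumption.

```python
import numpy as np

def interpolate_g_at_r(cfa, i, j):
    """Estimate the missing G value at an R position (i, j) of a Bayer CFA.

    Neighbouring G samples sit at (i, j-1), (i, j+1), (i-1, j), (i+1, j).
    Each direction is weighted by the inverse of a simple gradient measure,
    so smoother directions contribute more (a generic scheme, not the
    paper's exact weights).
    """
    g_left,  g_right = cfa[i, j - 1], cfa[i, j + 1]
    g_up,    g_down  = cfa[i - 1, j], cfa[i + 1, j]

    # Directional gradient estimates around (i, j); (i, j +/- 2) are R samples.
    d_h = abs(g_left - g_right) + abs(2 * cfa[i, j] - cfa[i, j - 2] - cfa[i, j + 2])
    d_v = abs(g_up - g_down) + abs(2 * cfa[i, j] - cfa[i - 2, j] - cfa[i + 2, j])

    w_h = 1.0 / (1.0 + d_h)
    w_v = 1.0 / (1.0 + d_v)

    g_h = 0.5 * (g_left + g_right)
    g_v = 0.5 * (g_up + g_down)
    return (w_h * g_h + w_v * g_v) / (w_h + w_v)

# Tiny synthetic CFA patch (float to avoid integer wrap-around).
rng = np.random.default_rng(1)
cfa = rng.uniform(0, 255, size=(9, 9))
print("interpolated G at (4, 4):", interpolate_g_at_r(cfa, 4, 4))
```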
4

Tolpin, David, and Frank Wood. "Maximum a Posteriori Estimation by Search in Probabilistic Programs". Proceedings of the International Symposium on Combinatorial Search 6, no. 1 (1.09.2021): 201–5. http://dx.doi.org/10.1609/socs.v6i1.18369.

Abstract:
We introduce an approximate search algorithm for fast maximum a posteriori probability estimation in probabilistic programs, which we call Bayesian ascent Monte Carlo (BaMC). Probabilistic programs represent probabilistic models with a varying number of mutually dependent finite, countable, and continuous random variables. BaMC is an anytime MAP search algorithm applicable to any combination of random variables and dependencies. We compare BaMC to other MAP estimation algorithms and show that BaMC is faster and more robust on a range of probabilistic models.
5

Arar, Maher, Claude D'Amours, and Abbas Yongacoglu. "Simplified LLRs for the Decoding of Single Parity Check Turbo Product Codes Transmitted Using 16QAM". Research Letters in Communications 2007 (2007): 1–4. http://dx.doi.org/10.1155/2007/53517.

Abstract:
Iterative soft-decision decoding algorithms require channel log-likelihood ratios (LLRs) which, when using 16QAM modulation, require intensive computations to be obtained. Therefore, we derive four simple approximate LLR expressions. When using the maximum a posteriori probability algorithm for decoding single parity check turbo product codes (SPC/TPCs), these LLRs can be simplified even further. We show through computer simulations that the bit-error-rate performance of (8,7)² and (8,7)³ SPC/TPCs, transmitted using 16QAM and decoded using the maximum a posteriori algorithm with our simplified LLRs, is nearly identical to the one achieved by using the exact LLRs.
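A minimal sketch of the underlying computation, exact channel LLRs for Gray-mapped 16QAM versus the common max-log simplification, is given below; the four approximate expressions actually derived in the paper are not reproduced, and the received sample and noise level are assumed values.

```python
import numpy as np

# Gray-mapped 16QAM: two bits per dimension, levels -3, -1, +1, +3 (unnormalised).
GRAY_LEVELS = {(0, 0): -3.0, (0, 1): -1.0, (1, 1): 1.0, (1, 0): 3.0}

# Enumerate the 16 symbols with their 4-bit labels (b0 b1 -> I, b2 b3 -> Q).
CONSTELLATION = []
for bi in GRAY_LEVELS:
    for bq in GRAY_LEVELS:
        CONSTELLATION.append((bi + bq, complex(GRAY_LEVELS[bi], GRAY_LEVELS[bq])))

def exact_llrs(y, n0):
    """Exact bitwise LLRs log P(b=0|y)/P(b=1|y) for an AWGN channel."""
    llrs = []
    for k in range(4):
        num = sum(np.exp(-abs(y - s) ** 2 / n0) for b, s in CONSTELLATION if b[k] == 0)
        den = sum(np.exp(-abs(y - s) ** 2 / n0) for b, s in CONSTELLATION if b[k] == 1)
        llrs.append(np.log(num / den))
    return llrs

def maxlog_llrs(y, n0):
    """Max-log approximation: keep only the nearest symbol in each bit class."""
    llrs = []
    for k in range(4):
        d0 = min(abs(y - s) ** 2 for b, s in CONSTELLATION if b[k] == 0)
        d1 = min(abs(y - s) ** 2 for b, s in CONSTELLATION if b[k] == 1)
        llrs.append((d1 - d0) / n0)
    return llrs

y = 2.3 - 0.7j          # a noisy received sample (assumed)
n0 = 1.0                # noise spectral density (assumed)
print("exact  :", np.round(exact_llrs(y, n0), 3))
print("max-log:", np.round(maxlog_llrs(y, n0), 3))
```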
6

Pan, Lu, Xiaoming He, and Tao Lü. "High Accuracy Combination Method for Solving the Systems of Nonlinear Volterra Integral and Integro-Differential Equations with Weakly Singular Kernels of the Second Kind". Mathematical Problems in Engineering 2010 (2010): 1–21. http://dx.doi.org/10.1155/2010/901587.

Abstract:
This paper presents a high accuracy combination algorithm for solving the systems of nonlinear Volterra integral and integro-differential equations with weakly singular kernels of the second kind. Two quadrature algorithms for solving the systems are discussed, which possess a high accuracy order and an asymptotic expansion of the errors. By means of the combination algorithm, we may obtain a numerical solution with a higher accuracy order than the two original quadrature algorithms. Moreover, an a posteriori error estimation for the algorithm is derived. Both the theory and the numerical examples show that the algorithm is effective and saves storage capacity and computational cost.
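The general mechanism, combining two approximations whose errors admit an asymptotic expansion so that the leading error term cancels, and using their difference as an a posteriori error estimate, can be sketched on a plain integral. The toy below uses the composite trapezoidal rule, not the paper's quadrature schemes for weakly singular Volterra systems.

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals (error ~ C*h^2)."""
    xs = np.linspace(a, b, n + 1)
    ys = f(xs)
    h = (b - a) / n
    return h * (0.5 * ys[0] + ys[1:-1].sum() + 0.5 * ys[-1])

f = np.cos
a, b, p = 0.0, 1.0, 2                 # p = order of the leading error term
T_h = trapezoid(f, a, b, 64)
T_h2 = trapezoid(f, a, b, 128)        # half the step size

# Combination (Richardson-type): cancels the leading O(h^p) error term.
T_comb = (2**p * T_h2 - T_h) / (2**p - 1)

# A posteriori error estimate for T_h2 based on the two computed values.
est_error = abs(T_h2 - T_h) / (2**p - 1)

exact = np.sin(1.0)
print("T_h error      :", abs(T_h - exact))
print("T_h/2 error    :", abs(T_h2 - exact))
print("combined error :", abs(T_comb - exact))
print("a posteriori estimate for T_h/2:", est_error)
```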
7

Karimi, Mohammad, Maryam Miriestahbanati, Hamed Esmaeeli, and Ciprian Alecsandru. "Multi-Objective Stochastic Optimization Algorithms to Calibrate Microsimulation Models". Transportation Research Record: Journal of the Transportation Research Board 2673, no. 4 (29.03.2019): 743–52. http://dx.doi.org/10.1177/0361198119838260.

Abstract:
The calibration process for microscopic models can be automatically undertaken using optimization algorithms. Because of the random nature of this problem, the corresponding objectives are not simple concave functions. Accordingly, such problems cannot easily be solved unless a stochastic optimization algorithm is used. In this study, two different objectives are proposed such that the simulation model reproduces real-world traffic more accurately, both in relation to longitudinal and lateral movements. When several objectives are defined for an optimization problem, one solution method may aggregate the objectives into a single-objective function by assigning weighting coefficients to each objective before running the algorithm (also known as an a priori method). However, this method does not capture the information exchange among the solutions during the calibration process, and may fail to minimize all the objectives at the same time. To address this limitation, an a posteriori method (multi-objective particle swarm optimization, MOPSO) is employed to calibrate a microscopic simulation model in one single step while minimizing the objectives functions simultaneously. A set of traffic data collected by video surveillance is used to simulate a real-world highway in VISSIM. The performance of the a posteriori-based MOPSO in the calibration process is compared with a priori-based optimization methods such as particle swarm optimization, genetic algorithm, and whale optimization algorithm. The optimization methodologies are implemented in MATLAB and connected to VISSIM using its COM interface. Based on the validation results, the a posteriori-based MOPSO leads to the most accurate solutions among the tested algorithms with respect to both objectives.
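The distinction between the a priori approach (aggregate the objectives with weights, then optimize one scalar function) and the a posteriori approach (keep the whole set of non-dominated solutions) can be shown with a few generic lines; this is not the MOPSO calibration loop itself, and the two objective functions below are placeholders rather than VISSIM calibration errors.

```python
import numpy as np

rng = np.random.default_rng(2)
candidates = rng.uniform(0, 1, size=(200, 3))       # candidate parameter vectors

# Two placeholder calibration objectives to minimise (assumed, not VISSIM errors).
def f1(p): return np.sum((p - 0.3) ** 2, axis=-1)   # e.g. longitudinal error
def f2(p): return np.sum((p - 0.8) ** 2, axis=-1)   # e.g. lateral error

obj = np.column_stack([f1(candidates), f2(candidates)])

# A priori approach: fix weights first, then minimise one scalar objective.
w = np.array([0.5, 0.5])
best_a_priori = candidates[np.argmin(obj @ w)]

# A posteriori approach: keep every non-dominated solution (Pareto front).
def pareto_mask(costs):
    n = costs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if keep[i]:
            dominated = np.all(costs >= costs[i], axis=1) & np.any(costs > costs[i], axis=1)
            keep &= ~dominated
            keep[i] = True
    return keep

front = candidates[pareto_mask(obj)]
print("single a priori solution:", np.round(best_a_priori, 3))
print("size of a posteriori Pareto set:", len(front))
```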
8

Lee, Dongwook, and Rémi Bourgeois. "GP-MOOD: a positivity-preserving high-order finite volume method for hyperbolic conservation laws". Proceedings of the International Astronomical Union 16, S362 (June 2020): 373–79. http://dx.doi.org/10.1017/s1743921322001363.

Abstract:
We present an a posteriori shock-capturing finite volume method called GP-MOOD. The method solves a compressible hyperbolic conservative system at high-order solution accuracy in multiple spatial dimensions. The core design principle in GP-MOOD is to combine two recent numerical methods: the polynomial-free spatial reconstruction methods of GP (Gaussian Process) and the a posteriori detection algorithms of MOOD (Multidimensional Optimal Order Detection). We focus on extending GP’s flexible variability of spatial accuracy to an a posteriori detection formalism based on the MOOD approach. The resulting GP-MOOD method is a positivity-preserving method that delivers its solutions at high-order accuracy, selectable among three accuracy choices: third order, fifth order, and seventh order.
9

Nguyen, Hoang Nguyen. "SYNTHESIS OF A RADAR RECOGNITION ALGORITHM WITH ABILITY TO MEET RELIABILITY OF DECISIONS". Journal of Science and Technique 14, no. 5 (26.04.2021): 87–95. http://dx.doi.org/10.56651/lqdtu.jst.v14.n05.257.

Abstract:
This paper is devoted to a variant of the synthesis of a radar recognition algorithm with the ability to meet the reliability of decisions. The algorithm is based on the theory of sequential analysis combined with a flexible change in the level of classification detail when the observation time cannot be increased. Compared with one-step algorithms, the proposed algorithm guarantees that the a posteriori probability of the decisions is not smaller than the set value. The proposed algorithm can be used in radar target recognition systems.
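A generic sketch of the sequential idea follows: class posteriors are updated observation by observation, and a decision is issued only once the largest posterior reaches a preset reliability level. The Gaussian class models and the threshold are assumptions, not the paper's radar signatures.

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder target classes, each modelled as a Gaussian over one feature.
CLASS_MEANS = np.array([0.0, 1.0, 2.5])
CLASS_STD = 0.8
PRIOR = np.full(3, 1.0 / 3.0)
THRESHOLD = 0.95                # required posterior probability of the decision

def likelihood(obs):
    z = (obs - CLASS_MEANS) / CLASS_STD
    return np.exp(-0.5 * z ** 2) / (CLASS_STD * np.sqrt(2 * np.pi))

true_class = 2
posterior = PRIOR.copy()
for step in range(1, 51):
    obs = rng.normal(CLASS_MEANS[true_class], CLASS_STD)
    posterior *= likelihood(obs)        # Bayes update with the new observation
    posterior /= posterior.sum()
    if posterior.max() >= THRESHOLD:
        print(f"decision after {step} observations: class {posterior.argmax()} "
              f"(posterior {posterior.max():.3f})")
        break
else:
    print("observation budget exhausted; no decision met the reliability level")
```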
10

Kang, Jiayi, Andrew Salmon, and Stephen S. T. Yau. "Log-Concave Posterior Densities Arising in Continuous Filtering and a Maximum A Posteriori Algorithm". SIAM Journal on Control and Optimization 61, no. 4 (4.08.2023): 2407–24. http://dx.doi.org/10.1137/22m1508352.


Doctoral dissertations on the topic "POSTERIORI ALGORITHM"

1

Ghoumari, Asmaa. "Métaheuristiques adaptatives d'optimisation continue basées sur des méthodes d'apprentissage". Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1114/document.

Abstract:
The problems of continuous optimization are numerous, in economics, in signal processing, in neural networks, and so on. One of the best-known and most widely used solutions is the evolutionary algorithm, a metaheuristic based on the theories of evolution that borrows stochastic mechanisms and has shown good performance in solving continuous optimization problems. The use of this family of algorithms is very popular, despite the many difficulties that can be encountered in their design. Indeed, these algorithms have several parameters to adjust and many operators to set according to the problems to solve. The literature describes a plethora of operators, and it becomes complicated for the user to know which ones to select in order to obtain the best possible result. In this context, the main objective of this thesis is to propose methods that remedy these problems without deteriorating the performance of the algorithms. We therefore propose two algorithms: a method based on the maximum a posteriori principle that uses diversity probabilities to select the operators to apply and regularly revisits this choice; and a method based on a dynamic graph of operators representing the transition probabilities between operators, which relies on a model of the objective function built by a neural network to regularly update these probabilities. Both methods are detailed and analyzed on a continuous optimization benchmark.
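The first mechanism described above, selecting variation operators according to maintained probabilities and regularly revising that choice, can be sketched generically. The update rule, operators, and objective below are assumptions for illustration; the thesis's diversity-based maximum a posteriori selection and its operator-graph model are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

def sphere(x):                       # placeholder objective to minimise
    return float(np.sum(x ** 2))

# Three simple variation operators (assumed for illustration).
OPERATORS = [
    lambda x: x + rng.normal(0.0, 0.5, x.size),      # large Gaussian move
    lambda x: x + rng.normal(0.0, 0.05, x.size),     # small Gaussian move
    lambda x: x * rng.uniform(0.5, 1.5, x.size),     # multiplicative perturbation
]

probs = np.full(len(OPERATORS), 1.0 / len(OPERATORS))
x = rng.uniform(-5, 5, size=10)
fx = sphere(x)

for it in range(500):
    k = rng.choice(len(OPERATORS), p=probs)
    y = OPERATORS[k](x)
    fy = sphere(y)
    reward = max(0.0, fx - fy)                   # credit = achieved improvement
    # Revise the selection probabilities from the observed reward.
    probs[k] = 0.9 * probs[k] + 0.1 * (1.0 if reward > 0 else 0.0)
    probs = np.clip(probs, 0.05, None)           # keep every operator selectable
    probs /= probs.sum()
    if fy < fx:
        x, fx = y, fy

print("best objective value:", fx)
print("final operator probabilities:", np.round(probs, 3))
```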
2

Moon, Kyoung-Sook. "Adaptive Algorithms for Deterministic and Stochastic Differential Equations". Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3586.

3

Bekkouche, Fatiha. "Étude théorique et numérique des équations non-linéaires de Sobolev". Thesis, Valenciennes, 2018. http://www.theses.fr/2018VALE0018/document.

Abstract:
The purpose of this work is the mathematical study and the numerical analysis of the nonlinear Sobolev problem. A first chapter is devoted to the a priori analysis of the Sobolev problem, where we use an explicit semidiscretization in time. A priori error estimates were obtained, ensuring that the numerical schemes used converge when the time step and the spatial discretization step tend to zero. In a second chapter, we are interested in the singularly perturbed Sobolev problem. For the stability of the numerical schemes, we used implicit semidiscretizations in time (the Euler method and the Crank-Nicolson method). Our estimates of Chapters 1 and 2 are confirmed in the third chapter by numerical experiments carried out with the software "FreeFem++". In the last chapter, we consider a Sobolev equation and derive a posteriori error estimates for the discretization of this equation by a conforming finite element method in space and an implicit Euler scheme in time. The upper bound is global in space and time and allows effective control of the global error. At the end of the chapter, we propose an adaptive algorithm which ensures the control of the total error with respect to a user-defined relative precision by refining the meshes adaptively and equilibrating the time and space contributions of the error. We also present numerical experiments.
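A structural sketch of the adaptive strategy mentioned at the end of the abstract, estimating the space and time contributions of the error separately, refining whichever dominates, and stopping once the total estimate meets the user-defined tolerance, is given below. The two estimator functions are placeholders standing in for genuine a posteriori estimates, so the sketch only illustrates the control flow.

```python
# Skeleton of a space-time adaptive loop driven by a posteriori error indicators.
# The estimator bodies are placeholders (assumed model decay rates), not the
# estimates derived in the thesis.

def estimate_space_error(h):
    return 4.0 * h ** 2          # placeholder: O(h^2) spatial contribution

def estimate_time_error(dt):
    return 2.0 * dt              # placeholder: O(dt) contribution of implicit Euler

def adapt(tol, h=0.5, dt=0.5, max_iter=50):
    for _ in range(max_iter):
        eta_space = estimate_space_error(h)
        eta_time = estimate_time_error(dt)
        if eta_space + eta_time <= tol:
            return h, dt, eta_space, eta_time
        # Equilibrate the two contributions: refine the dominant one.
        if eta_space >= eta_time:
            h /= 2.0             # refine the mesh
        else:
            dt /= 2.0            # refine the time step
    return h, dt, eta_space, eta_time

h, dt, es, et = adapt(tol=1e-2)
print(f"h = {h:.4g}, dt = {dt:.4g}, eta_space = {es:.3g}, eta_time = {et:.3g}")
```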
4

Giacomini, Matteo. "Quantitative a posteriori error estimators in Finite Element-based shape optimization". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX070/document.

Abstract:
Gradient-based shape optimization strategies rely on the computation of the so-called shape gradient. In many applications, the objective functional depends both on the shape of the domain and on the solution of a PDE which can only be solved approximately (e.g. via the Finite Element Method). Hence, the direction computed using the discretized shape gradient may not be a genuine descent direction for the objective functional. This Ph.D. thesis is devoted to the construction of a certification procedure to validate the descent direction in gradient-based shape optimization methods using a posteriori estimators of the error due to the Finite Element approximation of the shape gradient. By means of a goal-oriented procedure, we derive a fully computable certified upper bound of the aforementioned error. The resulting Certified Descent Algorithm (CDA) for shape optimization is able to identify a genuine descent direction at each iteration and features a reliable stopping criterion based on the norm of the shape gradient. Two main applications are tackled in the thesis. First, we consider the scalar inverse identification problem of Electrical Impedance Tomography and we investigate several a posteriori estimators. A first procedure is inspired by the complementary energy principle and involves the solution of additional global problems. In order to reduce the computational cost of the certification step, an estimator which depends solely on local quantities is derived via an equilibrated fluxes approach. The estimators are validated for a two-dimensional case and some numerical simulations are presented to test the discussed methods. A second application focuses on the vectorial problem of optimal design of elastic structures. Within this framework, we derive the volumetric expression of the shape gradient of the compliance using both H1-based and dual mixed variational formulations of the linear elasticity equation. Some preliminary numerical tests are performed to minimize the compliance under a volume constraint in 2D using the Boundary Variation Algorithm, and an a posteriori estimator of the error in the shape gradient is obtained via the complementary energy principle.
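A schematic sketch of the certification logic follows: a discretized descent direction is accepted only when the norm of the approximate gradient exceeds a certified error bound, and the iteration stops when the gradient norm plus the bound falls below a tolerance. The objective, the finite-difference gradient, and the error-bound function are placeholders; the goal-oriented estimators of the thesis are not reproduced.

```python
import numpy as np

def objective(x):                      # placeholder cost functional
    return float(np.sum((x - 1.0) ** 2))

def approx_gradient(x, h):
    """Finite-difference stand-in for the discretized shape gradient."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (objective(x + e) - objective(x - e)) / (2 * h)
    return g

def error_bound(h):
    """Placeholder certified bound on the gradient approximation error."""
    return 10.0 * h ** 2

def certified_descent(x0, tol=1e-3, step=0.2, h=0.1, max_iter=200):
    x, h_disc = np.array(x0, dtype=float), h
    for _ in range(max_iter):
        g = approx_gradient(x, h_disc)
        bound = error_bound(h_disc)
        if np.linalg.norm(g) + bound < tol:      # reliable stopping criterion
            break
        if np.linalg.norm(g) <= bound:           # direction not certified:
            h_disc /= 2.0                        # refine the approximation
            continue
        x -= step * g                            # certified genuine descent step
    return x

print("certified minimiser estimate:", np.round(certified_descent([3.0, -2.0]), 4))
```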
5

Chalhoub, Nancy. "Estimations a posteriori pour l'équation de convection-diffusion-réaction instationnaire et applications aux volumes finis". Phd thesis, Université Paris-Est, 2012. http://pastel.archives-ouvertes.fr/pastel-00794392.

Abstract:
We consider the unsteady convection-diffusion-reaction equation. We are interested in deriving a posteriori error estimates for its discretization by the cell-centered finite volume method in space and an implicit Euler scheme in time. The estimates, established in the energy norm, bound the error between the exact solution and a postprocessed solution built from H(div, Ω)-conforming reconstructions of the diffusive and convective fluxes and an H¹₀(Ω)-conforming reconstruction of the potential. We propose an adaptive algorithm that reaches a relative precision fixed by the user by refining the meshes adaptively and equilibrating the space and time contributions of the error. We also present numerical experiments. Finally, we derive an a posteriori error estimate in the energy norm augmented by a dual norm of the time derivative and of the skew-symmetric part of the differential operator. This new estimate is robust in convection-dominated regimes, and local-in-time and global-in-space lower bounds are also obtained.
6

Moon, Kyoung-Sook. "Convergence rates of adaptive algorithms for deterministic and stochastic differential equations". Licentiate thesis, KTH, Numerical Analysis and Computer Science, NADA, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1382.

7

Sánchez, Góez Sebastián. "Algoritmo de reconstrucción analítico para el escáner basado en cristales monolíticos MINDView". Doctoral thesis, Universitat Politècnica de València, 2021. http://hdl.handle.net/10251/159259.

Abstract:
Positron Emission Tomography (PET) is a medical imaging technique in which an image is generated from the detection of gamma rays in coincidence. These rays are produced within a patient who is injected with a positron-emitting radiotracer, whose positrons annihilate with electrons in the surrounding medium. The event acquisition process is centered on the scanner detector, which is in turn composed of a scintillation crystal that transforms the incident gamma rays into optical photons within the crystal. The purpose is then to determine the impact coordinates within the scintillation crystal with the greatest possible precision, so that an image can be reconstructed from these points. Historically, detectors based on pixelated crystals have been the default choice for the manufacture of PET scanners. This thesis evaluates the impact on the spatial resolution of the MINDView PET scanner, developed in the seventh Framework Programme of the European Union No. 603002, whose detectors are based on monolithic crystals. The use of monolithic crystals facilitates the determination of the depth of interaction (DOI) of the incident gamma rays, increases the precision of the determined impact coordinates, and reduces the parallax error induced in pixelated crystals by the difficulty of determining the DOI. In this thesis, we achieved two main goals related to the measurement of the spatial resolution of the MINDView PET scanner: the adaptation of a STIR algorithm for Filtered BackProjection 3D Reprojected (FBP3DRP) to a scanner based on monolithic crystals, and the implementation of a BackProjection then Filtered (BPF) algorithm. Regarding the FBP algorithm adaptation, we achieved resolutions ranging in the intervals [2 mm, 3.4 mm], [2.3 mm, 3.3 mm], and [2.2 mm, 2.3 mm] for the radial, tangential, and axial directions, respectively, in the first MINDView prototype dedicated to brain imaging. In addition, an acquisition of a Derenzo phantom was performed to measure the spatial resolution obtained with three reconstruction algorithms: the BPF-type algorithm, the FBP3DRP algorithm, and an implementation of the list-mode ordered subsets (LMOS) algorithm. With the BPF-type algorithm, peak-to-valley values of 2.4 were obtained along the 1.6 mm diameter rods of the phantom, in contrast with values of 1.34 and 1.44 for the FBP3DRP and LMOS algorithms, respectively. This means that the BPF-type algorithm improves the resolution, yielding an average value of 1.6 mm.
Sánchez Góez, S. (2020). Algoritmo de reconstrucción analítico para el escáner basado en cristales monolíticos MINDView [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/159259
8

Hu, Ying. "Maximum a posteriori estimation algorithms for image segmentation and restoration". Thesis, University of Essex, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.317698.

9

Renaud, Gabriel. "Bayesian maximum a posteriori algorithms for modern and ancient DNA". Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-195705.

Abstract:
When DNA is sequenced, nucleotide calls are produced along with their individual error probabilities, which are usually reported in the form of a per-base quality score. However, these quality scores have not generally been incorporated into probabilistic models as there is typically a poor correlation between the predicted and observed error rates. Computational tools aimed at sequence analysis have therefore used arbitrary cutoffs on quality scores which often unnecessarily reduce the amount of data that can be analyzed. A different approach involves recalibration of those quality scores using known genomic variants to measure empirical error rates. However, for this heuristic to work, an adequate characterization of the variants present in a population must be available, which means that this approach is not possible for a wide range of species. This thesis develops methods to directly produce error probabilities that are representative of their empirical error rates for raw sequencing data. These can then be incorporated into Bayesian maximum a posteriori algorithms to make highly accurate inferences about the likelihood of the model that gave rise to this observed data. First, an algorithm to produce highly accurate nucleotide basecalls along with calibrated error probabilities is presented. Using the resulting data, individual reads can be robustly assigned to their samples of origin and ancient DNA fragments can be inferred even at high error rates. For archaic hominin samples, the number of DNA fragments from present-day human contamination can also be accurately quantified. The novel algorithms developed during the course of this thesis provide an alternative approach to working with Illumina sequence data. They also provide a demonstrable improvement over existing computational methods for basecalling, inferring ancient DNA fragments, demultiplexing, and estimating present-day human contamination along with reconstruction of mitochondrial genomes in ancient hominins.
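The demultiplexing step mentioned above can be sketched as a maximum a posteriori assignment that uses per-base error probabilities derived from quality scores: each candidate sample index is scored against the observed index read, and the read is assigned to the most probable sample. The index sequences, prior, and qualities below are invented for illustration and do not come from the thesis.

```python
import numpy as np

# Candidate sample indices (barcodes) and a uniform prior over samples (assumed).
BARCODES = {"sample1": "ACGTAC", "sample2": "ACGTTA", "sample3": "TTGCAC"}

def phred_to_prob(qualities):
    """Convert Phred quality scores to per-base error probabilities."""
    return 10.0 ** (-np.asarray(qualities, dtype=float) / 10.0)

def log_likelihood(observed, barcode, err):
    """log P(observed | barcode) with independent per-base errors."""
    ll = 0.0
    for o, b, e in zip(observed, barcode, err):
        ll += np.log(1.0 - e) if o == b else np.log(e / 3.0)
    return ll

def assign(observed, qualities, prior=None):
    err = phred_to_prob(qualities)
    names = list(BARCODES)
    prior = prior or {n: 1.0 / len(names) for n in names}
    log_post = np.array([np.log(prior[n]) + log_likelihood(observed, BARCODES[n], err)
                         for n in names])
    log_post -= np.logaddexp.reduce(log_post)        # normalise
    best = int(np.argmax(log_post))
    return names[best], float(np.exp(log_post[best]))

# One observed index read with one likely miscalled base (low quality at position 4).
name, post = assign("ACGTTC", [30, 30, 30, 30, 8, 30])
print(f"assigned to {name} with posterior {post:.3f}")
```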
10

Grosman, Sergey. "Adaptivity in anisotropic finite element calculations". Doctoral thesis, Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200600815.

Abstract:
When the finite element method is used to solve boundary value problems, the corresponding finite element mesh is appropriate if it reflects the behavior of the true solution. A posteriori error estimators are suited to construct adequate meshes. They are useful to measure the quality of an approximate solution and to design adaptive solution algorithms. Singularly perturbed problems yield in general solutions with anisotropic features, e.g. strong boundary or interior layers. For such problems it is useful to use anisotropic meshes in order to reach the maximal order of convergence. Moreover, the quality of the numerical solution rests on the robustness of the a posteriori error estimation with respect to both the anisotropy of the mesh and the perturbation parameters. There exist different possibilities to measure the a posteriori error in the energy norm for the singularly perturbed reaction-diffusion equation. One of them is the equilibrated residual method, which is known to be robust as long as one solves the auxiliary local Neumann problems exactly on each element. We provide a basis for an approximate solution of the aforementioned auxiliary problem and show that this approximation does not affect the quality of the error estimation. Another approach that we develop for the a posteriori error estimation is the hierarchical error estimator. The robustness proof for this estimator involves several stages, including the strengthened Cauchy-Schwarz inequality and the error reduction property for the chosen space enrichment. In the rest of the work we deal with adaptive algorithms. We provide an overview of the existing methods for isotropic meshes and then generalize the ideas to the anisotropic case. For the resulting algorithm, error reduction estimates are proven for the Poisson equation and for the singularly perturbed reaction-diffusion equation. Convergence for the Poisson equation is also shown. Numerical experiments for the equilibrated residual method, for the hierarchical error estimator and for the adaptive algorithm confirm the theory. The adaptive algorithm shows its potential by creating the anisotropic mesh for the problem with a boundary layer, starting with a very coarse isotropic mesh.
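A minimal sketch of an adaptive loop driven by a posteriori error indicators is given below for the 1D Poisson problem: solve, estimate element indicators, mark the worst elements, and bisect them. The simplified residual indicator ignores flux jumps and is not the equilibrated residual or hierarchical estimator analyzed in the thesis; the source term is an assumption.

```python
import numpy as np

def f(x):
    return 100.0 * np.exp(-300.0 * (x - 0.7) ** 2)       # peaked source (assumed)

def solve_p1(nodes):
    """Linear FEM for -u'' = f on (0,1) with u(0) = u(1) = 0."""
    h = np.diff(nodes)
    n_int = len(nodes) - 2
    A = np.zeros((n_int, n_int))
    b = np.zeros(n_int)
    for i in range(1, len(nodes) - 1):
        k = i - 1
        A[k, k] = 1.0 / h[i - 1] + 1.0 / h[i]
        if k > 0:
            A[k, k - 1] = -1.0 / h[i - 1]
        if k < n_int - 1:
            A[k, k + 1] = -1.0 / h[i]
        b[k] = f(nodes[i]) * 0.5 * (h[i - 1] + h[i])      # lumped load
    u = np.zeros(len(nodes))
    u[1:-1] = np.linalg.solve(A, b)
    return u

def indicators(nodes):
    """Simplified element-residual indicators eta_K ~ h_K * ||f||_{L2(K)}."""
    h = np.diff(nodes)
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    return h * np.abs(f(mids)) * np.sqrt(h)

nodes = np.linspace(0.0, 1.0, 6)                          # coarse start mesh
for sweep in range(6):
    u = solve_p1(nodes)
    eta = indicators(nodes)
    print(f"sweep {sweep}: {len(nodes)-1:3d} elements, "
          f"estimated error ~ {np.sqrt(np.sum(eta**2)):.3e}")
    marked = eta > 0.5 * eta.max()                        # maximum-strategy marking
    new_nodes = 0.5 * (nodes[:-1] + nodes[1:])[marked]    # bisect marked elements
    nodes = np.sort(np.concatenate([nodes, new_nodes]))
```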

Books on the topic "POSTERIORI ALGORITHM"

1

Montgomery, Erwin B. Algorithm for Selecting Electrode Configurations and Stimulation Parameters. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780190259600.003.0014.

Abstract:
Chapter 9, Approaches to Programming, provided a general discussion regarding the approaches to DBS programming. The focus of Chapter 9 was on the underlying electroneurophysiological principles rather than an explicit algorithm that addressed every possible circumstance. Chapters 11, 12, and 13 discussed approaches in the context of specific DBS targets. These approaches emphasized interpreting the DBS responses to visualize the location of the DBS contacts in the unique regional anatomy of the individual patient. For example, the production of paresthesias at stimulation currents insufficient to produce clinical benefit with DBS in the vicinity of the STN indicates that the DBS lead position is probably too posterior. This chapter gives an algorithm that takes the programmer step by step through the process of positioning DBS leads and contacts, and determining stimulation levels for optimal results.
2

Zahn, Roland, and Alistair Burns. Dementia disorders. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780198779803.003.0001.

Abstract:
This chapter provides a brief overview of the different forms of dementia syndromes and provides a simple algorithm for initial differential diagnosis. Rapidly progressive dementias have to be excluded, which requires specific investigations to detect Creutzfeldt–Jakob disease as well as inflammatory and autoimmune diseases. A lead symptom-based approach is applied in patients with slowly progressive cognitive and behavioural impairments without neurological symptoms: progressive and primary impairments in recent memory are characteristic of typical Alzheimer’s dementia, primary behavioural changes point to the behavioural variant of frontotemporal dementia, primary impairments of language or speech are distinctive for progressive aphasias, fluctuating impairments of attention are a hallmark of Lewy body dementia, whereas primary visuospatial impairments suggest a posterior cortical atrophy. The chapter further discusses updated vascular dementia guidelines and DSM-5 revisions of defining dementia. Current diagnostic criteria for the different dementias are referenced and the role of neuroimaging is illustrated.
3

Liang, Percy, Michael Jordan, and Dan Klein. Probabilistic grammars and hierarchical Dirichlet processes. Edited by Anthony O'Hagan and Mike West. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198703174.013.27.

Abstract:
This article focuses on the use of probabilistic context-free grammars (PCFGs) in natural language processing involving a large-scale natural language parsing task. It describes detailed, highly-structured Bayesian modelling in which model dimension and complexity responds naturally to observed data. The framework, termed hierarchical Dirichlet process probabilistic context-free grammar (HDP-PCFG), involves structured hierarchical Dirichlet process modelling and customized model fitting via variational methods to address the problem of syntactic parsing and the underlying problems of grammar induction and grammar refinement. The central object of study is the parse tree, which can be used to describe a substantial amount of the syntactic structure and relational semantics of natural language sentences. The article first provides an overview of the formal probabilistic specification of the HDP-PCFG, algorithms for posterior inference under the HDP-PCFG, and experiments on grammar learning run on the Wall Street Journal portion of the Penn Treebank.

Book chapters on the topic "POSTERIORI ALGORITHM"

1

Ghoumari, Asmaa, Amir Nakib, and Patrick Siarry. "Maximum a Posteriori Based Evolutionary Algorithm". In Bioinspired Heuristics for Optimization, 301–14. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-95104-1_19.

2

Frolov, Maxim, and Olga Chistiakova. "Adaptive Algorithm Based on Functional-Type A Posteriori Error Estimate for Reissner-Mindlin Plates". In Lecture Notes in Computational Science and Engineering, 131–41. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-14244-5_7.

3

Sun, Zengguo, and Xuejun Peng. "Maximum a Posteriori Despeckling Algorithm of Synthetic Aperture Radar Images with Exponential Prior Distribution". In Advances in Natural Computation, Fuzzy Systems and Knowledge Discovery, 410–18. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70665-4_47.

4

Choï, Daniel, Laurent Gallimard, and Taoufik Sassi. "A Posteriori Error Estimates for a Neumann-Neumann Domain Decomposition Algorithm Applied to Contact Problems". In Lecture Notes in Computational Science and Engineering, 769–77. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-05789-7_74.

5

Ramos, A. L. L., and J. A. Apolinário. "A Lattice Version of the Multichannel Fast QRD Algorithm Based on A Posteriori Backward Errors". In Telecommunications and Networking - ICT 2004, 488–97. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-27824-5_66.

6

Evensen, Geir, Femke C. Vossepoel, and Peter Jan van Leeuwen. "Maximum a Posteriori Solution". In Springer Textbooks in Earth Sciences, Geography and Environment, 27–33. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96709-3_3.

Abstract:
We will now introduce a fundamental approximation used in most practical data-assimilation methods, namely the definition of Gaussian priors. This approximation simplifies the Bayesian posterior, which allows us to compute the maximum a posteriori (MAP) estimate and sample from the posterior pdf. This chapter will introduce the Gaussian approximation and then discuss the Gauss–Newton method for finding the MAP estimate. This method is the starting point for many of the data-assimilation algorithms discussed in the following chapters.
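A minimal sketch of the Gauss-Newton iteration for the MAP estimate under a Gaussian prior, in the spirit of the setup described above, follows; the observation operator, covariances, and synthetic data are invented for illustration.

```python
import numpy as np

# Gaussian prior x ~ N(x_b, B) and observations y = h(x) + e, e ~ N(0, R).
x_b = np.array([1.0, 2.0])
B = np.diag([0.5, 0.5])
R = np.diag([0.1, 0.1, 0.1])

def h(x):                                   # nonlinear observation operator (assumed)
    return np.array([x[0] ** 2, x[0] * x[1], np.sin(x[1])])

def H(x):                                   # its Jacobian
    return np.array([[2 * x[0], 0.0],
                     [x[1], x[0]],
                     [0.0, np.cos(x[1])]])

y = h(np.array([1.2, 1.8])) + np.array([0.05, -0.03, 0.02])   # synthetic data

Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

def cost(x):
    dx, dy = x - x_b, y - h(x)
    return 0.5 * dx @ Binv @ dx + 0.5 * dy @ Rinv @ dy

x = x_b.copy()
for it in range(20):
    Hx = H(x)
    grad = Binv @ (x - x_b) - Hx.T @ Rinv @ (y - h(x))
    hess = Binv + Hx.T @ Rinv @ Hx            # Gauss-Newton Hessian approximation
    step = np.linalg.solve(hess, grad)
    x = x - step
    if np.linalg.norm(step) < 1e-10:
        break

print("MAP estimate:", np.round(x, 4), " cost:", round(cost(x), 6))
```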
7

Azzolini, Damiano, Elena Bellodi, and Fabrizio Riguzzi. "MAP Inference in Probabilistic Answer Set Programs". In AIxIA 2022 – Advances in Artificial Intelligence, 413–26. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-27181-6_29.

Abstract:
Reasoning with uncertain data is a central task in artificial intelligence. In some cases, the goal is to find the most likely assignment to a subset of random variables, named query variables, while some other variables are observed. This task is called Maximum a Posteriori (MAP). When the set of query variables is the complement of the observed variables, the task goes under the name of Most Probable Explanation (MPE). In this paper, we introduce the definitions of cautious and brave MAP and MPE tasks in the context of Probabilistic Answer Set Programming under the credal semantics and provide an algorithm to solve them. Empirical results show that the brave version of both tasks is usually faster to compute. On the brave MPE task, the adoption of a state-of-the-art ASP solver makes the computation much faster than a naive approach based on the enumeration of all the worlds.
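The naive enumeration baseline mentioned at the end of the abstract can be written generically for a toy model with independent probabilistic facts: enumerate all worlds, keep those consistent with the evidence, and return the most probable assignment. This is plain brute force over a hand-coded model, not the credal-semantics algorithm of the paper; the facts, probabilities, and rule are assumptions.

```python
from itertools import product

# Toy model: three independent probabilistic facts (assumed probabilities).
FACTS = {"burglary": 0.1, "earthquake": 0.2, "sensor_fault": 0.05}

def alarm(world):
    """Deterministic rule: the alarm rings if burglary or earthquake, unless faulty."""
    return (world["burglary"] or world["earthquake"]) and not world["sensor_fault"]

def world_prob(world):
    p = 1.0
    for fact, prob in FACTS.items():
        p *= prob if world[fact] else (1.0 - prob)
    return p

def mpe(evidence):
    """Most probable assignment of all facts given the evidence on 'alarm'."""
    best, best_p, total = None, 0.0, 0.0
    for values in product([False, True], repeat=len(FACTS)):
        world = dict(zip(FACTS, values))
        if alarm(world) != evidence:
            continue                              # inconsistent with the evidence
        p = world_prob(world)
        total += p
        if p > best_p:
            best, best_p = world, p
    return best, best_p / total                   # assignment and its posterior

assignment, posterior = mpe(evidence=True)
print("MPE given alarm=True:", assignment, f"posterior {posterior:.3f}")
```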
8

Talagrand, O. "A Posteriori Validation of Assimilation Algorithms". In Data Assimilation for the Earth System, 85–95. Dordrecht: Springer Netherlands, 2003. http://dx.doi.org/10.1007/978-94-010-0029-1_8.

9

Darbon, Jérôme, Gabriel P. Langlois, and Tingwei Meng. "Connecting Hamilton-Jacobi Partial Differential Equations with Maximum a Posteriori and Posterior Mean Estimators for Some Non-convex Priors". In Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging, 1–25. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-03009-4_56-1.

10

Darbon, Jérôme, Gabriel P. Langlois, and Tingwei Meng. "Connecting Hamilton-Jacobi Partial Differential Equations with Maximum a Posteriori and Posterior Mean Estimators for Some Non-convex Priors". In Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging, 209–33. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-030-98661-2_56.


Conference abstracts on the topic "POSTERIORI ALGORITHM"

1

Arora, Kirti, and T. L. Singal. "An Optimized Algorithm Maximum a Posteriori Energy Detection". In 2015 Fifth International Conference on Communication Systems and Network Technologies (CSNT). IEEE, 2015. http://dx.doi.org/10.1109/csnt.2015.49.

2

Hamamura, T., T. Akagi, and B. Irie. "An Analytic Word Recognition Algorithm Using a Posteriori Probability". In Ninth International Conference on Document Analysis and Recognition (ICDAR 2007) Vol 2. IEEE, 2007. http://dx.doi.org/10.1109/icdar.2007.4376999.

3

Choi, Dooseop, Taeg-Hyun An, and Taejeong Kim. "Hierarchical motion estimation algorithm based on maximum a posteriori probability". In 2017 IEEE 19th International Workshop on Multimedia Signal Processing (MMSP). IEEE, 2017. http://dx.doi.org/10.1109/mmsp.2017.8122242.

4

West, Karen F., Douglas J. Granrath, and James R. Lersch. "Use of Additional Constraint Terms in Maximum A Posteriori Super Resolution". In Signal Recovery and Synthesis. Washington, D.C.: Optica Publishing Group, 1992. http://dx.doi.org/10.1364/srs.1992.tud3.

Abstract:
Super resolution algorithms derived by maximum a posteriori (MAP) estimation have been successfully applied to images of stars and other compact objects on dark backgrounds. One such algorithm, derived under the assumptions of positivity and Poisson statistics, is given in [1]; the corresponding imaging equation is g(x) = (h ∗ f)(x), where ∗ denotes convolution, h(x) is the imaging system point spread function (psf), f(x) is the object, g(x) is the image, and fn(x) denotes the current estimate of the object.
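For orientation, the well-known maximum-likelihood (uniform-prior) relative of such schemes, the Richardson-Lucy/EM iterate for Poisson data, is sketched below in 1D; it is not the specific MAP algorithm of the paper, and the scene, psf, and iteration count are assumptions.

```python
import numpy as np

def convolve(a, b):
    return np.convolve(a, b, mode="same")

rng = np.random.default_rng(5)

# Synthetic 1-D scene: two point sources on a dark background (toy setup).
n = 64
f_true = np.zeros(n)
f_true[20] = 50.0
f_true[35] = 80.0
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
psf /= psf.sum()                                          # Gaussian psf, unit sum
g = rng.poisson(convolve(f_true, psf) + 1e-3).astype(float)   # Poisson-noised image

# Richardson-Lucy / EM iterate (the uniform-prior limit of MAP schemes):
#   f_{n+1}(x) = f_n(x) * [ psf(-x) * ( g / (psf * f_n) ) ](x)
f = np.full(n, g.mean())
psf_flip = psf[::-1]
for _ in range(200):
    ratio = g / np.maximum(convolve(f, psf), 1e-12)
    f *= convolve(ratio, psf_flip)

print("positions of the two largest restored values:", np.argsort(f)[-2:])
```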
5

Granrath, Douglas J., Karen F. West, H. Donald Fisher, and James Lersch. "Deblurring extended astronomical objects with a maximum-a posteriori/expectation-maximization algorithm". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.mqq4.

Abstract:
It has been shown that expectation-maximization (EM) can be applied to a maximum a posteriori (MAP) formulation of the image restoration problem, resulting in a nonlinear iterative restoration algorithm. This MAP/EM algorithm has been shown to be effective in the restoration and super resolution of point objects. When applied to extended objects such as planets, however, the algorithm produces ringing artifacts near edges in the object. We show that such artifacts can be overcome by decomposing the image into two terms, a background and a foreground. The background term is used to remove large scale variations in the data such that the foreground term retains primarily edge information. The MAP/EM algorithm then operates on the foreground term to produce the deblurring. This decomposition effectively generalizes the positivity constraint beyond a zero-level surface. We describe our means of performing the decomposition and show both simulated and actual results.
6

Ogworonjo, Henry C., and John M. M. Anderson. "An MM-based maximum a posteriori algorithm for GPR image reconstruction". In 2014 IEEE Radar Conference (RadarCon). IEEE, 2014. http://dx.doi.org/10.1109/radar.2014.6875671.

7

Chervova, A. A., G. F. Filaretov, and F. F. Pashchenko. "A posteriori Fractal Characteristics Change Point Detecting Algorithm for Time Series". In 2018 IEEE 12th International Conference on Application of Information and Communication Technologies (AICT). IEEE, 2018. http://dx.doi.org/10.1109/icaict.2018.8747092.

8

Hur, Minsung, Jin Yong Choi, Jong-Seob Baek, and JongSoo Seo. "Generalized Normalized Gradient Descent Algorithm Based on Estimated a Posteriori Error". In 2008 10th International Conference on Advanced Communication Technology. IEEE, 2008. http://dx.doi.org/10.1109/icact.2008.4493703.

9

Jilkov, Vesselin P., Jeffrey H. Ledet, and X. Rong Li. "Constrained multiple model maximum a posteriori estimation using list Viterbi algorithm". In 2017 20th International Conference on Information Fusion (Fusion). IEEE, 2017. http://dx.doi.org/10.23919/icif.2017.8009649.

10

Zheng, Shuai, Jian Chen, and Yonghong Kuo. "A new CPM demodulation algorithm based on tilted-phase and posteriori probability". In International Conference on Signal Processing and Communication Technology (SPCT 2022), edited by Sandeep Saxena and Shuwen Xu. SPIE, 2023. http://dx.doi.org/10.1117/12.2673812.


Organizational reports on the topic "POSTERIORI ALGORITHM"

1

Gungor, Osman, Imad Al-Qadi, and Navneet Garg. Pavement Data Analytics for Collected Sensor Data. Illinois Center for Transportation, October 2021. http://dx.doi.org/10.36501/0197-9191/21-034.

Abstract:
The Federal Aviation Administration instrumented four concrete slabs of a taxiway at the John F. Kennedy International Airport to collect pavement responses under aircraft and environmental loading. The study started with developing preprocessing scripts to organize, structure, and clean the collected data. As a result of the preprocessing step, the data became easier and more intuitive for pavement engineers and researchers to transform and process. After the data were cleaned and organized, they were used to develop two prediction models. The first prediction model employs a Bayesian calibration framework to estimate the unknown material parameters of the concrete pavement. Additionally, the posterior distributions resulting from the calibration process served as a sensitivity analysis by reporting the significance of each parameter for temperature distribution. The second prediction model utilized a machine-learning (ML) algorithm to predict pavement responses under aircraft and environmental loadings. The results demonstrated that ML can predict the responses with high accuracy at a low computational cost. This project highlighted the potential of using ML for future pavement design guidelines as more instrumentation data from future projects are collected to incorporate various material properties and pavement structures.
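A minimal sketch of the second modelling step, training a machine-learning regressor to map loading and temperature features to a pavement response, is given below; the features, synthetic data, and the choice of a random forest are placeholders, not the project's dataset or final model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(6)

# Synthetic stand-ins for instrumentation features: gear load, speed, temperature.
n = 2000
X = np.column_stack([
    rng.uniform(100, 300, n),        # aircraft gear load (kN, assumed)
    rng.uniform(5, 60, n),           # taxi speed (km/h, assumed)
    rng.uniform(-5, 40, n),          # slab temperature (deg C, assumed)
])
# Placeholder "pavement response" (e.g. strain) with noise.
y = 0.8 * X[:, 0] - 1.5 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("MAE on held-out data:", round(mean_absolute_error(y_te, pred), 2))
```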