Academic literature on the topic 'Algorithmes de passage en message'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Algorithmes de passage en message.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Algorithmes de passage en message"

1

Odorico, Paolo. "Le backgammon de Kékaumenos. À propos d’un passage peu clair et d’une bataille peu connue." Zbornik radova Vizantoloskog instituta, no. 50-1 (2013): 423–31. http://dx.doi.org/10.2298/zrvi1350423o.

Abstract:
The Stratēgikon of Cecaumenus tells the story of Basil Pediadites, who suffered the ironic attacks of the emperor for having played tavla during his mission in Sicily. The rather strange message is explained in a different way: the imperial message turned on a pun concerning tavla and a locality in the plain, which can be identified with the present-day Piano Tavola near Catania.
2

Samet, Nili. "How Deterministic is Qohelet? A New Reading of the Appendix to the Catalogue of Times." Zeitschrift für die alttestamentliche Wissenschaft 131, no. 4 (December 1, 2019): 577–91. http://dx.doi.org/10.1515/zaw-2019-4004.

Abstract:
This paper examines the message of Qohelet's Catalogue of Times and its interpretive appendix. Scholars disagree on the extent to which this unit deviates from the traditional free-will theology of the Bible. The paper presents a fresh reading of the passage, which sheds new light on the problem of determinism in Qohelet. Beginning with a novel delineation of the unit, it then suggests fresh solutions for the main exegetical cruxes of the passage (3:14, 15, 17), and finally presents an innovative, somewhat radical, understanding of Qohelet's approach to the problem of free will.
3

Fujii, Seiji. "Political Shirking – Proposition 13 vs. Proposition 8." Japanese Journal of Political Science 10, no. 2 (August 2009): 213–37. http://dx.doi.org/10.1017/s1468109909003533.

Abstract:
This paper considers the efficiency of the political market in the California State legislature. I analyzed the property tax limitation voter initiative, Proposition 13. I found that districts which supported Proposition 13 more strongly were more likely to oppose the incumbents, regardless of whether the incumbents' preferences for property taxes differed from those of their districts. I also studied how legislators voted on the bills adopted after the passage of Proposition 13 to finance local governments. I found that legislators tended to follow their constituents' will after they received the voters' tax-cutting message expressed by the passage of Proposition 13.
4

Rotman, Marco. "The “Others” Coming to John the Baptist and the Text of Josephus." Journal for the Study of Judaism 49, no. 1 (February 22, 2018): 68–83. http://dx.doi.org/10.1163/15700631-12491167.

Abstract:
Abstract Josephus’s passage on John the Baptist (Ant. 18.116-119) contains a much-discussed crux interpretum: who are the “others” that are inspired by John’s words and ready to do everything he said (§118), and who are distinguished from those who gave heed to his message and were baptized (§117)? After a brief discussion of the textual witnesses, text, and translation of the passage in question, various interpretations of “the others” are discussed, none of which is entirely satisfactory. In this article a case will be made for accepting the conjecture originally proposed by Benedikt Niese, who assumed that Josephus originally wrote ἀνθρώπων “people” instead of ἄλλων “others.”
5

Zorn, Jean-François. "Exégèse, herméneutique et actualisation : étapes successives ou interaction dynamique ? La notion d'exégèse homilétique." Études théologiques et religieuses 75, no. 4 (2000): 549–63. http://dx.doi.org/10.3406/ether.2000.3620.

Abstract:
The use of a step by step method is necessary for the preparation of a sermon. One method suggests that every serious preacher needs to follow a three-step process : Exegesis, Interpretation, and Application. With the help of research done by experts in semiotics and homiletics, J.-F. Zorn shows how this method neglects the preacher who reads the Bible passage as well as the audience who listens to his sermon. When these two factors are taken into consideration during the preparation of a sermon, these three steps are viewed differently. They become interactive operations that are capable of re-establishing a living relationship between the ancient Bible passage and the new message of the preacher.
6

Garrett, Thomas More. "The Message to the Merchants in James 4:13–17 and Its Relevance for Today." Journal of Theological Interpretation 10, no. 2 (2016): 299–315. http://dx.doi.org/10.2307/26373919.

Abstract:
ABSTRACT This article highlights the contemporary significance of Jas 4:13–17 to business and commercial pursuits. The first part summarizes modern biblical and theological scholarship on the scriptural passage. The discussion highlights areas of convergence within different Christian traditions by examining the work of commentators writing from a variety of Christian backgrounds. The second part offers a treatment of the passage within the wider context of the epistle. Drawing from modern commentary, this part of the essay also elaborates on the relationship between faith and secular pursuits envisioned by the James text. Particular focus is directed toward concerns pertaining to the separation of faith from commercial affairs expressed in two recent Roman Catholic magisterial works, Benedict XVI's Caritas in Veritate and the Pontifical Council for Justice and Peace document titled Vocation of the Business Leader: A Reflection. The third part extends the discussion in the second part by tracing some further parallels between Jas 4:13–17 and portions of Benedict XVI's Caritas in Veritate.
7

Garrett, Thomas More. "The Message to the Merchants in James 4:13–17 and Its Relevance for Today." Journal of Theological Interpretation 10, no. 2 (2016): 299–315. http://dx.doi.org/10.2307/jtheointe.10.2.0299.

Abstract:
ABSTRACT This article highlights the contemporary significance of Jas 4:13–17 to business and commercial pursuits. The first part summarizes modern biblical and theological scholarship on the scriptural passage. The discussion highlights areas of convergence within different Christian traditions by examining the work of commentators writing from a variety of Christian backgrounds. The second part offers a treatment of the passage within the wider context of the epistle. Drawing from modern commentary, this part of the essay also elaborates on the relationship between faith and secular pursuits envisioned by the James text. Particular focus is directed toward concerns pertaining to the separation of faith from commercial affairs expressed in two recent Roman Catholic magisterial works, Benedict XVI's Caritas in Veritate and the Pontifical Council for Justice and Peace document titled Vocation of the Business Leader: A Reflection. The third part extends the discussion in the second part by tracing some further parallels between Jas 4:13–17 and portions of Benedict XVI's Caritas in Veritate.
8

Schmidl, Martina. "Ad astra: Graphic Signalling in the Acrostic Hymn of Nebuchadnezzar II (BM 55469)." Altorientalische Forschungen 48, no. 2 (November 5, 2021): 318–26. http://dx.doi.org/10.1515/aofo-2021-0021.

Abstract:
Abstract This article examines two orthographic features in the Acrostic Hymn of Nebuchadnezzar II. It aims to show that the text makes use of the possibilities of the cuneiform writing system to create various levels of meaning. The first example clarifies structure and content with regard to a difficult passage in the fourth and last stanza of the text, in which a possible change of actors is indicated by an orthographic feature. The second example shows how orthography is used in the first stanza of the text to augment its message. These examples demonstrate how structural elements and micro-features such as orthography were used creatively to enhance the message of the hymn.
9

Cauchie, Jean-François, Patrice Corriveau, and Alexandre Pelletier-Audet. "Le suicide de jeunes québécois.es : une analyse communicationnelle de 138 lettres d’adieu (1940-1970)1." Reflets 28, no. 1 (June 5, 2023): 93–120. http://dx.doi.org/10.7202/1100221ar.

Abstract:
Our article focuses on the farewell letters of 72 Quebecers between 20 and 30 years old who took their own lives during the years 1940-1970. The 138 letters studied, which come from the Coroner's Archives of the judicial district of the city of Montreal, are approached from a perspective we describe as communicational. After identifying five ideal types according to whether the meaning of the message is introspective or dyadic with respect to the act itself, we highlight the abundance and multidirectionality of the themes that individuals draw on to establish their posthumous self. Our findings also show that gender plays an undeniable role both in the message communicated and in the way it is communicated.
10

Vatta, Francesca, Alessandro Soranzo, Massimiliano Comisso, Giulia Buttazzoni, and Fulvio Babich. "A Survey on Old and New Approximations to the Function ϕ(x) Involved in LDPC Codes Density Evolution Analysis Using a Gaussian Approximation." Information 12, no. 5 (May 17, 2021): 212. http://dx.doi.org/10.3390/info12050212.

Abstract:
Low Density Parity Check (LDPC) codes are currently being deeply analyzed through algorithms that require the capability of addressing their iterative decoding convergence performance. Since it has been observed that the probability distribution function of the decoder's log-likelihood ratio messages is roughly Gaussian, a multiplicity of moderate entanglement strategies for this analysis has been suggested. The first of them was proposed in Chung et al.'s 2001 paper, where the recurrent sequence, characterizing the passage of messages between variable and check nodes, concerns the function ϕ(x), therein specified, and its inverse. In this paper, we review this old approximation to the function ϕ(x), one variant on it obtained in the same period (proposed in Ha et al.'s 2004 paper), and some new ones, recently published in two 2019 papers by Vatta et al. The objective of this review is to analyze the differences among them and their characteristics in terms of accuracy and computational complexity. In particular, the explicitly invertible, not piecewise defined approximation of the function ϕ(x), published in the second of the two abovementioned 2019 papers, is shown to have a smaller relative error for any x than most of the other approximations. Moreover, its use leads to an important complexity reduction and allows better Gaussian-approximated thresholds to be obtained.
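For orientation on the quantity discussed in this abstract: in Gaussian-approximation density evolution, ϕ(x) is defined through an expectation of tanh(u/2) with u drawn from N(x, 2x), and the closed form usually credited to Chung et al. (2001) is exp(-0.4527·x^0.86 + 0.0218) for 0 < x < 10. The sketch below is my own illustration, not code from any of the surveyed papers, and the constants should be checked against the originals.

```python
# Sketch (not code from the surveyed papers): the function phi(x) used in
# Gaussian-approximation density evolution for LDPC codes, evaluated by numerical
# quadrature, next to the closed-form approximation usually credited to
# Chung et al. (2001), exp(-0.4527 * x**0.86 + 0.0218), quoted for 0 < x < 10.
import numpy as np
from scipy.integrate import quad

def phi_exact(x):
    """phi(x) = 1 - E[tanh(u/2)] with u ~ N(x, 2x), and phi(0) = 1."""
    if x == 0:
        return 1.0
    integrand = lambda u: np.tanh(u / 2.0) * np.exp(-(u - x) ** 2 / (4.0 * x))
    lo, hi = x - 40.0 * np.sqrt(2.0 * x), x + 40.0 * np.sqrt(2.0 * x)
    val, _ = quad(integrand, lo, hi)
    return 1.0 - val / np.sqrt(4.0 * np.pi * x)

def phi_chung(x):
    """Widely cited approximation from Chung et al. (2001)."""
    return np.exp(-0.4527 * x ** 0.86 + 0.0218)

for x in (0.5, 1.0, 2.0, 5.0):
    print(f"x = {x}: quadrature = {phi_exact(x):.5f}, approximation = {phi_chung(x):.5f}")
```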

Dissertations / Theses on the topic "Algorithmes de passage en message"

1

Taftaf, Ala. "Développements du modèle adjoint de la différentiation algorithmique destinés aux applications intensives en calcul." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4001/document.

Abstract:
Le mode adjoint de la Différentiation Algorithmique (DA) est particulièrement intéressant pour le calcul des gradients. Cependant, ce mode utilise les valeurs intermédiaires de la simulation d'origine dans l'ordre inverse à un coût qui augmente avec la longueur de la simulation. La DA cherche des stratégies pour réduire ce coût, par exemple en profitant de la structure du programme donné. Dans ce travail, nous considérons d'une part le cas des boucles à point-fixe pour lesquels plusieurs auteurs ont proposé des stratégies adjointes adaptées. Parmi ces stratégies, nous choisissons celle de B. Christianson. Nous spécifions la méthode choisie et nous décrivons la manière dont nous l'avons implémentée dans l'outil de DA Tapenade. Les expériences sur une application de taille moyenne montrent une réduction importante de la consommation de mémoire. D'autre part, nous étudions le checkpointing dans le cas de programmes parallèles MPI avec des communications point-à-point. Nous proposons des techniques pour appliquer le checkpointing à ces programmes. Nous fournissons des éléments de preuve de correction de nos techniques et nous les expérimentons sur des codes représentatifs. Ce travail a été effectué dans le cadre du projet européen ``AboutFlow''
The adjoint mode of Algorithmic Differentiation (AD) is particularly attractive for computing gradients. However, this mode needs to use the intermediate values of the original simulation in reverse order, at a cost that increases with the length of the simulation. AD research looks for strategies to reduce this cost, for instance by taking advantage of the structure of the given program. In this work, we consider, on the one hand, the frequent case of fixed-point loops, for which several authors have proposed adapted adjoint strategies. Among these strategies, we select the one introduced by B. Christianson. We specify the selected method further and describe the way we implemented it inside the AD tool Tapenade. Experiments on a medium-size application show a major reduction of the memory needed to store trajectories. On the other hand, we study checkpointing in the case of MPI parallel programs with point-to-point communications. We propose techniques to apply checkpointing to these programs. We provide proof of correctness of our techniques and we test them on representative CFD codes.
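As a rough illustration of why adjoint-mode AD stores intermediate values (the memory issue that checkpointing and the fixed-point strategies above address), here is a toy reverse-mode sweep over a scalar time-stepping loop. It is only a sketch under simplifying assumptions, has nothing to do with Tapenade's actual implementation, and the iteration it differentiates is an arbitrary example.

```python
# A toy sketch (not Tapenade): reverse-mode differentiation of a time-stepping loop.
# The forward sweep stores every intermediate state on a "tape"; the reverse sweep
# then consumes them in reverse order, which is exactly the memory cost that
# checkpointing strategies trade against recomputation.
import math

def forward_and_reverse(x0, n_steps, dt=0.01):
    # Forward sweep: explicit Euler for dx/dt = sin(x), keeping all states.
    tape = [x0]
    for _ in range(n_steps):
        tape.append(tape[-1] + dt * math.sin(tape[-1]))
    # Reverse sweep: chain rule applied backwards through the stored states.
    xbar = 1.0                                   # d(output)/d(output)
    for x in reversed(tape[:-1]):
        xbar *= 1.0 + dt * math.cos(x)           # d(step)/d(state)
    return tape[-1], xbar                        # final state and d(final)/d(x0)

y, dy_dx0 = forward_and_reverse(0.3, 1000)       # tape memory grows linearly with n_steps
print(y, dy_dx0)
```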
2

De Bacco, Caterina. "Decentralized network control, optimization and random walks on networks." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112164/document.

Abstract:
Dans les dernières années, plusieurs problèmes ont été étudiés à l'interface entre la physique statistique et l'informatique. La raison étant que, souvent, ces problèmes peuvent être réinterprétés dans le langage de la physique des systèmes désordonnés, où un grand nombre de variables interagit à travers champs locales qui dépendent de l'état du quartier environnant. Parmi les nombreuses applications de l'optimisation combinatoire le routage optimal sur les réseaux de communication est l'objet de la première partie de la thèse. Nous allons exploiter la méthode de la cavité pour formuler des algorithmes efficaces de type ‘’message-passing’’ et donc résoudre plusieurs variantes du problème grâce à sa mise en œuvre numérique. Dans un deuxième temps, nous allons décrire un modèle pour approcher la version dynamique de la méthode de la cavité, ce qui permet de diminuer la complexité du problème de l'exponentielle de polynôme dans le temps. Ceci sera obtenu en utilisant le formalisme de ‘’Matrix Product State’’ de la mécanique quantique.Un autre sujet qui a suscité beaucoup d'intérêt en physique statistique de processus dynamiques est la marche aléatoire sur les réseaux. La théorie a été développée depuis de nombreuses années dans le cas que la topologie dessous est un réseau de dimension d. Au contraire le cas des réseaux aléatoires a été abordé que dans la dernière décennie, laissant de nombreuses questions encore ouvertes pour obtenir des réponses. Démêler plusieurs aspects de ce thème fera l'objet de la deuxième partie de la thèse. En particulier, nous allons étudier le nombre moyen de sites distincts visités au cours d'une marche aléatoire et caractériser son comportement en fonction de la topologie du graphe. Enfin, nous allons aborder les événements rares statistiques associées aux marches aléatoires sur les réseaux en utilisant le ‘’Large deviations formalism’’. Deux types de transitions de phase dynamiques vont se poser à partir de simulations numériques. Nous allons conclure décrivant les principaux résultats d'une œuvre indépendante développée dans le cadre de la physique hors de l'équilibre. Un système résoluble en deux particules browniens entouré par un bain thermique sera étudiée fournissant des détails sur une interaction à médiation par du bain résultant de la présence du bain
In recent years, several problems have been studied at the interface between statistical physics and computer science. The reason is that these problems can often be reinterpreted in the language of the physics of disordered systems, where a large number of variables interact through local fields that depend on the state of the surrounding neighborhood. Among the numerous applications of combinatorial optimisation, optimal routing on communication networks is the subject of the first part of the thesis. We will exploit the cavity method to formulate efficient message-passing algorithms and thus solve several variants of the problem through their numerical implementation. In a second stage, we will describe a model to approximate the dynamic version of the cavity method, which allows the complexity of the problem to be decreased from exponential to polynomial in time. This will be obtained by using the Matrix Product State formalism of quantum mechanics. Another topic that has attracted much interest in the statistical physics of dynamic processes is the random walk on networks. The theory has been developed over many years for the case where the underlying topology is a d-dimensional lattice. By contrast, the case of random networks has been tackled only in the past decade, leaving many questions still open. Unravelling several aspects of this topic will be the subject of the second part of the thesis. In particular, we will study the average number of distinct sites visited during a random walk and characterize its behaviour as a function of the graph topology. Finally, we will address the rare-event statistics associated with random walks on networks by using the large-deviations formalism. Two types of dynamic phase transitions will arise from numerical simulations, unveiling important aspects of these problems. We will conclude by outlining the main results of an independent work developed in the context of out-of-equilibrium physics. A solvable system made of two Brownian particles surrounded by a thermal bath will be studied, providing details about a bath-mediated interaction arising from the presence of the bath.
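The second part of this abstract concerns the mean number of distinct sites visited by a random walk on a random network. A direct Monte Carlo estimate of that quantity is easy to write down and can serve as a baseline for the analytical results; the sketch below is my own illustration, not the thesis code, and the graph size, degree and walk length are arbitrary choices.

```python
# Illustration only (not the thesis code): Monte Carlo estimate of the average
# number of distinct nodes visited by a t-step random walk on a random regular graph.
import random
import networkx as nx

def mean_distinct_sites(n=2000, degree=3, t=500, walks=200, seed=0):
    rng = random.Random(seed)
    g = nx.random_regular_graph(degree, n, seed=seed)
    total = 0
    for _ in range(walks):
        node = rng.randrange(n)
        visited = {node}
        for _ in range(t):
            node = rng.choice(list(g.neighbors(node)))   # one step of the walk
            visited.add(node)
        total += len(visited)
    return total / walks

print(mean_distinct_sites())
```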
3

Barbier, Jean. "Statistical physics and approximate message-passing algorithms for sparse linear estimation problems in signal processing and coding theory." Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCC130.

Abstract:
Cette thèse s’intéresse à l’application de méthodes de physique statistique des systèmes désordonnés ainsi que de l’inférence à des problèmes issus du traitement du signal et de la théorie du codage, plus précisément, aux problèmes parcimonieux d’estimation linéaire. Les outils utilisés sont essentiellement les modèles graphiques et l’algorithme approximé de passage de messages ainsi que la méthode de la cavité (appelée analyse de l’évolution d’état dans le contexte du traitement de signal) pour son analyse théorique. Nous aurons également recours à la méthode des répliques de la physique des systèmes désordonnées qui permet d’associer aux problèmes rencontrés une fonction de coût appelé potentiel ou entropie libre en physique. Celle-ci permettra de prédire les différentes phases de complexité typique du problème, en fonction de paramètres externes tels que le niveau de bruit ou le nombre de mesures liées au signal auquel l’on a accès : l’inférence pourra être ainsi typiquement simple, possible mais difficile et enfin impossible. Nous verrons que la phase difficile correspond à un régime où coexistent la solution recherchée ainsi qu’une autre solution des équations de passage de messages. Dans cette phase, celle-ci est un état métastable et ne représente donc pas l’équilibre thermodynamique. Ce phénomène peut-être rapproché de la surfusion de l’eau, bloquée dans l’état liquide à une température où elle devrait être solide pour être à l’équilibre. Via cette compréhension du phénomène de blocage de l’algorithme, nous utiliserons une méthode permettant de franchir l’état métastable en imitant la stratégie adoptée par la nature pour la surfusion : la nucléation et le couplage spatial. Dans de l’eau en état métastable liquide, il suffit d’une légère perturbation localisée pour que se créer un noyau de cristal qui va rapidement se propager dans tout le système de proche en proche grâce aux couplages physiques entre atomes. Le même procédé sera utilisé pour aider l’algorithme à retrouver le signal, et ce grâce à l’introduction d’un noyau contenant de l’information locale sur le signal. Celui-ci se propagera ensuite via une "onde de reconstruction" similaire à la propagation de proche en proche du cristal dans l’eau. Après une introduction à l’inférence statistique et aux problèmes d’estimation linéaires, on introduira les outils nécessaires. Seront ensuite présentées des applications de ces notions. Celles-ci seront divisées en deux parties. La partie traitement du signal se concentre essentiellement sur le problème de l’acquisition comprimée où l’on cherche à inférer un signal parcimonieux dont on connaît un nombre restreint de projections linéaires qui peuvent être bruitées. Est étudiée en profondeur l’influence de l’utilisation d’opérateurs structurés à la place des matrices aléatoires utilisées originellement en acquisition comprimée. Ceux-ci permettent un gain substantiel en temps de traitement et en allocation de mémoire, conditions nécessaires pour le traitement algorithmique de très grands signaux. Nous verrons que l’utilisation combinée de tels opérateurs avec la méthode du couplage spatial permet d’obtenir un algorithme de reconstruction extrê- mement optimisé et s’approchant des performances optimales. Nous étudierons également le comportement de l’algorithme confronté à des signaux seulement approximativement parcimonieux, question fondamentale pour l’application concrète de l’acquisition comprimée sur des signaux physiques réels. 
Une application directe sera étudiée au travers de la reconstruction d’images mesurées par microscopie à fluorescence. La reconstruction d’images dites "naturelles" sera également étudiée. En théorie du codage, seront étudiées les performances du décodeur basé sur le passage de message pour deux modèles distincts de canaux continus. Nous étudierons un schéma où le signal inféré sera en fait le bruit que l’on pourra ainsi soustraire au signal reçu. Le second, les codes de superposition parcimonieuse pour le canal additif Gaussien est le premier exemple de schéma de codes correcteurs d’erreurs pouvant être directement interprété comme un problème d’acquisition comprimée structuré. Dans ce schéma, nous appliquerons une grande partie des techniques étudiée dans cette thèse pour finalement obtenir un décodeur ayant des résultats très prometteurs à des taux d’information transmise extrêmement proches de la limite théorique de transmission du canal
This thesis is interested in the application of statistical physics methods and inference to signal processing and coding theory, more precisely, to sparse linear estimation problems. The main tools are essentially the graphical models and the approximate message-passing algorithm, together with the cavity method (referred to as the state evolution analysis in the signal processing context) for its theoretical analysis. We will also use the replica method of the statistical physics of disordered systems, which allows us to associate with the studied problems a cost function referred to as the potential or free entropy in physics. It allows us to predict the different phases of typical complexity of the problem as a function of external parameters such as the noise level or the number of measurements one has about the signal: the inference can be typically easy, hard or impossible. We will see that the hard phase corresponds to a regime of coexistence of the actual solution together with another unwanted solution of the message-passing equations. In this phase, it represents a metastable state which is not the true equilibrium solution. This phenomenon can be linked to supercooled water blocked in the liquid state below its freezing critical temperature. Thanks to this understanding of the blocking phenomenon of the algorithm, we will use a method that allows us to overcome the metastability by mimicking the strategy adopted by nature itself for supercooled water: nucleation and spatial coupling. In supercooled water, a weak localized perturbation is enough to create a crystal nucleus that will propagate through the whole medium thanks to the physical couplings between nearby atoms. The same process will help the algorithm to find the signal, thanks to the introduction of a nucleus containing local information about the signal. It will then spread as a "reconstruction wave" similar to the crystal in the water. After an introduction to statistical inference and sparse linear estimation, we will introduce the necessary tools. Then we will move to applications of these notions. They will be divided into two parts. The signal processing part will focus essentially on the compressed sensing problem, where we seek to infer a sparse signal from a small number of linear projections of it that can be noisy. We will study in detail the influence of structured operators instead of the purely random ones used originally in compressed sensing. These allow a substantial gain in computational complexity and necessary memory allocation, which are necessary conditions in order to work with very large signals. We will see that the combined use of such operators with spatial coupling allows the implementation of a highly optimized algorithm able to reach near-optimal performance. We will also study the algorithm's behavior for the reconstruction of approximately sparse signals, a fundamental question for the application of compressed sensing to real-life problems. A direct application will be studied via the reconstruction of images measured by fluorescence microscopy. The reconstruction of "natural" images will be considered as well. In coding theory, we will look at the message-passing decoding performance for two distinct real noisy channel models. A first scheme, where the signal to infer will be the noise itself, will be presented. The second one, the sparse superposition codes for the additive white Gaussian noise channel, is the first example of an error-correction scheme directly interpreted as a structured compressed sensing problem. Here we will apply all the tools developed in this thesis to finally obtain a very promising decoder that allows decoding at very high transmission rates, very close to the fundamental channel limit.
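For readers who have not met approximate message passing before, its simplest compressed-sensing form has the well-known structure sketched below: a soft-thresholding step plus an Onsager correction term. This is a generic sketch of the standard AMP recursion, not the Bayesian or spatially coupled variants developed in the thesis, and the threshold rule and constants are illustrative assumptions.

```python
# A generic sketch of the standard AMP recursion with soft thresholding for
# y = A x + w (Donoho-Maleki-Montanari form); constants are illustrative only.
import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(A, y, n_iter=30, alpha=2.0):
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        theta = alpha * np.linalg.norm(z) / np.sqrt(m)   # threshold from residual energy
        x_new = soft(x + A.T @ z, theta)                 # denoise the effective observation
        onsager = (z / m) * np.count_nonzero(x_new)      # Onsager correction term
        z = y - A @ x_new + onsager
        x = x_new
    return x

# toy usage: recover a 20-sparse vector from 200 noisy random projections
rng = np.random.default_rng(0)
n, m, k = 400, 200, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = amp(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```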
4

Aubin, Benjamin. "Mean-field methods and algorithmic perspectives for high-dimensional machine learning." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASP083.

Abstract:
À une époque où l'utilisation des données a atteint un niveau sans précédent, l'apprentissage machine, et plus particulièrement l'apprentissage profond basé sur des réseaux de neurones artificiels, a été responsable de très importants progrès pratiques. Leur utilisation est désormais omniprésente dans de nombreux domaines d'application, de la classification d'images à la reconnaissance vocale en passant par la prédiction de séries temporelles et l'analyse de texte. Pourtant, la compréhension de nombreux algorithmes utilisés en pratique est principalement empirique et leur comportement reste difficile à analyser. Ces lacunes théoriques soulèvent de nombreuses questions sur leur efficacité et leurs potentiels risques. Établir des fondements théoriques sur lesquels asseoir les observations numériques est devenu l'un des défis majeurs de la communauté scientifique.La principale difficulté qui se pose lors de l’analyse de la plupart des algorithmes d'apprentissage automatique est de traiter analytiquement et numériquement un grand nombre de variables aléatoires en interaction. Dans ce manuscrit, nous revisitons une approche basée sur les outils de la physique statistique des systèmes désordonnés. Développés au long d’une riche littérature, ils ont été précisément conçus pour décrire le comportement macroscopique d'un grand nombre de particules, à partir de leurs interactions microscopiques. Au cœur de ce travail, nous mettons fortement à profit le lien profond entre la méthode des répliques et les algorithmes de passage de messages pour mettre en lumière les diagrammes de phase de divers modèles théoriques, en portant l’accent sur les potentiels écarts entre seuils statistiques et algorithmiques. Nous nous concentrons essentiellement sur des tâches et données synthétiques générées dans le paradigme enseignant-élève. En particulier, nous appliquons ces méthodes à champ moyen à l'analyse Bayes-optimale des machines à comité, à l'analyse des bornes de généralisation de Rademacher pour les perceptrons, et à la minimisation du risque empirique dans le contexte des modèles linéaires généralisés. Enfin, nous développons un cadre pour analyser des modèles d'estimation avec des informations à priori structurées, produites par exemple par des réseaux de neurones génératifs avec des poids aléatoires
At a time when the use of data has reached an unprecedented level, machine learning, and more specifically deep learning based on artificial neural networks, has been responsible for very important practical advances. Their use is now ubiquitous in many fields of application, from image classification and text mining to speech recognition, time series prediction and text analysis. However, the understanding of many algorithms used in practice is mainly empirical and their behavior remains difficult to analyze. These theoretical gaps raise many questions about their effectiveness and potential risks. Establishing theoretical foundations on which to base numerical observations has become one of the fundamental challenges of the scientific community. The main difficulty that arises in the analysis of most machine learning algorithms is to handle, analytically and numerically, a large number of interacting random variables. In this manuscript, we revisit an approach based on the tools of the statistical physics of disordered systems. Developed through a rich literature, they have been precisely designed to infer the macroscopic behavior of a large number of particles from their microscopic interactions. At the heart of this work, we strongly capitalize on the deep connection between the replica method and message-passing algorithms in order to shed light on the phase diagrams of various theoretical models, with an emphasis on the potential differences between statistical and algorithmic thresholds. We essentially focus on synthetic tasks and data generated in the teacher-student paradigm. In particular, we apply these mean-field methods to the Bayes-optimal analysis of committee machines, to the worst-case analysis of Rademacher generalization bounds for perceptrons, and to empirical risk minimization in the context of generalized linear models. Finally, we develop a framework to analyze estimation models with structured prior information, produced for instance by generative models based on deep neural networks with random weights.
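The "teacher-student paradigm" mentioned above simply means that the data are labelled by a hidden "teacher" model of the same family the "student" is trained on. A minimal sketch follows; it is my own toy example, unrelated to the committee machines or Rademacher analyses of the thesis.

```python
# A toy teacher-student setup: a random "teacher" vector labels Gaussian data,
# and a "student" generalized linear model is fit by gradient descent on the
# logistic loss; the overlap measures how well the student aligns with the teacher.
import numpy as np

rng = np.random.default_rng(7)
n, d = 2000, 100
w_teacher = rng.standard_normal(d)
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = np.sign(X @ w_teacher)                      # noiseless teacher labels

w = np.zeros(d)
lr = 0.5
for _ in range(500):                            # plain empirical risk minimization
    margins = y * (X @ w)
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= lr * grad

overlap = w @ w_teacher / (np.linalg.norm(w) * np.linalg.norm(w_teacher))
print(f"teacher-student overlap: {overlap:.3f}")
```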
5

Sahin, Serdar. "Advanced receivers for distributed cooperation in mobile ad hoc networks." Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0089.

Abstract:
Les réseaux ad hoc mobiles (MANETs) sont des systèmes de communication sans fil rapidement déployables et qui fonctionnent avec une coordination minimale, ceci afin d'éviter les pertes d'efficacité spectrale induites par la signalisation. Les stratégies de transmissions coopératives présentent un intérêt pour les MANETs, mais la nature distribuée de tels protocoles peut augmenter le niveau d'interférence avec un impact autant plus sévère que l'on cherche à pousser les limites des efficacités énergétique et spectrale. L'impact de l'interférence doit alors être réduit par l'utilisation d'algorithmes de traitement du signal au niveau de la couche PHY, avec une complexité calculatoire raisonnable. Des avancées récentes sur les techniques de conception de récepteurs numériques itératifs proposent d'exploiter l'inférence bayésienne approximée et des techniques de passage de message associés afin d'améliorer le potentiel des turbo-détecteurs plus classiques. Entre autres, la propagation d'espérance (EP) est une technique flexible, qui offre des compromis attractifs de complexité et de performance dans des situations où la propagation de croyance conventionnel est limité par sa complexité calculatoire. Par ailleurs, grâce à des techniques émergentes de l'apprentissage profond, de telles structures itératives peuvent être projetés vers des réseaux de détection profonds, où l'apprentissage des hyper-paramètres algorithmiques améliore davantage les performances. Dans cette thèse nous proposons des égaliseurs à retour de décision à réponse impulsionnelle finie basée sur la propagation d'espérance (EP) qui apportent des améliorations significatives, en particulier pour des applications à haute efficacité spectrale vis à vis des turbo-détecteurs conventionnels, tout en ayant l'avantage d'être asymptotiquement prédictibles. Nous proposons un cadre générique pour la conception de récepteurs dans le domaine fréquentiel, afin d'obtenir des architectures de détection avec une faible complexité calculatoire. Cette approche est analysée théoriquement et numériquement, avec un accent mis sur l'égalisation des canaux sélectifs en fréquence, et avec des extensions pour de la détection dans des canaux qui varient dans le temps ou pour des systèmes multi-antennes. Nous explorons aussi la conception de détecteurs multi-utilisateurs, ainsi que l'impact de l'estimation du canal, afin de comprendre le potentiel et le limite de cette approche. Pour finir, nous proposons une méthode de prédiction performance à taille finie, afin de réaliser une abstraction de lien pour l'égaliseur domaine fréquentiel à base d'EP. L'impact d'un modélisation plus fine de la couche PHY est évalué dans le contexte de la diffusion coopérative pour des MANETs tactiques, grâce à un simulateur flexible de couche MAC
Mobile ad hoc networks (MANETs) are rapidly deployable wireless communications systems, operating with minimal coordination in order to avoid spectral efficiency losses caused by overhead. Cooperative transmission schemes are attractive for MANETs, but the distributed nature of such protocols comes with an increased level of interference, whose impact is further amplified by the need to push the limits of energy and spectral efficiency. Hence, the impact of interference has to be mitigated through the use of PHY-layer signal processing algorithms with reasonable computational complexity. Recent advances in iterative digital receiver design techniques exploit approximate Bayesian inference and derivative message-passing techniques to improve the capabilities of well-established turbo detectors. In particular, expectation propagation (EP) is a flexible technique which offers attractive complexity-performance trade-offs in situations where conventional belief propagation is limited by computational complexity. Moreover, thanks to emerging techniques in deep learning, such iterative structures are cast into deep detection networks, where learning the algorithmic hyper-parameters further improves receiver performance. In this thesis, EP-based finite-impulse response decision feedback equalizers are designed, and they achieve significant improvements, especially in high spectral efficiency applications, over more conventional turbo-equalization techniques, while having the advantage of being asymptotically predictable. A framework for designing frequency-domain EP-based receivers is proposed, in order to obtain detection architectures with low computational complexity. This framework is theoretically and numerically analysed with a focus on channel equalization, and it is then also extended to handle detection for time-varying channels and multiple-antenna systems. The design of multiple-user detectors and the impact of channel estimation are also explored to understand the capabilities and limits of this framework. Finally, a finite-length performance prediction method is presented for carrying out link abstraction for the EP-based frequency-domain equalizer. The impact of accurate physical layer modelling is evaluated in the context of cooperative broadcasting in tactical MANETs, thanks to a flexible MAC-level simulator.
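As background for the frequency-domain receivers discussed here, the sketch below shows the classical one-tap frequency-domain MMSE equalizer for a cyclic-prefix block, i.e. the conventional baseline that EP-based designs refine. It is deliberately not the EP receiver of the thesis; the channel taps, block length and SNR are arbitrary choices.

```python
# Baseline only (NOT the EP design of the thesis): one-tap frequency-domain MMSE
# equalization of a cyclic-prefix BPSK block over a toy multipath channel.
import numpy as np

rng = np.random.default_rng(1)
N, snr_db = 256, 15
h = np.array([0.8, 0.5, 0.3])                        # toy multipath channel
x = rng.choice([-1.0, 1.0], size=N)                  # BPSK block
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, N)))   # circular convolution
sigma2 = 10.0 ** (-snr_db / 10.0)
y = y + np.sqrt(sigma2) * rng.standard_normal(N)

H = np.fft.fft(h, N)
X_hat = np.conj(H) * np.fft.fft(y) / (np.abs(H) ** 2 + sigma2)   # per-subcarrier MMSE gain
x_hat = np.sign(np.real(np.fft.ifft(X_hat)))
print("bit errors:", int(np.sum(x_hat != x)))
```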
6

Saade, Alaa. "Spectral inference methods on sparse graphs : theory and applications." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEE024/document.

Abstract:
Face au déluge actuel de données principalement non structurées, les graphes ont démontré, dans une variété de domaines scientifiques, leur importance croissante comme language abstrait pour décrire des interactions complexes entre des objets complexes. L’un des principaux défis posés par l’étude de ces réseaux est l’inférence de propriétés macroscopiques à grande échelle, affectant un grand nombre d’objets ou d’agents, sur la seule base des interactions microscopiquesqu’entretiennent leurs constituants élémentaires. La physique statistique, créée précisément dans le but d’obtenir les lois macroscopiques de la thermodynamique à partir d’un modèle idéal de particules en interaction, fournit une intuition décisive dans l’étude des réseaux complexes.Dans cette thèse, nous utilisons des méthodes issues de la physique statistique des systèmes désordonnés pour mettre au point et analyser de nouveaux algorithmes d’inférence sur les graphes. Nous nous concentrons sur les méthodes spectrales, utilisant certains vecteurs propres de matrices bien choisies, et sur les graphes parcimonieux, qui contiennent une faible quantité d’information. Nous développons une théorie originale de l’inférence spectrale, fondée sur une relaxation de l’optimisation de certaines énergies libres en champ moyen. Notre approche est donc entièrement probabiliste, et diffère considérablement des motivations plus classiques fondées sur l’optimisation d’une fonction de coût. Nous illustrons l’efficacité de notre approchesur différents problèmes, dont la détection de communautés, la classification non supervisée à partir de similarités mesurées aléatoirement, et la complétion de matrices
In an era of unprecedented deluge of (mostly unstructured) data, graphs are proving more and more useful, across the sciences, as a flexible abstraction to capture complex relationships between complex objects. One of the main challenges arising in the study of such networks is the inference of macroscopic, large-scale properties affecting a large number of objects, based solely on the microscopic interactions between their elementary constituents. Statistical physics, precisely created to recover the macroscopic laws of thermodynamics from an idealized model of interacting particles, provides significant insight to tackle such complex networks. In this dissertation, we use methods derived from the statistical physics of disordered systems to design and study new algorithms for inference on graphs. Our focus is on spectral methods, based on certain eigenvectors of carefully chosen matrices, and sparse graphs, containing only a small amount of information. We develop an original theory of spectral inference based on a relaxation of various mean-field free energy optimizations. Our approach is therefore fully probabilistic, and contrasts with more traditional motivations based on the optimization of a cost function. We illustrate the efficiency of our approach on various problems, including community detection, randomized similarity-based clustering, and matrix completion.
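The abstract speaks of spectral methods "based on certain eigenvectors of carefully chosen matrices" on sparse graphs. One operator associated with this line of work is the Bethe Hessian H(r) = (r² − 1)I − rA + D; naming it here is my assumption, since the abstract does not specify the matrix, and the heuristic choice of r below is likewise an assumption.

```python
# Illustration under explicit assumptions: build the Bethe Hessian of a sparse graph
# and keep the eigenvectors attached to its negative eigenvalues as an embedding
# that can then be clustered (e.g. with k-means) for community detection.
import numpy as np
import networkx as nx

def bethe_hessian_embedding(g, r=None):
    A = nx.to_numpy_array(g)
    d = A.sum(axis=1)
    if r is None:
        r = np.sqrt(d.mean())                   # simple heuristic choice of r
    H = (r ** 2 - 1.0) * np.eye(len(d)) - r * A + np.diag(d)
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, vals < 0]                    # columns spanning the informative subspace

g = nx.planted_partition_graph(2, 250, 0.03, 0.005, seed=1)
emb = bethe_hessian_embedding(g)
print(emb.shape)                                # cluster these rows to recover the groups
```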
7

Diaconu, Raluca. "Passage à l'échelle pour les mondes virtuels." Electronic Thesis or Diss., Paris 6, 2015. http://www.theses.fr/2015PA066090.

Abstract:
La réalité mixe, les jeux en ligne massivement multijoueur (MMOGs), les mondes virtuels et le cyberespace sont des concepts extrêmement attractifs. Mais leur déploiement à large échelle reste difficile et il est en conséquence souvent évité.La contribution principale de la thèse réside dans le système distribué Kiwano, qui permet à un nombre illimité d'avatars de peupler et d'interagir simultanément dans un même monde contigu. Dans Kiwano nous utilisons la triangulation de Delaunay pour fournir à chaque avatar un nombre constant de voisins en moyenne, indépendamment de leur densité ou distribution géographique. Le nombre d'interactions entre les avatars et les calculs inhérents sont bornés, ce qui permet le passage à l'échelle du système.La charge est repartie sur plusieurs machines qui regroupent sur un même nœud les avatars voisins de façon contiguë dans le graphe de Delaunay. L'équilibrage de la charge se fait de manière contiguë et dynamique, en suivant la philosophie des réseaux pair-à-pair (peer-to-peer overlays). Cependant ce principe est adapté au contexte de l'informatique dématérialisée (cloud computing).Le nombre optimal d'avatars par CPU et les performances de notre système ont été évalués en simulant des dizaines de milliers d'avatars connectés à la même instance de Kiwano tournant à travers plusieurs centres de traitement de données.Nous proposons également trois applications concrètes qui utilisent Kiwano : Manycraft est une architecture distribuée capable de supporter un nombre arbitrairement grand d'utilisateurs cohabitant dans le même espace Minecraft, OneSim, qui permet à un nombre illimité d'usagers d'être ensemble dans la même région de Second Life et HybridEarth, un monde en réalité mixte où avatars et personnes physiques sont présents et interagissent dans un même espace: la Terre
Virtual worlds attract millions of users and these popular applications --supported by gigantic data centers with myriads of processors-- are routinely accessed. However, surprisingly, virtual worlds are still unable to host simultaneously more than a few hundred users in the same contiguous space.The main contribution of the thesis is Kiwano, a distributed system enabling an unlimited number of avatars to simultaneously evolve and interact in a contiguous virtual space. In Kiwano we employ the Delaunay triangulation to provide each avatar with a constant number of neighbors independently of their density or distribution. The avatar-to-avatar interactions and related computations are then bounded, allowing the system to scale. The load is constantly balanced among Kiwano's nodes which adapt and take in charge sets of avatars according to their geographic proximity. The optimal number of avatars per CPU and the performances of our system have been evaluated simulating tens of thousands of avatars connecting to a Kiwano instance running across several data centers, with results well beyond the current state-of-the-art.We also propose Kwery, a distributed spatial index capable to scale dynamic objects of virtual worlds. Kwery performs efficient reverse geolocation queries on large numbers of moving objects updating their position at arbitrary high frequencies. We use a distributed spatial index on top of a self-adaptive tree structure. Each node of the system hosts and answers queries on a group of objects in a zone, which is the minimal axis-aligned rectangle. They are chosen based on their proximity and the load of the node. Spatial queries are then answered only by the nodes with meaningful zones, that is, where the node's zone intersects the query zone.Kiwano has been successfully implemented for HybridEarth, a mixed reality world, Manycraft, our scalable multiplayer Minecraft map, and discussed for OneSim, a distributed Second Life architecture. By handling avatars separately, we show interoperability between these virtual worlds.With Kiwano and Kwery we provide the first massively distributed and self-adaptive solutions for virtual worlds suitable to run in the cloud. The results, in terms of number of avatars per CPU, exceed by orders of magnitude the performances of current state-of-the-art implementations. This indicates Kiwano to be a cost effective solution for the industry. The open API for our first implementation is available at \url{http://kiwano.li}
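The neighbourhood rule described in this abstract (each avatar interacts with its Delaunay neighbours, about six on average in the plane) is easy to reproduce in a centralized toy form. The sketch below uses SciPy in place of Kiwano's distributed implementation; the map size and avatar count are arbitrary.

```python
# Centralized toy version of the Delaunay neighbour rule: each avatar only interacts
# with its Delaunay neighbours, whose average number stays constant in 2-D.
import numpy as np
from scipy.spatial import Delaunay

def delaunay_neighbors(positions):
    tri = Delaunay(positions)
    indptr, indices = tri.vertex_neighbor_vertices
    return {i: indices[indptr[i]:indptr[i + 1]].tolist() for i in range(len(positions))}

rng = np.random.default_rng(42)
avatars = rng.uniform(0.0, 1000.0, size=(5000, 2))      # 5000 avatars on a 2-D map
nbrs = delaunay_neighbors(avatars)
print(sum(len(v) for v in nbrs.values()) / len(nbrs))   # average neighbour count, close to 6
```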
8

Diaconu, Raluca. "Passage à l'échelle pour les mondes virtuels." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066090/document.

Abstract:
La réalité mixe, les jeux en ligne massivement multijoueur (MMOGs), les mondes virtuels et le cyberespace sont des concepts extrêmement attractifs. Mais leur déploiement à large échelle reste difficile et il est en conséquence souvent évité.La contribution principale de la thèse réside dans le système distribué Kiwano, qui permet à un nombre illimité d'avatars de peupler et d'interagir simultanément dans un même monde contigu. Dans Kiwano nous utilisons la triangulation de Delaunay pour fournir à chaque avatar un nombre constant de voisins en moyenne, indépendamment de leur densité ou distribution géographique. Le nombre d'interactions entre les avatars et les calculs inhérents sont bornés, ce qui permet le passage à l'échelle du système.La charge est repartie sur plusieurs machines qui regroupent sur un même nœud les avatars voisins de façon contiguë dans le graphe de Delaunay. L'équilibrage de la charge se fait de manière contiguë et dynamique, en suivant la philosophie des réseaux pair-à-pair (peer-to-peer overlays). Cependant ce principe est adapté au contexte de l'informatique dématérialisée (cloud computing).Le nombre optimal d'avatars par CPU et les performances de notre système ont été évalués en simulant des dizaines de milliers d'avatars connectés à la même instance de Kiwano tournant à travers plusieurs centres de traitement de données.Nous proposons également trois applications concrètes qui utilisent Kiwano : Manycraft est une architecture distribuée capable de supporter un nombre arbitrairement grand d'utilisateurs cohabitant dans le même espace Minecraft, OneSim, qui permet à un nombre illimité d'usagers d'être ensemble dans la même région de Second Life et HybridEarth, un monde en réalité mixte où avatars et personnes physiques sont présents et interagissent dans un même espace: la Terre
Virtual worlds attract millions of users and these popular applications --supported by gigantic data centers with myriads of processors-- are routinely accessed. However, surprisingly, virtual worlds are still unable to host simultaneously more than a few hundred users in the same contiguous space.The main contribution of the thesis is Kiwano, a distributed system enabling an unlimited number of avatars to simultaneously evolve and interact in a contiguous virtual space. In Kiwano we employ the Delaunay triangulation to provide each avatar with a constant number of neighbors independently of their density or distribution. The avatar-to-avatar interactions and related computations are then bounded, allowing the system to scale. The load is constantly balanced among Kiwano's nodes which adapt and take in charge sets of avatars according to their geographic proximity. The optimal number of avatars per CPU and the performances of our system have been evaluated simulating tens of thousands of avatars connecting to a Kiwano instance running across several data centers, with results well beyond the current state-of-the-art.We also propose Kwery, a distributed spatial index capable to scale dynamic objects of virtual worlds. Kwery performs efficient reverse geolocation queries on large numbers of moving objects updating their position at arbitrary high frequencies. We use a distributed spatial index on top of a self-adaptive tree structure. Each node of the system hosts and answers queries on a group of objects in a zone, which is the minimal axis-aligned rectangle. They are chosen based on their proximity and the load of the node. Spatial queries are then answered only by the nodes with meaningful zones, that is, where the node's zone intersects the query zone.Kiwano has been successfully implemented for HybridEarth, a mixed reality world, Manycraft, our scalable multiplayer Minecraft map, and discussed for OneSim, a distributed Second Life architecture. By handling avatars separately, we show interoperability between these virtual worlds.With Kiwano and Kwery we provide the first massively distributed and self-adaptive solutions for virtual worlds suitable to run in the cloud. The results, in terms of number of avatars per CPU, exceed by orders of magnitude the performances of current state-of-the-art implementations. This indicates Kiwano to be a cost effective solution for the industry. The open API for our first implementation is available at \url{http://kiwano.li}
9

Kurisummoottil, Thomas Christo. "Sparse Bayesian learning, beamforming techniques and asymptotic analysis for massive MIMO." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS231.

Abstract:
Des antennes multiples du côté de la station de base peuvent être utilisées pour améliorer l'efficacité spectrale et l'efficacité énergétique des technologies sans fil de nouvelle génération. En effet, le multi-entrées et sorties multiples massives (MIMO) est considéré comme une technologie prometteuse pour apporter les avantages susmentionnés pour la norme sans fil de cinquième génération, communément appelée 5G New Radio (5G NR). Dans cette monographie, nous explorerons un large éventail de sujets potentiels dans Multi-userMIMO (MU-MIMO) pertinents pour la 5G NR,• Conception de la formation de faisceaux (BF) maximisant le taux de somme et robustesse à l'état de canal partiel informations à l'émetteur (CSIT)• Analyse asymptotique des différentes techniques BF en MIMO massif et• Méthodes d'estimation de canal bayésien utilisant un apprentissage bayésien clairsemé.L'une des techniques potentielles proposées dans la littérature pour contourner la complexité matérielle et la consommation d'énergie dans le MIMO massif est la formation de faisceaux hybrides. Nous proposons une conception de phaseur analogique globalement optimale utilisant la technique du recuit déterministe, qui nous a valu le prix du meilleur article étudiant. En outre, afin d'analyser le comportement des grands systèmes des systèmes MIMO massifs, nous avons utilisé des techniques de la théorie des matrices aléatoires et obtenu des expressions de taux de somme simplifiées. Enfin, nous avons également examiné le problème de récupération de signal bayésien clairsemé en utilisant la technique appelée apprentissage bayésien clairsemé (SBL)
Multiple antennas at the base station side can be used to enhance the spectral efficiency and energy efficiency of next-generation wireless technologies. Indeed, massive multi-input multi-output (MIMO) is seen as one promising technology to bring the aforementioned benefits to the fifth-generation wireless standard, commonly known as 5G New Radio (5G NR). In this monograph, we will explore a wide range of potential topics in multi-user MIMO (MU-MIMO) relevant to 5G NR:
• Sum-rate-maximizing beamforming (BF) design and robustness to partial channel state information at the transmitter (CSIT)
• Asymptotic analysis of the various BF techniques in massive MIMO and
• Bayesian channel estimation methods using sparse Bayesian learning.
One of the potential techniques proposed in the literature to circumvent the hardware complexity and power consumption in massive MIMO is hybrid beamforming. We propose a globally optimal analog phasor design using the technique of deterministic annealing, which won us the best student paper award. Further, in order to analyze the large-system behaviour of massive MIMO systems, we utilized techniques from random matrix theory and obtained simplified sum-rate expressions. Finally, we also looked at the Bayesian sparse signal recovery problem using the technique called sparse Bayesian learning (SBL). We proposed low-complexity SBL algorithms using a combination of approximate inference techniques such as belief propagation (BP), expectation propagation and mean-field (MF) variational Bayes. We proposed an optimal partitioning of the different parameters (in the factor graph) into either MF or BP nodes based on a Fisher information matrix analysis.
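For reference, the classical EM form of sparse Bayesian learning (SBL), which the low-complexity BP/EP/mean-field variants of this thesis approximate, looks as follows. This is a generic textbook-style sketch under a known noise variance, not the algorithms proposed in the thesis.

```python
# Textbook-style EM iteration for sparse Bayesian learning on y = A x + n:
# alternate a Gaussian posterior for x with an update of the per-coefficient variances.
import numpy as np

def sbl_em(A, y, sigma2=1e-2, n_iter=100):
    m, n = A.shape
    gamma = np.ones(n)                           # prior variances of the coefficients
    mu = np.zeros(n)
    for _ in range(n_iter):
        # E-step: Gaussian posterior of x given the current hyperparameters
        Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ A.T @ y / sigma2
        # M-step: update each gamma_i from the posterior moments
        gamma = np.maximum(mu ** 2 + np.diag(Sigma), 1e-12)
    return mu, gamma

# toy usage: most learned gamma_i shrink towards zero, revealing the support
rng = np.random.default_rng(3)
n, m, k = 100, 40, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 1.0
y = A @ x_true + 0.05 * rng.standard_normal(m)
x_hat, gamma = sbl_em(A, y, sigma2=0.05 ** 2)
print(np.sort(np.argsort(gamma)[-k:]))           # indices of the largest learned variances
```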
10

Adjiman, Philippe. "Raisonnement pair-à-pair en logique propositionnelle : algorithmes, passage à l'échelle et applications." Paris 11, 2006. http://www.theses.fr/2006PA112128.

Abstract:
Dans un système d'inférence pair-à-pair, chaque pair peut raisonner localement mais peut également solliciter son voisinage constitué des pairs avec lesquels il partage une partie de son vocabulaire. Une caractéristique importante des systèmes d'inférence pair-à-pair est que la théorie globale (l'union des théories de tous les pairs) n'est pas connue. La première contribution majeure de cette thèse est de proposer le premier algorithme de calcul de conséquence dans un environnement pair-à-pair : DeCA. L'algorithme calcul les conséquences graduellement en partant des pairs sollicités jusqu'aux pairs de plus en plus distant. On fournit une condition suffisante sur le graphe de voisinage du système d'inférence pair-à-pair, garantissant la complétude de l'algorithme. Une autre contribution importante est l'application de ce cadre général de raisonnement distribué au contexte du web sémantique à travers les systèmes de gestion de données pair-à-pair SomeOWL et SomeRDFS. Ces systèmes permettent à chaque pair d'annoter (de catégoriser) ses données à l'aide d'ontologies simples et d'établir des liens sémantique, appelés " mappings ", entre son ontologie et celle de ses voisins. Les modèles de donnée de SomeOWL et SomeRDFS sont respectivement fondés sur les deux recommandations récentes du W3C pour le web sémantique : OWL et RDF(S). La dernière contribution de cette thèse est de fournir une étude expérimentale poussée du passage à l'échelle de l'infrastructure pair-à-pair que nous proposons, et ce sur des réseaux allant jusqu'à 1000 pairs
In a peer-to-peer inference system, each peer can reason locally but can also solicit some of its acquaintances, which are peers sharing part of its vocabulary. In this thesis, we consider peer-to-peer inference systems in which the local theory of each peer is a set of propositional clauses defined upon a local vocabulary. An important characteristic of peer-to-peer inference systems is that the global theory (the union of all peer theories) is not known. The first main contribution of this thesis is to provide the first consequence finding algorithm in a peer-to-peer setting: DeCA. It is anytime and computes consequences gradually from the solicited peer to peers that are more and more distant. We exhibit a sufficient condition on the acquaintance graph of the peer-to-peer inference system for guaranteeing the completeness of this algorithm. Another important contribution is to apply this general distributed reasoning setting to the setting of the Semantic Web through the SomeOWL and SomeRDFS peer-to-peer data management systems. Those systems allow each peer to annotate (categorize) its data using simple ontologies and to establish mappings with ontologies of its acquaintances. SomeOWL and SomeRDFS data models are respectively based on the two emerging W3C recommendations for the semantic web, namely OWL and RDF(S). The last contribution of this thesis is to provide an extensive experimental analysis of the scalability of the peer-to-peer infrastructure that we propose, on large networks of 1000 peers
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Algorithmes de passage en message"

1

Laurent, Xavier William. Message Pour un Passage. Lulu Press, Inc., 2017.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wilson, Walter T. The Gospel of Matthew. Wm. B. Eerdmans Publishing Co., 2022. http://dx.doi.org/10.5040/bci-0013.

Full text
Abstract:
What was the original purpose of the Gospel of Matthew? For whom was it written? In this magisterial two-volume commentary, Walter Wilson interprets Matthew as a catechetical work that expresses the ideological and institutional concerns of a faction of disaffected Jewish followers of Jesus in the late first century CE. Wilson’s compelling thesis frames Matthew’s Gospel as not only a continuation of the biblical story but also as a didactic narrative intended to shape the commitments and identity of a particular group that saw itself as a beleaguered, dissident minority. Thus, the text clarifies Jesus’s essential Jewish character as the “Son of David” while also portraying him in opposition to prominent religious leaders of his day – most notably the Pharisees – and open to cordial association with non-Jews. Through meticulous engagement with the Greek text of the Gospel, as well as relevant primary sources and secondary literature, Wilson offers a wealth of insight into the first book of the New Testament. After an introduction exploring the background of the text, its genre and literary features, and its theological orientation, Wilson explicates each passage of the Gospel with thorough commentary on the intended message to first-century readers about topics like morality, liturgy, mission, group discipline, and eschatology. Scholars, students, pastors, and all readers interested in what makes the Gospel of Matthew distinctive among the Synoptics will appreciate and benefit from Wilson’s deep contextualization of the text, informed by his years of studying the New Testament and Christian origins.
APA, Harvard, Vancouver, ISO, and other styles
3

Wilson, Walter T. The Gospel of Matthew. Wm. B. Eerdmans Publishing Co., 2022. http://dx.doi.org/10.5040/bci-0014.

Full text
Abstract:
What was the original purpose of the Gospel of Matthew? For whom was it written? In this magisterial two-volume commentary, Walter Wilson interprets Matthew as a catechetical work that expresses the ideological and institutional concerns of a faction of disaffected Jewish followers of Jesus in the late first century CE. Wilson’s compelling thesis frames Matthew’s Gospel as not only a continuation of the biblical story but also as a didactic narrative intended to shape the commitments and identity of a particular group that saw itself as a beleaguered, dissident minority. Thus, the text clarifies Jesus’s essential Jewish character as the “Son of David” while also portraying him in opposition to prominent religious leaders of his day – most notably the Pharisees – and open to cordial association with non-Jews. Through meticulous engagement with the Greek text of the Gospel, as well as relevant primary sources and secondary literature, Wilson offers a wealth of insight into the first book of the New Testament. After an introduction exploring the background of the text, its genre and literary features, and its theological orientation, Wilson explicates each passage of the Gospel with thorough commentary on the intended message to first-century readers about topics like morality, liturgy, mission, group discipline, and eschatology. Scholars, students, pastors, and all readers interested in what makes the Gospel of Matthew distinctive among the Synoptics will appreciate and benefit from Wilson’s deep contextualization of the text, informed by his years of studying the New Testament and Christian origins.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Algorithmes de passage en message"

1

Soron, Antony. "Le Message d’Andrée Chedid ou la condition sine qua non du « bon passage »." In Le Bon Passage, 195–205. Presses Universitaires de Bordeaux, 2016. http://dx.doi.org/10.4000/books.pub.15446.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mclay, Mark. "The end? Poverty politics and the ‘Reagan Revolution’, 1977–81." In The Republican Party and the War on Poverty: 1964-1981, 243–80. Edinburgh University Press, 2021. http://dx.doi.org/10.3366/edinburgh/9781474475525.003.0008.

Full text
Abstract:
This chapter explores the impact of the ‘Reagan Revolution’ on anti-poverty policies. It begins by charting Reagan’s path to the White House and how this was helped by a political environment that was turning away from government solutions to social problems. It shows how Reagan had been the War on Poverty’s chief opponent throughout his political career and that he was successful in continuing to prosecute his anti-welfare message during the 1980 election against Jimmy Carter. The heart of the chapter then shows how Reagan was able to put his anti-poverty message into policy, through the passage of the Omnibus Budget Reconciliation Act (OBRA). In doing so, Reagan demonstrated his talent as a party leader and his skill as a communicator.
APA, Harvard, Vancouver, ISO, and other styles
3

Katz, Wendy Jean. "Conclusion." In A True American, 151–62. Fordham University Press, 2022. http://dx.doi.org/10.5422/fordham/9780823298563.003.0008.

Full text
Abstract:
The conclusion considers the fate of Walcutt’s portrait of Commodore Perry during the Colonial Revival of the early twentieth century, when Progressive-era elites turned to the colonial past to reassert cultural control. The 1928 bronze reproduction of Walcutt’s statue for the Capitol of Rhode Island was a sign of the recurrence of nativism in this later period, which saw the passage of stringent and racist immigration laws. But Walcutt’s “fiery” portrait style continued to carry Young America’s message of expanded rights for ordinary people. Rhode Island, a bastion of “old stock” rule, was transitioning to less restricted voting requirements, which for the first time permitted its Catholic majority to have a voice. So too Walcutt’s affiliation with the Taft family in Ohio reflects both the strength of nativism within the Republican party and internationalists’ reaction against it.
APA, Harvard, Vancouver, ISO, and other styles
4

Murphy, Mary-Elizabeth B. "Introduction." In Jim Crow Capital, 1–14. University of North Carolina Press, 2018. http://dx.doi.org/10.5149/northcarolina/9781469646725.003.0001.

Full text
Abstract:
This introduction contextualizes black women’s politics within the historical and social landscape of political culture in black Washington. While African American women’s political activism stretched back to the seventeenth century, it was during the 1920s and 1930s that their political campaigns gained more visibility, and Washington, D.C. was a key location for this process. Inspired by the passage of the Nineteenth Amendment and emboldened by World War I’s message of democracy, black women formed partisan organizations, testified in Congress, weighed in on legislation, staged protest parades, and lobbied politicians. But in addition to their formal political activities, black women also waged informal politics through workplace resistance, self-defense against violence, and performances of racial egalitarianism, democracy, and citizenship in a city that very often denied them all of these rights. Jim Crow Capital connects black women’s formal and informal politics to illustrate the complexity of their activism.
APA, Harvard, Vancouver, ISO, and other styles
5

Fleegler, Robert L. "Dukakis’s Triumph." In Brutal Campaign, 64–93. University of North Carolina Press, Chapel Hill, NC, 2023. http://dx.doi.org/10.5149/northcarolina/9781469673370.003.0004.

Full text
Abstract:
This chapter shows how Michael Dukakis’s staying power allowed him to win the Democratic nomination over a diverse field that included Paul Simon, Dick Gephardt, Al Gore, and Jesse Jackson. More of the major trends of modern politics became clear. Gephardt rode to victory in Iowa using a populist antitrade message that previewed a generation of politicians who would propose protectionist policies to appeal to working-class white voters. In addition, Jesse Jackson’s success created a brief moment where, for the first time, it appeared a black candidate had an opportunity to win a major party nomination. Though Jackson fell short, his campaign represented a key middle point between the passage of the Voting Rights Act of 1965 and the election of Barack Obama in 2008. Eventually, Dukakis defeated Gore—who was trying to run as a more moderate Democrat—in New York to seal his victory.
APA, Harvard, Vancouver, ISO, and other styles
6

Wu, Chuan-Kun. "Key Management." In IT Policy and Ethics, 728–53. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2919-6.ch033.

Full text
Abstract:
In secure communications, key management is not as simple as managing metal keys, which can be kept on a key ring or simply put in a pocket. Suppose Alice wants to transmit some confidential information to Bob over a public network such as the Internet. Alice could simply encrypt the message using a known cipher such as AES, and then transmit the ciphertext to Bob. However, in order to enable Bob to decrypt the ciphertext and recover the original message, in a traditional cipher system Bob needs to have the encryption key. How to let Alice securely and efficiently transmit the encryption key to Bob is a problem of key management. An intuitive approach would be to use a secure channel for the key transmission; this worked in earlier years, but is not a desirable solution in today’s electronic world. Since the invention of public key cryptography, the key management problem with respect to secret key transmission has been solved: one can either employ the Diffie-Hellman key agreement scheme or use a public key cryptographic algorithm to encrypt the encryption key (which is often known as a session key). This approach is secure against passive attacks, but is vulnerable to active attacks (more precisely, man-in-the-middle attacks). So there must be a way to authenticate the identity of the communicating entities. This leads to public key management, where the public key infrastructure (PKI) is a typical set of practical protocols, and there is also a set of international standards about PKI. With respect to private key management, the goal is to prevent keys from being lost or stolen. To prevent a key from being lost, one way is to use secret sharing, and another is to use the key escrow technique. Both aspects have many research outcomes and practical solutions. To protect keys from being stolen, a practical solution is to use a password to encrypt the key. Hence, there are many password-based security protocols in different applications. This chapter presents a comprehensive description of how each aspect of key management works. Topics on key management covered by this chapter include key agreement, group-based key agreement and key distribution, the PKI mechanisms, secret sharing, key escrow, password-associated key management, and key management in PGP and UMTS systems.
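As a concrete illustration of the session-key problem described above, the toy sketch below runs an unauthenticated Diffie-Hellman key agreement. The parameters are deliberately small and there is no identity authentication, so it is exposed to exactly the man-in-the-middle attack the chapter discusses; the variable names and the choice of hash are illustrative.

```python
# Toy Diffie-Hellman key agreement; illustrative only, not for real use.
import secrets
from hashlib import sha256

# Toy public parameters: a Mersenne prime and a small base. Real systems use
# standardized 2048-bit+ groups (or elliptic curves) and authenticate the exchange.
p = 2**127 - 1
g = 5

a_priv = secrets.randbelow(p - 2) + 2                 # Alice's private exponent
b_priv = secrets.randbelow(p - 2) + 2                 # Bob's private exponent
a_pub, b_pub = pow(g, a_priv, p), pow(g, b_priv, p)   # values exchanged in the clear

# Both sides derive the same shared secret and hash it into a session key
alice_key = sha256(pow(b_pub, a_priv, p).to_bytes(16, "big")).digest()
bob_key = sha256(pow(a_pub, b_priv, p).to_bytes(16, "big")).digest()
assert alice_key == bob_key                           # the shared session key
```

The session key derived here would then encrypt the bulk data with a symmetric cipher such as AES, which is the hybrid pattern the chapter describes.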
APA, Harvard, Vancouver, ISO, and other styles
7

Brescia, Ray. "Introduction." In The Future of Change, 1–12. Cornell University Press, 2020. http://dx.doi.org/10.7591/cornell/9781501748110.003.0001.

Full text
Abstract:
This introductory chapter details the story of the passage of the G.I. Bill, revealing how an adaptive grassroots network utilized all the media technologies available to it at the time in creative ways—from the mail and the telegraph to the radio and the cinema—to promote a positive, inclusive message and bring about social change. Innovation in communications technologies created an opportunity for the American Legion: it had at its disposal a vast array of tools not just to communicate with its network of local chapters but also to coordinate their efforts to promote adoption of the program. This connection between communications technology and a social movement is not accidental. U.S. history reveals the deep relationship between social change and innovation in the means of communication. Thus, this book examines the link between, on the one hand, innovations in communications technology and methods and, on the other, social movements that appear to have emerged in their wake. It also identifies the components of the successes and failures of these same movements that seem to have a symbiotic relationship to the technology that fuels them.
APA, Harvard, Vancouver, ISO, and other styles
8

Krupa, Natalia. "Konserwacja jedwabnego obicia ze ścian kapitularza Archiwum Krakowskiej Kapituły Katedralnej – strategia zarządzania projektem ochrony." In Studia z dziejów katedry na Wawelu, 391–408. Ksiegarnia Akademicka Publishing, 2023. http://dx.doi.org/10.12797/9788381389211.23.

Full text
Abstract:
The primary protected values of historic objects include authenticity, integrity, and legibility of historical communication. By analyzing a monument’s state of preservation, we can interpret the passage of time through the traces of its use, patina, or damage. The protection of a monument must be preceded by a thorough understanding of its historical message and by defining the values it carries, as well as by identifying the role of the object within a broader contextual framework. Only in this manner can the value of the object be determined, along with its features and elements requiring protection. The implementation of the conservation project for the silk wall hanging from the Chapter House of the Cracow Cathedral Chapter’s Archive provides a background for discussing the main principles of the preservation process management strategy, based on the identification of threats, care plans, and monitoring of the risk of deterioration of the monument. A proper preservation strategy for the accumulated material assets entails responsibility for the quality of work and research aimed at reconstructing historical facts and narratives. This responsibility takes on particular significance in the days of ongoing debate within relevant communities regarding the current position on the contemporary heritage conservation model.
APA, Harvard, Vancouver, ISO, and other styles
9

Stein, Michael D., and Sandro Galea. "The Downside of Drinking." In Pained, 209–12. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780197510384.003.0060.

Full text
Abstract:
This chapter addresses five potential reasons as to why alcohol, an ancient substance, seems to have become newly hazardous. First, the alcohol industry continues to be powerful and savvy. Industry advertising never says that alcohol is not addictive; rather, the message is “use responsibly,” which implies that alcohol’s use—unlike the use of drugs—is controllable. Second, although the proportion of Americans drinking has remained steady at about two in three people over the past 70 years, Americans are drinking more, and more easily. Third, during this decade of economic expansion, many Americans have more income. In contrast to the stereotype, affluent people are more likely to drink than low-income people. Fourth, binge-drinking is now a rite of passage in college. With women a growing percentage of collegiate heavy drinkers, and with alcohol-makers targeting women with sweeter and fizzier products, health risks accumulate among women, who generally experience greater alcohol effects at lower doses than men. Fifth, Americans have become complacent about driving under the influence, because seatbelts and safer cars have lowered alcohol-related fatalities. Yet, paradoxically, alcohol-related traffic accidents are on the rise. Consuming less alcohol in total or on a per-occasion basis would probably improve the health of most people. That is a credible and reasonable public health goal.
APA, Harvard, Vancouver, ISO, and other styles
10

Koch, Christof. "Computing with Neurons: A Summary." In Biophysics of Computation. Oxford University Press, 1998. http://dx.doi.org/10.1093/oso/9780195104912.003.0027.

Full text
Abstract:
We now have arrived at the end of the book. The first 16 chapters dealt with linear and nonlinear cable theory, voltage-dependent ionic currents, the biophysical origin of spike initiation and propagation, the statistical properties of spike trains and neural coding, bursting, dendritic spines, synaptic transmission and plasticity, the types of interactions that can occur among synaptic inputs in a passive or active dendritic arbor, and the diffusion and buffering of calcium and other ions. We attempted to weave these disparate threads into a single tapestry in Chaps. 17-19, demonstrating how these elements interact within a single neuron. The penultimate chapter dealt with various unconventional biophysical and biochemical mechanisms that could instantiate computations at the molecular and the network levels. It is time to summarize. What have we learned about the way brains do or do not compute? The brain has frequently been compared to a universal Turing machine (for a very lucid account of this, see Hofstadter, 1979). A Turing machine is a mathematical abstraction meant to clarify what is meant by algorithm, computation, and computable. Think of it as a machine with a finite number of internal states and an infinite tape that can read messages composed with a finite alphabet, write an output, and store intermediate results as memory. A universal Turing machine is one that can mimic any arbitrary Turing machine. We are here not interested in the renewed debate as to whether or not the brain can, in principle, be treated as such a machine (Lucas, 1964; Penrose, 1989), but in whether this is a useful way to conceptualize nervous systems. Because brains have limited precision, only finite amounts of memory, and do not live forever, they cannot possibly be like “real” Turing machines. It is therefore more appropriate to ask: to what extent can brains be treated as finite state machines or automata? Such a machine only has finite computational and memory resources (Hopcroft and Ullman, 1979). The answer has to be an ambiguous “it depends.”
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Algorithmes de passage en message"

1

Wanderley, Juan B. V., and Carlos Levi. "Free Surface Viscous Flow Around a Ship Model." In 25th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2006. http://dx.doi.org/10.1115/omae2006-92165.

Full text
Abstract:
The present stage of viscous flow numerical analysis, combined with the latest advances in computer technology, has made viable the mathematical treatment of many robust and complex engineering problems of practical interest. Some numerical problems whose solutions would have been simply unthinkable no more than ten years ago may now be dealt with in a reliable and fairly accurate manner. A clear example of this kind of problem is the calculation of hydrodynamic loads acting on yawing ships. The solution of such a problem is of practical interest due to its applications, for instance to stationary FPSO/FSO ships facing sea currents, commonly used in offshore deep-water oil production. In the present solution, the complete incompressible Navier-Stokes (N-S) equations are solved by means of an algorithm that applies the Beam and Warming [1] approximate factorization scheme to simulate the flow around a Wigley hull. The numerical code was implemented using the Message Passing Interface (MPI) and can be run on a cluster with an arbitrary number of computers. The good agreement with other numerical and experimental data obtained from the literature and the high efficiency of the algorithm indicate its potential to be used as an effective tool in ship design.
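The abstract mentions an MPI implementation that runs on a cluster with an arbitrary number of computers. The snippet below is a generic mpi4py sketch of the halo (ghost-cell) exchange that domain-decomposed finite-difference solvers of this kind typically rely on; it is not the authors' code, and the 1-D decomposition, array sizes, and stencil are purely illustrative.

```python
# Generic halo-exchange pattern for a 1-D domain decomposition (mpi4py).
# Run with, e.g., `mpirun -np 4 python halo_demo.py`.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 64                                   # interior cells owned by this rank
u = np.zeros(n_local + 2)                      # one ghost cell on each side
u[1:-1] = rank                                 # dummy initial data

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange boundary values with neighbours before each stencil update
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

# The interior update can now read the freshly filled ghost cells
u_new = u.copy()
u_new[1:-1] = 0.5 * (u[:-2] + u[2:])           # toy averaging stencil
```

Each rank updates only its interior cells and reads neighbour data from the ghost cells filled by the exchange, which is what lets the solver scale to an arbitrary number of processes.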
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, J. P., and W. R. Briley. "A Parallel Flow Solver for Unsteady Multiple Blade Row Turbomachinery Simulations." In ASME Turbo Expo 2001: Power for Land, Sea, and Air. American Society of Mechanical Engineers, 2001. http://dx.doi.org/10.1115/2001-gt-0348.

Full text
Abstract:
A parallel flow solver has been developed to provide a turbomachinery flow simulation tool that extends the capabilities of a previous single-processor production code (TURBO) for unsteady turbomachinery flow analysis. The code solves the unsteady Reynolds-averaged Navier-Stokes equations with a k-ε turbulence model. The parallel code now includes most features of the serial production code, but is implemented in a portable, scalable form for distributed-memory parallel computers using MPI message passing. The parallel implementation employs domain decomposition and supports general multiblock grids with arbitrary grid-block connectivity. The solution algorithm is an iterative implicit time-accurate scheme with characteristics-based finite-volume spatial discretization. The Newton subiterations are solved using a concurrent block-Jacobi symmetric Gauss-Seidel (BJ-SGS) relaxation scheme. Unsteady blade-row interaction is treated either by simulating full or periodic sectors of blade rows, or by solving within a single passage for each row using phase-lag and wake-blade interaction approximations at boundaries. A scalable dynamic sliding-interface algorithm is developed here, with an efficient parallel data communication between blade rows in relative motion. Parallel computations are given here for flat plate, single blade row (Rotor 67) and single stage (Stage 37) test cases, and these results are validated by comparison with corresponding results from the previously validated serial production code. Good speedup performance is demonstrated for the single-stage case with a relatively small grid of 600,000 points.
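The block-Jacobi symmetric Gauss-Seidel (BJ-SGS) relaxation named in the abstract can be illustrated on a toy linear system: each block is relaxed with a forward and a backward Gauss-Seidel sweep while values outside the block stay frozen at the previous outer iterate, which is what makes the block relaxations independent and hence parallelizable. The serial sketch below on a 1-D Laplacian only conveys that structure and does not reproduce the TURBO solver.

```python
# Toy block-Jacobi symmetric Gauss-Seidel iteration; illustrative only.
import numpy as np

def bj_sgs_solve(A, b, n_blocks=4, outer_iters=400):
    n = len(b)
    x = np.zeros(n)
    blocks = np.array_split(np.arange(n), n_blocks)
    for _ in range(outer_iters):
        x_prev = x.copy()
        x_next = x_prev.copy()
        for blk in blocks:                      # independent blocks: could run in parallel
            work = x_prev.copy()                # off-block entries stay at the old iterate
            for sweep in (blk, blk[::-1]):      # symmetric: forward, then backward sweep
                for i in sweep:
                    resid = b[i] - A[i] @ work + A[i, i] * work[i]
                    work[i] = resid / A[i, i]
            x_next[blk] = work[blk]
        x = x_next
    return x

# Usage: 1-D Laplacian test problem; the residual shrinks with more outer iterations
n = 64
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = bj_sgs_solve(A, b)
print(np.linalg.norm(A @ x - b))
```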
APA, Harvard, Vancouver, ISO, and other styles
3

Ji, Shanhong, and Feng Liu. "Computation of Flutter of Turbomachinery Cascades Using a Parallel Unsteady Navier-Stokes Code." In ASME 1998 International Gas Turbine and Aeroengine Congress and Exhibition. American Society of Mechanical Engineers, 1998. http://dx.doi.org/10.1115/98-gt-043.

Full text
Abstract:
A quasi-three-dimensional multigrid Navier-Stokes solver on single and multiple passage domains is presented for solving unsteady flows around oscillating turbine and compressor blades. The conventional “direct store” method is used for applying the phase-shifted periodic boundary condition over a single blade passage. A parallel version of the solver using the Message Passing Interface (MPI) standard is developed for multiple passage computations. In the parallel multiple passage computations, the phase-shifted periodic boundary condition is converted to a simple in-phase periodic condition. Euler and Navier-Stokes solutions are obtained for unsteady flows through an oscillating turbine cascade blade row with both the sequential and the parallel code. It is found that the parallel code offers almost linear speedup with multiple CPUs. In addition, significant improvement is achieved in the convergence of the computation to a periodic unsteady state in the parallel multiple passage computations, owing to the use of in-phase periodic boundary conditions as compared to the single passage computations with phase-lagged periodic boundary conditions via the “direct store” method. The parallel Navier-Stokes code is also used to calculate the flow through an oscillating compressor cascade. Results are compared with experimental data and computations by other authors.
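The “direct store” method mentioned here can be pictured as a ring buffer of boundary histories: values on one periodic boundary are stored over an oscillation period and reapplied on the opposite boundary with a time shift equal to the inter-blade phase angle. The sketch below is a schematic illustration under that reading; the indexing convention, sign of the lag, and class name are assumptions, not the authors' implementation.

```python
# Schematic 'direct store' phase-lagged periodic boundary; illustrative only.
import numpy as np

class DirectStorePhaseLag:
    def __init__(self, steps_per_period, n_boundary_points, phase_angle_deg):
        self.N = steps_per_period
        self.lag = int(round(self.N * phase_angle_deg / 360.0))   # phase shift in time steps
        self.lower_hist = np.zeros((self.N, n_boundary_points))   # stored boundary histories
        self.upper_hist = np.zeros((self.N, n_boundary_points))

    def store(self, step, lower_values, upper_values):
        # Called once per time step with the current solution on both periodic boundaries
        self.lower_hist[step % self.N] = lower_values
        self.upper_hist[step % self.N] = upper_values

    def upper_ghost(self, step):
        # The upper boundary sees the lower boundary delayed by the phase lag
        return self.lower_hist[(step - self.lag) % self.N]

    def lower_ghost(self, step):
        # The lower boundary sees the upper boundary shifted the other way; until the
        # flow becomes time-periodic this slot still holds data from the previous
        # period, which is why direct-store computations need several periods to converge
        return self.upper_hist[(step + self.lag) % self.N]
```

In a single-passage solver such an object would be updated once per time step with the current boundary slices and queried when filling the periodic ghost cells; the in-phase multi-passage formulation avoids this history storage entirely.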
APA, Harvard, Vancouver, ISO, and other styles
4

Esperança, Paulo T., Juan B. V. Wanderley, and Carlos Levi. "Validation of a Three-Dimensional Large Eddy Simulation Finite Difference Method to Study Vortex Induced Vibration." In 25th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2006. http://dx.doi.org/10.1115/omae2006-92367.

Full text
Abstract:
Two-dimensional numerical simulations of Vortex Induced Vibration have been failing to duplicate accurately the corresponding experimental data. One possible explanation could be 3D effects present in the real problem that are not modeled in two-dimensional simulations. A three-dimensional finite difference method was implemented using the Large Eddy Simulation (LES) technique and the Message Passing Interface (MPI), and can be run on a cluster with an arbitrary number of computers. The good agreement with other numerical and experimental data obtained from the literature shows the quality of the implemented code.
APA, Harvard, Vancouver, ISO, and other styles
5

Mašat, Milan, and Adéla Štěpánková. "A few notes on the book “Call me by your name” by André Aciman." In 7th International e-Conference on Studies in Humanities and Social Sciences. Center for Open Access in Science, Belgrade, 2021. http://dx.doi.org/10.32591/coas.e-conf.07.02011m.

Full text
Abstract:
In the article we deal with the interpretation and analysis of selected topics and motifs in the narrative of André Aciman’s publication Call me by your name. After a summary of the story, we take a closer look at the genesis of the two men’s relationship in the context of their Jewish faith. We also depict the transformation of their animal sexual relationship into a loving relationship associated with psychic harmony. The final passage of the article is devoted to the conclusion of the book, in which the message of the publication is anchored; this message to a certain extent goes beyond the primary classification of Aciman’s work as LGBT young adult literature.
APA, Harvard, Vancouver, ISO, and other styles
6

Zhou, F. B., M. D. Duta, M. P. Henry, S. Baker, and C. Burton. "Remote Condition Monitoring for Railway Point Machine." In ASME/IEEE 2002 Joint Rail Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/rtd2002-1646.

Full text
Abstract:
This paper presents research work carried out at Oxford University on condition monitoring of railway point machines. The developed condition monitoring system includes a variety of sensors for acquiring trackside data related to different parameters. Key events to be logged include time stamping of points operation, opening and closing of the case cover associated with a points machine, insertion and removal of a hand-crank, loss of supply current, and the passage of a train. The system also has built-in Web functions. This allows a remote operator using Internet Explorer to observe the condition of the point machine at any time, while the acquired data can be downloaded automatically for offline analysis, providing more detailed information on the health condition of the monitored point machine. A short daily condition report message can also be sent to relevant staff via email. Finally, experience with the four trackside installed systems is reported.
APA, Harvard, Vancouver, ISO, and other styles