Academic literature on the topic 'Algorithme de passage de message'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Algorithme de passage de message.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Algorithme de passage de message":

1

Odorico, Paolo. "Le backgammon de Kékaumenos. À propos d’un passage peu clair et d’une bataille peu connue." Zbornik radova Vizantoloskog instituta, no. 50-1 (2013): 423–31. http://dx.doi.org/10.2298/zrvi1350423o.

Abstract:
The Stratēgikon of Cecaumenus tells the story of Basil Pediadites, who suffered the emperor's ironic attacks for having played tavla during his mission in Sicily. The rather strange message is explained in a new way: the imperial message turned on a pun involving tavla and a locality in the plain, which can be identified with present-day Piano Tavola near Catania.
2

Samet, Nili. "How Deterministic is Qohelet? A New Reading of the Appendix to the Catalogue of Times." Zeitschrift für die alttestamentliche Wissenschaft 131, no. 4 (December 1, 2019): 577–91. http://dx.doi.org/10.1515/zaw-2019-4004.

Abstract:
This paper examines the message of Qohelet's Catalogue of Times and its interpretive appendix. Scholars disagree on the extent to which this unit deviates from the traditional free-will theology of the Bible. The paper presents a fresh reading of the passage, which sheds new light on the problem of determinism in Qohelet. Beginning with a novel delineation of the unit, it then suggests fresh solutions for the main exegetical cruxes of the passage (3:14, 15, 17), and finally presents an innovative, somewhat radical understanding of Qohelet's approach to the problem of free will.
3

FUJII, SEIJI. "Political Shirking – Proposition 13 vs. Proposition 8." Japanese Journal of Political Science 10, no. 2 (August 2009): 213–37. http://dx.doi.org/10.1017/s1468109909003533.

Abstract:
This paper considers the efficiency of the political market in the California State legislature. I analyzed the property-tax limitation voter initiative, Proposition 13. I found that districts which supported Proposition 13 more strongly were more likely to oppose the incumbents, regardless of whether the incumbents' preferences for property taxes differed from their districts'. I also studied how legislators voted on the bills adopted after the passage of Proposition 13 to finance local governments. I found that legislators tended to follow their constituents' will after they received the voters' tax-cutting message expressed by the passage of Proposition 13.
4

Rotman, Marco. "The “Others” Coming to John the Baptist and the Text of Josephus." Journal for the Study of Judaism 49, no. 1 (February 22, 2018): 68–83. http://dx.doi.org/10.1163/15700631-12491167.

Abstract:
Josephus’s passage on John the Baptist (Ant. 18.116-119) contains a much-discussed crux interpretum: who are the “others” that are inspired by John’s words and ready to do everything he said (§118), and who are distinguished from those who gave heed to his message and were baptized (§117)? After a brief discussion of the textual witnesses, text, and translation of the passage in question, various interpretations of “the others” are discussed, none of which is entirely satisfactory. In this article a case will be made for accepting the conjecture originally proposed by Benedikt Niese, who assumed that Josephus originally wrote ἀνθρώπων “people” instead of ἄλλων “others.”
5

Zorn, Jean-François. "Exégèse, herméneutique et actualisation : étapes successives ou interaction dynamique ? La notion d'exégèse homilétique." Études théologiques et religieuses 75, no. 4 (2000): 549–63. http://dx.doi.org/10.3406/ether.2000.3620.

Abstract:
The use of a step-by-step method is necessary for the preparation of a sermon. One method suggests that every serious preacher needs to follow a three-step process: exegesis, interpretation, and application. With the help of research done by experts in semiotics and homiletics, J.-F. Zorn shows how this method neglects the preacher who reads the Bible passage as well as the audience who listens to the sermon. When these two factors are taken into consideration during the preparation of a sermon, the three steps are viewed differently: they become interactive operations capable of re-establishing a living relationship between the ancient Bible passage and the new message of the preacher.
6

Schmidl, Martina. "Ad astra: Graphic Signalling in the Acrostic Hymn of Nebuchadnezzar II (BM 55469)." Altorientalische Forschungen 48, no. 2 (November 5, 2021): 318–26. http://dx.doi.org/10.1515/aofo-2021-0021.

Abstract:
This article examines two orthographic features in the Acrostic Hymn of Nebuchadnezzar II. It aims to show that the text makes use of the possibilities of the cuneiform writing system to create various levels of meaning. The first example clarifies structure and content with regard to a difficult passage in the fourth and last stanza of the text, in which a possible change of actors is indicated by an orthographic feature. The second example shows how orthography is used in the first stanza of the text to augment its message. These examples demonstrate how structural elements and micro-features such as orthography were used creatively to enhance the message of the hymn.
7

Garrett, Thomas More. "The Message to the Merchants in James 4:13–17 and Its Relevance for Today." Journal of Theological Interpretation 10, no. 2 (2016): 299–315. http://dx.doi.org/10.2307/26373919.

Abstract:
This article highlights the contemporary significance of Jas 4:13–17 to business and commercial pursuits. The first part summarizes modern biblical and theological scholarship on the scriptural passage. The discussion highlights areas of convergence within different Christian traditions by examining the work of commentators writing from a variety of Christian backgrounds. The second part offers a treatment of the passage within the wider context of the epistle. Drawing from modern commentary, this part of the essay also elaborates on the relationship between faith and secular pursuits envisioned by the James text. Particular focus is directed toward concerns pertaining to the separation of faith from commercial affairs expressed in two recent Roman Catholic magisterial works, Benedict XVI's Caritas in Veritate and the Pontifical Council for Justice and Peace document titled Vocation of the Business Leader: A Reflection. The third part extends the discussion in the second part by tracing some further parallels between Jas 4:13–17 and portions of Benedict XVI's Caritas in Veritate.
9

Patel, Aditya, and Nidhi Singh. "Technological Prerequisites and Consequences of Ubiquitous Computing and Networking in Resurrecting Extinct Computers." Journal of Computer Networks and Virtualization 2, no. 1 (April 10, 2024): 15–20. http://dx.doi.org/10.48001/jocnv.2024.2115-20.

Abstract:
The passage discusses the early days of computing, highlighting experimentation with alternative processor designs such as the Connection Machine (CM-1). The CM-1 was a unique architecture consisting of 65,536 individual one-bit processors interconnected as a 12-dimensional hypercube. Despite its innovative design, the machine faced challenges and eventually faded into obscurity. To preserve this piece of computing history, efforts have been made to develop a cycle-accurate simulator of the Connection Machine and to create an RTL (Register Transfer Level) hardware description of its building-block chip. These preservation steps are crucial in ensuring that the legacy of the Connection Machine is not forgotten. Evaluation of the Connection Machine's performance yields mixed results. While it demonstrates impressive performance on certain tasks, such as a breadth-first search algorithm with a remarkably low cycle-per-element ratio, its limitations become apparent in other applications, particularly in linear algebra. Factors like the 1-bit word size and message-passing latency impose constraints on performance, especially in traditionally parallelizable applications. Overall, the passage underscores the importance of understanding and preserving the history of computing, including the exploration of alternative architectures like the Connection Machine, despite their eventual challenges and limitations.
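The hypercube interconnect and breadth-first search mentioned in the abstract can be illustrated with a small sketch (plain Python, not CM-1 code): in a d-dimensional hypercube, a node's neighbors are obtained by flipping one bit of its id, and a BFS from node 0 reaches every node in at most d hops.

```python
from collections import deque

def hypercube_neighbors(node, dim):
    """Neighbors in a dim-dimensional hypercube: flip each bit in turn."""
    return [node ^ (1 << i) for i in range(dim)]

def bfs_distances(dim, start=0):
    """Breadth-first search over all 2**dim hypercube nodes."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in hypercube_neighbors(u, dim):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

dist = bfs_distances(12)          # 2**12 = 4096 nodes in a 12-cube
# Hop distance from node 0 equals the number of set bits in the node id.
assert len(dist) == 4096
assert all(d == bin(u).count("1") for u, d in dist.items())
```

The XOR neighbor rule is what makes routing on a hypercube cheap: the route between two nodes follows the bits in which their ids differ.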
10

Cauchie, Jean-François, Patrice Corriveau, and Alexandre Pelletier-Audet. "Le suicide de jeunes québécois.es : une analyse communicationnelle de 138 lettres d’adieu (1940-1970)." Reflets 28, no. 1 (June 5, 2023): 93–120. http://dx.doi.org/10.7202/1100221ar.

Abstract:
Our article examines the farewell letters of 72 Quebecers aged 20 to 30 who took their own lives between 1940 and 1970. The 138 letters studied, drawn from the Coroner's Archives of the judicial district of the city of Montreal, are approached from a perspective we call communicational. After identifying five ideal types according to whether the meaning of the message is introspective or dyadic with respect to the act itself, we highlight the abundance and multidirectionality of the themes that individuals draw on to establish their posthumous self. Our findings also show that gender plays an undeniable role both in the message communicated and in the way it is communicated.

Dissertations / Theses on the topic "Algorithme de passage de message":

1

Taftaf, Ala. "Développements du modèle adjoint de la différentiation algorithmique destinés aux applications intensives en calcul." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4001/document.

Abstract:
The adjoint mode of Algorithmic Differentiation (AD) is particularly attractive for computing gradients. However, this mode needs the intermediate values of the original simulation in reverse order, at a cost that increases with the length of the simulation. AD research looks for strategies to reduce this cost, for instance by taking advantage of the structure of the given program. In this work, we consider on one hand the frequent case of fixed-point loops, for which several authors have proposed adapted adjoint strategies. Among these strategies, we select the one introduced by B. Christianson. We further specify the selected method and describe the way we implemented it inside the AD tool Tapenade. Experiments on a medium-size application show a major reduction of the memory needed to store trajectories. On the other hand, we study checkpointing in the case of MPI parallel programs with point-to-point communications. We propose techniques to apply checkpointing to these programs. We provide proof of correctness of our techniques and test them on representative CFD codes. This work was carried out within the European project "AboutFlow".
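The fixed-point adjoint idea can be sketched on a toy scalar problem (a hedged illustration, not Tapenade's implementation): instead of storing and reversing every forward iteration, the adjoint of a converged loop x = f(x, p) is itself computed by a fixed-point iteration on the adjoint variable.

```python
import math

def f(x, p):
    """A contraction in x (|df/dx| <= 0.5), with parameter p."""
    return 0.5 * math.cos(x) + p

def solve_fixed_point(p, iters=200):
    x = 0.0
    for _ in range(iters):
        x = f(x, p)
    return x

def adjoint_dxdp(p, iters=200):
    """Adjoint of the converged loop via a fixed-point iteration on the
    adjoint variable w, rather than reversing every forward step."""
    x = solve_fixed_point(p)
    fx = -0.5 * math.sin(x)       # df/dx at the fixed point
    fp = 1.0                      # df/dp
    w, xbar = 0.0, 1.0            # seed with xbar = d(output)/dx = 1
    for _ in range(iters):
        w = w * fx + xbar         # converges to xbar / (1 - fx)
    return w * fp                 # dx*/dp

# Sanity check against a central finite difference.
p = 0.3
fd = (solve_fixed_point(p + 1e-6) - solve_fixed_point(p - 1e-6)) / 2e-6
assert abs(adjoint_dxdp(p) - fd) < 1e-5
```

The memory saving is visible even in this sketch: the adjoint loop only needs the converged state x, not the trajectory of all forward iterates.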
2

Barbier, Jean. "Statistical physics and approximate message-passing algorithms for sparse linear estimation problems in signal processing and coding theory." Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCC130.

Abstract:
This thesis is interested in the application of statistical-physics methods and inference to signal processing and coding theory, more precisely, to sparse linear estimation problems. The main tools are essentially graphical models and the approximate message-passing algorithm, together with the cavity method (referred to as state evolution analysis in the signal-processing context) for its theoretical analysis. We also use the replica method from the statistical physics of disordered systems, which allows us to associate to the studied problems a cost function referred to as the potential or free entropy in physics. It allows us to predict the different phases of typical complexity of the problem as a function of external parameters such as the noise level or the number of measurements one has about the signal: inference can be typically easy, hard, or impossible. We will see that the hard phase corresponds to a regime where the actual solution coexists with another, unwanted solution of the message-passing equations. In this phase, the unwanted solution is a metastable state, not the true equilibrium solution. This phenomenon can be linked to supercooled water blocked in the liquid state below its freezing temperature. Thanks to this understanding of the blocking phenomenon of the algorithm, we use a method that overcomes the metastability by mimicking the strategy adopted by nature itself for supercooled water: nucleation and spatial coupling. In supercooled water, a weak localized perturbation is enough to create a crystal nucleus that propagates through the whole medium thanks to the physical couplings between nearby atoms. The same process helps the algorithm find the signal, thanks to the introduction of a nucleus containing local information about the signal, which then spreads as a "reconstruction wave" similar to the crystal in the water.
After an introduction to statistical inference and sparse linear estimation, we introduce the necessary tools and then move to applications of these notions, divided into two parts. The signal-processing part focuses essentially on the compressed sensing problem, where we seek to infer a sparse signal from a small number of possibly noisy linear projections of it. We study in detail the influence of structured operators instead of the purely random ones used originally in compressed sensing. These allow a substantial gain in computational complexity and memory allocation, necessary conditions for working with very large signals. We will see that the combined use of such operators with spatial coupling allows the implementation of a highly optimized algorithm able to reach near-optimal performance. We also study the algorithm's behavior for the reconstruction of approximately sparse signals, a fundamental question for the application of compressed sensing to real-life problems. A direct application is studied via the reconstruction of images measured by fluorescence microscopy; the reconstruction of "natural" images is considered as well. In coding theory, we look at message-passing decoding performance for two distinct noisy channel models. A first scheme, where the signal to infer is the noise itself, is presented. The second, sparse superposition codes for the additive white Gaussian noise channel, is the first example of an error-correction scheme directly interpreted as a structured compressed sensing problem. Here we apply all the tools developed in this thesis to finally obtain a very promising decoder that allows decoding at very high transmission rates, very close to the fundamental channel limit.
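The sparse linear estimation setting can be illustrated with a minimal iterative soft-thresholding (ISTA) sketch; approximate message passing, the thesis's focus, refines this same loop with an Onsager correction term on the residual. The problem sizes and the regularization weight below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 80, 200, 8                       # measurements, signal length, sparsity
A = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement operator
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ x_true                             # noiseless linear projections

def soft(v, t):
    """Soft-thresholding denoiser eta(v; t)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
lam = 0.01                                 # small l1 penalty (illustrative)
x = np.zeros(N)
for _ in range(3000):                      # gradient step, then shrink
    x = soft(x + A.T @ (y - A @ x) / L, lam / L)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
assert rel_err < 0.1                       # the sparse signal is recovered
```

With M = 80 measurements of an N = 200 signal, recovery is only possible because the signal is sparse; this under-determined regime is exactly where the easy/hard/impossible phase diagram discussed in the abstract applies.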
3

Mekhiche, Adam. "Accès non-orthogonal aux ressources et techniques de réception associées pour les réseaux ad-hoc mobiles." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP010.

Abstract:
Wireless communication networks, whether cellular or mobile ad hoc networks (MANETs), must accommodate increasingly massive data transmissions due to the growing number of users and new high-consumption applications, all while efficiently managing the scarce radio resources available. It is therefore crucial to enhance the spectral and energy efficiency of these systems to meet these growing needs. Recently, standardization bodies such as 3GPP for cellular networks have proposed new non-orthogonal multiple access (NOMA) schemes in combination with more traditional methods, such as the use of multiple-input and multiple-output (MIMO) antennas. However, these schemes also introduce technical challenges, particularly the generation of interference between users, which complicates the signal-detection process at the receiver. Radio equipment manufacturers, like Thales, are exploring NOMA solutions for MANETs in order to address these challenges and increase their technical maturity. Iterative digital receivers utilizing approximate Bayesian inference, specifically message-passing methods, demonstrate the ability to outperform the conventional receivers of the literature, especially in propagation conditions with high levels of interference. While belief propagation (BP) was the initial method employed, it appears that expectation propagation (EP) is capable of achieving better trade-offs between performance and complexity, both in typical scenarios with few users and low data rates and in scenarios envisioned for next-generation networks (Beyond 5G, 6G) with dozens of users sharing the same resources at high data rates. In this thesis, we propose doubly iterative detectors (auto- and turbo-iterated) capable of maintaining the performance of state-of-the-art EP while reducing complexity across a range of configurations, from small deployments with few users to massive deployments with hundreds of users.
We worked on the graphical representation of the factorization of the posterior probability of the symbols to be detected, using matrix-decomposition and/or interference-cancellation methods. We propose the derivation of several EP- and BP-based receivers meeting the requirements of the new communication challenges. Furthermore, thanks to a careful scheduling of the internal messages within the detector, we are able to enhance performance without a significant increase in complexity and achieve better trade-offs between performance and complexity. We also conduct a study of the asymptotic performance of our receivers to quantify their spectral efficiency using analysis tools from information theory. The impact of factors such as imperfect knowledge of the propagation channel is also investigated, along with methods to make our receivers more robust, ensuring their use in a variety of situations.
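The receiver-side challenge that NOMA creates can be illustrated with a toy successive interference cancellation (SIC) sketch for two superposed BPSK users; the EP/BP detectors studied in the thesis replace such hard decisions with iteratively exchanged soft messages. The power split and noise level below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
# Two BPSK users superposed with unequal power (power-domain NOMA, toy setup).
p1, p2 = 0.8, 0.2
b1 = rng.choice([-1.0, 1.0], size=n)
b2 = rng.choice([-1.0, 1.0], size=n)
y = np.sqrt(p1) * b1 + np.sqrt(p2) * b2 + 0.05 * rng.standard_normal(n)

# Successive interference cancellation at the receiver:
b1_hat = np.sign(y)                   # decode the strong user, treating the weak one as noise
residual = y - np.sqrt(p1) * b1_hat   # subtract the re-modulated strong-user signal
b2_hat = np.sign(residual)            # decode the weak user from the residual

assert np.mean(b1_hat != b1) < 1e-3
assert np.mean(b2_hat != b2) < 1e-3
```

Both users share the same time-frequency resource, yet both are recovered: the interference is cancelled rather than avoided. When the power levels are closer or the channel is imperfectly known, hard cancellation degrades, which is precisely where iterative soft detectors earn their complexity.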
4

De, Bacco Caterina. "Decentralized network control, optimization and random walks on networks." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112164/document.

Abstract:
In recent years, several problems have been studied at the interface between statistical physics and computer science. The reason is that these problems can often be reinterpreted in the language of the physics of disordered systems, where a large number of variables interact through local fields that depend on the state of the surrounding neighborhood. Among the numerous applications of combinatorial optimisation, optimal routing on communication networks is the subject of the first part of the thesis. We will exploit the cavity method to formulate efficient message-passing algorithms and thus solve several variants of the problem through their numerical implementation. At a second stage, we will describe a model to approximate the dynamic version of the cavity method, which allows the complexity of the problem to be decreased from exponential to polynomial in time. This will be obtained by using the Matrix Product State formalism of quantum mechanics. Another topic that has attracted much interest in the statistical physics of dynamic processes is the random walk on networks. The theory has been developed over many years for the case where the underlying topology is a d-dimensional lattice. By contrast, the case of random networks has been tackled only in the past decade, leaving many questions still open. Unravelling several aspects of this topic will be the subject of the second part of the thesis. In particular, we will study the average number of distinct sites visited during a random walk and characterize its behaviour as a function of the graph topology. Finally, we will address the rare-event statistics associated with random walks on networks by using the large-deviations formalism. Two types of dynamic phase transitions will arise from numerical simulations, unveiling important aspects of these problems. We will conclude by outlining the main results of an independent work developed in the context of out-of-equilibrium physics.
A solvable system made of two Brownian particles surrounded by a thermal bath will be studied, providing details about a bath-mediated interaction arising from the presence of the bath.
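The mean number of distinct sites visited by a random walk, one of the quantities studied in the second part, is straightforward to estimate numerically. Below is a minimal illustrative sketch in Python; the Erdős–Rényi random graph and the parameter values are arbitrary choices for illustration, not the specific ensembles analysed in the thesis:

```python
import random

def erdos_renyi(n, p, rng):
    """Build an Erdos-Renyi random graph G(n, p) as an adjacency list."""
    adj = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

def mean_distinct_sites(adj, steps, walks, rng):
    """Average number of distinct vertices visited over several random walks."""
    total = 0
    nodes = [v for v in adj if adj[v]]   # start only from non-isolated vertices
    for _ in range(walks):
        site = rng.choice(nodes)
        seen = {site}
        for _ in range(steps):
            site = rng.choice(adj[site])  # hop to a uniformly random neighbour
            seen.add(site)
        total += len(seen)
    return total / walks

rng = random.Random(0)
g = erdos_renyi(200, 0.05, rng)
print(mean_distinct_sites(g, steps=100, walks=500, rng=rng))
```

Varying `steps` and the graph parameters exposes the dependence on topology that the thesis characterises analytically.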
5

Aubin, Benjamin. "Mean-field methods and algorithmic perspectives for high-dimensional machine learning." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASP083.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
À une époque où l'utilisation des données a atteint un niveau sans précédent, l'apprentissage machine, et plus particulièrement l'apprentissage profond basé sur des réseaux de neurones artificiels, a été responsable de très importants progrès pratiques. Leur utilisation est désormais omniprésente dans de nombreux domaines d'application, de la classification d'images à la reconnaissance vocale en passant par la prédiction de séries temporelles et l'analyse de texte. Pourtant, la compréhension de nombreux algorithmes utilisés en pratique est principalement empirique et leur comportement reste difficile à analyser. Ces lacunes théoriques soulèvent de nombreuses questions sur leur efficacité et leurs risques potentiels. Établir des fondements théoriques sur lesquels asseoir les observations numériques est devenu l'un des défis majeurs de la communauté scientifique. La principale difficulté qui se pose lors de l'analyse de la plupart des algorithmes d'apprentissage automatique est de traiter analytiquement et numériquement un grand nombre de variables aléatoires en interaction. Dans ce manuscrit, nous revisitons une approche basée sur les outils de la physique statistique des systèmes désordonnés. Développés tout au long d'une riche littérature, ils ont été précisément conçus pour décrire le comportement macroscopique d'un grand nombre de particules à partir de leurs interactions microscopiques. Au cœur de ce travail, nous mettons fortement à profit le lien profond entre la méthode des répliques et les algorithmes de passage de messages pour mettre en lumière les diagrammes de phase de divers modèles théoriques, en mettant l'accent sur les écarts potentiels entre seuils statistiques et algorithmiques. Nous nous concentrons essentiellement sur des tâches et données synthétiques générées dans le paradigme enseignant-élève.
En particulier, nous appliquons ces méthodes à champ moyen à l'analyse Bayes-optimale des machines à comité, à l'analyse des bornes de généralisation de Rademacher pour les perceptrons, et à la minimisation du risque empirique dans le contexte des modèles linéaires généralisés. Enfin, nous développons un cadre pour analyser des modèles d'estimation avec des informations a priori structurées, produites par exemple par des réseaux de neurones génératifs avec des poids aléatoires.
At a time when the use of data has reached an unprecedented level, machine learning, and more specifically deep learning based on artificial neural networks, has been responsible for very important practical advances. Their use is now ubiquitous in many fields of application, from image classification and speech recognition to time-series prediction and text analysis. However, the understanding of many algorithms used in practice is mainly empirical and their behavior remains difficult to analyze. These theoretical gaps raise many questions about their effectiveness and potential risks. Establishing theoretical foundations on which to base numerical observations has become one of the fundamental challenges of the scientific community. The main difficulty that arises in the analysis of most machine learning algorithms is to handle, analytically and numerically, a large number of interacting random variables. In this manuscript, we revisit an approach based on the tools of the statistical physics of disordered systems. Developed through a rich literature, they have been precisely designed to infer the macroscopic behavior of a large number of particles from their microscopic interactions. At the heart of this work, we strongly capitalize on the deep connection between the replica method and message-passing algorithms in order to shed light on the phase diagrams of various theoretical models, with an emphasis on the potential gaps between statistical and algorithmic thresholds. We essentially focus on synthetic tasks and data generated in the teacher-student paradigm. In particular, we apply these mean-field methods to the Bayes-optimal analysis of committee machines, to the worst-case analysis of Rademacher generalization bounds for perceptrons, and to empirical risk minimization in the context of generalized linear models.
Finally, we develop a framework to analyze estimation models with structured prior information, produced for instance by generative models based on deep neural networks with random weights.
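The teacher-student paradigm mentioned in this abstract is easy to reproduce in a few lines: a fixed "teacher" rule labels random inputs, and a "student" of the same architecture is trained on those labels. The sketch below uses a simple perceptron teacher; the dimensions, sample sizes, and training scheme are illustrative choices, much simpler than the committee machines and generalized linear models analysed in the thesis:

```python
import random

def teacher_student_perceptron(d=50, n_train=500, epochs=20, seed=0):
    rng = random.Random(seed)
    teacher = [rng.gauss(0, 1) for _ in range(d)]   # fixed, hidden teacher weights

    def label(x, w):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

    # Synthetic dataset: random Gaussian inputs labelled by the teacher.
    xs = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(n_train)]
    ys = [label(x, teacher) for x in xs]

    # Student: classic perceptron updates on the teacher's labels.
    student = [0.0] * d
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            if label(x, student) != y:
                student = [wi + y * xi for wi, xi in zip(student, x)]

    # Generalization: agreement with the teacher on fresh inputs.
    tests = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(1000)]
    agree = sum(label(x, teacher) == label(x, student) for x in tests)
    return agree / 1000

print(teacher_student_perceptron())
```

Because the data-generating rule is known exactly, the student's generalization error can be tracked as a function of the ratio of samples to dimensions, which is precisely what makes this paradigm convenient for sharp theoretical analysis.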
6

Sahin, Serdar. "Advanced receivers for distributed cooperation in mobile ad hoc networks." Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Les réseaux ad hoc mobiles (MANETs) sont des systèmes de communication sans fil rapidement déployables qui fonctionnent avec une coordination minimale, afin d'éviter les pertes d'efficacité spectrale induites par la signalisation. Les stratégies de transmission coopératives présentent un intérêt pour les MANETs, mais la nature distribuée de tels protocoles peut augmenter le niveau d'interférence, avec un impact d'autant plus sévère que l'on cherche à pousser les limites des efficacités énergétique et spectrale. L'impact de l'interférence doit alors être réduit par l'utilisation d'algorithmes de traitement du signal au niveau de la couche PHY, avec une complexité calculatoire raisonnable. Des avancées récentes sur les techniques de conception de récepteurs numériques itératifs proposent d'exploiter l'inférence bayésienne approchée et les techniques de passage de messages associées afin d'améliorer le potentiel des turbo-détecteurs plus classiques. Entre autres, la propagation d'espérance (EP) est une technique flexible, qui offre des compromis attractifs entre complexité et performance dans des situations où la propagation de croyances conventionnelle est limitée par sa complexité calculatoire. Par ailleurs, grâce à des techniques émergentes de l'apprentissage profond, de telles structures itératives peuvent être projetées vers des réseaux de détection profonds, où l'apprentissage des hyper-paramètres algorithmiques améliore davantage les performances. Dans cette thèse, nous proposons des égaliseurs à retour de décision à réponse impulsionnelle finie basés sur la propagation d'espérance (EP), qui apportent des améliorations significatives vis-à-vis des turbo-détecteurs conventionnels, en particulier pour des applications à haute efficacité spectrale, tout en ayant l'avantage d'être asymptotiquement prédictibles.
Nous proposons un cadre générique pour la conception de récepteurs dans le domaine fréquentiel, afin d'obtenir des architectures de détection à faible complexité calculatoire. Cette approche est analysée théoriquement et numériquement, avec un accent mis sur l'égalisation des canaux sélectifs en fréquence, et avec des extensions pour la détection dans des canaux variant dans le temps ou pour des systèmes multi-antennes. Nous explorons aussi la conception de détecteurs multi-utilisateurs, ainsi que l'impact de l'estimation du canal, afin de comprendre le potentiel et les limites de cette approche. Pour finir, nous proposons une méthode de prédiction de performance à taille finie, afin de réaliser une abstraction de lien pour l'égaliseur dans le domaine fréquentiel à base d'EP. L'impact d'une modélisation plus fine de la couche PHY est évalué dans le contexte de la diffusion coopérative pour des MANETs tactiques, grâce à un simulateur flexible de couche MAC.
Mobile ad hoc networks (MANETs) are rapidly deployable wireless communications systems, operating with minimal coordination in order to avoid spectral efficiency losses caused by overhead. Cooperative transmission schemes are attractive for MANETs, but the distributed nature of such protocols comes with an increased level of interference, whose impact is further amplified by the need to push the limits of energy and spectral efficiency. Hence, the impact of interference has to be mitigated through the use of PHY-layer signal processing algorithms with reasonable computational complexity. Recent advances in iterative digital receiver design techniques exploit approximate Bayesian inference and associated message-passing techniques to improve the capabilities of well-established turbo detectors. In particular, expectation propagation (EP) is a flexible technique which offers attractive complexity-performance trade-offs in situations where conventional belief propagation is limited by computational complexity. Moreover, thanks to emerging techniques in deep learning, such iterative structures can be cast into deep detection networks, where learning the algorithmic hyper-parameters further improves receiver performance. In this thesis, EP-based finite-impulse response decision feedback equalizers are designed, and they achieve significant improvements, especially in high spectral efficiency applications, over more conventional turbo-equalization techniques, while having the advantage of being asymptotically predictable. A framework for designing frequency-domain EP-based receivers is proposed, in order to obtain detection architectures with low computational complexity. This framework is theoretically and numerically analysed with a focus on channel equalization, and then it is also extended to handle detection for time-varying channels and multiple-antenna systems.
The design of multi-user detectors and the impact of channel estimation are also explored to understand the capabilities and limits of this framework. Finally, a finite-length performance prediction method is presented for carrying out link abstraction for the EP-based frequency-domain equalizer. The impact of accurate physical-layer modelling is evaluated in the context of cooperative broadcasting in tactical MANETs, thanks to a flexible MAC-level simulator.
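To illustrate the frequency-domain setting that this abstract builds on (not the EP-based receivers of the thesis), here is a sketch of a classical per-tone MMSE frequency-domain equalizer over a cyclic-prefix channel; the channel taps, symbols, and noise variance are arbitrary toy values:

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) DFT, sufficient for a short illustrative block."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

def mmse_fde(received, channel, noise_var):
    """Per-tone MMSE equalization: X_hat_k = conj(H_k) Y_k / (|H_k|^2 + noise_var)."""
    n = len(received)
    H = dft(list(channel) + [0.0] * (n - len(channel)))  # channel frequency response
    Y = dft(received)
    X = [Hk.conjugate() * Yk / (abs(Hk) ** 2 + noise_var) for Hk, Yk in zip(H, Y)]
    return dft(X, inverse=True)

# A BPSK block sent through a 2-tap channel, modelled as a circular
# convolution (the cyclic-prefix assumption used in frequency-domain receivers).
symbols = [1, -1, -1, 1, 1, 1, -1, 1]
h = [1.0, 0.5]
received = [sum(h[t] * symbols[(i - t) % len(symbols)] for t in range(len(h)))
            for i in range(len(symbols))]
estimate = mmse_fde(received, h, noise_var=1e-6)
print([1 if e.real >= 0 else -1 for e in estimate])
```

The one-division-per-tone structure is what gives frequency-domain equalizers their low computational complexity; the EP-based receivers of the thesis refine the per-tone statistics iteratively rather than applying a single linear filter.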
7

Saade, Alaa. "Spectral inference methods on sparse graphs : theory and applications." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEE024/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Face au déluge actuel de données principalement non structurées, les graphes ont démontré, dans une variété de domaines scientifiques, leur importance croissante comme langage abstrait pour décrire des interactions complexes entre des objets complexes. L'un des principaux défis posés par l'étude de ces réseaux est l'inférence de propriétés macroscopiques à grande échelle, affectant un grand nombre d'objets ou d'agents, sur la seule base des interactions microscopiques qu'entretiennent leurs constituants élémentaires. La physique statistique, créée précisément dans le but d'obtenir les lois macroscopiques de la thermodynamique à partir d'un modèle idéal de particules en interaction, fournit une intuition décisive dans l'étude des réseaux complexes. Dans cette thèse, nous utilisons des méthodes issues de la physique statistique des systèmes désordonnés pour mettre au point et analyser de nouveaux algorithmes d'inférence sur les graphes. Nous nous concentrons sur les méthodes spectrales, utilisant certains vecteurs propres de matrices bien choisies, et sur les graphes parcimonieux, qui contiennent une faible quantité d'information. Nous développons une théorie originale de l'inférence spectrale, fondée sur une relaxation de l'optimisation de certaines énergies libres en champ moyen. Notre approche est donc entièrement probabiliste, et diffère considérablement des motivations plus classiques fondées sur l'optimisation d'une fonction de coût. Nous illustrons l'efficacité de notre approche sur différents problèmes, dont la détection de communautés, la classification non supervisée à partir de similarités mesurées aléatoirement, et la complétion de matrices.
In an era of unprecedented deluge of (mostly unstructured) data, graphs are proving more and more useful, across the sciences, as a flexible abstraction to capture complex relationships between complex objects. One of the main challenges arising in the study of such networks is the inference of macroscopic, large-scale properties affecting a large number of objects, based solely on the microscopic interactions between their elementary constituents. Statistical physics, precisely created to recover the macroscopic laws of thermodynamics from an idealized model of interacting particles, provides significant insight to tackle such complex networks. In this dissertation, we use methods derived from the statistical physics of disordered systems to design and study new algorithms for inference on graphs. Our focus is on spectral methods, based on certain eigenvectors of carefully chosen matrices, and sparse graphs, containing only a small amount of information. We develop an original theory of spectral inference based on a relaxation of various mean-field free-energy optimizations. Our approach is therefore fully probabilistic, and contrasts with more traditional motivations based on the optimization of a cost function. We illustrate the efficiency of our approach on various problems, including community detection, randomized similarity-based clustering, and matrix completion.
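The spirit of the spectral methods described above, reading large-scale structure off the eigenvectors of a well-chosen matrix, can be illustrated on a toy community detection problem. The sketch below is an illustrative simplification: it uses the plain adjacency matrix of a dense two-block graph and power iteration, not the specific operators or sparse regimes analysed in the thesis:

```python
import random

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def normalize(v):
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v]

def power_iteration(A, ortho=None, iters=500, seed=0):
    """Leading eigenvector of symmetric A, optionally kept orthogonal to `ortho`."""
    rng = random.Random(seed)
    v = normalize([rng.gauss(0, 1) for _ in A])
    for _ in range(iters):
        if ortho is not None:
            dot = sum(x * y for x, y in zip(v, ortho))
            v = [x - dot * y for x, y in zip(v, ortho)]  # deflate the top eigenvector
        v = normalize(matvec(A, v))
    return v

# Two noisy communities: dense blocks on the diagonal, sparse links across.
rng = random.Random(42)
n = 40
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        same = (i < n // 2) == (j < n // 2)
        if rng.random() < (0.5 if same else 0.05):
            A[i][j] = A[j][i] = 1.0

v1 = power_iteration(A)
v2 = power_iteration(A, ortho=v1, seed=1)  # second eigenvector: its signs split the groups
labels = [x >= 0 for x in v2]
print(labels)
```

In the sparse regimes studied in the thesis, the adjacency spectrum is polluted by localized eigenvectors, which is precisely why operators derived from mean-field free energies are needed instead.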
8

Genaud, Stéphane. "Exécutions de programmes parallèles à passage de messages sur grille de calcul." Habilitation à diriger des recherches, Université Henri Poincaré - Nancy I, 2009. http://tel.archives-ouvertes.fr/tel-00440503.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Le document présente une synthèse de travaux sur le déploiement, l'utilisation et les techniques de mise en œuvre d'applications développées selon un modèle de programmation à passage de messages sur des grilles de calcul. La première partie décrit les performances observées sur la période 2002-2006 sur une plateforme à l'échelle de la France, ainsi que les gains obtenus par équilibrage de charge. La deuxième partie décrit un nouvel intergiciel baptisé P2P-MPI, qui synthétise un ensemble de propositions pour améliorer la prise en charge de tels programmes à passage de messages.
9

Kurisummoottil, Thomas Christo. "Sparse Bayesian learning, beamforming techniques and asymptotic analysis for massive MIMO." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Des antennes multiples du côté de la station de base peuvent être utilisées pour améliorer l'efficacité spectrale et l'efficacité énergétique des technologies sans fil de nouvelle génération. En effet, le MIMO massif (multiple-input multiple-output) est considéré comme une technologie prometteuse pour apporter les avantages susmentionnés à la norme sans fil de cinquième génération, communément appelée 5G New Radio (5G NR). Dans cette monographie, nous explorerons un large éventail de sujets du MIMO multi-utilisateurs (MU-MIMO) pertinents pour la 5G NR :
• la conception de formation de faisceaux (BF) maximisant le débit somme et sa robustesse à une information partielle d'état de canal à l'émetteur (CSIT) ;
• l'analyse asymptotique des différentes techniques de BF en MIMO massif ;
• les méthodes d'estimation bayésienne de canal utilisant l'apprentissage bayésien parcimonieux.
L'une des techniques proposées dans la littérature pour contourner la complexité matérielle et la consommation d'énergie du MIMO massif est la formation de faisceaux hybride. Nous proposons une conception de phaseur analogique globalement optimale utilisant la technique du recuit déterministe, qui nous a valu le prix du meilleur article étudiant. En outre, afin d'analyser le comportement en grande dimension des systèmes MIMO massifs, nous avons utilisé des techniques de la théorie des matrices aléatoires et obtenu des expressions simplifiées du débit somme. Enfin, nous avons également examiné le problème de récupération de signal parcimonieux en utilisant la technique dite d'apprentissage bayésien parcimonieux (SBL).
Multiple antennas at the base station side can be used to enhance the spectral efficiency and energy efficiency of next-generation wireless technologies. Indeed, massive multiple-input multiple-output (MIMO) is seen as one promising technology to bring the aforementioned benefits to the fifth-generation wireless standard, commonly known as 5G New Radio (5G NR). In this monograph, we will explore a wide range of potential topics in multi-user MIMO (MU-MIMO) relevant to 5G NR:
• sum-rate-maximizing beamforming (BF) design and robustness to partial channel state information at the transmitter (CSIT);
• asymptotic analysis of the various BF techniques in massive MIMO; and
• Bayesian channel estimation methods using sparse Bayesian learning.
One of the potential techniques proposed in the literature to circumvent the hardware complexity and power consumption in massive MIMO is hybrid beamforming. We propose a globally optimal analog phasor design using the technique of deterministic annealing, which won us the best student paper award. Further, in order to analyze the large-system behaviour of massive MIMO systems, we utilized techniques from random matrix theory and obtained simplified sum-rate expressions. Finally, we also looked at the Bayesian sparse signal recovery problem using the technique called sparse Bayesian learning (SBL). We proposed low-complexity SBL algorithms using a combination of approximate inference techniques such as belief propagation (BP), expectation propagation, and mean-field (MF) variational Bayes. We proposed an optimal partitioning of the different parameters (in the factor graph) into either MF or BP nodes based on Fisher information matrix analysis.
10

Raji, Mourad. "Algorithme de reconnaissance de formes discrètes par passage au continu. Application à la recherche de similarité moléculaire et à la mesure de chiralité géométrique." Paris 7, 1996. http://www.theses.fr/1996PA077270.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Les travaux de cette thèse présentent une méthode originale pour l'appariement de structures discrètes quand les correspondances sont inconnues. Cette méthode évite le traitement point par point, base des méthodes existantes, qui comporte d'importants risques d'explosion combinatoire, et considère les structures à comparer dans leur totalité. L'idée de base de cette méthode est la transformation de l'une des structures à traiter en une entité continue, par interpolation de ses projections sur les plans d'un repère orthonormé à l'aide de splines cubiques. Une transformation t' composée de rotations et de translations est recherchée ; la transformation t' doit pouvoir ramener les projections de la seconde structure sur la représentation continue de la première. À la suite de cette étape, une proposition d'isomorphismes est faite, et la recherche d'une seconde transformation t est opérée afin de ramener l'une des structures sur la seconde. Les temps de réponse constatés présentent une évolution logarithmique, alors que les autres méthodes ont une évolution exponentielle. L'intérêt de cette méthode devient évident dans le cas du traitement de structures importantes. Contrairement aux autres méthodes, celle présentée dans cette thèse fait la différence entre un objet et son image miroir, d'où l'idée de son utilisation dans le calcul de la chiralité. Les résultats obtenus ont été plus que satisfaisants.

Books on the topic "Algorithme de passage de message":

1

Laurent, Xavier William. Message Pour un Passage. Lulu Press, Inc., 2017.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wilson, Walter T. The Gospel of Matthew. Wm. B. Eerdmans Publishing Co., 2022. http://dx.doi.org/10.5040/bci-0013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
What was the original purpose of the Gospel of Matthew? For whom was it written? In this magisterial two-volume commentary, Walter Wilson interprets Matthew as a catechetical work that expresses the ideological and institutional concerns of a faction of disaffected Jewish followers of Jesus in the late first century CE. Wilson’s compelling thesis frames Matthew’s Gospel as not only a continuation of the biblical story but also as a didactic narrative intended to shape the commitments and identity of a particular group that saw itself as a beleaguered, dissident minority. Thus, the text clarifies Jesus’s essential Jewish character as the “Son of David” while also portraying him in opposition to prominent religious leaders of his day – most notably the Pharisees – and open to cordial association with non-Jews. Through meticulous engagement with the Greek text of the Gospel, as well as relevant primary sources and secondary literature, Wilson offers a wealth of insight into the first book of the New Testament. After an introduction exploring the background of the text, its genre and literary features, and its theological orientation, Wilson explicates each passage of the Gospel with thorough commentary on the intended message to first-century readers about topics like morality, liturgy, mission, group discipline, and eschatology. Scholars, students, pastors, and all readers interested in what makes the Gospel of Matthew distinctive among the Synoptics will appreciate and benefit from Wilson’s deep contextualization of the text, informed by his years of studying the New Testament and Christian origins.
3

Wilson, Walter T. The Gospel of Matthew. Wm. B. Eerdmans Publishing Co., 2022. http://dx.doi.org/10.5040/bci-0014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
What was the original purpose of the Gospel of Matthew? For whom was it written? In this magisterial two-volume commentary, Walter Wilson interprets Matthew as a catechetical work that expresses the ideological and institutional concerns of a faction of disaffected Jewish followers of Jesus in the late first century CE. Wilson’s compelling thesis frames Matthew’s Gospel as not only a continuation of the biblical story but also as a didactic narrative intended to shape the commitments and identity of a particular group that saw itself as a beleaguered, dissident minority. Thus, the text clarifies Jesus’s essential Jewish character as the “Son of David” while also portraying him in opposition to prominent religious leaders of his day – most notably the Pharisees – and open to cordial association with non-Jews. Through meticulous engagement with the Greek text of the Gospel, as well as relevant primary sources and secondary literature, Wilson offers a wealth of insight into the first book of the New Testament. After an introduction exploring the background of the text, its genre and literary features, and its theological orientation, Wilson explicates each passage of the Gospel with thorough commentary on the intended message to first-century readers about topics like morality, liturgy, mission, group discipline, and eschatology. Scholars, students, pastors, and all readers interested in what makes the Gospel of Matthew distinctive among the Synoptics will appreciate and benefit from Wilson’s deep contextualization of the text, informed by his years of studying the New Testament and Christian origins.

Book chapters on the topic "Algorithme de passage de message":

1

Soron, Antony. "Le Message d’Andrée Chedid ou la condition sine qua non du « bon passage »." In Le Bon Passage, 195–205. Presses Universitaires de Bordeaux, 2016. http://dx.doi.org/10.4000/books.pub.15446.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mclay, Mark. "The end? Poverty politics and the ‘Reagan Revolution’, 1977–81." In The Republican Party and the War on Poverty: 1964-1981, 243–80. Edinburgh University Press, 2021. http://dx.doi.org/10.3366/edinburgh/9781474475525.003.0008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This chapter explores the impact of the ‘Reagan Revolution’ on anti-poverty policies. It begins by charting Reagan’s path to the White House and how this was helped by a political environment that was turning away from government solutions to social problems. It shows how Reagan had been the War on Poverty’s chief opponent throughout his political career and that he was successful in continuing to prosecute his anti-welfare message during the 1980 election against Jimmy Carter. The heart of the chapter then shows how Reagan was able to put his anti-poverty message into policy, through the passage of the Omnibus Budget Reconciliation Act (OBRA). In doing so, Reagan demonstrated his talent as a party leader and his skill as a communicator.
3

Katz, Wendy Jean. "Conclusion." In A True American, 151–62. Fordham University Press, 2022. http://dx.doi.org/10.5422/fordham/9780823298563.003.0008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The conclusion considers the fate of Walcutt’s portrait of Commodore Perry during the Colonial Revival of the early twentieth century, when Progressive-era elites turned to the colonial past to reassert cultural control. The 1928 bronze reproduction of Walcutt’s statue for the Capitol of Rhode Island was a sign of the recurrence of nativism in this later period, which saw the passage of stringent and racist immigration laws. But Walcutt’s “fiery” portrait style continued to carry Young America’s message of expanded rights for ordinary people. Rhode Island, a bastion of “old stock” rule, was transitioning to less restricted voting requirements, which for the first time permitted its Catholic majority to have a voice. So too Walcutt’s affiliation with the Taft family in Ohio reflects both the strength of nativism within the Republican party and internationalists’ reaction against it.
4

Murphy, Mary-Elizabeth B. "Introduction." In Jim Crow Capital, 1–14. University of North Carolina Press, 2018. http://dx.doi.org/10.5149/northcarolina/9781469646725.003.0001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This introduction contextualizes black women’s politics within the historical and social landscape of political culture in black Washington. While African American women’s political activism stretched back to the seventeenth century, it was during the 1920s and 1930s that their political campaigns gained more visibility, and Washington, D.C. was a key location for this process. Inspired by the passage of the Nineteenth Amendment and emboldened by World War I’s message of democracy, black women formed partisan organizations, testified in Congress, weighed in on legislation, staged protest parades, and lobbied politicians. But in addition to their formal political activities, black women also waged informal politics by expressing workplace resistance, self-defense toward violence, and performances of racial egalitarianism, democracy, and citizenship in a city that very often denied them all of these rights. Jim Crow Capital connects black women’s formal and informal politics to illustrate the complexity of their activism.
5

Fleegler, Robert L. "Dukakis’s Triumph." In Brutal Campaign, 64–93. University of North Carolina PressChapel Hill, NC, 2023. http://dx.doi.org/10.5149/northcarolina/9781469673370.003.0004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract This chapter shows how Michael Dukakis’s staying power allowed him to win the Democratic nomination over a diverse field that included Paul Simon, Dick Gephardt, Al Gore, and Jesse Jackson. More of the major trends of modern politics became clear. Gephardt rode to victory in Iowa using a populist antitrade message that previewed a generation of politicians who would propose protectionist politics to appeal to working-class white voters. In addition, Jesse Jackson’s success created a brief moment where, for the first time, it appeared a black candidate had an opportunity to win a major party nomination. Though Jackson fell short, his campaign represented a key middle point between the passage of the Voting Rights Act of 1965 and the election of Barack Obama in 2008. Eventually, Dukakis defeated Gore, who was trying to run as a more moderate Democrat, in New York to seal his victory.
6

Wu, Chuan-Kun. "Key Management." In IT Policy and Ethics, 728–53. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2919-6.ch033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In secure communications, key management is not as simple as managing metal keys, which can be kept on a key ring or simply put in a pocket. Suppose Alice wants to transmit some confidential information to Bob over a public network such as the Internet. Alice could simply encrypt the message using a known cipher such as AES, and then transmit the ciphertext to Bob. However, in order to enable Bob to decrypt the ciphertext to get the original message, in a traditional cipher system Bob needs to have the encryption key. How to let Alice securely and efficiently transmit the encryption key to Bob is a problem of key management. An intuitive approach would be to use a secure channel for the key transmission; this worked in earlier years, but is not a desirable solution in today’s electronic world. Since the invention of public key cryptography, the key management problem with respect to secret key transmission has been solved: one can either employ the Diffie-Hellman key agreement scheme or use a public key cryptographic algorithm to encrypt the encryption key (which is often known as a session key). This approach is secure against passive attacks, but is vulnerable to active attacks (more precisely, man-in-the-middle attacks). So there must be a way to authenticate the identity of the communicating entities. This leads to public key management, where the public key infrastructure (PKI) is a typical set of practical protocols, and there is also a set of international standards about PKI. With respect to private key management, the aim is to prevent keys from being lost or stolen. To prevent a key from being lost, one way is to use secret sharing, and another is to use the key escrow technique. Both aspects have many research outcomes and practical solutions. With respect to keys being stolen, a practical solution is to use a password to encrypt the key. Hence, there are many password-based security protocols in different applications.
This chapter presents a comprehensive description of how each aspect of key management works. Topics covered include key agreement, group-based key agreement and key distribution, PKI mechanisms, secret sharing, key escrow, password-associated key management, and key management in PGP and UMTS systems.
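The Diffie-Hellman key agreement mentioned above can be sketched in a few lines. The parameters below are toy values of our own choosing, not from the chapter; real deployments use standardized large-prime groups or elliptic curves. As the abstract notes, this unauthenticated exchange is secure against passive eavesdroppers but vulnerable to man-in-the-middle attacks:

```python
import secrets

# Toy group parameters (demo only): p is the largest prime below 2**64.
p = 2**64 - 59
g = 5

def keypair():
    priv = secrets.randbelow(p - 2) + 1   # random private exponent in [1, p-2]
    pub = pow(g, priv, p)                 # public value g^priv mod p
    return priv, pub

# Alice and Bob each publish a public value over the open network...
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# ...and each derives the same shared secret from the other's public value:
# (g^b)^a = (g^a)^b mod p. The shared secret can then protect a session key.
alice_secret = pow(b_pub, a_priv, p)
bob_secret = pow(a_pub, b_priv, p)
assert alice_secret == bob_secret
```

In practice the derived secret is fed through a key-derivation function before use, and the exchanged public values are authenticated (e.g. via PKI certificates) to defeat the man-in-the-middle attack described above.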
7

Stephanatos, Gerassimos. "Nouvelles perspectives en psychanalyse à partir de l'œuvre de Piera Aulagnier." In Nouvelles perspectives en psychanalyse à partir de l'œuvre de Piera Aulagnier, 33–51. In Press, 2018. http://dx.doi.org/10.3917/pres.barre.2018.01.0034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
By broadening the Freudian psychic scene, P. Aulagnier’s metapsychology gives access to an originary somato-psychic space of pictographic representational activity. Starting from the action of the pictogram, this article raises the question of the existence of an internal presenting and image-making force, considered with regard to the emergence of representation and the self-constitution of the psyche, namely in its relation to the sensory body, to figurability, to reflexivity, and to creative indetermination. The passage from the pictogram, as an image of the bodily thing, to the primary and secondary processes is approached through certain clinical situations that require the analyst’s figurative contribution. This allows the hypothesis of a possible circularity of sensory-erogenous-affective in-formation between heterogeneous psychic spaces and a potential appropriation of the message, according to the postulates governing the functioning of each space. The analyst’s figurative and interpretive constructions become part of this self-creative work of the subject, which signs the poiesis of oneself and of the world.
8

Brescia, Ray. "Introduction." In The Future of Change, 1–12. Cornell University Press, 2020. http://dx.doi.org/10.7591/cornell/9781501748110.003.0001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This introductory chapter details the story of the passage of the G.I. Bill, revealing how an adaptive grassroots network utilized all the media technologies available to it at the time in creative ways—from the mail and the telegraph to the radio and the cinema—to promote a positive, inclusive message and bring about social change. Innovation in communications technologies created an opportunity for the American Legion; it had at its disposal a vast array of tools to not just communicate with but also coordinate the efforts of its vast network of local chapters to promote adoption of the program. This connection between communications technology and a social movement is not accidental. U.S. history reveals the deep relationship between social change and innovation in the means of communication. Thus, this book examines the link between, on the one hand, innovations in communications technology and methods and, on the other, social movements that appear to have emerged in their wake. It also identifies the components of the successes and failures of these same movements that seem to have a symbiotic relationship to the technology that fuels them.
9

Krupa, Natalia. "Konserwacja jedwabnego obicia ze ścian kapitularza Archiwum Krakowskiej Kapituły Katedralnej – strategia zarządzania projektem ochrony." In Studia z dziejów katedry na Wawelu, 391–408. Ksiegarnia Akademicka Publishing, 2023. http://dx.doi.org/10.12797/9788381389211.23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The primary protected values of historic objects include authenticity, integrity, and legibility of historical communication. By analyzing a monument’s state of preservation, we can interpret the passage of time through the traces of its use, patina, or damage. The protection of a monument must be preceded by a thorough understanding of its historical message and by defining the values it carries, as well as by identifying the role of the object within a broader contextual framework. Only in this manner can the value of the object be determined, along with its features and elements requiring protection. The implementation of the conservation project for the silk wall hanging from the Chapter House of the Cracow Cathedral Chapter’s Archive provides a background for discussing the main principles of the preservation process management strategy, based on the identification of threats, care plans, and monitoring of the risk of deterioration of the monument. A proper preservation strategy for the accumulated material assets entails responsibility for the quality of work and research aimed at reconstructing historical facts and narratives. This responsibility takes on particular significance in the days of ongoing debate within relevant communities regarding the current position on the contemporary heritage conservation model.
10

Koch, Christof. "Computing with Neurons: A Summary." In Biophysics of Computation. Oxford University Press, 1998. http://dx.doi.org/10.1093/oso/9780195104912.003.0027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
We have now arrived at the end of the book. The first 16 chapters dealt with linear and nonlinear cable theory, voltage-dependent ionic currents, the biophysical origin of spike initiation and propagation, the statistical properties of spike trains and neural coding, bursting, dendritic spines, synaptic transmission and plasticity, the types of interactions that can occur among synaptic inputs in a passive or active dendritic arbor, and the diffusion and buffering of calcium and other ions. We attempted to weave these disparate threads into a single tapestry in Chaps. 17-19, demonstrating how these elements interact within a single neuron. The penultimate chapter dealt with various unconventional biophysical and biochemical mechanisms that could instantiate computations at the molecular and the network levels. It is time to summarize. What have we learned about the way brains do or do not compute? The brain has frequently been compared to a universal Turing machine (for a very lucid account of this, see Hofstadter, 1979). A Turing machine is a mathematical abstraction meant to clarify what is meant by algorithm, computation, and computable. Think of it as a machine with a finite number of internal states and an infinite tape that can read messages composed with a finite alphabet, write an output, and store intermediate results as memory. A universal Turing machine is one that can mimic any arbitrary Turing machine. We are not interested here in the renewed debate as to whether or not the brain can, in principle, be treated as such a machine (Lucas, 1964; Penrose, 1989), but in whether it is useful to conceptualize nervous systems in this manner. Because brains have limited precision and only finite amounts of memory, and do not live forever, they cannot possibly be like “real” Turing machines. It is therefore more appropriate to ask: to what extent can brains be treated as finite state machines or automata? Such a machine has only finite computational and memory resources (Hopcroft and Ullman, 1979). The answer has to be an ambiguous “it depends.”
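The finite state machine the abstract contrasts with a Turing machine can be made concrete with a toy example of our own (not from the book): a two-state automaton over a binary alphabet that accepts exactly the strings containing an even number of 1s. Unlike a Turing machine, it has no tape; its only "memory" is which of its finitely many states it is in:

```python
def accepts_even_ones(s):
    """Run a two-state automaton over a binary string; accept iff the
    number of 1s seen is even ('even' is both start and accepting state)."""
    transitions = {("even", "0"): "even", ("even", "1"): "odd",
                   ("odd", "0"): "odd", ("odd", "1"): "even"}
    state = "even"
    for ch in s:
        state = transitions[(state, ch)]  # one state change per input symbol
    return state == "even"

# accepts_even_ones("1010") -> True (two 1s)
# accepts_even_ones("111")  -> False (three 1s)
```

The point of the contrast is resource bounds: this machine can never count arbitrarily high, only distinguish finitely many situations, which is the sense in which the chapter asks whether brains are better modeled as automata than as Turing machines.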

Conference papers on the topic "Algorithme de passage de message":

1

Wanderley, Juan B. V., and Carlos Levi. "Free Surface Viscous Flow Around a Ship Model." In 25th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2006. http://dx.doi.org/10.1115/omae2006-92165.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The present stage of viscous flow numerical analysis, combined with the latest advances in computer technology, has made viable the mathematical treatment of many robust and complex engineering problems of practical interest. Some numerical problems whose solutions would have been unthinkable no more than ten years ago may now be dealt with in a reliable and fairly accurate manner. A true example of this kind of problem is the calculation of hydrodynamic loads acting on yawing ships. The solution of such a problem raises practical interest due to applications such as stationary FPSO/FSO ships facing sea currents, commonly used in offshore deep-water oil production. In the present solution, the complete incompressible Navier–Stokes (N-S) equations are solved by means of an algorithm that applies the Beam and Warming [1] approximate factorization scheme to simulate the flow around a Wigley hull. The numerical code was implemented using the Message Passing Interface (MPI) and can be run on a cluster with an arbitrary number of computers. The good agreement with other numerical and experimental data obtained from the literature and the high efficiency of the algorithm indicate its potential to be used as an effective tool in ship design.
2

Chen, J. P., and W. R. Briley. "A Parallel Flow Solver for Unsteady Multiple Blade Row Turbomachinery Simulations." In ASME Turbo Expo 2001: Power for Land, Sea, and Air. American Society of Mechanical Engineers, 2001. http://dx.doi.org/10.1115/2001-gt-0348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A parallel flow solver has been developed to provide a turbomachinery flow simulation tool that extends the capabilities of a previous single–processor production code (TURBO) for unsteady turbomachinery flow analysis. The code solves the unsteady Reynolds-averaged Navier-Stokes equations with a k–ε turbulence model. The parallel code now includes most features of the serial production code, but is implemented in a portable, scalable form for distributed–memory parallel computers using MPI message passing. The parallel implementation employs domain decomposition and supports general multiblock grids with arbitrary grid–block connectivity. The solution algorithm is an iterative implicit time–accurate scheme with characteristics–based finite–volume spatial discretization. The Newton subiterations are solved using a concurrent block–Jacobi symmetric Gauss–Seidel (BJ–SGS) relaxation scheme. Unsteady blade–row interaction is treated either by simulating full or periodic sectors of blade–rows, or by solving within a single passage for each row using phase–lag and wake–blade interaction approximations at boundaries. A scalable dynamic sliding–interface algorithm is developed here, with an efficient parallel data communication between blade rows in relative motion. Parallel computations are given here for flat plate, single blade row (Rotor 67) and single stage (Stage 37) test cases, and these results are validated by comparison with corresponding results from the previously validated serial production code. Good speedup performance is demonstrated for the single–stage case with a relatively small grid of 600,000 points.
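The domain-decomposition message passing described in this and the surrounding abstracts follows a common pattern: each rank owns one subdomain of the grid and exchanges boundary ("ghost") cells with its neighbors before each update. The sketch below is our own illustration of that pattern, with Python threads and queues standing in for MPI ranks and `MPI_Sendrecv`; it is not the authors' code:

```python
import threading, queue

def worker(rank, data, send_q, recv_q, results):
    # Send my boundary cell to the neighbour, receive theirs as a ghost cell
    # (the role MPI_Sendrecv plays at each subdomain interface).
    boundary = data[-1] if rank == 0 else data[0]
    send_q.put(boundary)
    ghost = recv_q.get()
    # A trivial stand-in for a stencil update: average across the interface.
    results[rank] = (boundary + ghost) / 2

# Two "ranks", each owning half of a 1-D grid; queues act as the interconnect.
q01, q10 = queue.Queue(), queue.Queue()
results = {}
t0 = threading.Thread(target=worker, args=(0, [1.0, 2.0], q01, q10, results))
t1 = threading.Thread(target=worker, args=(1, [3.0, 4.0], q10, q01, results))
t0.start(); t1.start(); t0.join(); t1.join()
# Both ranks compute the same interface value from the exchanged cells:
# results == {0: 2.5, 1: 2.5}
```

In a real solver each rank would exchange whole faces of a 3-D block with every neighbor in the block-connectivity graph, which is what makes the scalable sliding-interface communication between blade rows nontrivial.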
3

Ji, Shanhong, and Feng Liu. "Computation of Flutter of Turbomachinery Cascades Using a Parallel Unsteady Navier-Stokes Code." In ASME 1998 International Gas Turbine and Aeroengine Congress and Exhibition. American Society of Mechanical Engineers, 1998. http://dx.doi.org/10.1115/98-gt-043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
A quasi-three-dimensional multigrid Navier-Stokes solver on single and multiple passage domains is presented for solving unsteady flows around oscillating turbine and compressor blades. The conventional “direct store” method is used for applying the phase-shifted periodic boundary condition over a single blade passage. A parallel version of the solver using the Message Passing Interface (MPI) standard is developed for multiple passage computations. In the parallel multiple passage computations, the phase-shifted periodic boundary condition is converted to simple in-phase periodic condition. Euler and Navier-Stokes solutions are obtained for unsteady flows through an oscillating turbine cascade blade row with both the sequential and the parallel code. It is found that the parallel code offers almost linear speedup with multiple CPUs. In addition, significant improvement is achieved in convergence of the computation to a periodic unsteady state in the parallel multiple passage computations due to the use of in-phase periodic boundary conditions as compared to that in the single passage computations with phase-lagged periodic boundary conditions via the “direct store” method. The parallel Navier-Stokes code is also used to calculate the flow through an oscillating compressor cascade. Results are compared with experimental data and computations by other authors.
4

Esperanc¸a, Paulo T., Juan B. V. Wanderley, and Carlos Levi. "Validation of a Three-Dimensional Large Eddy Simulation Finite Difference Method to Study Vortex Induced Vibration." In 25th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2006. http://dx.doi.org/10.1115/omae2006-92367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Two-dimensional numerical simulations of vortex-induced vibration have failed to duplicate accurately the corresponding experimental data. One possible explanation is 3D effects present in the real problem that are not modeled in two-dimensional simulations. A three-dimensional finite difference method was implemented using the Large Eddy Simulation (LES) technique and the Message Passing Interface (MPI); it can be run on a cluster with an arbitrary number of computers. The good agreement with other numerical and experimental data obtained from the literature shows the good quality of the implemented code.
5

Mašat, Milan, and Adéla Štěpánková. "A few notes on the book “Call me by your name” by André Aciman." In 7th International e-Conference on Studies in Humanities and Social Sciences. Center for Open Access in Science, Belgrade, 2021. http://dx.doi.org/10.32591/coas.e-conf.07.02011m.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this article we deal with the interpretation and analysis of selected topics and motifs in the narrative of André Aciman’s Call Me by Your Name. After a summary of the story, we take a closer look at the genesis of the two men’s relationship in the context of their Jewish faith. We also depict the transformation of their animal sexual relationship into a loving relationship associated with psychic harmony. The final passage of the article is devoted to the conclusion of the book, in which the message of the publication is anchored, a message that to some extent goes beyond classifying Aciman’s work primarily as LGBT young adult literature.
6

Zhou, F. B., M. D. Duta, M. P. Henry, S. Baker, and C. Burton. "Remote Condition Monitoring for Railway Point Machine." In ASME/IEEE 2002 Joint Rail Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/rtd2002-1646.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper presents research carried out at Oxford University on condition monitoring of railway point machines. The developed condition monitoring system includes a variety of sensors for acquiring trackside data related to different parameters. Key events to be logged include time stamping of points operation, opening and closing of the case cover associated with a points machine, insertion and removal of a hand-crank, loss of supply current, and the passage of a train. The system also has built-in Web functions. This allows a remote operator using Internet Explorer to observe the condition of the point machine at any time, while the acquired data can be downloaded automatically for offline analysis, providing more detailed information on the health condition of the monitored point machine. A short daily condition report message can also be sent to relevant staff via email. Finally, experience with the four trackside-installed systems is reported.
