Academic literature on the topic 'Algorithmic number theory'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Algorithmic number theory.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
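
For instance, the Schoof review that opens the journal list below would come out roughly as follows (punctuation and capitalization vary slightly between style-guide editions, so treat these renderings as illustrative rather than authoritative):

APA: Schoof, R. (1993). Book review: Algorithmic algebraic number theory. Bulletin of the American Mathematical Society, 29(1), 111–114. http://dx.doi.org/10.1090/s0273-0979-1993-00392-6

MLA: Schoof, René. "Book Review: Algorithmic Algebraic Number Theory." Bulletin of the American Mathematical Society, vol. 29, no. 1, 1993, pp. 111–14.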

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Algorithmic number theory"

1. Schoof, René. "Book Review: Algorithmic algebraic number theory." Bulletin of the American Mathematical Society 29, no. 1 (July 1, 1993): 111–14. http://dx.doi.org/10.1090/s0273-0979-1993-00392-6.

2. W., H. C., and Michael Pohst. "Algorithmic Methods in Algebra and Number Theory." Mathematics of Computation 55, no. 192 (October 1990): 876. http://dx.doi.org/10.2307/2008461.

3. Gilman, Robert. "Algorithmic search in group theory." Journal of Algebra 545 (March 2020): 237–44. http://dx.doi.org/10.1016/j.jalgebra.2019.08.021.

4. Roman’kov, V. A. "Algorithmic theory of solvable groups." Prikladnaya Diskretnaya Matematika, no. 52 (2021): 16–64. http://dx.doi.org/10.17223/20710410/52/2.

Abstract:
The purpose of this survey is to give some picture of what is known about algorithmic and decision problems in the theory of solvable groups. We will provide a number of references to various results, which are presented without proof. Naturally, the choice of the material reported on reflects the author’s interests and many worthy contributions to the field will unfortunately go without mentioning. In addition to achievements in solving classical algorithmic problems, the survey presents results on other issues. Attention is paid to various aspects of modern theory related to the complexity of algorithms, their practical implementation, random choice, asymptotic properties. Results are given on various issues related to mathematical logic and model theory. In particular, a special section of the survey is devoted to elementary and universal theories of solvable groups. Special attention is paid to algorithmic questions regarding rational subsets of groups. Results on algorithmic problems related to homomorphisms, automorphisms, and endomorphisms of groups are presented in sufficient detail.

5. Hofmann, Tommy, and Carlo Sircana. "On the computation of overorders." International Journal of Number Theory 16, no. 4 (December 6, 2019): 857–79. http://dx.doi.org/10.1142/s179304212050044x.

Abstract:
The computation of a maximal order of an order in a semisimple algebra over a global field is a classical well-studied problem in algorithmic number theory. In this paper, we consider the related problems of computing all minimal overorders as well as all overorders of a given order. We use techniques from algorithmic representation theory and the theory of minimal integral ring extensions to obtain efficient and practical algorithms, whose implementation is publicly available.
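
For readers who want to experiment, the classical problem named in the abstract's first sentence, computing a maximal order, can be tried directly in Sage (which is referenced elsewhere on this page). This is a minimal sketch of that classical computation only, not the authors' overorder algorithms, and the field and order below are arbitrary examples:

```python
# Run inside Sage (https://www.sagemath.org). Computes the maximal order
# (ring of integers) containing a deliberately non-maximal order.
x = polygen(QQ)
K = NumberField(x**3 - x - 1, 'a')   # an arbitrary cubic field
a = K.gen()
O = K.order(3*a)                     # the non-maximal order Z[3a]
OK = K.maximal_order()               # its maximal overorder O_K
# disc(O) = [O_K : O]^2 * disc(O_K), so the quotient is the index squared
print(O.discriminant() // OK.discriminant())
```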

6. Cremona, J. E. "Algorithmic Algebraic Number Theory (Encyclopedia of Mathematics and its Applications)." Bulletin of the London Mathematical Society 23, no. 1 (January 1991): 94–97. http://dx.doi.org/10.1112/blms/23.1.94.

7. Baumslag, Gilbert, Frank B. Cannonito, Derek J. S. Robinson, and Dan Segal. "The algorithmic theory of polycyclic-by-finite groups." Journal of Algebra 142, no. 1 (September 1991): 118–49. http://dx.doi.org/10.1016/0021-8693(91)90221-s.

8. Ushakov, Alexander. "Algorithmic theory of free solvable groups: Randomized computations." Journal of Algebra 407 (June 2014): 178–200. http://dx.doi.org/10.1016/j.jalgebra.2014.02.014.

9. Möhring, Rolf H. "Algorithmic graph theory and perfect graphs." Order 3, no. 2 (June 1986): 207–8. http://dx.doi.org/10.1007/bf00390110.

10. Balakrishnan, Jennifer S. "Coleman integration for even-degree models of hyperelliptic curves." LMS Journal of Computation and Mathematics 18, no. 1 (2015): 258–65. http://dx.doi.org/10.1112/s1461157015000029.

Abstract:
The Coleman integral is a $p$-adic line integral that encapsulates various quantities of number theoretic interest. Building on the work of Harrison [J. Symbolic Comput. 47 (2012) no. 1, 89–101], we extend the Coleman integration algorithms in Balakrishnan et al. [Algorithmic number theory, Lecture Notes in Computer Science 6197 (Springer, 2010) 16–31] and Balakrishnan [ANTS-X: Proceedings of the Tenth Algorithmic Number Theory Symposium, Open Book Series 1 (Mathematical Sciences Publishers, 2013) 41–61] to even-degree models of hyperelliptic curves. We illustrate our methods with numerical examples computed in Sage.
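
Sage already exposes the odd-degree Coleman integration functionality that this paper extends. The following is a sketch of a Coleman integral between two points on an odd-degree model; the curve, prime, and points are arbitrary choices, and the method name coleman_integrals_on_basis is assumed from the Balakrishnan et al. code distributed with Sage:

```python
# Run inside Sage. Coleman integrals of the basis differentials
# x^i dx/(2y) between two points on y^2 = x^5 - x + 1 over Q_7
# (the curve has good reduction at 7).
K = Qp(7, prec=10)
x = polygen(K)
C = HyperellipticCurve(x**5 - x + 1)
P, Q = C(0, 1), C(1, 1)          # two easily found rational points
print(C.coleman_integrals_on_basis(P, Q))
```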

Dissertations / Theses on the topic "Algorithmic number theory"

1. Smith, Benjamin Andrew. "Explicit endomorphisms and correspondences." PhD thesis, University of Sydney, 2006. http://hdl.handle.net/2123/1066.

Abstract:
In this work, we investigate methods for computing explicitly with homomorphisms (and particularly endomorphisms) of Jacobian varieties of algebraic curves. Our principal tool is the theory of correspondences, in which homomorphisms of Jacobians are represented by divisors on products of curves. We give families of hyperelliptic curves of genus three, five, six, seven, ten and fifteen whose Jacobians have explicit isogenies (given in terms of correspondences) to other hyperelliptic Jacobians. We describe several families of hyperelliptic curves whose Jacobians have complex or real multiplication; we use correspondences to make the complex and real multiplication explicit, in the form of efficiently computable maps on ideal class representatives. These explicit endomorphisms may be used for efficient integer multiplication on hyperelliptic Jacobians, extending Gallant--Lambert--Vanstone fast multiplication techniques from elliptic curves to higher dimensional Jacobians. We then describe Richelot isogenies for curves of genus two; in contrast to classical treatments of these isogenies, we consider all the Richelot isogenies from a given Jacobian simultaneously. The inter-relationship of Richelot isogenies may be used to deduce information about the endomorphism ring structure of Jacobian surfaces; we conclude with a brief exploration of these techniques.
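
The Gallant–Lambert–Vanstone technique mentioned in the abstract rests on a scalar decomposition step that is easy to sketch on its own. The following plain-Python toy (with made-up parameters n and lam, and no actual curve arithmetic) illustrates that step; it is not code from the thesis:

```python
# GLV scalar decomposition: write k = k1 + k2*lam (mod n) with |k1|, |k2|
# on the order of sqrt(n), so k*P can be computed as k1*P + k2*phi(P)
# with half-length scalars.

def glv_decompose(k, lam, n):
    # Extended Euclid on (n, lam), tracking t_i with s_i*n + t_i*lam = r_i;
    # every (r_i, -t_i) lies in the lattice {(x, y) : x + y*lam = 0 mod n}.
    rows = [(n, 0), (lam, 1)]
    while rows[-1][0] != 0:
        (r0, t0), (r1, t1) = rows[-2], rows[-1]
        q = r0 // r1
        rows.append((r0 - q * r1, t0 - q * t1))
    m = next(i for i, (r, _) in enumerate(rows) if r * r < n)
    v1 = (rows[m][0], -rows[m][1])
    v2 = min([(rows[m - 1][0], -rows[m - 1][1]),
              (rows[m + 1][0], -rows[m + 1][1])],
             key=lambda v: v[0] ** 2 + v[1] ** 2)
    # Round (k, 0) to the nearest lattice point c1*v1 + c2*v2 ...
    (a1, b1), (a2, b2) = v1, v2
    det = a1 * b2 - a2 * b1          # = +/- n for adjacent Euclid rows
    c1, c2 = round(k * b2 / det), round(-k * b1 / det)
    # ... and take the (short) difference as the decomposition.
    k1, k2 = k - c1 * a1 - c2 * a2, -c1 * b1 - c2 * b2
    assert (k1 + k2 * lam - k) % n == 0
    return k1, k2

print(glv_decompose(654321, 123456, 1000003))  # toy parameters
```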

2. Pellet-Mary, Alice. "Réseaux idéaux et fonction multilinéaire GGH13." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEN048/document.

Abstract:
Lattice-based cryptography is a promising area for constructing cryptographic primitives that are plausibly secure even in the presence of quantum computers. A fundamental problem related to lattices is the shortest vector problem (SVP), which asks for a shortest non-zero vector in a lattice. This problem is believed to be intractable, even quantumly. Structured lattices, for example ideal lattices or module lattices (the latter being a generalization of the former), are often used to improve the efficiency of lattice-based primitives. The security of most of the schemes based on structured lattices is related to SVP in module lattices, and a very small number of schemes can also be impacted by SVP in ideal lattices. In this thesis, we first focus on the problem of finding short vectors in ideal and module lattices. We propose an algorithm which, after some exponential pre-computation, performs better on ideal lattices than the best known algorithm for arbitrary lattices. We also present an algorithm to find short vectors in rank-2 modules, provided that we have access to an oracle solving the closest vector problem in a fixed lattice. The exponential pre-processing time and the oracle calls make these two algorithms unusable in practice. The main scheme whose security might be impacted by SVP in ideal lattices is the GGH13 multilinear map. This protocol is mainly used today to construct program obfuscators, which should render the code of a program unintelligible while preserving its functionality. In the second part of this thesis, we focus on the GGH13 map and its application to obfuscation. We first study the impact of statistical attacks on the GGH13 map and its variants. We then study the security of obfuscators based on the GGH13 map and propose a quantum attack against several such obfuscators. This quantum attack uses as a subroutine an algorithm that finds a short vector in an ideal lattice related to a secret element of the GGH13 map.
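
The basic computational object behind all of this, lattice reduction as an approximate-SVP solver, can be experimented with in Python via the fpylll bindings (an assumption of this sketch; it is not the structured-lattice algorithms of the thesis):

```python
# Approximate-SVP in miniature: LLL-reduce a basis so that its first row
# becomes a short lattice vector. Requires the fpylll package.
from fpylll import IntegerMatrix, LLL

B = IntegerMatrix.from_matrix([
    [201,  37, 0],
    [1648, 297, 0],
    [0,    0,  1],
])
LLL.reduction(B)   # in-place reduction
print(B)           # rows are now short and nearly orthogonal
```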

3. Varescon, Firmin. "Calculs explicites en théorie d'Iwasawa." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2019/document.

Abstract:
In the first chapter of this thesis we state Leopoldt's conjecture and some equivalent formulations, and we give an algorithm that checks the conjecture for a given prime p and number field. Next, assuming the conjecture true for the fixed prime p, we study the torsion part of the Galois group of the maximal abelian p-ramified p-extension of a given number field, and we present a method that effectively computes the invariant factors of this finite group. In the third chapter we give numerical results and interpret them via heuristics à la Cohen-Lenstra. In the fourth and last chapter, using our algorithm for computing this torsion module, we give new examples of number fields and primes satisfying Greenberg's conjecture.

4. Shoup, Victor. "Removing randomness from computational number theory." Madison, Wis.: University of Wisconsin-Madison, Computer Sciences Dept., 1989. http://catalog.hathitrust.org/api/volumes/oclc/20839526.html.

5. Viu Sos, Juan. "Periods and line arrangements: contributions to the Kontsevich-Zagier period conjecture and to the Terao conjecture." Thesis, Pau, 2015. http://www.theses.fr/2015PAUU3022/document.

Abstract:
The first part concerns a problem in number theory, for which we develop a geometric approach based on tools from algebraic and combinatorial geometry. Introduced by M. Kontsevich and D. Zagier in 2001, periods are complex numbers expressed as values of integrals of a special form, where both the domain and the integrand are given by polynomials with rational coefficients. The Kontsevich-Zagier period conjecture affirms that any polynomial relation between periods can be obtained by linear relations between their integral representations, expressed by classical rules of integral calculus. Using resolution of singularities, we introduce a semi-canonical reduction for periods, focusing on constructive and algorithmic methods that respect the classical rules of integral transformations: we prove that any non-zero real period, represented by an integral, can be expressed up to sign as the volume of a compact semi-algebraic set. The semi-canonical reduction permits a reformulation of the Kontsevich-Zagier conjecture in terms of volume-preserving changes of variables between compact semi-algebraic sets. Via triangulations and methods of PL-geometry, we study the obstructions of this approach as a generalization of the Third Hilbert Problem. We complete the work of J. Wan on a degree theory for periods, based on the minimal dimension of the ambient space needed to obtain such a compact reduction; this gives a first geometric notion of the transcendence of periods. We extend this study by introducing notions of geometric and arithmetic complexity for periods, based on the minimal polynomial complexity among the semi-canonical reductions of a period. The second part deals with understanding particular objects coming from algebraic geometry with a strong background in combinatorial geometry, for which we develop a dynamical approach. Logarithmic vector fields are an algebraic-analytic tool used to study sub-varieties and germs of analytic manifolds. We are concerned with the case of line arrangements in the affine or projective space, and in particular with how the combinatorial data of an arrangement determine relations between its associated logarithmic vector fields: this problem is known as the Terao conjecture. We study the module of logarithmic vector fields of an affine line arrangement via the filtration induced by the degree of the polynomial components. We determine that there exist only two types of non-trivial polynomial vector fields fixing infinitely many lines, and we describe the influence of the combinatorics of the arrangement on the expected minimal degree for these kinds of vector fields. We prove that the combinatorics do not determine the minimal degree of the logarithmic vector fields of an affine line arrangement, giving two pairs of counter-examples, each pair corresponding to a different notion of combinatorics. We determine that the dimension of the filtered spaces follows a quadratic growth from a certain degree on, depending only on the combinatorics of the arrangement, and we illustrate these formulas by computations over some examples. In order to study these filtrations computationally, we develop a library of functions in the mathematical software Sage.
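
The headline reduction, that every non-zero real period is, up to sign, the volume of a compact semi-algebraic set, can be illustrated numerically with the simplest example: pi as the area of the unit disk. A sketch with the mpmath library, unrelated to the thesis's own Sage code:

```python
# pi, a period, realized as the volume (area) of the compact
# semi-algebraic set { (x, y) : x^2 + y^2 <= 1 }.
from mpmath import mp, quad, sqrt, pi

mp.dps = 30                                     # 30 decimal digits
area = quad(lambda x: 2 * sqrt(1 - x**2), [-1, 1])
print(area)                                     # 3.14159265358979...
print(area - pi)                                # ~ 0 at working precision
```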

6. Lezowski, Pierre. "Questions d'euclidianité." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14642/document.

Abstract:
We study the norm-Euclideanity of number fields and some of its generalizations. In particular, we provide an algorithm that computes the Euclidean minimum of a number field of any signature, which allows us to settle the norm-Euclideanity of many number fields. We then extend this algorithm to deal with norm-Euclidean classes, and we obtain new examples of number fields with a non-principal norm-Euclidean class. Besides, we describe the complete list of pure cubic number fields admitting a norm-Euclidean class. Finally, we study the Euclidean property in quaternion fields: we first establish its basic properties, then study some examples. We provide the complete list of Euclidean quaternion fields that are totally definite over a number field of degree at most two.

7. Molin, Pascal. "Intégration numérique et calculs de fonctions L." PhD thesis, Université Sciences et Technologies - Bordeaux I, 2010. http://tel.archives-ouvertes.fr/tel-00537489.

Abstract:
This thesis shows that the double-exponential numerical integration method introduced by Takahasi and Mori in 1974 can be applied rigorously, and that it is relevant to high-precision computations in number theory. In particular, it contains a detailed study of the method, simple criteria for its range of application, and rigorous estimates of the error terms. Explicit and precise parameters make it easy to use for the certified computation of functions defined by integrals. The method is also applied in detail to the computation of inverse Mellin transforms of gamma factors arising in numerical computations of L-functions. Through a unified study, this work establishes the complexity of an algorithm of M. Rubinstein and proposes algorithms for computing values of arbitrary L-functions, with certified results and better complexity with respect to the precision.
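
The double-exponential scheme that the thesis makes rigorous is implemented in the mpmath Python library under the name tanh-sinh; a sketch of a 50-digit evaluation follows (mpmath's error management is heuristic, not the certified bounds developed in the thesis):

```python
# Double-exponential (tanh-sinh) quadrature at 50 digits.
from mpmath import mp, quad, exp, sqrt, pi

mp.dps = 50
val = quad(lambda t: exp(-t**2), [0, mp.inf], method='tanh-sinh')
print(val)                     # Gaussian integral over [0, oo)
print(val - sqrt(pi) / 2)      # error ~ 1e-50
```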

8. Coles, Jonathan. "Algorithms for bounding Folkman numbers." Online version of thesis, 2005. https://ritdml.rit.edu/dspace/handle/1850/2765.

9. Domingues, Riaal. "A polynomial time algorithm for prime recognition." Diss., Pretoria: [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-08212007-100529.

Books on the topic "Algorithmic number theory"

1. Bach, Eric. Algorithmic Number Theory. Cambridge, Mass.: MIT Press, 1996.

2. Hanrot, Guillaume, François Morain, and Emmanuel Thomé, eds. Algorithmic Number Theory. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14518-6.

3. Buhler, Joe P., ed. Algorithmic Number Theory. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0054849.

4. Hess, Florian, Sebastian Pauli, and Michael Pohst, eds. Algorithmic Number Theory. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11792086.

5. van der Poorten, Alfred J., and Andreas Stein, eds. Algorithmic Number Theory. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-79456-1.

6. Fieker, Claus, and David R. Kohel, eds. Algorithmic Number Theory. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45455-1.

7. Buell, Duncan, ed. Algorithmic Number Theory. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/b98210.

8. Bosma, Wieb, ed. Algorithmic Number Theory. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/10722028.

9. Adleman, Leonard M., and Ming-Deh Huang, eds. Algorithmic Number Theory. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994. http://dx.doi.org/10.1007/3-540-58691-1.

10. Cohen, Henri, ed. Algorithmic Number Theory. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-61581-4.

Book chapters on the topic "Algorithmic number theory"

1. Yan, Song Y. "Algorithmic Number Theory." In Number Theory for Computing, 139–258. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-662-04053-9_2.

2. Yan, Song Y. "Computational/Algorithmic Number Theory." In Number Theory for Computing, 173–302. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/978-3-662-04773-6_2.

3. Gilbert, Hugo, Olivier Spanjaard, Paolo Viappiani, and Paul Weng. "Reducing the Number of Queries in Interactive Value Iteration." In Algorithmic Decision Theory, 139–52. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23114-3_9.

4. Michler, Gerhard O. "High Performance Computations in Group Representation Theory." In Algorithmic Algebra and Number Theory, 399–415. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/978-3-642-59932-3_20.

5. Buchmann, Johannes, Michael J. Jacobson, Stefan Neis, Patrick Theobald, and Damian Weber. "Sieving Methods for Class Group Computation." In Algorithmic Algebra and Number Theory, 3–10. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/978-3-642-59932-3_1.

6. Decker, Wolfram, Gert-Martin Greuel, and Gerhard Pfister. "Primary Decomposition: Algorithms and Comparisons." In Algorithmic Algebra and Number Theory, 187–220. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/978-3-642-59932-3_10.

7. Dolzmann, Andreas, Thomas Sturm, and Volker Weispfenning. "Real Quantifier Elimination in Practice." In Algorithmic Algebra and Number Theory, 221–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/978-3-642-59932-3_11.

8. Kemper, Gregor. "Hilbert Series and Degree Bounds in Invariant Theory." In Algorithmic Algebra and Number Theory, 249–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/978-3-642-59932-3_12.

9. Kemper, Gregor, and Gunter Malle. "Invariant Rings and Fields of Finite Groups." In Algorithmic Algebra and Number Theory, 265–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/978-3-642-59932-3_13.

10. Martin, Bernd. "Computing Versal Deformations with Singular." In Algorithmic Algebra and Number Theory, 283–93. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/978-3-642-59932-3_14.

Conference papers on the topic "Algorithmic number theory"

1. Gamarnik, David, and Eren C. Kizildag. "The Random Number Partitioning Problem: Overlap Gap Property and Algorithmic Barriers." In 2022 IEEE International Symposium on Information Theory (ISIT). IEEE, 2022. http://dx.doi.org/10.1109/isit50566.2022.9834647.

2. Ryabko, Boris. "Application of algorithmic information theory to calibrate tests of random number generators." In 2021 XVII International Symposium Problems of Redundancy in Information and Control Systems (REDUNDANCY). IEEE, 2021. http://dx.doi.org/10.1109/redundancy52534.2021.9606440.

3. Rybalov, A. N. "Generic Complexity of Algorithmic Problems." In Mechanical Science and Technology Update. Omsk State Technical University, 2022. http://dx.doi.org/10.25206/978-5-8149-3453-6-2022-10-14.

Abstract:
The generic approach is one of the approaches to studying algorithmic problems for almost all inputs, born at the intersection of computational algebra and computer science. Within this framework, one studies algorithms that solve a problem for almost all inputs and give an undefined answer for the remaining rare inputs. This review reflects two directions of research on the generic complexity of algorithmic problems in algebra, mathematical logic, number theory, and theoretical computer science. The first direction is devoted to the construction of generic algorithms for problems that are undecidable or hard in the classical sense. In the second direction, algorithmic problems are sought that remain undecidable or hard even in the generic sense. Such problems are important in cryptography.

4. Pődör, Lea. "Can Robot Judges Solve the So-Called “Hard Cases”?" In COFOLA International 2022. Brno: Masaryk University Press, 2022. http://dx.doi.org/10.5817/cz.muni.p280-0231-2022-16.

Abstract:
From the perspective of legal theory, there are two types of cases for judges to decide: "easy cases" and "hard cases". This line of thought relates to cases that are decided by humans. The last few years have seen rapid progress in the development of artificial intelligence, and an increasing number of ideas have been put forward that envisage transferring algorithmic task execution to the world of law. Legal theory and jurisprudence are interdependent, and a solution needs to be found to the question of how much algorithms can reduce the burden on the judiciary in the application of the law. This problem is not alien to legal theory, since the idea of law as an axiomatic system and the idea of judgment machines were already present in Leibniz's philosophy.

5. Krus, Petter. "An Information Theoretical Perspective on Performance, Refinement and Cost." In ASME 2005 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2005. http://dx.doi.org/10.1115/detc2005-85403.

Abstract:
Design in general is about increasing the information content of the product/system; it is therefore natural to investigate the design process from an information-theoretical point of view. There are basically two (although related) strains of information theory: the information theory of communication and the algorithmic theory of information. In this paper the design process is described as an information transformation process, in which an initial set of requirements is transformed into a system specification. Performance and cost are both functions of complexity and refinement, which can be expressed in information-theoretical terms. The information-theoretical model is demonstrated on examples. The model has implications for the balance between the number of design parameters and the degree of convergence in design optimization. Furthermore, the relationship between concept refinement and design space expansion can be viewed in information-theoretical terms.

6. Escanaverino, Jose Martinez, Jose A. Llamos Soriz, Alejandra Garcia Toll, and Tania Ortiz Cardenas. "Rational Design Automation by Dichromatic Graphs." In ASME 2001 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2001. http://dx.doi.org/10.1115/detc2001/dac-21050.

Abstract:
As the complexity of mechanical design increases, due to larger mathematical models, the need for rational design procedures also grows. As shown elsewhere, dichromatic graphs have proven their value as tools for the algorithmic education of mechanical engineers. This paper analyzes the worth of such graphs as a means to achieve rational design solutions in complex industrial problems, covering professional case studies from plant maintenance and research & development. A real-life problem in electromechanical system reengineering is the first application example. Attention is also given to the partitioning of large problems involving many variables and relations; the design of a planetary gear unit, with a three-digit number of elements in the mathematical model, is an example problem in this area. In addition, changes and extensions to the computational problem-solving theory are included.

7. Oohama, Y. "Explicit expression of the interval algorithm for random number generation based on number systems." In IEEE Information Theory Workshop, 2005. IEEE, 2005. http://dx.doi.org/10.1109/itw.2005.1531878.

8. Yao, Bin, Shiying Kang, Xiao Zhao, Yuyan Chao, and Lifeng He. "A graph-theory-based Euler number computing algorithm." In 2015 IEEE International Conference on Information and Automation (ICIA). IEEE, 2015. http://dx.doi.org/10.1109/icinfa.2015.7279470.

9. van Dam, Wim, and Yoshitaka Sasaki. "Quantum Algorithms for Problems in Number Theory, Algebraic Geometry, and Group Theory." In Summer School on Diversities in Quantum Computation/Information. World Scientific, 2012. http://dx.doi.org/10.1142/9789814425988_0003.

10. Kobylkin, Konstantin. "Complexity and approximability for a problem of intersecting of proximity graphs with minimum number of equal disks." In Numerical Computations: Theory and Algorithms (NUMTA-2016): Proceedings of the 2nd International Conference "Numerical Computations: Theory and Algorithms", 2016. http://dx.doi.org/10.1063/1.4965392.

Reports on the topic "Algorithmic number theory"

1. Horrocks, Ian, Ulrike Sattler, and Stephan Tobies. A Description Logic with Transitive and Converse Roles, Role Hierarchies and Qualifying Number Restrictions. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.94.

Abstract:
As widely argued [HG97; Sat96], transitive roles play an important role in the adequate representation of aggregated objects: they allow these objects to be described by referring to their parts without specifying a level of decomposition. In [HG97], the Description Logic (DL) ALCHR+ is presented, which extends ALC with transitive roles and a role hierarchy. It is argued in [Sat98] that ALCHR+ is well-suited to the representation of aggregated objects in applications that require various part-whole relations to be distinguished, some of which are transitive. However, ALCHR+ allows neither the description of parts by means of the whole to which they belong, or vice versa. To overcome this limitation, we present the DL SHI which allows the use of, for example, has part as well as is part of. To achieve this, ALCHR+ was extended with inverse roles. It could be argued that, instead of defining yet another DL, one could make use of the results presented in [DL96] and use ALC extended with role expressions which include transitive closure and inverse operators. The reason for not proceeding like this is the fact that transitive roles can be implemented more efficiently than the transitive closure of roles (see [HG97]), although they lead to the same complexity class (ExpTime-hard) when added, together with role hierarchies, to ALC. Furthermore, it is still an open question whether the transitive closure of roles together with inverse roles necessitates the use of the cut rule [DM98], and this rule leads to an algorithm with very bad behaviour. We will present an algorithm for SHI without such a rule. Furthermore, we enrich the language with functional restrictions and, finally, with qualifying number restrictions. We give sound and complete decision procedures for the resulting logics that are derived from the initial algorithm for SHI. The structure of this report is as follows: In Section 2, we introduce the DL SI and present a tableaux algorithm for satisfiability (and subsumption) of SI-concepts—in another report [HST98] we prove that this algorithm can be refined to run in polynomial space. In Section 3 we add role hierarchies to SI and show how the algorithm can be modified to handle this extension appropriately. Please note that this logic, namely SHI, allows for the internalisation of general concept inclusion axioms, one of the most general forms of terminological axioms. In Section 4 we augment SHI with functional restrictions and, using the so-called pairwise-blocking technique, the algorithm can be adapted to this extension as well. Finally, in Section 5, we show that standard techniques for handling qualifying number restrictions [HB91; BBH96] together with the techniques described in previous sections can be used to decide satisfiability and subsumption for SHIQ, namely ALC extended with transitive and inverse roles, role hierarchies, and qualifying number restrictions. Although Section 5 heavily depends on the previous sections, we have made it self-contained, i.e. it contains all necessary definitions and proofs from scratch, for better readability. Building on the previous sections, Section 6 presents an algorithm that decides the satisfiability of SHIQ-ABoxes.

2. Tabunov, I. A., T. N. Mikhalenko, L. D. Kuznetsova, A. V. Suetova, and M. A. Shilovskiy. Methodological Recommendations for Working with Children in a Socially Dangerous Situation. Cherepovets State University, December 2022. http://dx.doi.org/10.12731/er0619.03122022.

Abstract:
Statistics show that in recent years there has been an increase in the number of families falling into a socially dangerous situation. According to statistics provided by the departments for juvenile affairs of the Ministry of Internal Affairs of Russia in Cherepovets, the number of crimes in 2021 decreased by only 2.1% compared to 2020. This was influenced by objective factors, in particular a low standard of living, chronic unemployment, alcohol abuse, and drug use. Having embarked on such a path, the family degrades socially and morally, condemning children to the same existence. It is not surprising that children leave home and spend most of their time on the street, thereby swelling antisocial groups. Thus, we can say that the current system of working with children in a socially dangerous situation (SOP) is not effective enough, since there is no clear algorithm for working with such children. Therefore, methodological recommendations for working with SOP children were developed, which include a telephone communication script for employees of the youth center as well as a clear and understandable algorithm for working with children in a socially dangerous situation. These guidelines are clear and easy to use and, most importantly, do not require special psychological knowledge, skills, or abilities.

3. Kirichek, Galina, Vladyslav Harkusha, Artur Timenko, and Nataliia Kulykovska. System for Detecting Network Anomalies Using a Hybrid of an Uncontrolled and Controlled Neural Network. [s.n.], February 2020. http://dx.doi.org/10.31812/123456789/3743.

Abstract:
In this article we realize a method for detecting attacks and anomalies by training on ordinary and attacking packets, respectively. The method used to learn attacks is a combination of an unsupervised and a supervised neural network. In the unsupervised network, attacks are classified into smaller categories, taking their features into account, using a self-organizing map. To manage the clusters, a neural network based on the back-propagation method is used. We use PyBrain as the main framework for designing, developing, and training the perceptron; this framework offers a sufficient number of solutions and algorithms for training, designing, and testing various types of neural networks. The software architecture follows a procedural-object approach. Because there is no need to save intermediate results of the program (after training, the entire perceptron is stored in a file), all the progress of training is stored in ordinary files on the hard disk.
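
Since the abstract names PyBrain, here is a minimal back-propagation training loop in that framework, with toy stand-in data rather than real packet features (PyBrain is long unmaintained, so treat this as a historical sketch, not the authors' code):

```python
# Build a small perceptron, train it with back-propagation, query it.
from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

net = buildNetwork(4, 6, 1)                 # 4 packet features -> 1 attack score
ds = SupervisedDataSet(4, 1)
ds.addSample((0.1, 0.0, 0.2, 0.1), (0,))    # "normal" packet (invented)
ds.addSample((0.9, 0.8, 0.7, 1.0), (1,))    # "attack" packet (invented)
trainer = BackpropTrainer(net, ds)
for _ in range(100):
    trainer.train()                         # one epoch of back-propagation
print(net.activate((0.85, 0.9, 0.6, 0.95)))  # should be near 1
```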

4. Lee, W. S., Victor Alchanatis, and Asher Levi. Innovative Yield Mapping System Using Hyperspectral and Thermal Imaging for Precision Tree Crop Management. United States Department of Agriculture, January 2014. http://dx.doi.org/10.32747/2014.7598158.bard.

Abstract:
Original objectives and revisions – The original overall objective was to develop, test and validate a prototype yield mapping system for unit area to increase yield and profit for tree crops. Specific objectives were: (1) to develop a yield mapping system for a static situation, using hyperspectral and thermal imaging independently, (2) to integrate hyperspectral and thermal imaging for improved yield estimation by combining thermal images with hyperspectral images to improve fruit detection, and (3) to expand the system to a mobile platform for a stop-measure-and-go situation. There were no major revisions in the overall objective; however, several revisions were made to the specific objectives. The revised specific objectives were: (1) to develop a yield mapping system for a static situation, using color and thermal imaging independently, (2) to integrate color and thermal imaging for improved yield estimation by combining thermal images with color images to improve fruit detection, and (3) to expand the system to an autonomous mobile platform for a continuous-measure situation.

Background, major conclusions, solutions and achievements – Yield mapping is considered an initial step for applying precision agriculture technologies. Although many yield mapping systems have been developed for agronomic crops, it remains a difficult task for tree crops. In this project, an autonomous immature fruit yield mapping system was developed. The system could detect and count the number of fruit at early growth stages of citrus fruit so that farmers could apply site-specific management based on the maps. There were two sub-systems, a navigation system and an imaging system. Robot Operating System (ROS) was the backbone for developing the navigation system using an unmanned ground vehicle (UGV). An inertial measurement unit (IMU), wheel encoders and a GPS were integrated using an extended Kalman filter to provide reliable and accurate localization information. A LiDAR was added to support simultaneous localization and mapping (SLAM) algorithms. The color camera on a Microsoft Kinect was used to detect citrus trees, and a new machine vision algorithm was developed to enable autonomous navigation in the citrus grove. A multimodal imaging system, which consisted of two color cameras and a thermal camera, was carried by the vehicle for video acquisition. A novel image registration method was developed for combining color and thermal images and matching fruit in both images, which achieved pixel-level accuracy. A new Color-Thermal Combined Probability (CTCP) algorithm was created to effectively fuse information from the color and thermal images to classify potential image regions into fruit and non-fruit classes. Algorithms were also developed to integrate image registration, information fusion and fruit classification and detection into a single step for real-time processing. The imaging system achieved a precision rate of 95.5% and a recall rate of 90.4% on immature green citrus fruit detection, which was a great improvement compared to previous studies.

Implications – The development of the immature green fruit yield mapping system will help farmers make early decisions for planning operations and marketing so that high yield and profit can be achieved.
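
The navigation subsystem fuses IMU, wheel-encoder, and GPS readings with an extended Kalman filter. A one-dimensional linear Kalman filter shows the predict/update cycle at its smallest; all numbers below are invented for illustration, and the real ROS-based filter is multivariate:

```python
# 1-D Kalman filter: predict with encoder displacement, correct with GPS.
def kalman_step(x, P, u, z, Q=0.05, R=4.0):
    # Predict: dead-reckon with encoder displacement u (process noise Q).
    x_pred = x + u
    P_pred = P + Q
    # Update: correct with GPS position z (measurement noise R).
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                        # initial position estimate, variance
for u, z in [(1.0, 1.3), (1.0, 1.9), (1.0, 3.2)]:
    x, P = kalman_step(x, P, u, z)
    print(round(x, 3), round(P, 3))
```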

5. Khrushch, Nila, Pavlo Hryhoruk, Tetiana Hovorushchenko, Sergii Lysenko, Liudmyla Prystupa, and Liudmyla Vahanova. Assessment of Bank's Financial Security Levels Based on a Comprehensive Index Using Information Technology. [s.n.], October 2020. http://dx.doi.org/10.31812/123456789/4474.

Abstract:
The article considers the issues of assessing the level of financial security of a bank and analyzes existing approaches to solving this problem. A scientific and methodological approach based on comprehensive assessment technology is proposed. The computational algorithm is presented as a four-stage procedure: identification of the initial data set, normalization, calculation of the partial composite indexes, and calculation of a comprehensive index of financial security, followed by interpretation of the results. Determining the levels of financial security and the limits of the corresponding integrated indicator is based on analyzing the configuration of objects in the two-dimensional space of partial composite indexes, which in turn rests on dividing the set of initial indicators by content characteristics. The results of the grouping generally coincided with the banks' ranking according to the rating assessment of their stability presented in official statistics. The article presents a practical implementation of the proposed computational procedure. To automate the calculations and enable scenario modeling, an electronic spreadsheet form was created with the help of form controls. The obtained results made it possible to identify the number of financial security levels and their boundaries.
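
The four-stage procedure described above (raw indicators, normalization, partial composite indexes, comprehensive index) is easy to sketch with numpy; the data, grouping, and equal weights below are illustrative assumptions, not the authors' calibration:

```python
# Comprehensive-index pipeline: normalize, aggregate by group, average.
import numpy as np

raw = np.array([                 # rows: banks, columns: indicators (invented)
    [12.1, 0.08, 5.4, 0.61],
    [ 9.7, 0.12, 4.1, 0.55],
    [15.3, 0.05, 6.8, 0.70],
])
# Stage 2: min-max normalization of each indicator to [0, 1]
norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
# Stage 3: partial composite indexes over content-based indicator groups
groups = [[0, 1], [2, 3]]
partial = np.column_stack([norm[:, g].mean(axis=1) for g in groups])
# Stage 4: comprehensive index per bank
comprehensive = partial.mean(axis=1)
print(partial)
print(comprehensive)
```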

6. Kuznetsov, Victor, Vladislav Litvinenko, Egor Bykov, and Vadim Lukin. A Program for Determining the Area of the Object Entering the IR Sensor Grid, as Well as Determining the Dynamic Characteristics. Science and Innovation Center Publishing House, April 2021. http://dx.doi.org/10.12731/bykov.0415.15042021.

Abstract:
Currently, the dynamic characteristics of objects are evaluated with a large number of chronograph-type devices built from various optical, thermal, and laser sensors. Their problems include a lack of recording of the received data and an inability to account for the trajectory of the object flying through the sensor area or approaching the device frame. The signal received from the infrared sensors is recorded in a separate document in txt format, as a table. When the document is opened, data are read from the current position of the input stream into the specified list according to the given condition. Reading the data produces an array of N columns, constructed so that the first column holds time values and columns 2...N hold voltage values. The algorithm uses loops that delete array rows where the threshold value is exceeded in more than two columns, as well as rows where the threshold level was not exceeded at all. The modified array is converted into two new arrays, each holding the data of a different sensor frame. An array with the coordinates of the centers of the sensor operation zones was created in order to apply the Pythagorean theorem in three-dimensional space, which is necessary for calculating the exact distance between the zones. The time is determined from the difference between the responses of the first and second sensor frames. Knowing the path and the time, we can calculate the exact speed of the object. For visualization, the oscillograms of each sensor channel are displayed, and a chronograph model was created that highlights in purple the area where the threshold was exceeded.
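
The core computation described above (trigger time per frame, three-dimensional distance between zone centers by the Pythagorean theorem, then speed = path / time) fits in a few lines of plain Python; the threshold, coordinates, and sample values below are invented for illustration:

```python
# Chronograph speed estimate from two sensor frames (Python 3.8+ for math.dist).
import math

def first_trigger_time(samples, threshold=2.5):
    # samples: list of (time_s, voltage_V) pairs for one sensor frame
    return next(t for t, v in samples if v > threshold)

frame1 = [(0.000, 0.1), (0.001, 0.2), (0.002, 3.1)]
frame2 = [(0.000, 0.1), (0.004, 0.3), (0.005, 3.4)]
t1, t2 = first_trigger_time(frame1), first_trigger_time(frame2)

zone1, zone2 = (0.0, 0.10, 0.10), (0.50, 0.11, 0.09)  # zone centres, metres
path = math.dist(zone1, zone2)        # Pythagoras in three dimensions
speed = path / (t2 - t1)
print(round(speed, 1), "m/s")
```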

7. Visser, R., H. Kao, R. M. H. Dokht, A. B. Mahani, and S. Venables. A Comprehensive Earthquake Catalogue for Northeastern British Columbia: The Northern Montney Trend from 2017 to 2020 and the Kiskatinaw Seismic Monitoring and Mitigation Area from 2019 to 2020. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/329078.

Abstract:
To increase our understanding of induced seismicity, we develop and implement methods to enhance seismic monitoring capabilities in northeastern British Columbia (NE BC). We deploy two different machine learning models to identify earthquake phases using waveform data from regional seismic stations and utilize an earthquake database management system to streamline the construction and maintenance of an up-to-date earthquake catalogue. The completion of this study allows for a comprehensive catalogue in NE BC from 2014 to 2020 by building upon our previous 2014-2016 and 2017-2018 catalogues. The bounds of the area where earthquakes were located were between 55.5°N-60.0°N and 119.8°W-123.5°W. The earthquakes in the catalogue were initially detected by machine learning models, then reviewed by an analyst to confirm correct identification, and finally located using the Non-Linear Location (NonLinLoc) algorithm. Two distinct sub-areas within the bounds consider different periods to supplement what was not covered in previously published reports - the Northern Montney Trend (NMT) is covered from 2017 to 2020 while the Kiskatinaw Seismic Monitoring and Mitigation Area (KSMMA) is covered from 2019 to 2020. The two sub-areas are distinguished by the BC Oil & Gas Commission (BCOGC) due to differences in their geographic location and geology. The catalogue was produced by picking arrival phases on continuous seismic waveforms from 51 stations operated by various organizations in the region. A total of 17,908 events passed our quality control criteria and are included in the final catalogue. Comparably, the routine Canadian National Seismograph Network (CNSN) catalogue reports 207 seismic events - all events in the CNSN catalogue are present in our catalogue. Our catalogue benefits from the use of enhanced station coverage and improved methodology. The total number of events in our catalogue in 2017, 2018, 2019, and 2020 were 62, 47, 9579 and 8220, respectively. The first two years correspond to seismicity in the NMT where poor station coverage makes it difficult to detect small magnitude events. The magnitude of completeness within the KSMMA (ML = ~0.7) is significantly smaller than that obtained for the NMT (ML = ~1.4). The new catalogue is released with separate files for origins, arrivals, and magnitudes which can be joined using the unique ID assigned to each event.

8. Wisniewski, Michael, Samir Droby, John Norelli, Dov Prusky, and Vera Hershkovitz. Genetic and Transcriptomic Analysis of Postharvest Decay Resistance in Malus sieversii and the Identification of Pathogenicity Effectors in Penicillium expansum. United States Department of Agriculture, January 2012. http://dx.doi.org/10.32747/2012.7597928.bard.

Abstract:
Use of Lqh2 mutants (produced at TAU) and rNav1.2a mutants (produced at the US side) for identifying receptor site-3: Based on the fact that binding of scorpion alpha-toxins is voltage-dependent, which suggests toxin binding at the mobile voltage-sensing region, we analyzed which of the toxin bioactive domains (Core-domain or NC-domain) interacts with the DIV Gating-module of rNav1.2a. This analysis was based on the assumption that the dissociation of toxin mutants upon depolarization would vary from that of the unmodified toxin should the substitutions affect a site of interaction with the channel Gating-module. Using a series of toxin mutants (mutations at both domains) and two channel mutants that were shown to reduce the sensitivity to scorpion alpha-toxins, and by comparison of depolarization-driven dissociation of Lqh2 derivatives off their binding site at rNav1.2a mutant channels we found that the toxin Core-domain interacts with the Gating-module of DIV. Details of the experiments and results appear in Guret al (2011). Mapping receptor site 3 at Nav1.2a by extensive channel mutagenesis (Seattle): Since previous studies with photoaffinity labeling and antibody mapping implicated domains I and IV in scorpion alpha-toxin binding, Nav1.2 channel mutants containing substitutions at these extracellular regions were expressed and tested for receptor function by whole-cell voltage clamp. Of a large number of channel mutants, T1560A, F1610A, and E1613A in domain IV had ~5.9-, ~10.7-, and ~3.9-fold lower affinities for the scorpion toxin Lqh2, respectively, and mutant E1613R had 73-fold lower affinity. Toxin dissociation was accelerated by depolarization for both wild-type and mutants, and the rates of dissociation were also increased by mutations T1560A, F1610A and E1613A. In contrast, association rates for these three mutant channels at negative membrane potentials were not significantly changed and were not voltage-dependent. These results indicated that Thr1560 in the S1-S2 loop, Phe1610 in the S3 segment, and Glu1613 in the S3-S4 loop in domain IV participate in toxin binding. T393A in the SS2-S6 loop in domain I also showed a ~3.4-fold lower affinity for Lqh2, indicating that this extracellular loop may form a secondary component of the toxin binding site. Analysis with the Rosetta-Membrane algorithm revealed a three-dimensional model of Lqh2 binding to the voltage sensor in a resting state. In this model, amino acid residues in an extracellular cleft formed by the S1-S2 and S3-S4 loops in domain IV that are important for toxin binding interact with amino acid residues on two faces of the wedge-shaped Lqh2 molecule that are important for toxin action. The conserved gating charges in the S4 transmembrane segment are in an inward position and likely form ion pairs with negatively charged amino acid residues in the S2 and S3 segments (Wang et al 2011; Gurevitz 2012; Gurevitzet al 2013).
APA, Harvard, Vancouver, ISO, and other styles
9

Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust, and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.

Full text
Abstract:
Background Lung cancer is the number one cause of cancer death worldwide.(1) It is the fifth most commonly diagnosed cancer in Australia (12,741 cases diagnosed in 2018) and the leading cause of cancer death.(2) The number of years of potential life lost to lung cancer in Australia is estimated to be 58,450, similar to that of colorectal and breast cancer combined.(3) While tobacco control strategies are most effective for disease prevention in the general population, early detection via low-dose computed tomography (LDCT) screening in high-risk populations is a viable option for detecting asymptomatic disease in current (13%) and former (24%) Australian smokers.(4) The purpose of this Evidence Check review is to identify and analyse existing and emerging evidence for LDCT lung cancer screening in high-risk individuals to guide future program and policy planning. Evidence Check questions This review aimed to address the following questions: 1. What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? 2. What is the evidence of potential harms from lung cancer screening for higher-risk individuals? 3. What are the main components of recent major lung cancer screening programs or trials? 4. What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Summary of methods The authors searched the peer-reviewed literature across three databases (MEDLINE, PsycINFO and Embase) for existing systematic reviews and original studies published between 1 January 2009 and 8 August 2019. Fifteen systematic reviews (of which 8 were contemporary) and 64 original publications met the inclusion criteria set across the four questions. Key findings Question 1: What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? There is sufficient evidence from systematic reviews and meta-analyses of combined (pooled) data from screening trials (of high-risk individuals) to indicate that LDCT examination is clinically effective in reducing lung cancer mortality. In 2011, the landmark National Lung Screening Trial (NLST, a large-scale randomised controlled trial [RCT] conducted in the US) reported a 20% (95% CI 6.8% – 26.7%; P=0.004) relative reduction in mortality among long-term heavy smokers over three rounds of annual screening. High-risk eligibility criteria were defined as people aged 55–74 years with a smoking history of ≥30 pack-years (one pack-year is equivalent to smoking 20 cigarettes, one pack, per day for one year) and, for former smokers, ≥30 pack-years and having quit within the past 15 years.(5) All-cause mortality was reduced by 6.7% (95% CI, 1.2% – 13.6%; P=0.02). Initial data from the second landmark RCT, the NEderlands-Leuvens Longkanker Screenings ONderzoek (known as the NELSON trial), have found an even greater reduction of 26% (95% CI, 9% – 41%) in lung cancer mortality, with full trial results yet to be published.(6, 7) Pooled analyses, including several smaller-scale European LDCT screening trials insufficiently powered in their own right, collectively demonstrate a statistically significant reduction in lung cancer mortality (RR 0.82, 95% CI 0.73–0.91).(8) Despite the reduction in all-cause mortality found in the NLST, pooled analyses of seven trials found no statistically significant difference in all-cause mortality (RR 0.95, 95% CI 0.90–1.00).(8) However, cancer-specific mortality is currently the most relevant outcome in cancer screening trials.
These seven trials demonstrated a significantly greater proportion of early-stage cancers in LDCT groups compared with controls (RR 2.08, 95% CI 1.43–3.03). Thus, when considering results across mortality outcomes and early-stage cancers diagnosed, LDCT screening is considered to be clinically effective. Question 2: What is the evidence of potential harms from lung cancer screening for higher-risk individuals? The harms of LDCT lung cancer screening include false positive tests and the consequences of unnecessary invasive follow-up procedures for conditions that are eventually diagnosed as benign. While LDCT screening leads to an increased frequency of invasive procedures, it does not result in greater mortality soon after an invasive procedure (in trial settings when compared with the control arm).(8) Overdiagnosis, exposure to radiation, psychological distress and an impact on quality of life are other known harms. Systematic review evidence indicates the benefits of LDCT screening are likely to outweigh the harms. The potential harms are likely to be reduced as refinements are made to LDCT screening protocols through: i) the application of risk prediction models (e.g. the PLCOm2012), which enable a more accurate selection of the high-risk population through the use of specific criteria (beyond age and smoking history); ii) the use of nodule management algorithms (e.g. Lung-RADS, PanCan), which assist in the diagnostic evaluation of screen-detected nodules and cancers (e.g. more precise volumetric assessment of nodules); and, iii) more judicious selection of patients for invasive procedures. Recent evidence suggests a positive LDCT result may transiently increase psychological distress but does not have long-term adverse effects on psychological distress or health-related quality of life (HRQoL). With regard to smoking cessation, there is no evidence to suggest that screening participation invokes a false sense of reassurance in smokers or reduces motivation to quit. The NELSON and Danish trials found no difference in smoking cessation rates between LDCT screening and control groups. Higher net cessation rates, compared with the general population, suggest those who participate in screening trials may already be motivated to quit. Question 3: What are the main components of recent major lung cancer screening programs or trials? There are no systematic reviews that capture the main components of recent major lung cancer screening trials and programs. We extracted evidence from original studies and clinical guidance documents and organised this into key groups to form a concise set of components for potential implementation of a national lung cancer screening program in Australia: 1. Identifying the high-risk population: recruitment, eligibility, selection and referral 2. Educating the public, people at high risk and healthcare providers; this includes creating awareness of lung cancer, the benefits and harms of LDCT screening, and shared decision-making 3. Components necessary for health services to deliver a screening program: a. Planning phase: e.g. human resources to coordinate the program, electronic data systems that integrate medical records information and link to an established national registry b. Implementation phase: e.g. human and technological resources required to conduct LDCT examinations, interpretation of reports and communication of results to participants c. Monitoring and evaluation phase: e.g.
monitoring outcomes across patients, radiological reporting, compliance with established standards and a quality assurance program 4. Data reporting and research, e.g. audit and feedback to multidisciplinary teams, reporting outcomes to enhance international research into LDCT screening 5. Incorporation of smoking cessation interventions, e.g. specific programs designed for LDCT screening or referral to existing community or hospital-based services that deliver cessation interventions. Most original studies are single-institution evaluations that contain descriptive data about the processes required to establish and implement a high-risk population-based screening program. Across all studies there is a consistent message as to the challenges and complexities of establishing LDCT screening programs to attract people at high risk who will receive the greatest benefits from participation. With regard to smoking cessation, evidence from one systematic review indicates the optimal strategy for incorporating smoking cessation interventions into a LDCT screening program is unclear. There is widespread agreement that LDCT screening attendance presents a ‘teachable moment’ for cessation advice, especially among those people who receive a positive scan result. Smoking cessation is an area of significant research investment; for instance, eight US-based clinical trials are now underway that aim to address how best to design and deliver cessation programs within large-scale LDCT screening programs.(9) Question 4: What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Assessing the value or cost-effectiveness of LDCT screening involves a complex interplay of factors including data on effectiveness and costs, and institutional context. A key input is data about the effectiveness of potential and current screening programs with respect to case detection, and the likely outcomes of treating those cases sooner (in the presence of LDCT screening) as opposed to later (in the absence of LDCT screening). Evidence about the cost-effectiveness of LDCT screening programs has been summarised in two systematic reviews. We identified a further 13 studies (five modelling studies, one discrete choice experiment and seven articles) that used a variety of methods to assess cost-effectiveness. Three modelling studies indicated LDCT screening was cost-effective in the settings of the US and Europe. Two studies, one from Australia and one from New Zealand, reported LDCT screening would not be cost-effective using NLST-like protocols. We anticipate that, following the full publication of the NELSON trial, cost-effectiveness studies will likely be updated with new data that reduce uncertainty about factors that influence modelling outcomes, including the findings of indeterminate nodules. Gaps in the evidence There is a large and accessible body of evidence as to the effectiveness (Q1) and harms (Q2) of LDCT screening for lung cancer. Nevertheless, there are significant gaps in the evidence about the program components that are required to implement an effective LDCT screening program (Q3). Questions about LDCT screening acceptability and feasibility were not explicitly included in the scope. However, as the evidence is based primarily on US programs and UK pilot studies, the relevance to the local setting requires careful consideration. The Queensland Lung Cancer Screening Study provides feasibility data about clinical aspects of LDCT screening but little about program design.
The International Lung Screening Trial is still in the recruitment phase and findings are not yet available for inclusion in this Evidence Check. The Australian Population Based Screening Framework was developed to “inform decision-makers on the key issues to be considered when assessing potential screening programs in Australia”.(10) As the Framework is specific to population-based, rather than high-risk, screening programs, there is a lack of clarity about the transferability of its criteria. However, the Framework criteria do stipulate that a screening program must be acceptable to “important subgroups such as target participants who are from culturally and linguistically diverse backgrounds, Aboriginal and Torres Strait Islander people, people from disadvantaged groups and people with a disability”.(10) An extensive search of the literature highlighted that there is very little information about the acceptability of LDCT screening to these population groups in Australia. Yet they are part of the high-risk population.(10) There are also considerable gaps in the evidence about the cost-effectiveness of LDCT screening in different settings, including Australia. The evidence base in this area is rapidly evolving and is likely to include new data from the NELSON trial and incorporate data about the costs of targeted therapies and immunotherapies as these treatments become more widely available in Australia.
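To make the NLST-style eligibility rule under Question 1 concrete, a short sketch follows (an illustrative encoding of the stated criteria, not code from the review; function and parameter names are hypothetical):

```python
from typing import Optional

def pack_years(cigarettes_per_day: float, years_smoked: float) -> float:
    """One pack-year = one pack (20 cigarettes) per day for one year."""
    return (cigarettes_per_day / 20.0) * years_smoked

def nlst_eligible(age: int, pack_yrs: float, current_smoker: bool,
                  years_since_quit: Optional[float] = None) -> bool:
    """NLST-style criteria: aged 55-74 with >=30 pack-years; former smokers
    must have quit within the past 15 years."""
    if not (55 <= age <= 74) or pack_yrs < 30:
        return False
    return current_smoker or (years_since_quit is not None and years_since_quit <= 15)

# Example: a 62-year-old former smoker, 30 cigarettes/day for 25 years, quit 10 years ago.
print(nlst_eligible(62, pack_years(30, 25), False, 10))  # True (37.5 pack-years)
```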
APA, Harvard, Vancouver, ISO, and other styles
10

Payment Systems Report - June of 2021. Banco de la República, February 2022. http://dx.doi.org/10.32468/rept-sist-pag.eng.2021.

Full text
Abstract:
Banco de la República provides a comprehensive overview of Colombia’s financial infrastructure in its Payment Systems Report, which is an important product of the work it does to oversee that infrastructure. The figures published in this edition of the report are for the year 2020, a pandemic period in which the containment measures designed and adopted to alleviate the strain on the health system led to a sharp reduction in economic activity and consumption in Colombia, as was the case in most countries. At the start of the pandemic, the Board of Directors of Banco de la República adopted decisions that were necessary to supply the market with ample liquidity in pesos and US dollars to guarantee market stability, protect the payment system and preserve the supply of credit. The pronounced growth in monetary aggregates reflected an increased preference for liquidity, which Banco de la República addressed at the right time. These decisions were implemented through operations that were cleared and settled via the financial infrastructure. The second section of this report, following the introduction, offers an analysis of how the various financial infrastructures in Colombia have evolved and performed. One of the highlights is the large-value payment system (CUD), which registered more momentum in 2020 than during the previous year, mainly because of an increase in average daily remunerated deposits made with Banco de la República by the General Directorate of Public Credit and the National Treasury (DGCPTN), as well as more activity in the sell/buy-back market with sovereign debt. Consequently, with more activity in the CUD, the Central Securities Depository (DCV) experienced an added impetus sparked by an increase in the money market for bonds and securities placed on the primary market by the national government. The value of operations cleared and settled through the Colombian Central Counterparty (CRCC) continues to grow, propelled largely by peso/dollar non-deliverable forward (NDF) contracts. With respect to the CRCC, it is important to note that this clearing house has been in charge of managing risks and clearing and settling operations in the peso/dollar spot market since the end of last year, following its merger with the Foreign Exchange Clearing House of Colombia (CCDC). Since the final quarter of 2020, the CRCC has also been responsible for clearing and settlement in the equities market, which was formerly done by the Colombian Stock Exchange (BVC). The third section of this report provides an all-inclusive view of payments in the market for goods and services; namely, transactions carried out by members of the public and non-financial institutions. During the pandemic, inter- and intra-bank electronic funds transfers, which originate mostly with companies, increased in both the number and value of transactions with respect to 2019. However, debit and credit card payments, which are made largely by private citizens, declined compared to 2019. The incidence of payment by check continues to drop, exhibiting a pronounced downward trend over the past year. To supplement the information on electronic funds transfers, section three includes a segment (Box 4) characterizing the population with savings and checking accounts, based on data from a survey by Banco de la República concerning the perception of the use of payment instruments in 2019.
There is also a segment (Box 2) on the growth in transactions with a mobile wallet provided by a company specialized in electronic deposits and payments (Sedpe). It shows the number of users and the value of their transactions have increased since the wallet was introduced in late 2017, particularly during the pandemic. In addition, there is a diagnosis of the effects of the pandemic on the payment patterns of the population, based on data related to the use of cash in circulation, payments with electronic instruments, and consumption and consumer confidence. The conclusion is that the collapse in the consumer confidence index and the drop in private consumption led to changes in the public’s payment patterns. Credit and debit card purchases were down, while payments for goods and services through electronic funds transfers increased. These findings, coupled with the considerable increase in cash in circulation, might indicate precautionary cash hoarding by individuals and more use of cash as a payment instrument. There is also a segment (in Focus 3) on the major changes introduced in regulations on the retail-value payment system in Colombia, as provided for in Decree 1692 of December 2020. The fourth section of this report refers to the important innovations and technological changes that have occurred in the retail-value payment system. Four themes are highlighted in this respect. The first is a key point in building the financial infrastructure for instant payments: the design and implementation of overlay schemes, a technological development that allows the various participants in the payment chain to communicate openly. The result is a high degree of interoperability among the different payment service providers. The second topic explores developments in the international debate on central bank digital currency (CBDC). The purpose is to understand how it could impact the retail-value payment system and the use of cash if it were to be issued. The third topic is related to new forms of payment initiation, such as QR codes, biometrics or near-field communication (NFC) technology. These seemingly small changes can have a major impact on the user’s experience with the retail-value payment system. The fourth theme is the growth in payments via mobile telephone and the internet. The report ends in section five with a review of two papers on applied research done at Banco de la República in 2020. The first analyzes the extent of the CRCC’s capital, acknowledging the relevant role this infrastructure has acquired in providing clearing and settlement services for various financial markets in Colombia. The capital requirements defined for central counterparties in some jurisdictions are explored, and the risks to be hedged are identified from the standpoint of the service these types of institutions offer to the market and those associated with their corporate activity. The CRCC’s capital levels are analyzed in light of what has been observed in the European Union’s regulations, and the conclusion is that the CRCC has a scheme of security rings very similar to those applied internationally and that its capital exceeds what is stipulated in Colombian regulations and is sufficient to hedge other risks. The second study presents an algorithm used to identify and quantify the liquidity sources that CUD participants use under normal conditions to meet their daily obligations in the local financial market.
This algorithm can be used as a tool to monitor intraday liquidity. Leonardo Villar Gómez, Governor
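The intraday-liquidity idea behind the second study can be illustrated with a simplified toy model (an assumption-laden sketch, not the algorithm used in the report): for each participant, replay the day's settled payments, track the running balance, and record the deepest net debit as a proxy for the intraday liquidity needed beyond the opening balance.

```python
from dataclasses import dataclass

@dataclass
class Payment:
    time: str      # settlement timestamp, zero-padded "HH:MM"
    amount: float  # positive = incoming funds, negative = outgoing payment

def peak_liquidity_usage(opening_balance: float, payments: list[Payment]) -> float:
    """Deepest net debit over the day: a simple proxy for the intraday
    liquidity a participant needs beyond its opening balance.
    Toy model only; not the algorithm presented in the report."""
    balance = opening_balance
    peak_usage = 0.0
    for p in sorted(payments, key=lambda p: p.time):
        balance += p.amount
        peak_usage = max(peak_usage, -balance)  # shortfall below zero
    return peak_usage

day = [Payment("09:00", -120.0), Payment("10:30", 80.0), Payment("11:15", -50.0)]
print(peak_liquidity_usage(40.0, day))  # 80.0: deepest net debit during the day
```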
APA, Harvard, Vancouver, ISO, and other styles
