To view the other types of publications on this topic, follow the link: Shorův algoritmus.

Journal articles on the topic "Shorův algoritmus" (Czech for "Shor's algorithm")

Check out the top 48 journal articles for research on the topic "Shorův algoritmus".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and a bibliographic reference for the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read the work's online annotation, where these are available in the metadata.

Browse journal articles from a wide range of disciplines and organize your bibliography correctly.

1

Litinskaia, Evgeniia L., Pavel A. Rudenko, Kirill V. Pozhar and Nikolai A. Bazaev. "Validation of Short-Term Blood Glucose Prediction Algorithms". International Journal of Pharma Medicine and Biological Sciences 8, no. 2 (April 2019): 34–39. http://dx.doi.org/10.18178/ijpmbs.8.2.34-39.

2

Cherckesova, Larissa, Olga Safaryan, Pavel Razumov, Irina Pilipenko, Yuriy Ivanov and Ivan Smirnov. "Speed improvement of the quantum factorization algorithm of P. Shor by upgrade its classical part". E3S Web of Conferences 224 (2020): 01016. http://dx.doi.org/10.1051/e3sconf/202022401016.

Annotation:
This report discusses Shor's quantum factorization algorithm and Pollard's ρ factorization algorithm. Shor's algorithm consists of a classical and a quantum part. In the classical part it is proposed to use the Euclidean algorithm to find the greatest common divisor (GCD), but a large number of modern GCD algorithms now exist. The results of calculations with 8 such algorithms were considered, and the one with the lowest task execution time was identified, which allowed the quantum algorithm as a whole to work faster and thus gives greater potential for practical application of Shor's quantum algorithm. The standard quantum Shor's algorithm was upgraded by replacing the binary algorithm with an iterative shift algorithm, cancelling the random number generation operation, and using an addition-chain algorithm for exponentiation. Both Shor's algorithms (standard and upgraded) are distinguished by high performance, with only an insignificant increase in time in the implementation of data processing. In addition, it was possible to modernize Shor's quantum algorithm in such a way that its efficiency turned out to be higher than that of the standard algorithm, because the classical part received an improvement that increases the speed by 12%.
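
A minimal sketch may help make the classical GCD step concrete. The two functions below are generic textbook implementations of the Euclidean algorithm and of a binary (shift-based) GCD, i.e., the kind of routines the abstract says were compared and swapped; they are not the authors' upgraded code.

```python
# Textbook GCD variants of the kind compared in the paper's classical part
# (illustrative only; not the authors' upgraded implementation).

def gcd_euclid(a: int, b: int) -> int:
    """Euclidean algorithm via repeated remainders."""
    while b:
        a, b = b, a % b
    return a

def gcd_binary(a: int, b: int) -> int:
    """Binary (Stein's) GCD: uses only shifts, comparisons and subtraction."""
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:          # factor out common powers of two
        a, b, shift = a >> 1, b >> 1, shift + 1
    while a & 1 == 0:
        a >>= 1
    while b:
        while b & 1 == 0:
            b >>= 1
        if a > b:
            a, b = b, a
        b -= a
    return a << shift

assert gcd_euclid(912, 345) == gcd_binary(912, 345) == 3
```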
3

Ekerå, Martin. "Quantum algorithms for computing general discrete logarithms and orders with tradeoffs". Journal of Mathematical Cryptology 15, no. 1 (January 1, 2021): 359–407. http://dx.doi.org/10.1515/jmc-2020-0006.

Annotation:
We generalize our earlier works on computing short discrete logarithms with tradeoffs, and bridge them with Seifert's work on computing orders with tradeoffs, and with Shor's groundbreaking works on computing orders and general discrete logarithms. In particular, we enable tradeoffs when computing general discrete logarithms. Compared to Shor's algorithm, this yields a reduction by up to a factor of two in the number of group operations evaluated quantumly in each run, at the expense of having to perform multiple runs. Unlike Shor's algorithm, our algorithm does not require the group order to be known. It simultaneously computes both the order and the logarithm. We analyze the probability distributions induced by our algorithm, and by Shor's and Seifert's order-finding algorithms, describe how these algorithms may be simulated when the solution is known, and estimate the number of runs required for a given minimum success probability when making different tradeoffs.
4

Devitt, S. J., A. G. Fowler and L. C. L. Hollenberg. "Robustness of Shor's algorithm". Quantum Information and Computation 6, no. 7 (November 2006): 616–29. http://dx.doi.org/10.26421/qic6.7-5.

Annotation:
Shor's factorisation algorithm is a combination of classical pre- and post-processing and a quantum period finding (QPF) subroutine, which allows an exponential speed-up over classical factoring algorithms. We consider the stability of this subroutine when exposed to a discrete error model that acts to perturb the computational trajectory of a quantum computer. Through detailed state vector simulations of an appropriate quantum circuit, we show that the error locations within the circuit itself heavily influence the probability of success of the QPF subroutine. The results also indicate that the naive estimate of required component precision is too conservative.
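
The division of labour mentioned in the abstract (classical pre-/post-processing around a quantum period-finding core) can be sketched as follows. The `find_period` argument stands in for the QPF subroutine; the classical brute-force stub below only works for tiny N and exists purely so the sketch runs.

```python
import math
import random

def shor_factor(N: int, find_period) -> int:
    """Classical pre- and post-processing of Shor's algorithm around a
    period-finding subroutine. find_period(a, N) must return the order r
    of a modulo N; on a real device this is the quantum (QPF) step."""
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g                  # lucky: a already shares a factor with N
        r = find_period(a, N)         # quantum subroutine (stubbed below)
        if r % 2 == 1:
            continue                  # need an even period; retry with new a
        y = pow(a, r // 2, N)
        if y == N - 1:
            continue                  # trivial square root; retry
        return math.gcd(y - 1, N)     # nontrivial factor of N

def brute_force_period(a: int, N: int) -> int:
    """Classical stand-in for quantum period finding (tiny N only)."""
    r, y = 1, a % N
    while y != 1:
        y, r = (y * a) % N, r + 1
    return r

print(shor_factor(15, brute_force_period))  # prints 3 or 5
```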
5

Akbar, Rahmad, Bambang Pramono and Rizal Adi Saputra. "Implementasi Algoritma Simon Pada Aplikasi Kamus Perubahan Fi'il (Kata Kerja Bahasa Arab) Berbasis Android". Ultimatics : Jurnal Teknik Informatika 13, no. 1 (June 11, 2021): 12–18. http://dx.doi.org/10.31937/ti.v13i1.1850.

Annotation:
Keywords: string matching algorithms, Simon algorithm, Android Fi'il conjugation dictionary. Shorof (Tashrif) is the discipline of word derivation in Arabic; one of its central topics is the process by which a verb (Fi'il) changes into several other word types, such as Fi'il Mudhori', Fi'il Madhi, Fi'il Amr, Fi'il Nahi, Isim Fa'il, Isim Maf'ul, Isim Zaman, Isim Makan, Isim Alat, Masdar, and Masdar mim. Shorof is still largely taught in the traditional way, especially in pesantren (Islamic boarding schools), by memorising the derived word forms together with their translations. One of the basic books often used as a reference for these derivations is Amtsilah At-Tashrifiyah by KH. Ma'shum bin Ali, while finding the Indonesian translation requires a separate Arabic-Indonesian dictionary. This study aims to simplify word lookup by building an Android-based dictionary of Fi'il conjugations that uses the Simon algorithm as its search method, thereby easing the study of Shorof. The Simon algorithm is a string matching algorithm whose matching phase proceeds from left to right, after an initialisation stage over each index of the given pattern. In testing, word searches took an average running time of 3.67097786 milliseconds for Indonesian words and 23.8447333 milliseconds for Arabic words.
6

Liu, Ye-Chao, Jiangwei Shang and Xiangdong Zhang. "Coherence Depletion in Quantum Algorithms". Entropy 21, no. 3 (March 7, 2019): 260. http://dx.doi.org/10.3390/e21030260.

Annotation:
Besides the superior efficiency compared to their classical counterparts, quantum algorithms known so far are basically task-dependent, and scarcely any common features are shared between them. In this work, however, we show that the depletion of quantum coherence turns out to be a common phenomenon in these algorithms. For all the quantum algorithms that we investigated, including Grover’s algorithm, Deutsch–Jozsa algorithm, and Shor’s algorithm, quantum coherence of the system states reduces to the minimum along with the successful execution of the respective processes. Notably, a similar conclusion cannot be drawn using other quantitative measures such as quantum entanglement. Thus, we expect that coherence depletion as a common feature can be useful for devising new quantum algorithms in the future.
7

Kendon, V. M., and W. J. Munro. "Entanglement and its role in Shor's algorithm". Quantum Information and Computation 6, no. 7 (November 2006): 630–40. http://dx.doi.org/10.26421/qic6.7-6.

Annotation:
Entanglement has been termed a critical resource for quantum information processing and is thought to be the reason that certain quantum algorithms, such as Shor's factoring algorithm, can achieve exponentially better performance than their classical counterparts. The nature of this resource is still not fully understood: here we use numerical simulation to investigate how entanglement between register qubits varies as Shor's algorithm is run on a quantum computer. The shifting patterns in the entanglement are found to relate to the choice of basis for the quantum Fourier transform.
8

Savran, I., M. Demirci and A. H. Yılmaz. "Accelerating Shor's factorization algorithm on GPUs". Canadian Journal of Physics 96, no. 7 (July 2018): 759–61. http://dx.doi.org/10.1139/cjp-2017-0768.

Annotation:
Shor's quantum algorithm is very important for cryptography, because it can factor large numbers much faster than classical algorithms. In this study, we implement a simulator for Shor's quantum algorithm on graphics processing units (GPUs) and compare our results with Liquid, a Microsoft quantum simulation platform, and with two classical CPU implementations. We evaluate 10 benchmarks comparing our GPU implementation with Liquid and a single-core implementation. The analysis shows that GPU vector operations are well suited to Shor's quantum algorithm. Our GPU kernel function is compute-bound, because all threads in a block reach the same element of the state vector. Our implementation has a 52.5× speedup over the single-core algorithm and a 20.5× speedup over Liquid.
9

GAWRON, P., and J. A. MISZCZAK. "NUMERICAL SIMULATIONS OF MIXED STATE QUANTUM COMPUTATION". International Journal of Quantum Information 03, no. 01 (March 2005): 195–99. http://dx.doi.org/10.1142/s0219749905000748.

Annotation:
We describe the [Formula: see text] package of functions useful for simulations of quantum algorithms and protocols. The presented package allows one to perform simulations with mixed states. We present a numerical implementation of important quantum mechanical operations: partial trace and partial transpose. These operations are used as building blocks of algorithms for the analysis of entanglement and quantum error correction codes. A simulation of Shor's algorithm is presented as an example of the package's capabilities.
10

Muruganantham, B., P. Shamili, S. Ganesh Kumar and A. Murugan. "Quantum cryptography for secured communication networks". International Journal of Electrical and Computer Engineering (IJECE) 10, no. 1 (February 1, 2020): 407. http://dx.doi.org/10.11591/ijece.v10i1.pp407-414.

Annotation:
Quantum cryptography is a method for securing data in a cryptosystem more efficiently. Network security and cryptography are the two major properties in securing data in a communication network. Quantum cryptography encodes information in the polarization states of single photons. It is impossible for an eavesdropper to copy or modify the encrypted messages carried by the quantum states sent through the optical fiber channels. Key distribution is performed using the BB84 and B92 protocols. Two basic quantum algorithms relevant here are Shor's algorithm and Grover's algorithm: Shor's algorithm is used for integer factorization, while Grover's algorithm is used for searching unsorted data. Shor's algorithm thereby undermines the security of the RSA algorithm. By implementing quantum cryptography, the information is secured against the eavesdropper, protecting the data in the communication channel.
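
A toy model of the BB84 sifting step mentioned in the abstract: Alice sends random bits in randomly chosen polarization bases, Bob measures in his own random bases, and both keep only the positions where the bases happen to agree. This is an idealized illustration (no eavesdropper, no channel noise), not the cited paper's implementation.

```python
import secrets

def bb84_sift(n: int) -> list[int]:
    """Toy BB84 sifting: returns the shared key bits kept at positions
    where Alice's and Bob's bases coincide (about n/2 on average)."""
    alice_bits  = [secrets.randbelow(2) for _ in range(n)]
    alice_bases = [secrets.randbelow(2) for _ in range(n)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [secrets.randbelow(2) for _ in range(n)]
    # With matching bases Bob recovers Alice's bit; mismatched positions give
    # random outcomes and are discarded during public basis comparison.
    return [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]

print(bb84_sift(16))
```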
11

MOSCA, MICHELE, and CHRISTOF ZALKA. "EXACT QUANTUM FOURIER TRANSFORMS AND DISCRETE LOGARITHM ALGORITHMS". International Journal of Quantum Information 02, no. 01 (March 2004): 91–100. http://dx.doi.org/10.1142/s0219749904000109.

Annotation:
We show how the Quantum Fast Fourier Transform (QFFT) can be made exact for arbitrary orders (first showing it for large primes). Most quantum algorithms only need a good approximation of the quantum Fourier transform of order 2^n to succeed with high probability, and this QFFT can in fact be done exactly. Kitaev [1] showed how to approximate the Fourier transform for any order. Here we show how his construction can be made exact by using the technique known as "amplitude amplification". Although unlikely to be of any practical use, this construction allows one to make Shor's discrete logarithm quantum algorithm exact. Thus we have the first example of an exact non-black-box fast quantum algorithm, giving more evidence that "quantum" need not be probabilistic. We also show that in a certain sense the family of circuits for the exact QFFT is uniform: the parameters of the gates can be approximated efficiently.
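
For reference, the order-N quantum Fourier transform that the paper makes exact acts on computational basis states as follows (standard definition, restated here for convenience):

```latex
% Order-N quantum Fourier transform on a basis state |x>:
\mathrm{QFT}_N \colon \; \lvert x \rangle \;\longmapsto\;
\frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2\pi i x y / N} \, \lvert y \rangle,
\qquad x \in \{0, 1, \dots, N-1\}.
```

Shor's original algorithms use N = 2^n, where an efficient exact circuit is known; the paper's contribution concerns making the transform exact for arbitrary orders.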
12

Johansson, Niklas, and Jan-Åke Larsson. "Quantum Simulation Logic, Oracles, and the Quantum Advantage". Entropy 21, no. 8 (August 15, 2019): 800. http://dx.doi.org/10.3390/e21080800.

Annotation:
Query complexity is a common tool for comparing quantum and classical computation, and it has produced many examples of how quantum algorithms differ from classical ones. Here we investigate in detail the role that oracles play for the advantage of quantum algorithms. We do so by using a simulation framework, Quantum Simulation Logic (QSL), to construct oracles and algorithms that solve some problems with the same success probability and number of queries as the quantum algorithms. The framework can be simulated using only classical resources at a constant overhead as compared to the quantum resources used in quantum computation. Our results clarify the assumptions made and the conditions needed when using quantum oracles. Using the same assumptions on oracles within the simulation framework we show that for some specific algorithms, such as the Deutsch-Jozsa and Simon’s algorithms, there simply is no advantage in terms of query complexity. This does not detract from the fact that quantum query complexity provides examples of how a quantum computer can be expected to behave, which in turn has proved useful for finding new quantum algorithms outside of the oracle paradigm, where the most prominent example is Shor’s algorithm for integer factorization.
13

Nimmy, Sonia Farhana, and M. S. Kamal. "Next generation sequencing under de novo genome assembly". International Journal of Biomathematics 08, no. 05 (August 13, 2015): 1530001. http://dx.doi.org/10.1142/s1793524515300018.

Annotation:
Next generation sequencing (NGS) is an important process which assures inexpensive organization of vast raw sequence datasets compared with traditional sequencing systems or methods. Various aspects of NGS, such as template preparation, sequencing imaging, and genome alignment and assembly, outline genome sequencing and alignment. The de Bruijn graph (dBG) is an important mathematical tool that graphically analyzes how orientations are constructed in groups of nucleotides; essentially, a dBG describes the formation of genome segments in circular iterative fashion. Some pivotal dBG-based de novo algorithms and software packages, such as T-IDBA, Oases, IDBA-tran, Euler, Velvet, ABySS, AllPaths, SOAPdenovo and SOAPdenovo2, are illustrated in this paper. Overlap layout consensus (OLC) graph-based algorithms also play a vital role in NGS assembly; some important OLC-based algorithms, such as MIRA3, CABOG, Newbler, Edena, Mosaik and SHORTY, are portrayed in this paper. It has been found experimentally that greedy graph-based algorithms and software packages are also vital for proper genome dataset assembly; a few algorithms, named SSAKE, SHARCGS and VCAKE, help to perform proper genome sequencing.
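
The de Bruijn graph construction underlying the dBG-based assemblers listed above is compact enough to sketch: nodes are (k-1)-mers, and every k-mer occurring in a read contributes one directed edge. A generic textbook illustration, not any particular assembler's implementation:

```python
from collections import defaultdict

def de_bruijn_graph(reads: list[str], k: int) -> dict:
    """Nodes are (k-1)-mers; each k-mer in a read adds a directed edge
    from its (k-1)-prefix to its (k-1)-suffix."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

# An assembler would next search this graph for an Eulerian path.
print(dict(de_bruijn_graph(["ACGTACG"], k=3)))
# {'AC': ['CG', 'CG'], 'CG': ['GT'], 'GT': ['TA'], 'TA': ['AC']}
```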
14

CRAUSER, A., P. FERRAGINA, K. MEHLHORN, U. MEYER and E. A. RAMOS. "RANDOMIZED EXTERNAL-MEMORY ALGORITHMS FOR LINE SEGMENT INTERSECTION AND OTHER GEOMETRIC PROBLEMS". International Journal of Computational Geometry & Applications 11, no. 03 (June 2001): 305–37. http://dx.doi.org/10.1142/s0218195901000523.

Annotation:
We show that the well-known random incremental construction of Clarkson and Shor [18] can be adapted to provide efficient external-memory algorithms for some geometric problems. In particular, as the main result, we obtain an optimal randomized algorithm for the problem of computing the trapezoidal decomposition determined by a set of N line segments in the plane with K pairwise intersections, that requires [Formula: see text] expected disk accesses, where M is the size of the available internal memory and B is the size of the block transfer. The approach is sufficiently general to derive algorithms for other geometric problems: 3-d half-space intersections, 2-d and 3-d convex hulls, 2-d abstract Voronoi diagrams and batched planar point location; these algorithms require an optimal expected number of disk accesses and are simpler than the ones previously known. The results extend to an external-memory model with multiple disks.
15

Van den Nest, Maarten. "Simulating quantum computers with probabilistic methods". Quantum Information and Computation 11, no. 9&10 (September 2011): 784–812. http://dx.doi.org/10.26421/qic11.9-10-5.

Annotation:
We investigate the boundary between classical and quantum computational power. This work consists of two parts. First we develop new classical simulation algorithms that are centered on sampling methods. Using these techniques we generate new classes of classically simulatable quantum circuits where standard techniques relying on the exact computation of measurement probabilities fail to provide efficient simulations. For example, we show how various concatenations of matchgate, Toffoli, Clifford, bounded-depth, Fourier transform and other circuits are classically simulatable. We also prove that sparse quantum circuits as well as circuits composed of CNOT and exp[iθX] gates can be simulated classically. In a second part, we apply our results to the simulation of quantum algorithms. It is shown that a recent quantum algorithm, concerned with the estimation of Potts model partition functions, can be simulated efficiently classically. Finally, we show that the exponential speed-ups of Simon's and Shor's algorithms crucially depend on the very last stage in these algorithms, dealing with the classical postprocessing of the measurement outcomes. Specifically, we prove that both algorithms would be classically simulatable if the function classically computed in this step had a sufficiently peaked Fourier spectrum.
16

Gidney, Craig, and Austin G. Fowler. "Efficient magic state factories with a catalyzed |CCZ⟩ to 2|T⟩ transformation". Quantum 3 (April 30, 2019): 135. http://dx.doi.org/10.22331/q-2019-04-30-135.

Annotation:
We present magic state factory constructions for producing |CCZ⟩ states and |T⟩ states. For the |CCZ⟩ factory we apply the surface code lattice surgery construction techniques described in [fowler2018] to the fault-tolerant Toffoli [jones2013, eastin2013distilling]. The resulting factory has a footprint of 12d × 6d (where d is the code distance) and produces one |CCZ⟩ every 5.5d surface code cycles. Our |T⟩ state factory uses the |CCZ⟩ factory's output and a catalyst |T⟩ state to exactly transform one |CCZ⟩ state into two |T⟩ states. It has a footprint 25% smaller than the factory in [fowler2018] but outputs |T⟩ states twice as quickly. We show how to generalize the catalyzed transformation to arbitrary phase angles, and note that the case θ = 22.5° produces a particularly efficient circuit for producing |T⟩ states. Compared to using the 12d × 8d × 6.5d |T⟩ factory of [fowler2018], our |CCZ⟩ factory can quintuple the speed of algorithms that are dominated by the cost of applying Toffoli gates, including Shor's algorithm [shor1994] and the chemistry algorithm of Babbush et al. [babbush2018]. Assuming a physical gate error rate of 10^-3, our CCZ factory can produce ~10^10 states on average before an error occurs. This is sufficient for classically intractable instantiations of the chemistry algorithm, but for more demanding algorithms such as Shor's algorithm the mean number of states until failure can be increased to ~10^12 by increasing the factory footprint by ~20%.
17

Aoki, M. Ávila, Guo Hua Sun and Shi Hai Dong. "Bounds on the quantity of entanglement in parallel quantum computing of a single ensemble quantum computer". Canadian Journal of Physics 92, no. 2 (February 2014): 159–62. http://dx.doi.org/10.1139/cjp-2013-0083.

Annotation:
Speeding up the processing of quantum algorithms has been studied from the point of view of an ensemble quantum computer (EQC) working in parallel mode. As a consequence of such efforts, additional speed-up has been achieved for processing both Shor's and Grover's algorithms. On the other hand, the literature shows little concern for the quantity of entanglement contained in EQC approaches, so in the present work we study this quantity. As a first result, an upper bound on the quantity of entanglement contained in an EQC is imposed. As the main result we prove that equally weighted states are not appropriate for an EQC working in parallel mode. So that our results are not purely theoretical, we exemplify the situation by discussing the entanglement of an ensemble of n1 = 3 diamond quantum computers.
18

Bub, J. "Quantum computation from a quantum logical perspective". Quantum Information and Computation 7, no. 4 (May 2007): 281–96. http://dx.doi.org/10.26421/qic7.4-1.

Annotation:
It is well-known that Shor's factorization algorithm, Simon's period-finding algorithm, and Deutsch's original XOR algorithm can all be formulated as solutions to a hidden subgroup problem. Here the salient features of the information-processing in the three algorithms are presented from a different perspective, in terms of the way in which the algorithms exploit the non-Boolean quantum logic represented by the projective geometry of Hilbert space. From this quantum logical perspective, the XOR algorithm appears directly as a special case of Simon's algorithm, and all three algorithms can be seen as exploiting the non-Boolean logic represented by the subspace structure of Hilbert space in a similar way. Essentially, a global property of a function (such as a period, or a disjunctive property) is encoded as a subspace in Hilbert space representing a quantum proposition, which can then be efficiently distinguished from alternative propositions, corresponding to alternative global properties, by a measurement (or sequence of measurements) that identifies the target proposition as the proposition represented by the subspace containing the final state produced by the algorithm.
19

Yamashita, Shigeru, and Igor L. Markov. "Fast equivalence-checking for quantum circuits". Quantum Information and Computation 10, no. 9&10 (September 2010): 721–34. http://dx.doi.org/10.26421/qic10.9-10-1.

Annotation:
We perform formal verification of quantum circuits by integrating several techniques specialized to particular classes of circuits. Our verification methodology is based on the new notion of a reversible miter that allows one to leverage existing techniques for simplification of quantum circuits. For reversible circuits which arise as runtime bottlenecks of key quantum algorithms, we develop several verification techniques and empirically compare them. We also combine existing quantum verification tools with the use of SAT-solvers. Experiments with circuits for Shor's number-factoring algorithm, containing thousands of gates, show improvements in efficiency by four orders of magnitude.
20

Caraballo Cortés, Kamila, Osvaldo Zagordi, Tomasz Laskus, Rafał Płoski, Iwona Bukowska-Ośko, Agnieszka Pawełczyk, Hanna Berak and Marek Radkowski. "Ultradeep Pyrosequencing of Hepatitis C Virus Hypervariable Region 1 in Quasispecies Analysis". BioMed Research International 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/626083.

Annotation:
Genetic variability of hepatitis C virus (HCV) determines the pathogenesis of infection, including viral persistence and resistance to treatment. The aim of the present study was to characterize HCV genetic heterogeneity within hypervariable region 1 (HVR1) of a chronically infected patient by an ultradeep 454 sequencing strategy. Three independent sequencing error correction methods were applied. The first correction method (Method I) implemented a cut-off for genetic variants present in less than 1%. In the second method (Method II), a condition to call a variant was bidirectional coverage of sequencing reads. The third method (Method III) used the Short Read Assembly into Haplotypes (ShoRAH) program. After the application of these three different algorithms, the HVR1 population consisted of 8, 40, and 186 genetic haplotypes. The most sensitive method was ShoRAH, allowing the reconstruction of haplotypes constituting as little as 0.013% of the population. The most abundant genetic variant constituted only 10.5%. Seventeen haplotypes were present at a frequency above 1%, and there was wide dispersion of the population into very sparse haplotypes. Our results indicate that HCV HVR1 heterogeneity and quasispecies population structure may be reconstructed by ultradeep sequencing. However, credible analysis requires proper reconstruction methods, which would distinguish sequencing error from real variability in vivo.
21

Mahfudh, Adzhal Arwani, and Hery Mustofa. "Klasifikasi Pemahaman Santri Dalam Pembelajaran Kitab Kuning Menggunakan Algoritma Naive Bayes Berbasis Forward Selection". Walisongo Journal of Information Technology 1, no. 2 (December 20, 2019): 101. http://dx.doi.org/10.21580/wjit.2019.1.2.4529.

Annotation:
The kitab kuning are the traditional texts of Islamic studies (dirasah islamiyah) taught in pesantren (Islamic boarding schools), ranging from Arabic grammar (nahwu and shorof), 'ulumul qur'an, hadith, aqidah, tasawuf/akhlaq, tafsir and fiqh to social and community studies (mu'amalah). They are also called "bare" books because they carry no vowel marks (fathah, kasroh, dhammah, sukun), so reading and fully understanding them takes a relatively long time. This study aims to obtain a classification model from data on kitab kuning learning at a pesantren. The methods used are forward selection, as a preprocessing step to reduce the data dimensionality and remove irrelevant data, and naive Bayes to classify the data. The classification of the kitab kuning learning data used attributes classified by their features, with iterated cross-validation to obtain a reliable accuracy estimate. Testing with the two methods showed that the naive Bayes algorithm alone achieved 96.02% accuracy, while forward-selection-based naive Bayes achieved 97.38%; adding feature selection thus improved the accuracy.
22

Van den Nest, Maarten. "Efficient classical simulations of quantum Fourier transforms and Normalizer circuits over Abelian groups". Quantum Information and Computation 13, no. 11&12 (November 2013): 1007–37. http://dx.doi.org/10.26421/qic13.11-12-7.

Annotation:
The quantum Fourier transform (QFT) is an important ingredient in various quantum algorithms which achieve superpolynomial speed-ups over classical computers. In this paper we study under which conditions the QFT can be simulated efficiently classically. We introduce a class of quantum circuits, called normalizer circuits: a normalizer circuit over a finite Abelian group is any quantum circuit comprising the QFT over the group, gates which compute automorphisms and gates which realize quadratic functions on the group. In our main result we prove that all normalizer circuits have polynomial-time classical simulations. The proof uses algorithms for linear diophantine equation solving and the monomial matrix formalism introduced in our earlier work. Our result generalizes the Gottesman-Knill theorem: in particular, Clifford circuits for d-level qudits arise as normalizer circuits over the group Z_d^m. We also highlight connections between normalizer circuits and Shor's factoring algorithm, and the Abelian hidden subgroup problem in general. Finally we prove that quantum factoring cannot be realized as a normalizer circuit owing to its modular exponentiation subroutine.
23

Avila, Anderson, Renata Hax Sander Reiser, Maurício Lima Pilla and Adenauer Correa Yamin. "Improving in situ GPU simulation of quantum computing in the D-GM environment". International Journal of High Performance Computing Applications 33, no. 3 (January 16, 2019): 462–72. http://dx.doi.org/10.1177/1094342018823251.

Annotation:
Exponential increase and global access to read/write memory states in quantum computing (QC) simulation limit both the number of qubits and quantum transformations which can be currently simulated. Although QC simulation is parallel by nature, spatial and temporal complexity are major performance hazards, making this a nontrivial application for high performance computing. A new methodology employing reduction and decomposition optimizations has shown relevant results, but its GPU implementation could be further improved. In this work, we develop a new kernel for in situ GPU simulation that better explores its resources without requiring further hardware. Shor’s and Grover’s algorithms are simulated up to 25 and 21 qubits respectively and compared to our previous version, to [Formula: see text] simulator and to ProjectQ framework, showing better results with relative speedups up to 4.38×, 3357.76× and 333× respectively.
24

Yu, Feng, Jinglong Fang, Bin Chen and Yanli Shao. "An Incremental Learning Based Convolutional Neural Network Model for Large-Scale and Short-Term Traffic Flow". International Journal of Machine Learning and Computing 11, no. 2 (March 2021): 143–51. http://dx.doi.org/10.18178/ijmlc.2021.11.2.1027.

Annotation:
Traffic flow prediction is very important for smooth road conditions in cities and convenient travel for residents. With the explosive growth of traffic flow data size, traditional machine learning algorithms cannot fit large-scale training data effectively and the deep learning algorithms do not work well because of the huge training and update costs, and the prediction accuracy may need to be further improved when an emergency affecting traffic occurs. In this study, an incremental learning based convolutional neural network model, TF-net, is proposed to achieve the efficient and accurate prediction of large-scale and short-term traffic flow. The key idea is to introduce the uncertainty features into the model without increasing the training cost to improve the prediction accuracy. Meanwhile, based on the idea of combining incremental learning with active learning, a certain percentage of typical samples in historical traffic flow data are sampled to fine-tune the prediction model, so as to further improve the prediction accuracy for special situations and ensure the real-time requirement. The experimental results show that the proposed traffic flow prediction model has better performance than the existing methods.
25

Kotukh, E. V., O. V. Severinov, A. V. Vlasov, A. O. Tenytska and E. O. Zarudna. "Some results of development of cryptographic transformations schemes using non-abelian groups". Radiotekhnika, no. 204 (April 9, 2021): 66–72. http://dx.doi.org/10.30837/rt.2021.1.204.07.

Annotation:
The implementation of a successful attack on classical public key cryptosystems becomes more and more realistic with the advent of practical results in the implementation of Shor's and Grover's algorithms on quantum computers. Modern progress toward building a quantum computer of sufficient power justifies the need to revise existing approaches and determine the most effective ones for post-quantum cryptography. One of these promising research priorities is the study of cryptosystems based on non-abelian groups. The problems of conjugacy search, membership search, and others are difficult to solve in the theory of non-abelian groups and are the basis for building provably secure public key cryptosystems. This paper gives an overview of the most frequently discussed algorithms using non-abelian groups: matrix groups, braid groups, semidirect products, and algebraic erasers (AE). An analysis of the construction of encryption and decryption schemes and key establishment mechanisms is given. Many non-abelian group-based key establishment protocols are associated with the Diffie-Hellman (DH) protocol. The paper analyzes the properties of non-abelian group public key encryption schemes. Various cryptographic primitives using non-commutative groups as a basis for post-quantum schemes are considered.
26

Li, Kai, and Qing-yu Cai. "Practical Security of RSA Against NTC-Architecture Quantum Computing Attacks". International Journal of Theoretical Physics 60, no. 8 (June 19, 2021): 2733–44. http://dx.doi.org/10.1007/s10773-021-04789-x.

Annotation:
Quantum algorithms can greatly speed up the solution of some classical problems, but the computational power of quantum computers is also restricted by the laws of physics. Due to the quantum time-energy uncertainty relation, there is a lower limit on the evolution time for a given quantum operation, and therefore time complexity must be considered when the number of serial quantum operations is particularly large. When the key length is on the order of kilobytes (encryption and decryption can still be completed in a few minutes using standard programs), it would take at least 50-100 years for NTC (Neighbor-only, Two-qubit gate, Concurrent) architecture ion-trap quantum computers to execute Shor's algorithm. For NTC-architecture superconducting quantum computers with a code distance of 27 for error correction, when the key length is increased to 16 KB, the cracking time also increases to 100 years, which far exceeds the coherence time. This shows the robustness of the updated RSA against practical quantum computing attacks.
27

Chandrasekaran, Yaspy Joshva, Shine Let Gunamony and Benin Pratap Chandran. "Integration of 5G Technologies in Smart Grid Communication-A Short Survey". International Journal of Renewable Energy Development 8, no. 3 (October 5, 2019): 275–83. http://dx.doi.org/10.14710/ijred.8.3.275-283.

Annotation:
A smart grid is an intelligent power distribution system that employs dual communication between the energy devices and the substation. Dual communication helps to oversee the internet access points, energy meters, and power demand of the entire grid. Deployment of advanced communication and control technologies makes the smart grid system efficient in energy availability and low-cost maintenance. Appropriate algorithms are analyzed first so that the grid has proper routing and security with a high level of power transmission and distribution. Information and Communication Technology plays a significant role in monitoring, demand response, and control of the energy distribution. This paper presents a broad review of communication and network technologies with regard to Internet of Things, Machine to Machine Communication, and Cognitive Radio terminologies comprising 5G technology. Networks suitable for the future smart grid are compared with respect to standard protocols, data rate, throughput, delay, security, and routing. Approaches adopted for the smart grid system are assessed based on the performance and the parameters observed.
28

Sigov, A., E. Andrianova, D. Zhukov, S. Zykov and I. E. Tarasov. "QUANTUM INFORMATICS: OVERVIEW OF THE MAIN ACHIEVEMENTS". Russian Technological Journal 7, no. 1 (February 28, 2019): 5–37. http://dx.doi.org/10.32362/2500-316x-2019-7-1-5-37.

Annotation:
The urgency of conducting research in the field of quantum informatics is substantiated, and promising areas of research are highlighted. Based on foreign and Russian publications and materials, a review is made of the main scientific results that characterize the current state of research in quantum computer science. It is noted that knowledge and funds are invested most intensively in the development of the architecture of a quantum computer and its elements. Although there is as yet no report of a physical implementation of a quantum computer comparable in functionality to a classical digital computer, the development of quantum algorithms is one of the most popular areas of research. An advantage of quantum algorithms is that they reduce the time required to solve a problem by parallelizing operations through the generation of entangled quantum states and their subsequent use. This advantage (quantum acceleration) is most important when solving the problem of modeling the dynamics of complex systems and enumerative mathematical problems. (The general case of enumeration is the Grover scheme and its variants; for the tasks of searching for hidden periods, Shor's scheme using the fast quantum Fourier transform and its analogues.) The demand for cybersecurity developments (search for vulnerabilities in smart spaces, secure storage and use of big data, quantum cryptography) is noted. More than a dozen articles are devoted to quantum algorithms for key search, key distribution over optical fibers of various lengths, and the analysis of the quantum resources necessary to conduct a cyber attack. In the field of artificial quantum intelligence, attention is paid, first of all, to the search for a model of a quantum neural network that is optimal from the point of view of using all the advantages offered by quantum computing and neural networks, as well as machine learning algorithms. Examples of the use of quantum computing in the cognitive and social sciences for studying decision-making mechanisms with incomplete data are given. It is concluded that quantum informatics is promising for the simulation of complex natural and artificial phenomena and processes.
29

Vasudeva, Vaishali, Subrata Nandy, Hitendra Padalia, Ritika Srinet and Prakash Chauhan. "Mapping spatial variability of foliar nitrogen and carbon in Indian tropical moist deciduous sal (Shorea robusta) forest using machine learning algorithms and Sentinel-2 data". International Journal of Remote Sensing 42, no. 3 (December 3, 2020): 1139–59. http://dx.doi.org/10.1080/01431161.2020.1823043.

30

Potes, Miguel, Gonçalo Rodrigues, Alexandra Marchã Penha, Maria Helena Novais, Maria João Costa, Rui Salgado and Maria Manuela Morais. "Use of Sentinel 2 – MSI for water quality monitoring at Alqueva reservoir, Portugal". Proceedings of the International Association of Hydrological Sciences 380 (December 18, 2018): 73–79. http://dx.doi.org/10.5194/piahs-380-73-2018.

Annotation:
The Alqueva reservoir, located in the southeast of Portugal, has a surface area of 250 km² and a total capacity of 4150 hm³. Since 2006 the water quality of this reservoir has been studied by the authors using remote sensing techniques, first with the MERIS multi-spectral radiometer on board ENVISAT-1 and presently with the MSI multi-spectral radiometer on board SENTINEL-2. The existence of two satellites (A and B) equipped with MSI enables the area to be revisited, under the same viewing conditions, every 2–3 days. Since 2017 the multidisciplinary project ALOP (ALentejo Observation and Prediction systems) has expanded the team's knowledge of the physical and bio-chemical properties of the reservoir; the project includes an integrated field campaign at different experimental sites in the reservoir and on its shores, at least until September 2018. Algorithms previously developed by the team for MERIS are tested with the new MSI instrument for water turbidity, chlorophyll a concentration and density of cyanobacteria. Results from a micro-algae bloom that occurred in the reservoir in late summer/early autumn 2017 are presented, showing the capabilities of the MSI sensor for detection and high-resolution mapping over the reservoir. The results are compared with in situ sampling and laboratory analysis of chlorophyll a associated with the bloom.
31

Mulerova, T. A., N. I. Morozova, V. N. Maksimov and M. Yu Ogarkov. "The role of the sympathoadrenal system candidate gene (ADRB1, ADRA2B) polymorphism in the antihypertensive efficiency of β-adrenoblocker in the indigenous Shor people". "Arterial'naya Gipertenziya" ("Arterial Hypertension") 26, no. 4 (August 28, 2020): 421–30. http://dx.doi.org/10.18705/1607-419x-2020-26-4-421-430.

Annotation:
Objective: to study the dynamics of blood pressure (BP) in the indigenous Shor population in response to antihypertensive therapy with a β-blocker (metoprolol succinate), taking into account the polymorphism of candidate genes of the sympathetic adrenal system (ADRB1 and ADRA2B). Design and methods: The indigenous population (the Shors) living in the areas of Mountain Shoria (Kemerovo region) was studied. Using the continuous sampling method, 901 people (18 years old and above) were examined. Blood pressure was measured in accordance with the recommendations of the Russian Society of Cardiology / Russian Society of Hypertension (2010). The survey identified the respondents with arterial hypertension (AH) for further observation: 367 people (40.7%). The second screening was performed one year after the patients were included in the study. The criterion for placing a patient under prospective observation was regular intake of the prescribed medication (162 people). Antihypertensive therapy was prescribed by a cardiologist according to the same recommendations and consisted of the β1-selective β-blocker metoprolol succinate at a dose of 100 mg per day. Polymorphisms of the genes ADRB1 (c.145A>G, Ser49Gly, rs1801252) and ADRA2B (I/D, rs28365031) were tested using the polymerase chain reaction. Results: The study of the indigenous population shows an association between carrying the A allele of the ADRB1 gene in the homozygous state and achieving the target blood pressure level (OR = 2.36) while taking the β-blocker (metoprolol succinate). In the Shor population the ADRA2B gene polymorphism was not associated with effective treatment of hypertension using this medication. Conclusion: An epidemiological study in Mountain Shoria has demonstrated that the antihypertensive effect of metoprolol succinate depends on the polymorphism of the candidate genes (ADRB1 and ADRA2B), which code components of the sympathetic adrenal system, a system that plays an important role in the pathogenesis of hypertension. There are currently no clinical algorithms for assessing personalized sensitivity to β-blockers, so further research in this area remains topical.
32

KAMENEV, D. I., G. P. BERMAN, R. B. KASSMAN and V. I. TSIFRINOVICH. "MODELING FULL ADDER IN ISING SPIN QUANTUM COMPUTER WITH 1000 QUBITS USING QUANTUM MAPS". International Journal of Quantum Information 02, no. 03 (September 2004): 323–40. http://dx.doi.org/10.1142/s0219749904000304.

Annotation:
The quantum adder is an essential attribute of a quantum computer, just as a classical adder is needed for the operation of a digital computer. We model the quantum full adder as a realistic complex algorithm on a large number of qubits in an Ising spin quantum computer. Our results are an important step toward effective modeling of the quantum modular adder, which is needed for Shor's and other quantum algorithms. Our full adder has the following features. (i) The near-resonant transitions with small detunings are completely suppressed, which allows us to decrease errors by several orders of magnitude and to model a 1000-qubit full adder. (We add a 1000-bit number using 2001 spins.) (ii) We construct the full adder gates directly as sequences of radio-frequency pulses, rather than breaking them down into generalized logical gates, such as Control-Not and one-qubit gates. This substantially reduces the number of pulses needed to implement the full adder. (The maximum number of pulses required to add one bit (F-gate) is 15.) (iii) The full adder is realized in a homogeneous spin chain. (iv) The phase error is minimized: the F-gates generate approximately the same phase for different states of the superposition. (v) Modeling of the full adder is performed using quantum maps instead of differential equations, which allows us to reduce the calculation time to a reasonable value.
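
For reference, the Boolean function that each per-bit stage of such an adder has to realize is the ordinary full-adder logic below (a classical sketch of what is being computed, not the paper's pulse-level construction):

```python
def full_adder_bit(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """One bit position of a ripple-carry adder: sum and carry-out."""
    s = a ^ b ^ carry_in                        # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # carry bit
    return s, carry_out

def ripple_add(x: int, y: int, n_bits: int) -> int:
    """Chain n_bits full-adder stages, least significant bit first."""
    carry, result = 0, 0
    for i in range(n_bits):
        s, carry = full_adder_bit((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result | (carry << n_bits)

assert ripple_add(0b1011, 0b0110, 4) == 0b1011 + 0b0110  # 11 + 6 = 17
```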
33

Stenis, Jan, Varvara Sachpazidou and William Hogland. "An Economic Instrument to Address Beach Wrack". Applied Economics and Finance 8, no. 1 (December 9, 2020): 50. http://dx.doi.org/10.11114/aef.v8i1.5100.

Annotation:
Objectives: This article introduces a practical economic instrument based on the Naturally Optimised Revenue Demand in Communities (NORDIC) model to improve the management of beach wrack. Tourism is an important sector in a country's or region's economy, as it generates employment and business opportunities. Sandy shorelines have long served as areas for recreation and as attractions on which tourism development has been based. Accumulations of beach wrack cause a significant decrease in the recreational value of a coastal area, and the decomposition of beach wrack emits an unpleasant odor as it releases nitrate, phosphate and hydrogen sulfide (H2S). In this investigation, we provide coastal communities with a powerful tool to address the damage inflicted on their beaches by mounds of marine biomass. Methods: We adapted the NORDIC model and used a case study to illustrate how the adapted model could alleviate the burden that beach wrack places on municipalities. Results: The application of the versatile NORDIC model by various managers to manage and promote sustainable use of beach wrack would boost the tourism industry in coastal areas. Conclusions: We recommend applying the NORDIC model to beach wrack management in general, and in the tourism sector in particular, to enhance the economic value of attractive shores. Future research should focus on developing additional algorithms for the valuation of specific kinds of beach wrack.
34

Pongsena, Watthana, Prakaidoy Sitsayabut, Nittaya Kerdprasop and Kittisak Kerdprasop. "Development of a Model for Predicting the Direction of Daily Price Changes in the Forex Market Using Long Short-Term Memory". International Journal of Machine Learning and Computing 11, no. 1 (January 2021): 61–67. http://dx.doi.org/10.18178/ijmlc.2021.11.1.1015.

Annotation:
Forex is the largest global financial market in the world. Traditionally, fundamental and technical analysis are the strategies that Forex traders most often use. Nowadays, advanced computational technology, Artificial Intelligence (AI), plays a significant role in the financial domain, and various applications based on AI technologies, particularly machine learning and deep learning, are constantly being developed. Historical Forex data are time-series data, in which past values affect the values that will appear in the future, and several existing works from other domains have shown that Long Short-Term Memory (LSTM), a particular kind of deep learning that can be applied to modeling time series, provides better performance than traditional machine learning algorithms. In this paper, we develop a predictive model that predicts the direction of daily price changes of currency pairs in the Forex market using LSTM. We also conduct an extensive experiment to demonstrate the effect of various factors on the performance of the model. The experimental results show that the optimized LSTM model predicts the direction of the future price with an accuracy of up to 61.25 percent.
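
A minimal sketch of a binary direction-of-change LSTM classifier of the kind the abstract describes. The window length, feature count and hyperparameters here are hypothetical placeholders, and the random arrays merely stand in for real price data; this is not the authors' architecture.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 1000 windows of 30 days x 4 features (e.g. OHLC prices),
# with a binary label for "next day's close is higher". Purely synthetic.
X = np.random.rand(1000, 30, 4).astype("float32")
y = (np.random.rand(1000) > 0.5).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(30, 4)),   # summarize the window
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(price goes up)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```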
35

Saied, Marwa, Sabah Khaled, Thomas Down, Jacek Marzec, Paul Smith, Silvana Debernardi and Bryan D. Young. "Genome Wide Study of DNA Methylation In AML". Blood 116, no. 21 (November 19, 2010): 3618. http://dx.doi.org/10.1182/blood.v116.21.3618.3618.

Annotation:
DNA methylation is the most stable epigenetic modification and has a major role in cancer initiation and progression. The two main aims of this research were, firstly, to use genome-wide analysis of DNA methylation to better understand the development of acute myeloid leukemia (AML), and secondly, to detect differentially methylated genes/regions between certain subtypes of AML and normal bone marrow (NBM). We used the methylated DNA immunoprecipitation technique followed by high-throughput sequencing on an Illumina Genome Analyser II (MeDIP-seq) for 9 AML samples for which ethical approval had been obtained. The selected leukemias included three with the t(8;21) translocation, three with the t(15;17) translocation and three with normal karyotypes (NK). The control samples were 3 normal bone marrows (NBMs) from healthy donors. The number of reads generated ranged between 18–20 million paired-end reads per lane with good base quality from both ends (base quality > 30 represented 75%-85% of reads). The reads were aligned using 2 algorithms (Maq and Bowtie) and the methylation analysis was performed with Batman (Bayesian Tool for Methylation Analysis). The creation of this genome-wide methylation map for AML permits the examination of the patterns for key genetic elements. Investigation of the 35,072 promoter regions identified 80 genes which showed significantly differential methylation levels in leukemic cases in comparison to NBM; consistently high methylation levels in leukaemia were detected in the promoters of 70 genes, e.g. DPP6, ID4, DCC, whereas high methylation levels in NBM, lost in leukaemia, were observed in 10 genes, e.g. ATF4. For each AML subtype, we also identified significantly differentially methylated promoter regions, e.g. PAX1 for t(8;21), GRM7 for t(15;17), NPM2 for NK. An analysis of gene body methylation identified 49 genes with significantly higher methylation in AML in comparison to NBM, e.g. MYOD1, and 31 genes with higher methylation in NBMs than AML, e.g. GNG8. A similar analysis of 23,600 CpG islands identified 400 CpG islands with significantly differential methylation levels between leukaemia and NBMs (212 CpG islands had significantly increased methylation in leukaemia and 188 CpG islands had significantly higher methylation in NBMs). The pattern of methylation in CpG island "shores" (2 kb on either side of each CpG island) was investigated: 312 CpG island shores showed higher methylation in leukaemia and 88 CpG shores had significantly increased methylation levels in NBMs. This genome-wide methylation map has been validated using direct bisulfite sequencing of the regions identified above (Spearman r = 0.8, P < 0.0001) and also using the Illumina Infinium assay (Spearman r = 0.7, P < 0.0001), which interrogates regions at single representative CpGs. Comparison of previous array-based gene expression data with this methylation map revealed a significant negative correlation between promoter methylation and gene expression (Pearson r = -0.9, P < 0.0001), while gene body methylation showed a small negative correlation with gene expression, found in genes of CpG density > 3% (Pearson r = -0.3, P < 0.0001). Conclusion: we have established a high-resolution (100 bp) map of DNA methylation in AML and thus identified a novel list of genes which have significantly differential methylation levels in AML. Disclosures: No relevant conflicts of interest to declare.
36

Poblete, Carlos, Francisco Suárez, Sebastián Vicuña, Carolina Meruane, Alberto de La Fuente and Jorge Reyes A. "Avances en el pronóstico operacional de corto plazo y la evolución futura de largo plazo del fenómeno de la turbiedad en el río Maipo". Aqua-LAC 12, no. 2 (September 30, 2020): 37–46. http://dx.doi.org/10.29104/phi-aqualac/2020-v12-2-03.

Annotation:
The phenomenon of extreme turbidity has been addressed in previous work through the analysis of historical events, studies that have made it possible not only to characterize and better understand the phenomenon but also to attempt to predict it. We present advances along two lines of prediction: 1) short-term operational forecasts, driven by the forcings that trigger turbidity increases in the river, and 2) long-term future evolution, considering the climate-change patterns that may occur over the Maipo basin. The operational model forecasts the hourly turbidity time series over a 3-day horizon and consists of a hybrid model that combines geomorphological and meteorological information about the basin, together with a baseline hydrometeorological forecast, with deep learning algorithms that define the weight of each forcing. Since entering operation, the tool has been used to generate early-warning advisories and estimates of the duration of an emergency. The future evolution of the phenomenon was addressed using simulations from a high-resolution climate model, which fed a local hydrological model; these results, combined with a neural-network-based model, generated daily turbidity series. Relative to the current situation, under an RCP 8.5 climate-change scenario, increases would be expected in the number of events and in their maximum duration and associated mean magnitude, with apparent reductions in their maximum intensities. Although both tools must continue to be evaluated, they are considered promising alternatives for advancing the prediction of the phenomenon's behavior in both the short and the long term.
APA, Harvard, Vancouver, ISO, and other citation styles
37

Moradi, N., M. Hasanlou, and M. Saadatseresht. "OCEAN COLOR RETRIEVAL USING LANDSAT-8 IMAGERY IN COASTAL CASE 2 WATERS (CASE STUDY PERSIAN AND OMAN GULF)". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B8 (24.06.2016): 1161–64. http://dx.doi.org/10.5194/isprs-archives-xli-b8-1161-2016.

Full text of the source
Annotation:
Ocean color (OC) monitoring from satellite imagery provides an appropriate tool for better understanding marine processes and changes in the coastal environment. Radiance measurements in the visible range of the electromagnetic spectrum provide information on ocean color that is associated with the water constituents. These measurements are used to monitor the level of biological activity and the presence of particles in the water. Ocean features such as chlorophyll concentration, suspended sediment concentration, and sea surface temperature have a significant impact on ocean dynamics. The concentration of chlorophyll (chla), the active pigment of phytoplankton photosynthesis, is a key indicator for assessing water quality and biogeochemistry. Empirical chla algorithms depend on the interplay of the various optical components of the water, which may change in space and time in waters with different optical characteristics. Therefore, algorithms developed for one area may not work in other places, and each region, according to its specific characteristics, may require a locally tuned algorithm. We evaluated several algorithms for chlorophyll retrieval, including empirical algorithms with a simple blue-green band ratio (i.e. OCx), as well as a two-band ratio with the variable Rrs(λ2)/Rrs(λ1), a three-band ratio with the variable [Rrs(λ1)^-1 - Rrs(λ2)^-1] × Rrs(λ3), and a four-band ratio with the variable [Rrs(λ1)^-1 - Rrs(λ2)^-1] / [Rrs(λ4)^-1 - Rrs(λ3)^-1], where the wavelengths λ1, λ2, λ3, and λ4 lie in the red and near-infrared range of the electromagnetic spectrum. Despite the high importance of the Persian Gulf and Oman Sea for the surrounding basin countries, few studies have so far been carried out in this area. This article focuses on the northern part of the Oman Sea and the Persian Gulf, along the neighboring shores of Iran (Case 2 waters). Using Landsat 8 satellite imagery, we estimate chla concentrations and customize different OC algorithms for this new dataset. The satellite was launched in 2013 and continuously provides data from two sensors, the Operational Land Imager (OLI) and the Thermal Infrared Sensor (TIRS). These sensors collect image data in nine short-wavelength bands in the range of 433-2300 nm and in two long-wavelength thermal bands, respectively. Seven of the nine OLI bands are compatible with the TM (Thematic Mapper) and ETM+ (Enhanced Thematic Mapper Plus) sensors of previous Landsat satellites; the two additional bands, the coastal water band (433 to 453 nm) and the cirrus band (1360 to 1390 nm), serve to measure water quality and to detect high thin clouds. Since the OLI sensor on Landsat 8 has a much better spatial resolution than other sensors used to study OC, it can determine changes in OC more accurately.
To evaluate the results, imagery from the MODIS sensor (Moderate Resolution Imaging Spectroradiometer), acquired at the same time as the Landsat 8 images, is used. The statistical parameters used to evaluate the performance of the different algorithms are the root mean square error (RMSE) and the coefficient of determination (R²); on the basis of these parameters we choose the most appropriate algorithm for the area. The results of implementing the different OC algorithms clearly show the superiority of the selected method, with R² = 0.71 and RMSE = 0.07.
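The band-ratio forms above translate directly into code. Below is a minimal sketch in Python, assuming made-up reflectance values and placeholder polynomial coefficients; the coefficients actually fitted to the Persian Gulf dataset are given in the paper, not here.

import numpy as np

# Hypothetical remote-sensing reflectances Rrs for a few pixels; real values
# would come from atmospherically corrected Landsat-8 OLI bands.
rrs_blue = np.array([0.004, 0.006, 0.009])    # e.g. a blue band near 480 nm
rrs_green = np.array([0.005, 0.005, 0.006])   # e.g. a green band near 560 nm

def chla_ocx(rrs_b, rrs_g, coeffs=(0.3, -2.9, 1.7, -0.6, -1.2)):
    # OCx-style empirical retrieval: a polynomial in log10 of the blue-green
    # band ratio. These coefficients are placeholders; operational ones are
    # fitted per sensor and per region.
    r = np.log10(rrs_b / rrs_g)
    a0, a1, a2, a3, a4 = coeffs
    return 10.0 ** (a0 + a1 * r + a2 * r**2 + a3 * r**3 + a4 * r**4)

print(chla_ocx(rrs_blue, rrs_green))  # illustrative chla estimates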
APA, Harvard, Vancouver, ISO, and other citation styles
38

Pokhre, Shiva. "Assessment of Above Ground Biomass and Fire Risk Zonation in Selected Forest Areas of LudhiKhola Watershed, Gorkha Nepal". Remote Sensing of Land 2, No. 1 (15.12.2018): 47–64. http://dx.doi.org/10.21523/gcj1.18020104.

Full text of the source
Annotation:
The drive for robust, accurate and cost-effective methods for biomass estimation over large areas is ever greater with the launch of carbon crediting mechanisms in developing countries, such as UN-REDD [United Nations Programme on Reducing Emissions from Deforestation and Forest Degradation], and climate change mitigation programs. Traditional ground-based measurement requires abundant manpower, resources, cost and time; remote sensing technologies pertinently answer the need of the time in enhancing the successful implementation of such programs. The region growing and valley following algorithms used to delineate individual tree crowns produced segmentation accuracies of 59.35% and 54.83%, respectively; both algorithms take similar approaches to delineation. Above-ground biomass was calculated using allometric equations with height and diameter measured in the field. Linear regression models were applied to derive the relation of biomass with crown projection area and with field-measured height. All models were significant at the 95% confidence level, with the lowest root mean square error (RMSE) of 27.45% (Shorea robusta) and 33.33% (other species). The total biomass stock was approximately 30,620 kg/ha. For forest fire hazard zonation, an Analytic Hierarchy Process (AHP) method was used. The results show that 11% of the study area falls in the very low fire risk zone, 55% in the low fire risk zone, 30% in the moderate fire potential zone, and 4% in the high forest fire risk zone. The map was also validated against major past fire incidents; the predicted fire zones are found to be in good agreement with past fire incidents, and hence the map can be used for future forest resource management.
APA, Harvard, Vancouver, ISO, and other citation styles
39

Bader, Mary Kay, Annabelle Braun, Cherie Fox, Lauren Dwinell, Jennifer Cord, Marne Andersen, Bryan Noakes, and Daniel Ponticiello. "A California Hospital's Response to COVID-19: From a Ripple to a Tsunami Warning". Critical Care Nurse 40, No. 6 (01.12.2020): e1-e16. http://dx.doi.org/10.4037/ccn2020799.

Full text of the source
Annotation:
Background The outbreak of coronavirus disease 2019 (COVID-19) rippled across the world from Wuhan, China, to the shores of the United States within a few months. Hospitals and intensive care units were suddenly faced with a "tsunami" warning requiring instantaneous implementation and escalation of disaster plans. Evidence Review An evidence-based question was developed and an extensive review of the literature was completed, resulting in a structured plan for the intensive care units to manage a surge of patients critically ill with COVID-19 in March 2020. Twenty-five sources of evidence focusing on pandemic intensive care unit and COVID-19 management laid the foundation for the team to navigate the crisis. Implementation The Critical Care Services task force adopted recommendations from the CHEST consensus statement on surge capacity principles and other sources, which served as the framework for the organized response. The 4 S's became the focus: space, staff, supplies, and systems. The task force developed algorithms, workflows, and new processes for treating patients and for coping with staffing shortages and limited supplies, and new intensive care unit staffing solutions were adopted. Evaluation Using a framework based on the literature reviewed, the Critical Care Services task force controlled the surge of patients with COVID-19 from March through May 2020. Patients received excellent care, and the mortality rate was 0.008%. The intensive care unit team had the needed respiratory and general supplies but had to continually adapt to shortages of personal protective equipment, cleaning products, and some medications. Sustainability The intensive care unit pandemic response plan has been established, and the team is prepared for the next wave of COVID-19.
APA, Harvard, Vancouver, ISO, and other citation styles
41

Skosana, Unathi, and Mark Tame. "Demonstration of Shor's factoring algorithm for N = 21 on IBM quantum processors". Scientific Reports 11, No. 1 (16.08.2021). http://dx.doi.org/10.1038/s41598-021-95973-w.

Full text of the source
Annotation:
We report a proof-of-concept demonstration of a quantum order-finding algorithm for factoring the integer 21. Our demonstration involves the use of a compiled version of the quantum phase estimation routine, and builds upon a previous demonstration. We go beyond this work by using a configuration of approximate Toffoli gates with residual phase shifts, which preserves the functional correctness and allows us to achieve a complete factoring of N = 21. We implemented the algorithm on IBM quantum processors using only five qubits and successfully verified the presence of entanglement between the control and work register qubits, which is a necessary condition for the algorithm's speedup in general. The techniques we employ may be useful in carrying out Shor's algorithm for larger integers, or other algorithms in systems with a limited number of noisy qubits.
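The classical half of the algorithm, which turns the order found by quantum phase estimation into factors, is easy to sketch. A minimal illustration in Python, with the order computed by brute force rather than by the quantum routine the paper implements:

from math import gcd

def multiplicative_order(a, n):
    # Smallest r > 0 with a**r ≡ 1 (mod n); this is the quantity the quantum
    # phase-estimation routine extracts, found here by brute force.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_postprocess(n, a):
    # Classical post-processing of Shor's algorithm: derive factors from r.
    g = gcd(a, n)
    if g != 1:
        return g, n // g              # a lucky guess already factors n
    r = multiplicative_order(a, n)
    if r % 2 == 1:
        return None                   # odd order: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                   # trivial square root: retry
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_postprocess(21, 2))        # (7, 3): the factoring of N = 21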
APA, Harvard, Vancouver, ISO, and other citation styles
42

Aradyamath, Poornima, Naghabhushana N M, and Rohitha Ujjinimatad. "Quantum Computing Concepts with Deutsch Jozsa Algorithm". JOIV: International Journal on Informatics Visualization 3, No. 1 (10.01.2019). http://dx.doi.org/10.30630/joiv.3.1.218.

Full text of the source
Annotation:
In this paper, we briefly review the basic concepts of quantum computation, entanglement, quantum cryptography, and the quantum Fourier transform. Quantum algorithms like Deutsch-Jozsa, Shor's factorization, and Grover's data search are developed using the Fourier transform and quantum computation concepts to build quantum computers. Researchers are seeking ways to build a quantum computer that works more efficiently than a classical computer. Among the standard well-known algorithms in the field of quantum computation and communication, we describe mathematically the Deutsch-Jozsa algorithm in detail for 2 and 3 qubits. Calculation of balanced and unbalanced states is shown in the mathematical description of the algorithm.
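The 2-qubit case described there can be checked with a tiny state-vector simulation. The sketch below (Python) uses the phase-oracle form of Deutsch-Jozsa; it is an illustration, not the paper's own derivation:

import numpy as np

def deutsch_jozsa(f_values):
    # Phase-oracle Deutsch-Jozsa on n qubits, simulated as a state vector.
    # f_values[x] in {0, 1} for x = 0 .. 2**n - 1. Returns the probability
    # of measuring |0...0> at the end: 1.0 if f is constant, 0.0 if balanced.
    n_states = len(f_values)
    state = np.full(n_states, 1 / np.sqrt(n_states))   # H^n |0...0>
    state *= (-1.0) ** np.asarray(f_values)            # oracle phase kickback
    amp0 = state.sum() / np.sqrt(n_states)             # overlap with |0...0> after H^n
    return abs(amp0) ** 2

print(deutsch_jozsa([0, 0, 0, 0]))  # constant f on 2 qubits -> 1.0
print(deutsch_jozsa([0, 1, 1, 0]))  # balanced f on 2 qubits -> 0.0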
APA, Harvard, Vancouver, ISO, and other citation styles
43

Childs, Andrew M., and Gábor Ivanyos. "Quantum computation of discrete logarithms in semigroups". Journal of Mathematical Cryptology 8, No. 4 (01.01.2014). http://dx.doi.org/10.1515/jmc-2013-0038.

Full text of the source
Annotation:
We describe an efficient quantum algorithm for computing discrete logarithms in semigroups using Shor's algorithms for period finding and the discrete logarithm problem as subroutines. Thus, proposed cryptosystems based on the presumed hardness of discrete logarithms in semigroups are insecure against quantum attacks. In contrast, we show that some generalizations of the discrete logarithm problem are hard in semigroups despite being easy in groups. We relate a shifted version of the discrete logarithm problem in semigroups to the dihedral hidden subgroup problem, and we show that the constructive membership problem with respect to …
APA, Harvard, Vancouver, ISO, and other citation styles
44

Kiselova, O. M., O. M. Prytomanova, S. V. Dzyuba, and V. G. Padalko. "Construction of a multiplicatively weighted Voronoi diagram with fuzzy parameters". Problems of applied mathematics and mathematic modeling, 27.11.2019. http://dx.doi.org/10.15421/321912.

Full text of the source
Annotation:
An algorithm is proposed for constructing a multiplicatively weighted Voronoi diagram in the presence of fuzzy parameters, with optimal location of a finite number of generator points in a bounded set of n-dimensional Euclidean space En. The algorithm is based on formulating a continuous set-partitioning problem from En into non-intersecting subsets, with a partitioning quality criterion that yields the corresponding form of the Voronoi diagram. Algorithms for constructing the classical Voronoi diagram and its various generalizations that are based on the methods of optimal set-partitioning theory have several advantages over the other methods in use: they do not depend on the dimension of the space En containing the partitioned bounded set, they are independent of the geometry of the partitioned sets, their complexity does not grow as the number of generator points increases, and they can be used to construct Voronoi diagrams with optimal location of the points, among other benefits. This general-purpose approach makes it possible to easily construct not only the already known Voronoi diagrams but also new ones. The algorithm proposed in the paper is developed using a synthesis of methods for solving optimal set-partitioning problems, neuro-fuzzy technologies, and modifications of Shor's r-algorithm for solving non-smooth optimization problems.
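For concreteness: in a multiplicatively weighted Voronoi diagram each generator point carries a positive weight w_i, and a point x belongs to the cell of the generator minimizing the weighted distance ||x - p_i|| / w_i. A brute-force sketch on a grid in Python, with fixed generator points and without the fuzzy parameters or optimal placement treated in the paper:

import numpy as np

# Generator points in the unit square and their multiplicative weights
points = np.array([[0.2, 0.3], [0.7, 0.8], [0.6, 0.2]])
weights = np.array([1.0, 2.0, 0.7])

def mw_voronoi_labels(points, weights, resolution=200):
    # Label each grid cell with the generator minimizing ||x - p_i|| / w_i;
    # larger weights capture larger cells.
    xs = np.linspace(0.0, 1.0, resolution)
    grid = np.stack(np.meshgrid(xs, xs), axis=-1)            # (res, res, 2)
    diffs = grid[:, :, None, :] - points[None, None, :, :]   # (res, res, k, 2)
    dists = np.linalg.norm(diffs, axis=-1) / weights         # weighted distances
    return np.argmin(dists, axis=-1)

labels = mw_voronoi_labels(points, weights)
print(np.bincount(labels.ravel()))   # grid cells captured by each generator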
APA, Harvard, Vancouver, ISO, and other citation styles
45

Pfeifer, Peter, and Chen Hou. "Quantum Computing: From Bragg Reflections to Decoherence Estimates". MRS Proceedings 746 (2002). http://dx.doi.org/10.1557/proc-746-q8.5.

Full text of the source
Annotation:
We give an exposition of the principles of quantum computing (logic gates, exponential parallelism from polynomial hardware, fast quantum algorithms, quantum error correction, hardware requirements, and experimental milestones). A compact description of the quantum Fourier transform to find the period of a function—the key step in Shor's factoring algorithm—illustrates how parallel state evolution along many classical computational paths produces fast algorithms by constructive interference similar to Bragg reflections in x-ray crystallography. On the hardware side, we present a new method to estimate critical time scales for the operation of a quantum computer. We derive a universal upper bound on the probability of a computation to fail due to decoherence (entanglement of the computer with the environment), as a function of time. The bound is parameter-free, requiring only the interaction between the computer and the environment, and the time-evolving state in the absence of any interaction. For a simple model we find that the bound performs well and decoherence is small when the energy of the computer state is large compared to the interaction energy. This supports a recent estimate of minimum energy requirements for quantum computation.
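The period-finding step singled out here is easy to mimic classically: keep the amplitudes consistent with one observed function value and inspect their Fourier spectrum. A minimal sketch in Python with illustrative sizes; the quantum speedup comes from performing this transform on a superposition, which the simulation of course does not capture:

import numpy as np

# f(x) = a^x mod N is periodic with period r (here r = 6 for a = 2, N = 21).
M, a, N = 512, 2, 21

f = np.array([pow(a, x, N) for x in range(M)])
amps = (f == f[0]).astype(float)           # x's consistent with one f-value
amps /= np.linalg.norm(amps)

spectrum = np.abs(np.fft.fft(amps)) ** 2   # measurement statistics after the QFT
peaks = np.argsort(spectrum)[-6:]          # peaks near multiples of M/r = 85.33
print(sorted(peaks))                       # [0, 85, 171, 256, 341, 427]
# Recovering r = 6 from a sampled peak k uses the continued-fraction
# expansion of k/M, the same classical step as in Shor's algorithm.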
APA, Harvard, Vancouver, ISO, and other citation styles
46

Banerjee, Utsav, Tenzin S. Ukyab, and Anantha P. Chandrakasan. "Sapphire: A Configurable Crypto-Processor for Post-Quantum Lattice-based Protocols". IACR Transactions on Cryptographic Hardware and Embedded Systems, 09.08.2019, 17–61. http://dx.doi.org/10.46586/tches.v2019.i4.17-61.

Full text of the source
Annotation:
Public key cryptography protocols, such as RSA and elliptic curve cryptography, will be rendered insecure by Shor's algorithm when large-scale quantum computers are built. Cryptographers are working on quantum-resistant algorithms, and lattice-based cryptography has emerged as a prime candidate. However, the high computational complexity of these algorithms makes it challenging to implement lattice-based protocols on low-power embedded devices. To address this challenge, we present Sapphire – a lattice cryptography processor with configurable parameters. Efficient sampling, with a SHA-3-based PRNG, provides two orders of magnitude energy savings; a single-port RAM-based number theoretic transform memory architecture is proposed, which provides 124k-gate area savings; and a low-power modular arithmetic unit accelerates polynomial computations. Our test chip was fabricated in a TSMC 40 nm low-power CMOS process, with the Sapphire cryptographic core occupying 0.28 mm² of area, consisting of 106k logic gates and 40.25 KB SRAM. Sapphire can be programmed with custom instructions for polynomial arithmetic and sampling, and it is coupled with a low-power RISC-V micro-processor to demonstrate the NIST Round 2 lattice-based CCA-secure key encapsulation and signature protocols Frodo, NewHope, qTESLA, CRYSTALS-Kyber and CRYSTALS-Dilithium, achieving up to an order of magnitude improvement in performance and energy-efficiency compared to state-of-the-art hardware implementations. All key building blocks of Sapphire are constant-time and secure against timing and simple power analysis side-channel attacks. We also discuss how masking-based DPA countermeasures can be implemented on the Sapphire core without any changes to the hardware.
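Since the number theoretic transform (NTT) is the workhorse of the polynomial arithmetic that such processors accelerate, a toy reference version may help. A minimal sketch in Python over the illustrative parameters q = 17, n = 8; real schemes use far larger parameters and a negacyclic variant:

# Toy cyclic NTT: q = 17, n = 8, and 2 is a primitive 8th root of unity
# mod 17 (2**8 = 256 ≡ 1). Pointwise products in the NTT domain realize
# cyclic convolution, i.e. polynomial multiplication mod (x^8 - 1).
Q, N, OMEGA = 17, 8, 2

def ntt(coeffs, omega=OMEGA):
    # Naive O(n^2) transform: X[k] = sum_j x[j] * omega^(j*k) mod Q
    return [sum(x * pow(omega, j * k, Q) for j, x in enumerate(coeffs)) % Q
            for k in range(N)]

def intt(values):
    # Inverse transform: use omega^-1 and scale by n^-1 (inverses via Fermat)
    n_inv = pow(N, Q - 2, Q)
    omega_inv = pow(OMEGA, Q - 2, Q)
    return [(n_inv * v) % Q for v in ntt(values, omega_inv)]

a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]
c = intt([(x * y) % Q for x, y in zip(ntt(a), ntt(b))])
print(c)   # [5, 16, 0, 1, 11, 11, 0, 0]: coefficients of a*b reduced mod 17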
APA, Harvard, Vancouver, ISO, and other citation styles
47

Mohana, P. V. Ananda. "Post–Quantum Cryptography – A Primer". Advanced Computing and Communications, 31.03.2020. http://dx.doi.org/10.34048/acc.2020.1.f2.

Full text of the source
Annotation:
Traditionally, information security has required encryption, authentication, key management, non-repudiation, and authorization, which were met using several techniques. Standardization of algorithms by the National Institute of Standards and Technology (NIST) has facilitated international communication for banking and information transfer using these standards. Encryption can be carried out using the Advanced Encryption Standard (AES) with variable block lengths (128, 192 or 256 bits) and variable key lengths (128, 192 or 256 bits). Solutions for lightweight applications, such as those for the Internet of Things (IoT), are also being standardized. Message integrity is possible using a host of hash algorithms such as SHA-1 and SHA-2, and more recently the SHA-3 algorithm. Authentication is possible using the well-known Rivest-Shamir-Adleman (RSA) algorithm, needing 2048/4096-bit operations. Elliptic Curve Cryptography (ECC) is also quite popular and is used in several practical systems such as WhatsApp, Blackberry, etc. Key exchange is possible using the Diffie-Hellman algorithm and its variations. Digital signatures can be carried out using the RSA algorithm, the Elliptic Curve Digital Signature Algorithm (ECDSA), or the Digital Signature Algorithm (DSA). All these algorithms derive security from the difficulty of solving certain mathematical problems, such as the factorization problem or the discrete logarithm problem. Though the published literature gives evidence of solving the factorization problem only up to 768 bits, it is believed that, using quantum computers, these problems could be solved by the end of this decade. This is due to the availability of the pioneering work of Shor and Grover [1]. For factoring an integer of N bits, Shor's algorithm takes only a polynomial (in N) number of quantum gates. As such, there is ever-growing interest in being ready for the next decade with algorithms that may resist attacks in the quantum computer era. NIST has foreseen this need and has invited proposals from researchers all over the world. In the first round, about 66 submissions were received, which were scrutinized for completeness, novelty of approach, and security; 25 of these were promoted to the second round to be improved based on the comments received on the first-round submissions. These will be analyzed for security, and some will be selected for final recommendation for use by industry. These cover encryption/decryption, key agreement, hashing, and digital signatures, for both hardware and software implementations. In this paper, we present a brief survey of the state of the art in post-quantum cryptography (PQC), followed by a study of one technique, referred to as Learning With Errors (LWE), in some detail.
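To make the LWE idea concrete before the detailed study, here is a toy Regev-style scheme in Python with deliberately insecure, made-up parameters; it shows only the shape of key generation, encryption, and decryption:

import numpy as np

rng = np.random.default_rng(1)

# Toy parameters, far too small to be secure; chosen so the arithmetic is visible.
n, m, q = 8, 32, 257       # secret length, number of samples, modulus

s = rng.integers(0, q, n)             # secret key
A = rng.integers(0, q, (m, n))        # public random matrix
e = rng.integers(-1, 2, m)            # small noise in {-1, 0, 1}
b = (A @ s + e) % q                   # public key is (A, b = A s + e)

def encrypt(bit):
    # Sum a random subset of the samples; embed the bit at amplitude q//2.
    r = rng.integers(0, 2, m)
    return (r @ A) % q, (r @ b + bit * (q // 2)) % q

def decrypt(c1, c2):
    # The accumulated noise is small, so (c2 - <c1, s>) lies near 0 or q/2.
    d = (c2 - c1 @ s) % q
    return int(min(d, q - d) > q // 4)

for bit in (0, 1, 1, 0):
    print(bit, decrypt(*encrypt(bit)))   # each line prints a matching pair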
APA, Harvard, Vancouver, ISO, and other citation styles
48

Mackenzie, Adrian. "Making Data Flow". M/C Journal 5, No. 4 (01.08.2002). http://dx.doi.org/10.5204/mcj.1975.

Full text of the source
Annotation:
Why has software code become an object of intense interest in several different domains of cultural life? In art (.net art or software art), in Open source software (Linux, Perl, Apache, et cetera (Moody; Himanen)), in tactical media actions (hacking of WEF Melbourne and Nike websites), and more generally, in the significance attributed to coding as work at the pinnacle of contemporary production of information (Negri and Hardt 298), code itself has somehow recently become significant, at least for some subcultures. Why has that happened? At one level, we could say that this happened because informatic interaction (websites, email, chat, online gaming, ecommerce, etc) has become mainstream to media production, organisational practice and indeed, quotidian life in developed and developing countries. As information production moves into the mainstream, working against mainstream control of flows of information means going upstream. For artists, tactical media groups and hackers, code seems to provide a way to, so to speak, reach over the shoulder of mainstream media channels and contest their control of information flows.1 A basic question is: does it?

What code does

We all see content flowing through the networks. Yet the expressive traits of the flows themselves are harder to grapple with, partly because they are largely infrastructural. When media and cultural theory discusses information-network society, cyberculture or new media, questions of flow specificity are usually downplayed in favour of high-level engagement with information as content. Arguably, the heightened attention to code attests to an increasing awareness that power relations are embedded in the generation and control of flow rather than just the meanings or contents that might be transported by flow. In this context, loops provide a really elementary and concrete way to explore how code participates in information flows. Loops structure almost every code object at a basic level. The programmed loop, a very mundane construct, can be found in any new media artist's or software engineer's coding toolkit. All programming languages have them. In popular programming and scripting languages such as FORTRAN, C, Pascal, C++, Java, Visual Basic, Perl, Python, JavaScript, ActionScript, etc, an almost identical set of looping constructs are found.2 Working with loops as material and as instrument constitutes an indispensable part of producing code-based objects. On the one hand, the loop is the most basic technical element of code as written text. On the other hand, as process executed by CPUs, and in ways that are not immediately obvious even to programmers themselves, loops of various kinds underpin the generative potential of code.3 Crucially, code is concerned with operationality rather than meaning (Lash 203). Code does not directly create meaning. It circulates, transforms, and reproduces messages and patterns of widely varying semantic and contextual richness. By definition, flow is something continuous. In the case of information, what flows are not things but patterns which can be rendered perceptible in different ways—as image, text, sound—on screen, display, and speaker. While the patterns become perceptible in a range of different spatio-temporal modes, their circulation is serialised. They are, as we know, composed of sequences of modulations (bits). Loops control the flow of patterns.
Lev Manovich writes: "programming involves altering the linear flow of data through control structures, such as 'if/then' and 'repeat/while'; the loop is the most elementary of these control structures" (Manovich 189). Drawing on these constructs, programming or coding work gains traction in flows.

Interactive looping

Loops also generate flows by multiplying events. The most obvious example of how code loops generate and control flows comes from the graphic user interfaces (GUIs) provided by typical operating systems such as Windows, MacOs or one of the Linux desktop environments. These operating systems configure the visual space of millions of desktop screens according to heavily branded designs. Basically they all divide the screen into different framing areas—panels, dividing lines, toolbars, frames, windows—and then populate those areas with controls and indicators—buttons, icons, checkboxes, dropdown lists, menus, popup menus. Framing areas hold content—text, tables, images, video. Controls, usually clustered around the edge of the frame, transform the content displayed in the framed areas in many different ways. Visual controls are themselves hooked up via code to physical input devices such as keyboard, mouse, joystick, buttons and trackpad. The highly habituated and embodied experience of interacting with contemporary GUIs consists of moving in and out, within and between different framing areas, using visual controls that respond either to pointing (with the mouse) or keyboard commands to change what is displayed, how it is displayed or indeed to move that content elsewhere (onto disk, across a network). Beneath the highly organised visual space of the GUI lie hundreds if not thousands of loops. The work of coding these interfaces involves making loops, splicing loops together, and nesting loops within loops. At base, the so-called event loop means that the GUI in principle stands ready at any time to accept input from the physical interface devices. Depending on what that input is, it may translate into direct changes within the framed areas (for instance, keystrokes appear in a text field as letters) or changes affecting the controls (for instance, Control-Enter might signal sending the text as an email). What we usually understand by interactivity stems from the way that a loop constantly accepts signals from the physical inputs, queues the signals as events, and deals with them one by one as discrete changes in what appears on screen. Within the GUI's basic event loop, many other loops are constantly starting and finishing. They are nested and unnested. They often affect some or other of the dozens of processes running at any one time within the operating system. Sometimes a command coming from the keyboard or a signal arriving from some other peripheral interface (the network interface card, the printer, a scanner, etc) will trigger the execution of a new process, itself composed of manifold loops. Hence loops often transiently interact with each other during execution of code. At base, the GUI shows something important, something that extends well beyond the domain of the GUI per se: the event loop generates and controls information flows at the same time. People type on keyboards or manipulate game controllers. A single keypress or mouse click itself hardly constitutes a flow. Yet the event loop can amplify it into a cascade of thousands of events because it sets other loops in process. What we call information flow springs from the multiplicatory effect of loops.
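The event-loop pattern described here can be written down in a few lines. A minimal sketch in Python, with invented event names and handlers, showing how a single queued input fans out into further loops:

from collections import deque

# The event loop dequeues one signal at a time; handlers may enqueue
# further events, so one keypress is amplified into a cascade.
events = deque(["keypress:a"])

def on_keypress(key):
    print(f"insert '{key}' into text field")
    events.append("redraw")            # one input triggers more events

def on_redraw(_):
    for region in ("toolbar", "text-area", "statusbar"):
        print(f"repaint {region}")     # an inner loop nested in a handler

handlers = {"keypress": on_keypress, "redraw": on_redraw}

while events:                          # the loop is bounded by its queue
    kind, _, arg = events.popleft().partition(":")
    handlers[kind](arg)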
A typology of looping

Information flows don't come from nowhere. They always go somewhere. Perhaps we could generalise a little from the mundane example of the GUI and say that the generation and control of information flows through loops is itself regulated by bounding conditions. A bounding condition determines the number of times and the sequence of operations carried out by a loop. Bounding conditions often come from outside the machine (interfaces of many different kinds) and from within it (other processes running at the same time, dependent on the operating system architecture and the hardware platform). Their regulatory role suggests the possibility of classifying loops according to bounding conditions.4 The following table classifies loops based on bounding conditions:

Type of loop | Bounding condition | Typical location
Simple & indefinite | No bounding conditions | Event loops in GUIs, servers ...
Simple & definite | Bounding conditions determined by a finite set of elements | Counting, sorting, input and output
Nested & definite | Multiple bounding conditions | Transforming grid and table structures
Recursive | Depth of possible recursion (memory or time) | Searching and sorting of tree or network structures
Result controlled | Loop ends when some goal has been reached | Goal-seeking algorithms
Interactive and indefinite | Bounding conditions change during the course of the loop | User interfaces or interaction

Although it risks simplifying something that is quite intricate in any actually executing process, this classification does stress that the distinguishing feature of loops may well be their bounding conditions. In practical terms, within program code, a bounding condition takes the form of some test carried out before, during or after each iteration of a loop; a few of the rows are sketched in code below. The bounding conditions for some loops relate to data that the code expects to come from other places—across networks, from the user interface, or some other devices. For other loops, the bounding conditions continually emerge in the course of the loop itself—the result of a calculation, finding some result in the course of searching a collection or receiving some new input in a flow of data from an interface or network connection. Based on the classification, we could suggest that loops not only generate flows, but they generate those flows within particular spatio-temporal manifolds. Put less abstractly, if we accept that flows don't come from nowhere, we then need to say what kind of places they do come from. The classification shows that they do not come from homogeneous spaces. In fact they relate to different topologies, to the hugely diverse orderings of signs and gestures within mediatic cultures. To take a mundane example, why has the table become such an important element in the HTML coding of webpages? Clearly tables provide an easy way to organise a page. Tables as classifying and visual ordering devices are nothing new. Along with lists, they have been used for centuries. However, the table as onscreen spatial entity also maps very directly onto a nested loop: the inner loop generates the horizontal row contents; the outer loop places the output of the inner loop in vertical order. As web-designers quickly discovered during the 1990s, HTML tables are rendered quickly by browsers and can easily position different contents—images, headings, text, lines, spaces—in proximity. In short, nested loops can quickly turn a table into a serial flow or quickly render a table out of a serial flow.
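Read as code, the classification turns on where each loop's bounding test comes from. A short Python sketch of three of the rows:

# Simple & definite: the bounding condition is a finite set of elements.
total = 0
for x in [3, 1, 4, 1, 5]:
    total += x

# Result-controlled: the loop ends when a goal is reached, and the goal is
# discovered only in the course of iterating (here, Newton's method).
guess = 1.0
while abs(guess * guess - 2) > 1e-12:
    guess = (guess + 2 / guess) / 2

# Interactive & indefinite: the bounding condition changes during the loop
# as new input arrives; input() stands in for any interface.
# while (line := input("> ")) != "quit":
#     print(line.upper())

print(total, guess)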
Implications

We started with the observation that artists, writers, hackers and media activists are working with code in order to reposition themselves in relation to information flows. Through technical elements such as loops, they reappropriate certain facets of the production of information and communication. Working with these and other elements, they look for different points of entry into the flows, attempting to move upstream of the heavily capitalised sites of mainstream production such as the Windows GUI, eCommerce websites or blockbuster game titles. The proliferation of information objects in music, in visual culture, in database and net-centred forms of interactivity ranging from computer games to chat protocols, suggests that the coding work can trigger powerful shifts in the cultures of circulation. Analysis of loops also suggests that the notion of data or information flow, understood as the continuous gliding of bits through systems of communication, needs revision. Rather than code simply controlling flow, code generates flows as well. What might warrant further thought is just how different kinds of bounding conditions generate different spatio-temporal patterns and modes of inclusion within flows. The diversity of loops within information objects implies a variety of topologically complicated places. It would be possible to work through the classification describing how each kind of loop maps into different spatial and temporal orderings. In particular, we might want to focus on how more complicated loops—result controlled, recursive, or interactive and indefinite types—map out more topologically complicated spaces and times. For my purposes, the important point is that bounding conditions not only regulate loops, they bring different kinds of spatio-temporal manifold into the seriality of flow. They imprint spatial and temporal ordering. Here the operationality of code begins to display a generative dimension that goes well beyond merely transporting or communicating content.

Notes

1. At a more theoretical level, for a decade or so fairly abstract notions of virtuality have dominated media and cultural studies approaches to new media. While that domination has been increasingly contested by more fine-grained studies of how the Internet is enmeshed with different places (Miller and Slater), attention to code is justified on the grounds that it constitutes an increasingly important form of expression within information flows.
2. Detailed discussion of these looping constructs can be found in any programming textbook or introductory computer science course, so I will not be going through them in any detail.
3. For instance, the cycles of the clock chip are absolutely irreducible. Virtually all programs implicitly rely on a clock chip to regulate execution of their instructions.
4. A classification can act as a symptomatology, that is, as something that sets out the various signs of the existence of a particular condition (Deleuze 368), in this case, the operationality of code.

References

Appadurai, Arjun. Modernity at Large: Cultural Dimensions of Globalization. Minneapolis: U of Minnesota P, 1996.
Deleuze, Gilles. "The Brain is the Screen. An Interview with Gilles Deleuze." The Brain is the Screen: Deleuze and the Philosophy of Cinema. Ed. Gregory Flaxman. Minneapolis: U of Minnesota P, 2000. 365-68.
Hardt, Michael, and Antonio Negri. Empire. Cambridge, MA: Harvard UP, 2000.
Himanen, Pekka. The Hacker Ethic and the Spirit of the Information Age. London: Secker and Warburg, 2001.
Lash, Scott. Critique of Information. London: Sage, 2002.
Manovich, Lev. "What is Digital Cinema?" The Digital Dialectic: New Essays on New Media. Ed. Peter Lunenfeld. Cambridge, MA: MIT, 1999. 172-92.
Miller, Daniel, and Don Slater. The Internet: An Ethnographic Approach. Oxford: Berg, 2000.
Moody, Glyn. Rebel Code: Linux and the Open Source Revolution. Middlesworth: Penguin, 2001.
APA, Harvard, Vancouver, ISO, and other citation styles