To see the other types of publications on this topic, follow the link: Shorův algoritmus.

Dissertations / Theses on the topic 'Shorův algoritmus'

Consult the top 50 dissertations / theses for your research on the topic 'Shorův algoritmus.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Nwaokocha, Martyns. "Shorův algoritmus v kvantové kryptografii." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-445457.

Full text
Abstract:
Cryptography is a very important aspect of our everyday lives, since it provides the theoretical foundation of information security. Quantum computing and quantum information are likewise becoming a very important field of science, with many areas of application including cryptology and, more specifically, public-key cryptography. The difficulty of factoring integers into prime factors underpins some important public-key cryptosystems, chief among them RSA. Shor's quantum factoring algorithm exploits the interference effects of quantum computation to factor semiprime numbers in polynomial time on a quantum computer. Although the capacity of current quantum computers to run Shor's algorithm is very limited, there is a large body of fundamental research on techniques for optimizing the algorithm with respect to quantities such as the number of qubits, circuit depth and gate count. In this thesis, various variants of Shor's factoring algorithm and of the corresponding quantum circuits are discussed, analyzed and compared. Some variants are also simulated and executed on simulators and real quantum computers on the IBM Quantum Experience platform, and the simulation results are compared in terms of complexity and success rate. The thesis is organized as follows: Chapter 1 reviews some key historical results of quantum cryptography, states the problem addressed in this thesis and presents the goals to be achieved. Chapter 2 summarizes the mathematical foundations of quantum computing and public-key cryptography and describes the notation used throughout; it also explains how a feasible order-finding or factoring algorithm can be used to break the RSA cryptosystem. Chapter 3 presents the building blocks of Shor's algorithm, including the quantum Fourier transform, quantum phase estimation and modular exponentiation, and also presents and compares the various circuit-optimization variants.
Chapter 4 presents the simulation results for the different versions of Shor's algorithm. Chapter 5 discusses the achievement of the thesis goals, summarizes the research results and outlines future research directions.
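The reduction the abstract relies on, from factoring to order finding, can be sketched classically (a hedged illustration: the order-finding step, which is the part Shor's quantum algorithm accelerates to polynomial time, is done here by brute force, and the function names are ours):

```python
from math import gcd

def multiplicative_order(a, N):
    # Brute-force order of a modulo N: the smallest r with a**r = 1 (mod N).
    # This is the step a quantum computer performs in polynomial time.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N, a):
    # Classical post-processing: an even order r with a**(r/2) != -1 (mod N)
    # yields a nontrivial factor of N via a gcd computation.
    g = gcd(a, N)
    if g != 1:
        return g                  # the random guess already shares a factor
    r = multiplicative_order(a, N)
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None               # unlucky choice of a; pick another
    return gcd(pow(a, r // 2) - 1, N)

print(shor_classical_part(15, 7))  # prints 3, since 15 = 3 * 5
```

For N = 15 and a = 7 the order is r = 4, and gcd(7**2 - 1, 15) = 3 recovers a factor; the theorem quoted in such theses guarantees a random a succeeds with high probability.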
APA, Harvard, Vancouver, ISO, and other styles
2

Nyman, Peter. "Representation of Quantum Algorithms with Symbolic Language and Simulation on Classical Computer." Licentiate thesis, Växjö University, School of Mathematics and Systems Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2329.

Full text
Abstract:



Quantum computing is an extremely promising project combining theoretical and experimental quantum physics, mathematics, quantum information theory and computer science. At the first stage of development of quantum computing, the main attention was paid to creating a few algorithms which might have applications in the future, clarifying fundamental questions and developing experimental technologies for toy quantum computers operating with a few quantum bits. At that time, expectations of quick progress dominated in the quantum community. However, it seems that such high expectations were not totally justified. Numerous fundamental and technological problems, such as the decoherence of quantum bits and the instability of quantum structures even with a small number of registers, led to doubts about the quick development of really working quantum computers. Although it cannot be denied that great progress has been made in quantum technologies, there is still a huge gap between the creation of toy quantum computers with 10-15 quantum registers and, e.g., satisfying the technical conditions of the 100-quantum-register project announced a few years ago in the USA. It is also evident that difficulties increase nonlinearly with an increasing number of registers. Therefore the simulation of quantum computations on classical computers has become an important part of the quantum computing project. Of course, it cannot be expected that quantum algorithms will help to solve NP problems in polynomial time on classical computers; that is not at all the aim of classical simulation. Classical simulation of quantum computations will cover part of the gap between the theoretical mathematical formulation of quantum mechanics and the realization of quantum computers.
One of the most important problems in "quantum computer science" is the development of new symbolic languages for quantum computing and the adaptation of existing symbolic languages for classical computing to quantum algorithms. The present thesis is devoted to the adaptation of the Mathematica symbolic language to known quantum algorithms and the corresponding simulation on a classical computer. Concretely, we represent Simon's algorithm, the Deutsch-Jozsa algorithm, Grover's algorithm, Shor's algorithm and quantum error-correcting codes in the Mathematica symbolic language. We shall see that the same framework can be used for all these algorithms. This framework captures the characteristic properties of the symbolic-language representation of quantum computing, and it is a straightforward matter to include it in future algorithms.
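The classical state-vector simulation the abstract describes can be illustrated in miniature (plain Python lists stand in here for the Mathematica framework the thesis actually uses; the matrices and names are illustrative):

```python
from math import sqrt

def apply_gate(gate, state):
    # Multiply a small unitary matrix (a list of rows) into a state vector.
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

h = 1 / sqrt(2)
H_on_first = [[h, 0, h, 0],      # Hadamard on the first qubit,
              [0, h, 0, h],      # i.e. H tensor I, in the basis
              [h, 0, -h, 0],     # |00>, |01>, |10>, |11>
              [0, h, 0, -h]]
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

state = [1.0, 0.0, 0.0, 0.0]                     # start in |00>
state = apply_gate(CNOT, apply_gate(H_on_first, state))
print(state)  # the Bell state (|00> + |11>) / sqrt(2)
```

The exponential cost of this approach (the vector has 2^n entries for n qubits) is precisely why such simulation covers only part of the gap the abstract mentions.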

APA, Harvard, Vancouver, ISO, and other styles
3

Kugel, Felix. "Das Shor-Verfahren als stochastischer Algorithmus." [S.l.] : [s.n.], 2006. http://137.193.200.177/ediss/kugel-felix/meta.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Thakkar, Darshan Suresh. "FPGA Implementation of Short Word-Length Algorithms." RMIT University. Electrical and Computer Engineering, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080806.140908.

Full text
Abstract:
Short Word-Length refers to single-bit, two-bit or ternary processing systems. SWL systems use Sigma-Delta Modulation (SDM) technique to express an analogue or multi-bit input signal in terms of a high frequency single-bit stream. In Sigma-Delta Modulation, the input signal is coarsely quantized into a single-bit representation by sampling it at a much higher rate than twice the maximum input frequency viz. the Nyquist rate. This single-bit representation is almost exclusively filtered to remove conversion quantization noise and sample decimated to the Nyquist frequency in preparation for traditional signal processing. SWL algorithms have a huge potential in a variety of applications as they offer many advantages as compared to multi-bit approaches. Features of SWL include efficient hardware implementation, increased flexibility and massive cost savings. Field Programmable Gate Arrays (FPGAs) are SRAM/FLASH based integrated circuits that can be programmed and re-programmed by the end user. FPGAs are made up of arrays of logic gates, routing channels and I/O blocks. State-of-the-art FPGAs include features such as Advanced Clock Management, Dedicated Multipliers, DSP Slices, High Speed I/O and Embedded Microprocessors. A System-on-Programmable-Chip (SoPC) design approach uses some or all the aforementioned resources to create a complete processing system on the device itself, ensuring maximum silicon area utilization and higher speed by eliminating inter-chip communication overheads. This dissertation focuses on the application of SWL processing systems in audio Class-D Amplifiers and aims to prove the claims of efficient hardware implementation and higher speeds of operation. The analog Class-D Amplifier is analyzed and an SWL equivalent of the system is derived by replacing the analogue components with DSP functions wherever possible. 
The SWL Class-D Amplifier is implemented on an FPGA, the standard emulation platform, using the VHSIC Hardware Description Language (VHDL). The approach is taken a step forward by adding re-configurability and media selectivity and proposing SDM adaptivity to improve performance.
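The core idea the abstract describes, coarse single-bit quantization at a high oversampling rate with error feedback, can be sketched as a first-order modulator (a minimal software illustration, not the thesis's FPGA design; the bipolar +/-1 encoding is an assumption):

```python
def sigma_delta_1bit(samples):
    # First-order sigma-delta modulator: each (oversampled) input sample
    # is quantized to +/-1 and the quantization error is accumulated in
    # an integrator, shaping the noise away from low frequencies.
    integrator, bits = 0.0, []
    for x in samples:             # inputs assumed in [-1, 1]
        bit = 1.0 if integrator >= 0 else -1.0   # 1-bit quantizer
        bits.append(bit)
        integrator += x - bit     # feed back the quantization error
    return bits

# The running mean of the bit stream tracks the slowly varying input:
stream = sigma_delta_1bit([0.5] * 1000)
print(sum(stream) / len(stream))  # close to 0.5
```

Filtering and decimating this single-bit stream back to the Nyquist rate recovers a multi-bit signal, which is the preparation step for traditional processing that the abstract mentions.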
APA, Harvard, Vancouver, ISO, and other styles
5

Bandini, Michele. "Crittografia quantistica e algoritmo di Shor." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/17073/.

Full text
Abstract:
This thesis tries to give an idea of how a quantum computer works, taking Shor's factoring algorithm as an example: it seeks to clarify the mathematics and physics behind the algorithm and the practical and historical importance it has had, and closes with brief notes on present-day quantum computing technology.
APA, Harvard, Vancouver, ISO, and other styles
6

Schilling, Gordian Hansjoerg. "Algorithms for short-term and periodic process scheduling and rescheduling." Thesis, Imperial College London, 1998. http://hdl.handle.net/10044/1/7696.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Feng, Wenlan. "Modelling market demand and manufacturing response using genetic algorithms." Thesis, Glasgow Caledonian University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.361094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Drobouchevitch, Inna G. "Design and analysis of algorithms for short-route shop scheduling problems." Thesis, University of Greenwich, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Healy, Cornelius Thomas. "Short-length low-density parity-check codes : construction and decoding algorithms." Thesis, University of York, 2014. http://etheses.whiterose.ac.uk/7875/.

Full text
Abstract:
Error control coding is an essential part of modern communications systems. LDPC codes have been demonstrated to offer performance near the fundamental limits of channels corrupted by random noise. Optimal maximum likelihood decoding of LDPC codes is too complex to be practically useful even at short block lengths and so a graph-based message passing decoder known as the belief propagation algorithm is used instead. In fact, on graphs without closed paths known as cycles the iterative message passing decoding is known to be optimal and may converge in a single iteration, although identifying the message update schedule which allows single-iteration convergence is not trivial. At finite block lengths graphs without cycles have poor minimum distance properties and perform poorly even under optimal decoding. LDPC codes with large block length have been demonstrated to offer performance close to that predicted for codes of infinite length, as the cycles present in the graph are quite long. In this thesis, LDPC codes of shorter length are considered as they offer advantages in terms of latency and complexity, at the cost of performance degradation from the increased number of short cycles in the graph. For these shorter LDPC codes, the problems considered are: First, improved construction of structured and unstructured LDPC code graphs of short length with a view to reducing the harmful effects of the cycles on error rate performance, based on knowledge of the decoding process. Structured code graphs are particularly interesting as they allow benefits in encoding and decoding complexity and speed. Secondly, the design and construction of LDPC codes for the block fading channel, a particularly challenging scenario from the point of view of error control code design. Both established and novel classes of codes for the channel are considered. 
Finally the decoding of LDPC codes by the belief propagation algorithm is considered, in particular the scheduling of messages passed in the iterative decoder. A knowledge-aided approach is developed based on message reliabilities and residuals to allow fast convergence and significant improvements in error rate performance.
APA, Harvard, Vancouver, ISO, and other styles
10

Resende, Diogo Soares. "Ferramenta trifásica para síntese e análise da proteção em sistemas desequilibrados." Universidade Federal de Juiz de Fora (UFJF), 2016. https://repositorio.ufjf.br/jspui/handle/ufjf/3134.

Full text
Abstract:
FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais
This thesis proposes a computational tool for the synthesis and analysis of protection in electricity subtransmission and distribution systems in the presence of imbalances. Such systems, in particular distribution systems, are generally unbalanced and can contain single-, two- or three-phase sections, and they operate with asymmetrically distributed loads, which makes the observed electrical quantities different for each phase analyzed. Two methods were developed and implemented in the proposed tool: (i) automatic procedures for the optimal setting of protection devices in unbalanced distribution systems; (ii) a method for the graphical analysis of protection performance in unbalanced systems. The proposed tool was developed in MatLab and also allows the synthesis of overcurrent (using a Genetic Algorithm) and distance protection settings, and it calculates the transformation ratios for PTs and CTs. The tool was calibrated by comparing its results with those obtained from real protection equipment, and it was also tested on different systems from the literature.
APA, Harvard, Vancouver, ISO, and other styles
11

Mullen, Patrick B. "Learning in short-time horizons with measurable costs /." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1585.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Sansuke, Maranhão Watanabe Mário. "O algoritmo polinomial de Shor para fatoração em um computador quântico." Universidade Federal de Pernambuco, 2003. https://repositorio.ufpe.br/handle/123456789/7361.

Full text
Abstract:
Widely used cryptosystems such as RSA base their effectiveness on the assumption that, in practical terms, it is impossible to factor sufficiently large integers within an acceptable time frame. More precisely, no polynomial-time factoring algorithm implementable on current computers is known to date; among the known algorithms, the most efficient requires computing time exponential in the number of binary digits of the number to be factored. In 1994, building on earlier work by Benioff, Bennett, Deutsch, Feynman and Simon, among others, Peter Shor presented a factoring algorithm that asymptotically requires only a polynomial number of steps on a quantum computer to factor an integer of arbitrary size. Rather than attacking the problem of decomposing such a number into two nontrivial factors directly by successive divisions, the algorithm uses the equivalent problem of finding the order, modulo the number to be factored, of an integer chosen at random among those coprime to it; Shor uses a quantum algorithm to compute this order. Quantum computing represents a computational paradigm quite different from classical computing: while the latter is carried out through deterministic binary operations based on classical Boolean logic, quantum computation bases its operations on the postulates describing the quantum behaviour of matter, and it is therefore probabilistic in its modus operandi. This difference between the logical formalisms of classical and quantum computation directly reflects the nature of the physical systems used to implement each of them. This dissertation presents Shor's factoring algorithm on a quantum computer.
Chapter 1 introduces some basic concepts of classical computing, aiming to set the stage for presenting quantum computing as an extension, as natural as possible, of the classical computational model. Chapter 2 presents the foundations of the mathematical formalism that models quantum computing, restricted to the conceptual aspects that are directly or indirectly applied in the description of Shor's algorithm. Chapters 3 and 4 are dedicated to presenting Shor's factoring algorithm in two parts. The first covers the non-quantum part and addresses the algebraic aspects of the algorithm; the theorem ensuring the probabilistic viability of the solution is also proved. Chapter 4 presents the quantum part of Shor's algorithm. The dissertation culminates by showing how to find the order, modulo the number to be factored, of an integer coprime to it, reconciling the quantum algorithm with a classical interpretation of its output by means of the continued-fraction expansion of a rational number.
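The closing step this abstract highlights, recovering the order from the quantum subroutine's output via a continued-fraction expansion, can be sketched classically. Python's `Fraction.limit_denominator` performs exactly this continued-fraction convergent search; the measured value below is a stylized example:

```python
from fractions import Fraction

def recover_order(measured, n_bits, N):
    # Shor's order-finding subroutine returns an integer `measured` from an
    # n_bits-wide register such that measured / 2**n_bits ~ s / r for some
    # 0 <= s < r. The continued-fraction expansion recovers the candidate
    # order r as a denominator bounded by N.
    return Fraction(measured, 2 ** n_bits).limit_denominator(N).denominator

# Example: factoring N = 15 with a = 7, whose order is r = 4. A typical
# measurement on an 8-qubit register is k = 192, i.e. 192/256 = 3/4.
r = recover_order(192, 8, 15)
print(r)              # 4
print(pow(7, r, 15))  # 1, confirming 7**4 = 1 (mod 15)
```

When the measurement yields a fraction s/r with gcd(s, r) > 1, the recovered denominator is only a divisor of the true order, which is why the algorithm may need to be repeated.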
APA, Harvard, Vancouver, ISO, and other styles
13

Gatti, Lorenzo. "Algoritmi quantistici e classi di complessità." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9128/.

Full text
Abstract:
The concept of difficulty is usually rather subjective, but for a mathematician the word has a different meaning: even with the help of the most powerful computers it may be impossible to find the solution of a sudoku, solve the travelling-salesman puzzle or decompose a number into its prime factors; in this sense, computational complexity classes quantify the notion of difficulty according to the laws of classical computer science. A quantum machine, however, does not follow classical laws and offers a new point of view on a research frontier linked to the famous millennium problems: quantum algorithms implement the extraordinary and mysterious properties of quantum theory which, applied judiciously, give rise to surprising results.
APA, Harvard, Vancouver, ISO, and other styles
14

Freitas, Adriana Xavier. "Algoritmo de Shor e sua aplicação à fatoração de números inteiros." Universidade Federal de Minas Gerais, 2010. http://hdl.handle.net/1843/EABA-85FJXP.

Full text
Abstract:
Shor's algorithm is a quantum algorithm that finds, with high probability, the order of an element $x \in Z_{N}^{*}$. One of its applications is the construction of an algorithm that finds the factors of N. The initial chapters cover the tools needed to understand Shor's algorithm, such as modular arithmetic, algorithms, continued fractions, basic concepts of quantum computing and the quantum Fourier transform. The following chapters present Shor's algorithm and its application to factorization.
APA, Harvard, Vancouver, ISO, and other styles
15

Thinsz, David. "A Comparative Study of Machine Learning Algorithms for Short-Term Building Cooling Load Predictions." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264534.

Full text
Abstract:
Buildings account for a large part of the total energy demand in the world. The building energy demand increases every year, and space cooling is the main contributor to this increase. In order to create a sustainable global energy system it is therefore of great importance to improve the energy efficiency of buildings. Cooling load predictions are an essential part of improving building energy efficiency, and the widespread use of Building Automation Systems (BAS) in modern buildings makes it possible to use data-driven methods for such predictions. The purpose of this study is twofold: to compare the performance of five different machine learning algorithms by analyzing their accuracy and robustness, and to examine what effect different versions of a data set have on these algorithms. The data used in this study is one year of operational data from a building in the city of Shenzhen in China. This data set is engineered in multiple different ways and is used to test the algorithms. The study shows that the predictive accuracy of machine learning models can be improved by introducing time-lag variables into the data set, especially if the original data set is very low-dimensional. It is also shown that some algorithms benefit from a binary representation of calendar variables instead of a decimal representation. All algorithms in this study show quite similar results, which suggests that they are all capable of capturing the relationships in the data. The accuracy of all models developed in this study is considered good enough for engineering purposes.
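The time-lag feature engineering that the abstract credits with the accuracy gains can be sketched generically (made-up numbers, not the thesis's Shenzhen data set; the helper name is ours):

```python
def add_time_lags(series, lags):
    # Turn a load time series into a supervised data set by attaching
    # lagged copies of the signal as extra input features.
    features, targets = [], []
    for t in range(max(lags), len(series)):
        features.append([series[t - k] for k in lags])
        targets.append(series[t])
    return features, targets

load = [20, 21, 23, 26, 30, 35, 41, 48]   # hourly cooling load (invented)
X, y = add_time_lags(load, lags=[1, 2, 3])
print(X[0], y[0])  # [23, 21, 20] is used to predict 26
```

Each row now carries the recent history of the load, which is what lets even a low-dimensional data set expose temporal structure to the learning algorithms.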
APA, Harvard, Vancouver, ISO, and other styles
16

Olandersson, Sandra. "Evaluation of Machine Learning Algorithms for Classification of Short-Chain Dehydrogenase/Reductase Protein Sequences." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3828.

Full text
Abstract:
The classification of protein sequences is a subfield of Bioinformatics that attracts substantial interest today. Machine Learning algorithms are believed to be able to improve the performance of the classification phase. This thesis considers the application of different Machine Learning algorithms to the classification of a data set of short-chain dehydrogenase/reductase (SDR) proteins. The classification concerns both the division of the proteins into the two main families, Classic and Extended, and into their different subfamilies. The results of the different algorithms are compared to select the most appropriate algorithm for this particular classification problem.
APA, Harvard, Vancouver, ISO, and other styles
17

Wang, Rui. "Gossiping in meshes in all-port mode and with short packets." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B37046494.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Wang, Rui, and 王銳. "Gossiping in meshes in all-port mode and with short packets." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B37046494.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Modesto, Sonia Angelina Garcia. "Um modelo de despacho e pré-despacho de reativos através de algoritmos genéticos, fluxo de potência ótimo e busca tabu = A model of reactive dispatch and a model of short-term reactive operation planning through genetic algorithms, optimal power flow and tabu search." [s.n.], 2003. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261159.

Full text
Abstract:
Advisor: Takaaki Ohishi
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Resumo: Esta tese trata do problema de planejamento da operação de curto prazo (operação diária) de reativos. A Cadeia de Planejamento do Setor Elétrico brasileiro inicia com um Planejamento de Longo Prazo, o qual considera a operação do sistema através de modelos equivalentes em um horizonte de cinco anos. Em seguida o Planejamento de Médio Prazo, utilizando um modelo individualizado para cada usina hidrelétrica e termelétrica, analisa a operação para um horizonte de um ano à frente. A próxima etapa é de Programação da Operação, a qual determina a operação do sistema para o horizonte de alguns dias à frente. Finalmente, a solução da Programação da Operação é utilizada pela operação em tempo real como uma referência operativa. Em todas estas etapas, as modelagens representam apenas a geração e transmissão de potência ativa (Modelo DC). Para a parte reativa não há planejamento. Neste estudo sobre a parte reativa, enfocou-se o aspecto das variáveis discretas, mais precisamente os taps dos transformadores. Supõe-se conhecido um despacho de geração ativa, dado pela Programação da Operação, e um despacho de potência reativa, obtido por exemplo através de um caso base, ou através de um modelo de fluxo de potência ótimo reativo. Inicialmente foi estudado o problema de despacho de reativos, o qual considera o operação em um dado instante de tempo (problema estático). Para este problema, foram desenvolvidos três abordagens. A primeira é uma metodologia híbrida combinando Algoritmos Genéticos (AG) e um modelo de Fluxo de Potência Ótimo (FPO). Nesta abordagem as variáveis discretas são tratadas através dos AG e cada indivíduo é avaliado através de um FPO. A vantagem desta metodologia é que esta determina conjuntamente as variáveis continuas (geração de potência ativa e reativa) e as variáveis discretas; a desvantagem é o elevado tempo computacional. 
Buscando diminuir o tempo de processamento, foram desenvolvidas duas outras metodologias, uma utilizando um método de Busca Local e a outra baseada em Busca Tabu. Em ambas as metodologias, o FPO foi substituído por um modelo de fluxo de potência não linear. O plano de operação reativa ao longo de um dia, aqui denominado pré-despacho de reativos, visa estimar uma solução em termos do posicionamento dos taps para cada intervalo do dia, buscando obter um perfil de tensão mais adequado do ponto de vista operativo. A abordagem para o pré-despacho de reativos utiliza o modelo de despacho de reativos baseado em Busca Tabu, mas considera a solução obtida no intervalo anterior. Todas as metodologias utilizaram o sistema IEEE30 como sistema teste.
Abstract: This thesis treats short-term (daily operation) reactive operation planning. Brazilian operation planning initially performs a long-term study, which uses an equivalent model representation and takes into account a time horizon of five years. The next step is mid-term planning, which uses an individual plant representation and considers one year of operation. The last planning stage determines the short-term operation using a more detailed model, including transmission operation constraints through a linear model. Finally, this short-term plan is used as an operational guideline in real-time operation. The models used in these planning stages represent only active power generation and transmission; for reactive operation there is no planning. This study of reactive operation focuses on discrete decisions; more specifically, the transformer taps are treated as discrete variables. The methodologies suppose that the active and reactive power generations are given. Initially, the reactive dispatch problem is treated, and three methodologies are developed. The first is a hybrid model that combines a Genetic Algorithm approach with an Optimal Power Flow (OPF) model. This methodology has the advantage of jointly determining the active and reactive generations and the discrete variable values; the drawback is the high computing time. Two other methodologies are developed in order to obtain faster approaches: one based on Local Search and the other on Tabu Search. In both methodologies, the OPF model is replaced by a non-linear power flow. The proposed daily reactive operation planning aims to determine the tap position for each time interval, looking for a more operationally suitable voltage profile. This planning is based on the Tabu Search reactive dispatch model. All methodologies used the IEEE30 test system.
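The tabu search over discrete tap positions described in this entry can be sketched in miniature. The skeleton below is illustrative only: the quadratic "voltage deviation" objective and all names are hypothetical stand-ins, not the thesis's power-flow model.

```python
def tabu_search(evaluate, start, neighbours, iters=100, tenure=5):
    """Generic tabu search: always move to the best non-tabu neighbour,
    keep the best solution seen, and forbid recently visited solutions."""
    current, best = start, start
    best_cost = evaluate(start)
    tabu = []  # short-term memory of visited solutions
    for _ in range(iters):
        candidates = [n for n in neighbours(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=evaluate)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)
        if evaluate(current) < best_cost:
            best, best_cost = current, evaluate(current)
    return best, best_cost

# Toy stand-in for the reactive objective: choose discrete taps (0..10)
# so that a linearised "voltage" 0.9 + 0.02*tap lands on 1.0 p.u.
def cost(taps):
    return sum((0.9 + 0.02 * t - 1.0) ** 2 for t in taps)

def moves(taps):
    return [taps[:i] + (t + d,) + taps[i + 1:]
            for i, t in enumerate(taps) for d in (-1, 1) if 0 <= t + d <= 10]

best_taps, best_cost = tabu_search(cost, (0, 10, 3), moves)
```

In the thesis, `evaluate` would be a non-linear power flow rather than this closed-form toy, and the tabu memory would track tap moves rather than whole solutions.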
Doutorado
Automação
Doutora em Engenharia Elétrica
APA, Harvard, Vancouver, ISO, and other styles
20

Ge, Jianye. "Computational Algorithms and Evidence Interpretation in DNA Forensics based on Genomic Data." University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1234916402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Oka, Anand. "Towards vaulting the hurdle of short lifetimes in wireless sensor networks : distributed algorithms and UWB impulse radio." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/11080.

Full text
Abstract:
Wireless Sensor Networks (WSNs) offer a compelling solution for distributed sensing problems because they can be deployed rapidly and inexpensively, and are robust to failures. However, since they operate on batteries, they tend to have short lifetimes. We present several algorithmic techniques for reducing the power consumption of such networks, based on Algorithmic Data Reduction (ADR) and low-power Ultra-Wide-Band Impulse Radio (UWB-IR). In the ADR approach, we minimize power-hungry communication out of the network via distributed in-situ broadcast `message-passing' algorithms for filtering, compression and model identification. These algorithms are scalable, power-efficient, stable, and computationally tractable. At the same time their performance is close to the respective ultimate theoretical limits. Specifically, the filter performs close to an optimal Bayesian recursion, the compressor approaches the rate-distortion and channel-capacity bound, and the identification scheme is asymptotically efficient in the Cramer-Rao sense. The UWB-IR approach exploits a well-known tradeoff predicted by Shannon theory, namely that one can maintain reliable communication at a given data rate at a reduced transmit power provided the transmission bandwidth is requisitely increased. We propose a novel UWB-IR receiver, which is eminently suited to the bursty mode of operation of the WSN physical layer. The receiver is based on the principle of Compressed Sensing and offers a practical alternative to costly high-rate analog-to-digital conversion. It can tolerate strong inter-symbol interference and can therefore operate at high pulsing rates, which allows us to fully leverage the power-vs-bandwidth tradeoff. It is impervious to poor timing synchronization, which means that the transmitter can avoid sending training headers, thus further saving a significant amount of power. In addition, it is also robust to strong narrow-band interference from licensed systems like WiMAX. 
With a synergy of the ADR and UWB-IR techniques, the communication related power consumption of the WSN can be reduced by about 30 dB or more in practical scenarios, which substantially alleviates the handicap of limited lifetimes. We study a practical application of these techniques in the problem of target tracking by interpreting the received signal strength of transmissions from RFID tags.
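The power-vs-bandwidth tradeoff this abstract leans on can be made concrete with Shannon's capacity formula: holding the data rate fixed, widening the bandwidth lowers the minimum transmit power. A small sketch (noise density and rates are made-up illustrative numbers):

```python
def min_power(rate_bps, bandwidth_hz, n0=1e-9):
    """Minimum power sustaining `rate_bps` over an AWGN channel:
    invert Shannon's C = B * log2(1 + P / (N0 * B)) for P."""
    return n0 * bandwidth_hz * (2 ** (rate_bps / bandwidth_hz) - 1)

# Same 1 Mb/s target rate: a narrowband 100 kHz channel vs. a 100 MHz
# UWB-style channel.  The wideband link needs orders of magnitude less power.
narrow = min_power(1e6, 1e5)   # ~0.102 W
wide = min_power(1e6, 1e8)     # ~0.0007 W
```

The ratio here is roughly 22 dB, which illustrates (without reproducing) the kind of saving the thesis reports once ADR and UWB-IR are combined.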
APA, Harvard, Vancouver, ISO, and other styles
22

Koussa, Eliane. "Analysis and design of post-quantum cryptographic algorithms : PKP-based signature scheme and ultra-short multivariate signatures." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG027.

Full text
Abstract:
La construction d’un ordinateur quantique remettrait en cause la plupart des schémas à clef publique utilisés aujourd’hui. Par conséquent, il existe actuellement un effort de recherche important pour développer de nouveaux schémas cryptographiques post-quantiques. En particulier, nous nous intéressons aux schémas post-quantiques dont la sécurité repose sur la dureté de la résolution de certains problèmes mathématiques tels que le problème PKP et le problème HFE. Ce travail étudie d’abord la complexité de PKP. Après une analyse approfondie des attaques connues sur PKP, nous avons pu mettre à jour certains résultats qui n’étaient pas précis, et fournir une formule de complexité explicite qui nous permet d’identifier les instances difficiles de ce problème et de donner des ensembles de paramètres sécurisés. PKP a été utilisé en 1989 pour développer le premier schéma d’identification à divulgation nulle de connaissance (ZK-IDS) qui a une implémentation efficace sur les cartes à puce. Dans un deuxième temps, nous optimisons le ZK-IDS basé sur PKP, puis nous introduisons PKP-DSS : un schéma de signature digitale basé sur PKP. Nous construisons PKP-DSS à partir du ZK-IDS basé sur PKP en utilisant la transformation Fiat-Shamir (FS) traditionnelle qui convertit les schémas d’identification en schémas de signature. Nous développons une implémentation à temps constant de PKP-DSS. Il semble que notre schéma soit très compétitif par rapport aux autres schémas de signature FS post-quantiques. Étant donné que PKP est un problème NP-complet et qu’il n’y a pas d’attaques quantiques connues pour résoudre PKP nettement mieux que les attaques classiques, nous pensons que notre schéma est post-quantique. D’autre part, nous étudions les schémas de signature à clé publique de type multivarié qui fournissent des signatures ultra-courtes.
Nous analysons d’abord les attaques les plus connues contre les signatures multivariées, puis nous définissons les paramètres minimaux permettant une signature ultra-courte. Nous présentons également de nouveaux modes d’opération spécifiques afin d’éviter des attaques particulières. Deuxièmement, nous fournissons divers exemples explicites de schémas de signature ultra-courts, pour plusieurs niveaux de sécurité classique, qui sont basés sur des variantes de HFE sur différents corps finis.
The construction of large quantum computers would endanger most of the public-key cryptographic schemes in use today. Therefore, there is currently a large research effort to develop new post-quantum secure schemes. In particular, we are interested in post-quantum cryptographic schemes whose security relies on the hardness of solving some mathematical problems such as the Permuted Kernel Problem (PKP) and the Hidden Field Equations (HFE). This work first investigates the complexity of PKP. After a thorough analysis of the state-of-the-art attacks on PKP, we have been able to update some results that were not accurate, and to provide an explicit complexity formula which allows us to identify hard instances and secure sets of parameters of this problem. PKP was used in 1989 to develop the first Zero-Knowledge Identification Scheme (ZK-IDS) that has an efficient implementation on low-cost smart cards. In a second step, we optimize the PKP-based ZK-IDS and then we introduce PKP-DSS: a Digital Signature Scheme based on PKP. We construct PKP-DSS from the PKP-based ZK-IDS by using the traditional Fiat-Shamir (FS) transform that converts identification schemes into signature schemes. We develop a constant-time implementation of PKP-DSS. Our scheme appears to be very competitive with other post-quantum FS signature schemes. Since PKP is an NP-complete problem and there are no known quantum attacks that solve PKP significantly better than classical attacks, we believe that our scheme is post-quantum secure. On the other hand, we study multivariate public-key signature schemes that provide “ultra”-short signatures. We first analyze the best-known attacks against multivariate signatures, and then define the minimal parameters that allow ultra-short signatures. We also design specific new modes of operation in order to avoid particular attacks. Second, we provide various explicit examples of ultra-short signature schemes that are based on variants of HFE.
We present parameters for several levels of classical security: 80, 90, and 100 bits, in addition to 128, 192, and 256 bits; for each level, we propose different choices of finite fields.
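The Fiat-Shamir transform mentioned above replaces the verifier's random challenge with a hash of the commitment and the message. A toy Schnorr-style instance shows the mechanics (this is not PKP-DSS; the tiny group and all parameters are purely illustrative and give no security whatsoever):

```python
import hashlib
import secrets

# Toy Schnorr-style signature obtained via the Fiat-Shamir transform.
P, Q, G = 2039, 1019, 4        # G generates the order-Q subgroup of Z_P*

def challenge(r, msg):
    """Fiat-Shamir: the challenge is a hash of commitment and message."""
    digest = hashlib.sha256(f"{r}|{msg}".encode()).digest()
    return int.from_bytes(digest, "big") % Q

def sign(x, msg):
    k = secrets.randbelow(Q - 1) + 1   # random nonce
    r = pow(G, k, P)                   # commitment
    e = challenge(r, msg)              # hash replaces the verifier
    s = (k + e * x) % Q                # response
    return r, s

def verify(y, msg, sig):
    r, s = sig
    e = challenge(r, msg)
    return pow(G, s, P) == (r * pow(y, e, P)) % P

x = 7                                  # secret key
y = pow(G, x, P)                       # public key
sig = sign(x, "hello")
```

Verification works because g^s = g^(k+ex) = r·y^e; the same one-way hash-as-challenge idea, applied to the PKP identification scheme instead of a discrete-log relation, yields PKP-DSS.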
APA, Harvard, Vancouver, ISO, and other styles
23

Li, Dapeng. "Two-phase multi-objective evolutionary approach for short-term optimal thermal generation scheduling in electric power systems." Diss., Kansas State University, 2010. http://hdl.handle.net/2097/6691.

Full text
Abstract:
Doctor of Philosophy
Department of Electrical and Computer Engineering
Sanjoy Das
Anil Pahwa
The task of short-term optimal thermal generation scheduling can be cast in the form of a multi-objective optimization problem. The goal is to determine an optimal strategy to operate power plants such that certain objective functions related to economic and environmental issues, as well as transmission losses, are minimized under typical system and operating constraints. Due to the problem's inherent complexity and the large number of associated constraints, standard multi-objective optimization algorithms fail to yield optimal solutions. In this dissertation, a novel two-phase multi-objective evolutionary approach is proposed to address the short-term optimal thermal generation scheduling problem. The objective functions, which are based on operation cost, emission and transmission losses, are minimized simultaneously. During the first phase of this approach, hourly optimal dispatches for each period are obtained separately, by minimizing the operation cost, emission and transmission losses simultaneously. The constraints applied to this phase are the power balance, spinning reserve and power generation limits. Three well-known multi-objective evolutionary algorithms, NSGA-II, SPEA-2 and AMOSA, are modified, and several new features are added. This hourly scheduling phase also includes a repair scheme that is used to meet the power generation limits of each unit as well as to balance load with generation. The new approach leads to a set of highly optimal solutions with guaranteed feasibility. This phase is applied separately to each hour-long period. In the second phase, the minimum up/down time and ramp up/down rate constraints are considered, and improved versions of the three multi-objective evolutionary algorithms are used again to obtain a set of Pareto-optimal schedules for the whole time interval (24 hours).
During this phase, the hourly optimal schedules obtained from the first phase are used as inputs. A bi-objective version of the problem, as well as a three-objective version that includes transmission losses as an objective, are studied. Simulation results on four test systems indicate that even though NSGA-II achieved the best performance for the two-objective model, the improved AMOSA, with new crossover, mutation and diversity-preservation features, outperformed NSGA-II and SPEA-2 for the three-objective model. It is also shown that the proposed approach is effective in addressing the multi-objective generation dispatch problem, obtaining a set of optimal solutions that account for trade-offs between multiple objectives. This feature allows much greater flexibility in decision-making. Since all the solutions are non-dominated, the choice of a final 24-hour schedule depends on the plant operator's preference and practical operating conditions. The proposed two-phase evolutionary approach also provides a general framework for other multi-objective problems relating to power generation as well as other real-world applications.
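The Pareto-optimality underlying all three algorithms in this entry reduces to a simple dominance test. A minimal sketch with made-up (cost, emission, loss) vectors for four candidate schedules:

```python
def dominates(a, b):
    """a Pareto-dominates b: no worse in every objective, strictly better
    in at least one (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """First Pareto front: solutions no other solution dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Toy (cost, emission, loss) vectors for four candidate schedules.
schedules = [(100, 9, 3), (120, 5, 4), (110, 7, 2), (130, 10, 5)]
front = non_dominated(schedules)   # (130, 10, 5) is dominated and drops out
```

NSGA-II, SPEA-2 and AMOSA all build on this relation; they differ in how they rank, archive and diversify the non-dominated solutions.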
APA, Harvard, Vancouver, ISO, and other styles
24

Otava, Lukáš. "Algoritmy monitorování a diagnostiky pohonů se synchronními motory." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-447552.

Full text
Abstract:
Permanent magnet synchronous machine drives are used more and more often; however, they also suffer from possible faults. This thesis is focused on the detection of three-phase synchronous motor winding faults and of faults in the drive control loop sensors. Firstly, a model of the faulty winding of the motor is presented. Effects of the inter-turn short fault were analyzed. The model was experimentally verified by fault emulation on a test bench with an industrial synchronous motor. Inter-turn short fault detection algorithms are summarized. Three existing conventional winding fault methods based on signal processing of the stator voltage and stator current residuals were verified. Three new winding fault detection methods were developed by the author. These methods use a modified motor model and an extended Kalman filter state estimator. Practical implementation of the algorithms on a microcontroller is described, and experimental results show the performance of the presented algorithms in different test bench scenarios. Closely related algorithms for detecting faults of the motor control loop sensors are also described; these are complementary to the winding fault algorithms. A decision mechanism integrates the outputs of the sensor and winding fault detection algorithms and provides an overall drive fault diagnosis concept.
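Residual-based detection, common to the winding and sensor fault methods this entry describes, compares measured signals against model predictions and flags large deviations. A deliberately simplified sketch (a fixed threshold and invented numbers stand in for the thesis's extended-Kalman-filter residual evaluation):

```python
def detect_fault(measured, predicted, threshold):
    """Flag samples whose model residual exceeds a fixed threshold; a
    stand-in for an EKF-based residual evaluation."""
    return [abs(m - p) > threshold for m, p in zip(measured, predicted)]

# Phase current vs. a healthy-machine prediction, fault injected at t = 3:
# an inter-turn short raises the measured current above the model's value.
predicted = [1.0, 1.0, 1.0, 1.0, 1.0]
measured = [1.02, 0.99, 1.01, 1.35, 1.30]
flags = detect_fault(measured, predicted, threshold=0.1)
```

In practice the threshold would be adapted to the noise statistics, and the decision mechanism would fuse several such residual streams before declaring a fault.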
APA, Harvard, Vancouver, ISO, and other styles
25

Mayrink, Victor Teixeira de Melo. "Avaliação do algoritmo Gradient Boosting em aplicações de previsão de carga elétrica a curto prazo." Universidade Federal de Juiz de Fora (UFJF), 2016. https://repositorio.ufjf.br/jspui/handle/ufjf/3563.

Full text
Abstract:
FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais
O armazenamento de energia elétrica em larga escala ainda não é viável devido a restrições técnicas e econômicas. Portanto, toda energia consumida deve ser produzida instantaneamente; não é possível armazenar o excesso de produção, ou tampouco cobrir eventuais faltas de oferta com estoques de segurança, mesmo que por um curto período de tempo. Consequentemente, um dos principais desafios do planejamento energético consiste em realizar previsões acuradas para as demandas futuras. Neste trabalho, apresentamos um modelo de previsão para o consumo de energia elétrica a curto prazo. A metodologia utilizada compreende a construção de um comitê de previsão, por meio da aplicação do algoritmo Gradient Boosting em combinação com modelos de árvores de decisão e a técnica de amortecimento exponencial. Esta estratégia compreende um método de aprendizado supervisionado que ajusta o modelo de previsão com base em dados históricos do consumo de energia, das temperaturas registradas e de variáveis de calendário. Os modelos propostos foram testados em duas bases de dados distintas e demonstraram um ótimo desempenho quando comparados com resultados publicados em outros trabalhos recentes.
The storage of electrical energy is still not feasible on a large scale due to technical and economic issues. Therefore, all energy to be consumed must be produced instantly; it is not possible to store the production leftover, or to cover any supply shortages with safety stocks, even for a short period of time. Thus, one of the main challenges of energy planning consists in computing accurate forecasts for future demand. In this work, we present a model for short-term load forecasting. The methodology consists in composing a prediction committee by applying the Gradient Boosting algorithm in combination with decision tree models and the exponential smoothing technique. This strategy comprises a supervised learning method that adjusts the forecasting model based on historical energy consumption data, recorded temperatures and calendar variables. The proposed models were tested on two different datasets and showed a good performance when compared with results published in other recent works.
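Gradient boosting, the core of the committee above, fits each new weak learner to the residuals of the current ensemble. A from-scratch sketch with one-split stumps on toy hour-of-day load data (the data and hyperparameters are invented for illustration; the thesis's models are far richer):

```python
def fit_stump(x, residuals):
    """Best single-threshold stump minimising squared error on 1-D input."""
    best = None
    for split in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= split]
        right = [r for xi, r in zip(x, residuals) if xi > split]
        if not right:
            continue
        lv, rv = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lv) ** 2 for r in left) + sum((r - rv) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, split, lv, rv)
    _, split, lv, rv = best
    return lambda xi, s=split, a=lv, b=rv: a if xi <= s else b

def gradient_boost(x, y, rounds=60, lr=0.1):
    """Each round fits a stump to the current residuals (the negative
    gradient of the squared loss) and adds it, scaled by the learning rate."""
    base = sum(y) / len(y)
    pred = [base] * len(y)
    stumps = []
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, resid)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: base + lr * sum(s(xi) for s in stumps)

# Toy "hour of day -> load" history with a morning ramp-up.
hours = [0, 1, 2, 3, 8, 9, 10, 11]
load = [1.0, 1.0, 1.1, 1.0, 2.0, 2.1, 2.0, 2.0]
model = gradient_boost(hours, load)
```

Temperature and calendar features would enter as extra input dimensions, and deeper trees would replace the stumps; the residual-fitting loop is the same.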
APA, Harvard, Vancouver, ISO, and other styles
26

Mori, Andrew. "Time domain pulse shaping using a genetic algorithm." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/4321.

Full text
Abstract:
Thesis (MSc (Physics))--University of Stellenbosch, 2010.
ENGLISH ABSTRACT: Through the use of complex Laser Pulse Shaping, numerous fundamental laser-induced processes may be controlled as well as studied. This work serves as an introduction to Laser Pulse Shaping, focusing on a simple Pulse Shaping experiment, and aims to determine whether future, more complex processes may be similarly controlled. A description of Laser Pulse Shaping theory is presented, along with a full explanation of a simple experiment to maximize second harmonic generation (SHG) through Pulse Shaping. This experiment is simple on a theoretical level yet complicated in both implementation and operation. The experimental setup and software integration required hardware compatibility across multiple programming languages. This work was successful in the sense that a fully automated dispersion compensation system, accomplished through the use of a genetic algorithm in a feedback-controlled loop, was constructed and tested. The success of this experiment and the understanding gained have laid the foundation for more complex Pulse Shaping systems in the future.
AFRIKAANSE OPSOMMING: Komplekse Laserpuls-vervorming kan gebruik word om verskeie fundamentele laser-geinduseerde prosesse beide te beheer asook te bestudeer. Hierdie navorsingstuk dien as n inleiding tot Laserpuls-vervorming, spesifiek gefokus op n eenvoudige Pulsvervormings-eksperiment. Meer komplekse Pulsvervormingsopstellings kan toegepas word deur die kennis opgedoen in hierdie tesis. Die teoretiese agtergrond van Laserpuls-vervormings word bespreek, tesame met n eenvoudige eksperiment om die Tweede Harmoniek Skeppingsproses (SHG) te maksimeer deur van Laserpuls-vervorming gebruik te maak. Die eksperiment is teoreties eenvoudig, waar die implimentering asook bedryf meer kompleks is. Die bedryf van die eksperiment word in 2 dele hanteer: die hoofprogram en n Genetiese Algoritme gebruik in optimering. Die werking van Genetiese Algoritmes asook Ultrakort Pulse (USPs) en pulskarakterisering word ook bespreek. Die suksesvolle opstelling van die experiment en ook die eind resultate wat gevind is, maak dit moontlik om meer ingewikkeld komplekse laserpulsvervorming experimente te bestudeer.
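The feedback loop described above, where a genetic algorithm iteratively reshapes the pulse to maximize the measured SHG signal, can be sketched as follows. The fitness function is a toy surrogate for the detector reading, and all GA settings are invented:

```python
import random

random.seed(1)

# Toy stand-in for the measured SHG signal: largest when all spectral
# phase coefficients reach zero residual dispersion.
def shg_signal(phase):
    return 1.0 / (1.0 + sum(p * p for p in phase))

def evolve(fitness, genes=3, pop_size=20, gens=60, sigma=0.2):
    """Plain generational GA: tournament selection, midpoint crossover,
    Gaussian mutation.  In the real loop, `fitness` would be a detector
    reading fed back from the experiment."""
    pop = [[random.uniform(-2.0, 2.0) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a = max(random.sample(pop, 3), key=fitness)  # tournament of 3
            b = max(random.sample(pop, 3), key=fitness)
            nxt.append([(x + y) / 2 + random.gauss(0, sigma)
                        for x, y in zip(a, b)])
        pop = nxt
    return max(pop, key=fitness)

best_phase = evolve(shg_signal)
```

In the experiment, each candidate phase vector is written to the pulse shaper and the GA receives the measured SHG intensity as fitness, so no analytic model of the laser is needed.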
APA, Harvard, Vancouver, ISO, and other styles
27

Yahaya, Alassan Mahaman Sanoussi. "Amélioration du système de recueils d'information de l'entreprise Semantic Group Company grâce à la constitution de ressources sémantiques." Thesis, Paris 10, 2017. http://www.theses.fr/2017PA100086/document.

Full text
Abstract:
Prendre en compte l'aspect sémantique des données textuelles lors de la tâche de classification s'est imposé comme un réel défi ces dix dernières années. Cette difficulté vient s'ajouter au fait que la plupart des données disponibles sur les réseaux sociaux sont des textes courts, ce qui a notamment pour conséquence de rendre les méthodes basées sur la représentation "bag of words" peu efficientes. L'approche proposée dans ce projet de recherche est différente des approches proposées dans les travaux antérieurs sur l'enrichissement des messages courts, et ce pour trois raisons. Tout d'abord, nous n'utilisons pas de bases de connaissances externes comme Wikipedia parce que généralement les messages courts qui sont traités par l'entreprise proviennent de domaines spécifiques. Deuxièmement, les données à traiter ne sont pas utilisées pour la constitution de ressources à cause du fonctionnement de l'outil. Troisièmement, à notre connaissance il n'existe pas de travaux qui, d'une part, exploitent des données structurées comme celles de l'entreprise pour constituer des ressources sémantiques et, d'autre part, mesurent l'impact de l'enrichissement sur un système interactif de regroupement de flux de textes. Dans cette thèse, nous proposons la création de ressources permettant d'enrichir les messages courts afin d'améliorer la performance de l'outil de regroupement sémantique de l'entreprise Succeed Together. Ce dernier implémente des méthodes de classification supervisée et non supervisée. Pour constituer ces ressources, nous utilisons des techniques de fouille de données séquentielles.
Taking into account the semantic aspect of textual data during the classification task has become a real challenge in the last ten years. This difficulty adds to the fact that most of the data available on social networks are short texts, which in particular makes methods based on the "bag of words" representation inefficient. The approach proposed in this research project differs from the approaches proposed in previous work on the enrichment of short messages for three reasons. First, we do not use external knowledge bases like Wikipedia, because the short messages processed by the company typically come from specific domains. Secondly, the data to be processed are not used for the creation of resources because of the way the tool operates. Thirdly, to our knowledge there is no work that both exploits structured data, such as the company's, to build semantic resources and measures the impact of enrichment on an interactive system for clustering text streams. In this thesis, we propose the creation of resources for enriching short messages in order to improve the performance of the semantic clustering tool of the company Succeed Together. The tool implements supervised and unsupervised classification methods. To build these resources, we use sequential data mining techniques.
APA, Harvard, Vancouver, ISO, and other styles
28

Bechtold, Bastian [Verfasser], de Par Steven [Akademischer Betreuer] van, Jörg [Akademischer Betreuer] Bitzer, and Udo [Akademischer Betreuer] Zölzer. "Pitch of Voiced Speech in the Short-Time Fourier Transform: Algorithms, Ground Truths, and Evaluation Methods / Bastian Bechtold ; Steven van de Par, Jörg Bitzer, Udo Zölzer." Oldenburg : BIS der Universität Oldenburg, 2021. http://d-nb.info/1232405213/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Faus, García Óscar. "Signal Processing for mmWave MIMO Radar." Thesis, Högskolan i Gävle, Avdelningen för elektronik, matematik och naturvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-19866.

Full text
Abstract:
This thesis addresses the design study, implementation and analysis of signal processing algorithms for a 79 GHz millimeter-wave Phase Modulated Continuous Wave (PMCW) Multi Input Multi Output (MIMO) short range radar, performed at the IMEC research institute (Leuven, Belgium). The radar system targets high resolution performance with low power consumption in order to integrate a full MIMO radar transceiver with digital processor and antennas in a compact package featuring a size of 1 cm2. Achieving such radar system characteristics requires a highly demanding digital architecture with signal processing gain and high range, speed and angle resolutions. The improved resolution and detection capabilities are achieved by performing signal processing algorithms on the reflected waveform. The digital front-end implements parallel range gate processing with a bank of correlators that perform pulse compression, coherent accumulation to further increase the Signal to Noise Ratio (SNR), and an N-point FFT to extract the Doppler information. The use of MIMO is proposed by implementing a code domain technique in the PMCW waveform, the Outer Hadamard Code MIMO. This concept makes use of a single sequence for all the transmitting antennas that is modulated by an outer sequence to ensure the orthogonality of the transmitted waveforms. The outer code makes use of the good cross-correlation properties of the Hadamard sequences, and the waveform uses sequences that exhibit a perfect auto-correlation profile, the Almost Perfect Autocorrelation Sequences (APAS). The MIMO implementation results in higher angular resolution and extra processing gain. The use of beamforming techniques in the radar allows the angle estimation of the detected targets; rough and fine beamforming provide coarse and precise Angle of Arrival (AoA) estimation at an early and a late stage, respectively.
A Constant False Alarm Rate (CFAR) processing stage is implemented at the point in the chain where the highest signal processing gain has been achieved. This algorithm allows the variation of the CFAR parameters and analyzes the detections in order to improve the probability of detection (Pd) while decreasing the probability of false alarm (Pfa). A series of simulations with different scenarios and variable parameters is set up in order to analyze the performance of the system. The simulations analyze the gain achieved in each stage, and their outcomes show a processing gain that can reach SNR improvements as high as 77 dB for a small virtual array while keeping the Pfa low with the CFAR adjustment. The use of bigger arrays demonstrates the possibility of clear detections for low Radar Cross Section (RCS) targets at far distances within the unambiguous range. Beamforming shows improved interference reduction as the beam widths narrow with the increasing number of virtual array antennas. These results were achieved while keeping the system design parameters at a range resolution of 7.5 cm for a maximum range of 37.5 meters, with a speed resolution of 0.2 m/s and a maximum detectable speed of 12.66 m/s. The outcomes support the good performance of the signal processing techniques implemented and the benefits of applying them in a SoC mmWave MIMO radar.
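The CFAR stage described can be illustrated with the classic cell-averaging variant, which adapts the detection threshold to a local noise estimate rather than using a fixed level. Window sizes, the scale factor and the synthetic signal below are arbitrary illustrative choices:

```python
import random

def ca_cfar(power, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR: flag cell i when it exceeds `scale` times the
    mean of the surrounding training cells (guard cells excluded)."""
    hits = []
    n = len(power)
    for i in range(n):
        lo, hi = max(0, i - guard - train), min(n, i + guard + train + 1)
        cells = [power[j] for j in range(lo, hi) if abs(j - i) > guard]
        noise = sum(cells) / len(cells)
        if power[i] > scale * noise:
            hits.append(i)
    return hits

# Flat noise floor with two injected targets; the threshold adapts to the
# local noise estimate instead of being fixed.
random.seed(0)
echo = [random.uniform(0.5, 1.5) for _ in range(64)]
echo[20] += 20.0   # strong target
echo[45] += 12.0   # weaker target
detections = ca_cfar(echo)
```

Varying `scale`, `guard` and `train` is the kind of parameter sweep the simulations in this entry perform to trade Pd against Pfa.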
APA, Harvard, Vancouver, ISO, and other styles
30

Ishi, Soares de Lima Leandro. "De novo algorithms to identify patterns associated with biological events in de Bruijn graphs built from NGS data." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1055/document.

Full text
Abstract:
L'objectif principal de cette thèse est le développement, l'amélioration et l'évaluation de méthodes de traitement de données massives de séquençage, principalement des lectures de séquençage d'ARN courtes et longues, pour éventuellement aider la communauté à répondre à certaines questions biologiques, en particulier dans les contextes de transcriptomique et d'épissage alternatif. Notre objectif initial était de développer des méthodes pour traiter les données d'ARN-seq de deuxième génération à l'aide de graphes de De Bruijn afin de contribuer à la littérature sur l'épissage alternatif, qui a été exploré dans les trois premiers travaux. Le premier article (Chapitre 3, article [77]) a exploré le problème que les répétitions apportent aux assembleurs de transcriptome si elles ne sont pas correctement traitées. Nous avons montré que la sensibilité et la précision de notre assembleur local d'épissage alternatif augmentaient considérablement lorsque les répétitions étaient formellement modélisées. Le second (Chapitre 4, article [11]) montre que l'annotation d'événements d'épissage alternatifs avec une seule approche conduit à rater un grand nombre de candidats, dont beaucoup sont importants. Ainsi, afin d'explorer de manière exhaustive les événements d'épissage alternatifs dans un échantillon, nous préconisons l'utilisation combinée des approches mapping-first et assembly-first. Étant donné que nous avons une énorme quantité de bulles dans les graphes de De Bruijn construits à partir de données réelles d'ARN-seq, qui est impossible à analyser dans la pratique, dans le troisième travail (Chapitre 5, articles [1, 2]), nous avons exploré théoriquement la manière de représenter efficacement et de manière compacte l'espace des bulles via un générateur des bulles. L'exploration et l'analyse des bulles dans le générateur sont réalisables dans la pratique et peuvent être complémentaires aux algorithmes de l'état de l'art qui analysent un sous-ensemble de l'espace des bulles. 
Les collaborations et les avancées sur la technologie de séquençage nous ont incités à travailler dans d'autres sous-domaines de la bioinformatique, tels que: études d'association à l'échelle des génomes, correction d'erreur et assemblage hybride. Notre quatrième travail (Chapitre 6, article [48]) décrit une méthode efficace pour trouver et interpréter des unitigs fortement associées à un phénotype, en particulier la résistance aux antibiotiques, ce qui rend les études d'association à l'échelle des génomes plus accessibles aux panels bactériens, surtout ceux qui contiennent des bactéries plastiques. Dans notre cinquième travail (Chapitre 7, article [76]), nous évaluons dans quelle mesure les méthodes existantes de correction d'erreur ADN à lecture longue sont capables de corriger les lectures longues d'ARN-seq à taux d'erreur élevé. Nous concluons qu'aucun outil ne surpasse tous les autres pour tous les indicateurs et est le mieux adapté à toutes les situations, et que le choix devrait être guidé par l'analyse en aval. Les lectures longues d'ARN-seq fournissent une nouvelle perspective sur la manière d'analyser les données transcriptomiques, puisqu'elles sont capables de décrire les séquences complètes des ARN messagers, ce qui n'était pas possible avec des lectures courtes dans plusieurs cas, même en utilisant des assembleurs de transcriptome de l'état de l'art. En tant que tel, dans notre dernier travail (Chapitre 8, article [75]), nous explorons une méthode hybride d'assemblage d'épissages alternatifs qui utilise des lectures à la fois courtes et longues afin de répertorier les événements d'épissage alternatifs de manière complète, grâce aux lectures courtes, guidé par le contexte intégral fourni par les lectures longues
The main goal of this thesis is the development, improvement and evaluation of methods to process massively sequenced data, mainly short and long RNA-sequencing reads, to eventually help the community to answer some biological questions, especially in the transcriptomic and alternative splicing contexts. Our initial objective was to develop methods to process second-generation RNA-seq data through de Bruijn graphs to contribute to the literature of alternative splicing, which was explored in the first three works. The first paper (Chapter 3, paper [77]) explored the issue that repeats bring to transcriptome assemblers if not addressed properly. We showed that the sensitivity and the precision of our local alternative splicing assembler increased significantly when repeats were formally modeled. The second (Chapter 4, paper [11]), shows that annotating alternative splicing events with a single approach leads to missing out a large number of candidates, many of which are significant. Thus, to comprehensively explore the alternative splicing events in a sample, we advocate for the combined use of both mapping-first and assembly-first approaches. Given that we have a huge amount of bubbles in de Bruijn graphs built from real RNA-seq data, which are unfeasible to be analysed in practice, in the third work (Chapter 5, papers [1, 2]), we explored theoretically how to efficiently and compactly represent the bubble space through a bubble generator. Exploring and analysing the bubbles in the generator is feasible in practice and can be complementary to state-of-the-art algorithms that analyse a subset of the bubble space. Collaborations and advances on the sequencing technology encouraged us to work in other subareas of bioinformatics, such as: genome-wide association studies, error correction, and hybrid assembly. 
Our fourth work (Chapter 6, paper [48]) describes an efficient method to find and interpret unitigs highly associated to a phenotype, especially antibiotic resistance, making genome-wide association studies more amenable to bacterial panels, especially plastic ones. In our fifth work (Chapter 7, paper [76]), we evaluate the extent to which existing long-read DNA error correction methods are capable of correcting high-error-rate RNA-seq long reads. We conclude that no tool outperforms all the others across all metrics and is the most suited in all situations, and that the choice should be guided by the downstream analysis. RNA-seq long reads provide a new perspective on how to analyse transcriptomic data, since they are able to describe the full-length sequences of mRNAs, which was not possible with short reads in several cases, even by using state-of-the-art transcriptome assemblers. As such, in our last work (Chapter 8, paper [75]) we explore a hybrid alternative splicing assembly method, which makes use of both short and long reads, in order to list alternative splicing events in a comprehensive manner, thanks to short reads, guided by the full-length context provided by the long reads
APA, Harvard, Vancouver, ISO, and other styles
31

Brito, Felix Estevam de Jesus. "Seleção e alocação de dispositivos limitadores de corrente de curto-circuito utilizando o algoritmo evolucionário multiobjetivo NSGA-II." Universidade Federal de Sergipe, 2016. https://ri.ufs.br/handle/riufs/5024.

Full text
Abstract:
The operating condition of electrical transmission systems is a major concern for both energy suppliers and users, and maintaining appropriate conditions for system operation is important in every respect. Several devices are used for this purpose, such as monitoring and protection equipment. Among them is the fault current limiter, which limits the fault current in the buses or lines of the electrical system. This work proposes the selection and allocation of fault current limiter devices in electrical systems so as to minimize the short-circuit current while considering the device cost and the cost of technical losses. The multi-objective genetic algorithm NSGA-II was used for this purpose, owing to its well-known efficiency in power system optimization problems. The proposed algorithm was applied to 30-bus and 57-bus benchmark systems, which were simulated with and without the devices to compare and validate the results.
A condição de funcionamento dos sistemas elétricos de transmissão é um fator que demanda muita preocupação por parte dos fornecedores e usuários de energia. Manter condições adequadas para a operação do sistema é importante em todos os aspectos. Para este fim, vários equipamentos são utilizados, dispositivos elétricos de monitoramento e proteção, por exemplo. Entre eles, estão os dispositivos limitadores de corrente de curto-circuito. Com este dispositivo é possível limitar a corrente de falta nas barras ou linhas do sistema elétrico. Neste trabalho, propõe-se realizar a seleção e alocação de dispositivos limitadores de corrente de curto-circuito em sistemas elétricos de modo a minimizar a corrente de curto-circuito, considerando o custo do dispositivo e o custo devido à perdas técnicas. Para isso, foi utilizado o algoritmo genético multiobjetivo NSGA-II por sua já conhecida eficiência em problemas de otimização em sistemas elétricos. O algoritmo proposto foi aplicado em sistemas teste de 30 e 57 barras e os sistemas foram simulados com e sem a presença dos dispositivos para efeito de comparação e validação dos resultados.
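The NSGA-II selection described above rests on Pareto dominance over the competing objectives (short-circuit current, device cost, technical losses). A minimal sketch of the dominance test and first-front extraction, with hypothetical objective values, could look like:

```python
# Pareto dominance and first-front extraction, the core selection machinery of
# NSGA-II (crowding distance and generational loop omitted). Objective vectors
# are hypothetical (fault current in kA, limiter cost), both to be minimised.

def dominates(a, b):
    """True if a Pareto-dominates b (no objective worse, at least one better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(population):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for i, p in enumerate(population)
            if not any(dominates(q, p) for j, q in enumerate(population) if j != i)]

pop = [(12.0, 5.0), (10.0, 7.0), (15.0, 4.0), (11.0, 8.0)]
print(first_front(pop))   # (11.0, 8.0) drops out: it is dominated by (10.0, 7.0)
```

In the full algorithm, successive fronts are peeled off and a crowding-distance measure breaks ties inside each front.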
32

Giacometto, Torres Francisco Javier. "Adaptive load consumption modelling on the user side: contributions to load forecasting modelling based on supervised mixture of experts and genetic programming." Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/457631.

Full text
Abstract:
This research work proposes three main contributions to the load forecasting field: enhanced forecasting accuracy, enhanced model adaptiveness, and automation of the execution of the implemented load forecasting strategies. For the accuracy contribution, learning algorithms have been implemented on the basis of machine learning, computational intelligence, evolvable networks, expert systems, and regression approaches. Options for increasing forecasting quality, through minimization of the forecasting error and exploitation of hidden insights and miscellaneous properties of the training data, are likewise explored in the form of feature-based specialized base learners inside an ensemble modelling structure. Preprocessing and knowledge discovery algorithms are also implemented, to boost accuracy through the cleaning of variables and to enhance the autonomy of the modelling algorithm via non-supervised intelligent algorithms, respectively. The adaptability feature has been enhanced by the implementation of three components inside an ensemble learning strategy. The first corresponds to resampling techniques; it ensures the replication of the global probability distribution on multiple independent training subsets and, consequently, the training of base learners on representative spaces of occurrences. The second corresponds to multi-resolution and cyclical analysis techniques; through the decomposition of endogenous variables into their time-frequency components, major insights are acquired and applied to the definition of the ensemble structure layout. The third corresponds to self-organized modelling algorithms, which provide fully customized base learners. The autonomy feature is reached by combining automatic procedures in order to minimize the interaction of an expert user in the forecasting procedure. 
Experimental results obtained, from the application of the load forecasting strategies proposed, have demonstrated the suitability of the techniques and methodologies implemented, especially on the case of the novel ensemble learning strategy.
Este trabajo de investigación propone tres aportaciones principales en el campo de la previsión de consumos: la mejora en la exactitud de la predicción, la mejora en la adaptabilidad del modelo ante diferentes escenarios de consumo y la automatización en la ejecución de los algoritmos de modelado y predicción. La mejora de precisión que ha sido introducida en la estrategia de modelado propuesta ha sido obtenida tras la implementación de algoritmos de aprendizaje supervisados pertenecientes a las siguientes familias de técnicas: aprendizaje de máquinas, inteligencia computacional, redes evolutivas, sistemas expertos y técnicas de regresión. Otras las medidas implementadas para aumentar la calidad de la predicción han sido: la minimización del error de pronóstico a través de la extracción de información basada en análisis multi-variable, la combinación de modelos expertos especializados en atributos específicos del perfil de consumo, el uso de técnicas de pre procesamiento para aumentar la precisión a través de la limpieza de variables, y por último implementación de la algoritmos de clasificación no supervisados para obtener los atributos y las clases características del consumo. La mejora en la adaptación del algoritmo de modelado se ha conseguido mediante la implementación de tres componentes al interior de la estrategia de combinación de modelos expertos. El primer componente corresponde a la implementación de técnicas de muestreo sobre cada conjunto de datos agrupados por clase; esto asegura la replicación de la distribución de probabilidad global en múltiples y estadísticamente independientes subconjuntos de entrenamiento. Estos sub conjuntos son usados para entrenar los modelos expertos que consecuentemente pasaran a formar los modelos base de la estructura jerárquica que combina los modelos expertos. El segundo componente corresponde a técnicas de análisis multi-resolución. 
A través de la descomposición de variables endógenas en sus componentes tiempo-frecuencia, se abstraen e implementan conocimientos importantes sobre la forma de la estructura jerárquica que adoptaran los modelos expertos. El tercero componente corresponde a los algoritmos de modelado que generan una topología interior auto organizada, que proporciona de modelo experto base completamente personalizado al perfil de consumo analizado. La mejora en la automatización se alcanza mediante la combinación de procedimientos automáticos para minimizar la interacción de un usuario experto en el procedimiento de predicción. Los resultados experimentales obtenidos, a partir de la aplicación de las estrategias de predicción de consumos propuestas, han demostrado la idoneidad de las técnicas y metodologías implementadas; sobre todo en el caso de la novedosa estrategia para la combinación de modelos expertos.
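The resampling component described above can be sketched with plain bootstrap subsets, each training a stand-in base learner whose forecasts the ensemble then averages. The "learner" here is just a per-subset mean and the load values are hypothetical, not the thesis's specialized experts:

```python
import random

# Bootstrap resampling of a hypothetical load series: each subset replicates the
# global distribution, trains one stand-in base learner (a per-subset mean), and
# the ensemble averages the base forecasts.

random.seed(0)
load = [20.0, 22.0, 21.0, 35.0, 34.0, 36.0, 20.5, 35.5]   # hypothetical kW values

def bootstrap(data, n):
    """Sample n values with replacement, preserving the empirical distribution."""
    return [random.choice(data) for _ in range(n)]

subsets = [bootstrap(load, len(load)) for _ in range(50)]
base_forecasts = [sum(s) / len(s) for s in subsets]        # one forecast per learner
ensemble_forecast = sum(base_forecasts) / len(base_forecasts)
print(round(ensemble_forecast, 1))                         # near the true mean of 28.0
```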
33

Mohammadisohrabi, Ali. "Design and implementation of a Recurrent Neural Network for Remaining Useful Life prediction." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
A key idea underlying many predictive maintenance solutions is the Remaining Useful Life (RUL) of machine parts: a prediction of the time remaining before a machine part is likely to require repair or replacement. Nowadays, as systems grow more complex, innovative machine learning and deep learning algorithms can be deployed to study the more sophisticated correlations in complex systems. The exponential increase in both data accumulation and processing power makes deep learning algorithms more attractive than before. In this thesis a Long Short-Term Memory (LSTM) network, a type of recurrent neural network, is designed to predict the Remaining Useful Life (RUL) of turbofan engines. The dataset is taken from the NASA data repository. Finally, the performance obtained by the RNN is compared to that of the best machine learning algorithm for the dataset.
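Before any LSTM is involved, RUL prediction on run-to-failure series needs a supervised framing: sliding sensor windows labelled with the cycles remaining. A sketch with hypothetical sensor values (not the actual NASA data):

```python
# Sliding-window framing for RUL prediction: each fixed-length window over a
# run-to-failure sensor series is labelled with the cycles remaining after the
# window ends -- the (input, target) pairs a recurrent model would train on.

def make_rul_windows(series, window):
    """Return (window, RUL) pairs; RUL counts cycles left until failure."""
    n = len(series)
    samples = []
    for end in range(window, n + 1):
        rul = n - end                        # failure occurs at cycle n
        samples.append((series[end - window:end], rul))
    return samples

sensor = [0.1, 0.2, 0.4, 0.7, 1.1, 1.6]      # hypothetical degradation trend, 6 cycles
pairs = make_rul_windows(sensor, window=3)
print(pairs[0])    # ([0.1, 0.2, 0.4], 3)
print(pairs[-1])   # ([0.7, 1.1, 1.6], 0)
```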
34

Borges, Fábbio Anderson Silva. "Método híbrido baseado no algoritmo k-means e regras de decisão para localização das fontes de variações de tensões de curta duração no contexto de Smart Grid." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-04102017-105849/.

Full text
Abstract:
No contexto de Smart Grids, determinar a correta localização das fontes causadoras de Variação de Tensão de Curta Duração (VTCD) não é uma tarefa simples, devido à curta duração destes eventos e também, por sua rápida propagação nas redes de distribuição de energia elétrica. Neste sentido, esse trabalho apresentou um método híbrido recursivo baseado em ferramentas da área de aprendizado de máquinas (algoritmo de agrupamento e base de regras), o qual é capaz de localizar as fontes de VTCD, a partir da análise dos das características dos distúrbios disponibilizadas pelos smart meters instalados no sistema. Assim, o trabalho destinouse ao desenvolvimento de uma plataforma em hardware para aquisição, detecção e classificação dos distúrbios, através de um Sistema Operacional de Tempo Real. Em seguida o algoritmo de agrupamento (k-means) agrupou os dados dos medidores de forma a definir dois clusters, onde um deles correspondeu aos medidores que estão longe da região que ocorreu o distúrbio e o outro, correspondeu aos medidores que estavam localizados próximos da região de ocorrência do distúrbio. Na segunda etapa, um sistema baseado em regras determinou qual dos clusters abrangeu o nó de origem. No entanto, quando o algoritmo determinou uma região muito grande, essa região é introduzida recursivamente, como entrada da metodologia desenvolvida, para refinar a região de localização. O sistema resultante foi capaz de estimar a região de localização com uma taxa de acerto acima de 90%. Assim, o método teve sua concepção adequada ao empregado nos centros de controle e operações de concessionárias de energia elétrica, visando apoiar a decisão do corpo técnico para que ações corretivas fossem estabelecidas de forma assertiva.
In the smart grid context, correctly locating the sources of short-duration voltage variations is not a trivial task, owing to the short duration of these events and their rapid propagation along the distribution feeder. In this sense, aiming to develop a recursive hybrid method based on machine learning tools (a clustering algorithm and a rule base) able to locate the sources of short-duration voltage variations, data from smart meters installed along the distribution feeder were used. As input, the recursive hybrid method received the disturbance characteristics provided by the meters installed in the system. This thesis thus covered the development of measurement hardware for signal acquisition, detection and classification through a real-time operating system. The k-means clustering algorithm then grouped the meter data into two clusters, one corresponding to the meters far from the region where the disturbance occurred, the other to the meters located near the region of occurrence. In a second step, a rule-based system determined which of the clusters contained the source node. When the algorithm determined a very large region, that region was recursively fed back as input to the developed methodology to reduce its size. The resulting system was able to estimate the location region with an accuracy above 90%. The method is therefore suitable for deployment in the operation control centers of power utilities, supporting the technical staff in establishing assertive corrective actions.
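The first, clustering stage can be sketched with a one-dimensional k-means (k = 2) over a hypothetical per-meter disturbance feature, splitting meters into near and far groups:

```python
# One-dimensional k-means with k = 2 over a hypothetical per-meter disturbance
# magnitude: meters seeing a deep residual voltage drop cluster near the event
# source, meters seeing a shallow one cluster away from it.

def kmeans_1d(values, iters=20):
    c0, c1 = min(values), max(values)              # initial centroids
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return g0, g1                                  # low-magnitude, high-magnitude groups

magnitude = [0.9, 0.85, 0.8, 0.2, 0.15, 0.1]       # p.u. voltage drop seen by each meter
far, near = kmeans_1d(magnitude)                   # deep drop = near the source
print(sorted(near), sorted(far))                   # [0.8, 0.85, 0.9] [0.1, 0.15, 0.2]
```

In the thesis's pipeline, a rule base then decides which cluster contains the source node.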
35

Nyman, Peter. "On relations between classical and quantum theories of information and probability." Doctoral thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-13830.

Full text
Abstract:
In this thesis we study quantum-like representation and the simulation of quantum algorithms on classical computers. The quantum-like representation algorithm (QLRA) was introduced by A. Khrennikov (1997) to solve the "inverse Born's rule problem", i.e., to construct a representation of probabilistic data, measured in any context of science, by a complex or more general probability amplitude that matches a generalization of Born's rule. The outcome of QLRA matches the formula of total probability with an additional trigonometric, hyperbolic or hyper-trigonometric interference term, which is in fact a generalization of the familiar formula of interference of probabilities. We study the representation of statistical data (of any origin) by a probability amplitude in a complex algebra and in a Clifford algebra (the algebra of hyperbolic numbers). The statistical data are collected from measurements of two dichotomous and trichotomous observables, respectively. We see that only special statistical data (satisfying a number of nonlinear constraints) have a quantum-like representation. We also study simulations of quantum computers on classical computers. Although it cannot be denied that great progress has been made in quantum technologies, there is clearly still a huge gap between experimental quantum computers and the realization of a quantum computer that can be used in applications. The simulation of quantum computations on classical computers has therefore become an important part of the attempt to bridge this gap between the theoretical mathematical formulation of quantum mechanics and the realization of quantum computers. Of course, quantum algorithms cannot be expected to solve NP problems in polynomial time on classical computers; but this is not at all the aim of classical simulation.
The second part of this thesis is devoted to the adaptation of the Mathematica symbolic language to known quantum algorithms and the corresponding simulations on classical computers. Concretely, we represent Simon's algorithm, the Deutsch-Jozsa algorithm, Shor's algorithm, Grover's algorithm and quantum error-correcting codes in the Mathematica symbolic language. We see that the same framework can be used for all these algorithms. This framework captures the characteristic properties of the symbolic-language representation of quantum computing, and it will be a straightforward matter to include future algorithms in it.
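In the same spirit as these symbolic simulations, a tiny statevector simulation of Deutsch's algorithm (the n = 1 case of Deutsch-Jozsa) fits in a few lines. This is an illustrative Python sketch, not the Mathematica framework itself:

```python
# Tiny statevector simulation of Deutsch's algorithm (the n = 1 case of
# Deutsch-Jozsa): after the final Hadamard, measuring the first qubit gives
# 0 for a constant oracle f and 1 for a balanced one.

H = [[2 ** -0.5, 2 ** -0.5], [2 ** -0.5, -(2 ** -0.5)]]

def apply_1q(gate, state, qubit):
    """Apply a 2x2 gate to one qubit of a 2-qubit statevector (qubit 0 = MSB)."""
    shift = 1 - qubit
    new = [0.0] * 4
    for i in range(4):
        bit = (i >> shift) & 1
        for b in (0, 1):
            new[i] += gate[bit][b] * state[i ^ ((bit ^ b) << shift)]
    return new

def deutsch(f):
    state = [0.0] * 4
    state[0b01] = 1.0                          # start in |0>|1>
    state = apply_1q(H, state, 0)
    state = apply_1q(H, state, 1)
    new = [0.0] * 4                            # oracle U_f: |x, y> -> |x, y xor f(x)>
    for i in range(4):
        x, y = (i >> 1) & 1, i & 1
        new[i] = state[(x << 1) | (y ^ f(x))]
    state = apply_1q(H, new, 0)
    return round(state[0b10] ** 2 + state[0b11] ** 2)   # P(first qubit = 1)

print(deutsch(lambda x: 0), deutsch(lambda x: x))       # 0 (constant), 1 (balanced)
```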
36

Khov, Makara. "Surveillance et diagnostic des machines synchrones à aimants permanents : détection des courts-circuits par suivi paramétrique." Thesis, Toulouse, INPT, 2009. http://www.theses.fr/2009INPT060H/document.

Full text
Abstract:
Ce travail de thèse traite du problème de surveillance en ligne de défaillances électriques dans les entrainements électriques à base de machines synchrones à aimants permanents (MSAP) par une méthode de suivi paramétrique. Les défauts de court-circuit entre spires au stator sont souvent critiques et doivent être détectés au plus tôt avec un bon taux de confiance afin d’informer un système superviseur de la présence d’une défaillance pour limiter les risques encourues par l’environnement matériel et humain situé autour de la machine en défaut. La méthode que nous proposons de mettre en œuvre pour la détection des courts-circuits statoriques est basée des techniques d’identifications récursives. Nous proposons d’identifier en ligne les paramètres d’un modèle diphasé électrique de l’actionneur synchrone et d’analyser les variations des paramètres identifiées lors de l’apparition d’un défaut. Pour assurer les performances des méthodes d’identification, il est souvent nécessaire de disposer d’un signal d’excitation additionnel pour assurer les bonnes performances des algorithmes. Ces signaux peuvent cependant perturber le fonctionnement normal de la machine et entrainer des pertes additionnelles. Dans ce contexte, nous proposons une approche par identification faisant appel à un modèle diphasé spécifique appelé « le repère de Park à courants orientés ». Ce repère permet, tout en réduisant la complexité du problème d’identification, d’obtenir des propriétés d’auto-excitation intéressantes et donc d’éviter l’utilisation d’une excitation additionnelle. Des simulations sont menées à l’aide d’un modèle fin de la machine permettant de reproduire des situations de défaillances de manière virtuelle et d’éprouver l’efficacité des algorithmes dans ces situations dégradées. 
Cette machine, pouvant fonctionner en générateur ou en moteur, est intégrée dans un environnement complet, incluant le cas échéant une alimentation, une charge mécanique et éventuellement une commande, ce qui permet également de tester les algorithmes pour des fonctionnements en boucle ouverte et en boucle fermée. Les résultats présentés permettent de valider les techniques proposées et montrent qu’elles permettent d’extraire automatiquement, à partir des variations des paramètres identifiés, un indicateur de défaut. Des résultats expérimentaux sont également présentés en fonctionnement générateur sur une machine spécialement re-bobinée pour permettre la réalisation de défaut statoriques. Les algorithmes sont implantés sur une cible de calcul numérique afin de démontrer la faisabilité temps réelle de la détection
This work deals with the on-line monitoring of electrical faults in permanent magnet synchronous machines (PMSM) by a parameter monitoring method. Inter-turn short-circuit faults in the stator are often critical and have to be detected as early as possible, with a high confidence rate, to inform the supervisory system of the fault's presence and limit the risk to the surrounding equipment and people. The proposed method focuses on the detection of stator short-circuits and is based on a recursive identification technique. The on-line parameter identification uses an electrical two-phase model of the PMSM, and the variations of the estimated parameters are analysed to detect the presence of stator faults. In general, additional excitation signals are necessary to ensure the performance of identification algorithms; such signals, however, can disturb the normal operation of the drive. To overcome this problem, a specific two-phase model in a current-oriented Park reference frame is introduced for the identification process. Besides reducing the complexity of the identification problem, this reference frame provides an interesting self-excitation property that avoids the use of additional excitation signals. Simulations are performed using an accurate model of the PMSM that reproduces failure situations and demonstrates the efficiency of the algorithms in degraded conditions. This machine, operating as a generator or a motor, is integrated in a complete environment including a power supply, a mechanical load and a control process, so the detection scheme is tested in both open- and closed-loop operation. The simulation results underline the ability of the proposed technique to detect the occurrence of a stator fault and show that a fault indicator can be extracted automatically from the variation of the estimated parameters. Experimental results are also presented, using a PMSM with a specific winding that includes additional connection points for creating stator short-circuits. The algorithms are implemented on a digital computing target to demonstrate the feasibility of real-time fault detection in generator operation mode.
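The recursive identification underlying such a fault indicator can be sketched with a scalar recursive least-squares (RLS) estimator with a forgetting factor; the measurements and the single tracked parameter are hypothetical, not the thesis's two-phase machine model:

```python
# Scalar recursive least squares (RLS) with a forgetting factor: estimate theta
# in y = phi * theta sample by sample, so that an abrupt parameter change (the
# fault) appears as a drift of the estimate away from its pre-fault value.

def rls_track(samples, lam=0.95):
    theta, p = 0.0, 1000.0                     # initial estimate and covariance
    history = []
    for phi, y in samples:
        k = p * phi / (lam + phi * p * phi)    # gain
        theta += k * (y - phi * theta)         # innovation update
        p = (p - k * phi * p) / lam            # covariance update with forgetting
        history.append(theta)
    return history

# Hypothetical data: parameter 1.0 for 50 samples, then a fault drops it to 0.6
data = [(1.0, 1.0)] * 50 + [(1.0, 0.6)] * 50
est = rls_track(data)
print(round(est[49], 2), round(est[-1], 2))    # ~1.0 before the fault, drifting toward 0.6
```

Thresholding the drift of such an estimate is one simple way to turn parameter variation into a fault indicator.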
37

Bok, Jaromír. "Odolnost spotřebičů na krátkodobé poklesy a výpadky napětí." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-233532.

Full text
Abstract:
This Ph.D. thesis deals with voltage dips and short voltage interruptions, generally called voltage events, which originate in power supply networks and negatively influence all connected electric appliances; single-phase appliances are considered. These problems relate closely to the area of electromagnetic compatibility, which addresses the correct operation of different types of electric appliances under the impact of electromagnetic disturbances. Voltage events are one of the many types of electromagnetic disturbance, and power supply lines form the connection between disturbance sources and sensitive appliances. The immunity of electric appliances to voltage dips and short interruptions is currently tested with voltage dips of strictly defined parameters, determined by the class of electromagnetic environment in which the appliance is recommended for use. During immunity tests a rectangular dip shape is preferred, and the main descriptive parameters of the testing events are the residual voltage and the event duration. Voltage dips and short interruptions defined in this way, however, do not correspond closely to the parameters of the real, variable voltage dips and interruptions occurring in the public supply system. Moreover, the supply system exhibits many other voltage parameters that can significantly influence the immunity level of connected appliances, so this thesis also deals with identifying further voltage event parameters. Although the occurrence of voltage events in the power supply system is not limited, and voltage events are considered only an informative voltage parameter, it is important to monitor their occurrence; the monitoring device has to operate over long time periods and detect the parameters of voltage events with adequate accuracy.
The accuracy of the detected event parameters and the detection delay depend on the characteristics of the detection algorithm, which is why part of this thesis compares several detection algorithms and their ability to correctly detect voltage event parameters. The main purpose of this thesis is to propose a link between the classification of voltage dips and short interruptions occurring in the power supply system and the classification of the immunity of electric appliances to these voltage events. On the basis of numerous appliance immunity tests, as well as long-term monitoring of voltage events, special compatibility levels are proposed. Observing the proposed compatibility levels will increase the reliability of operation of all connected electric appliances.
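A basic member of the family of detection algorithms compared in such work is a sliding one-cycle RMS with a 0.9 p.u. dip threshold (a common convention, assumed here). The sketch below, on a synthetic waveform, shows how a duration estimate emerges and why a detection delay appears:

```python
import math

# Sliding one-cycle RMS with a 0.9 p.u. dip threshold on a synthetic waveform:
# ten cycles of a 1.0 p.u. sine with a 0.5 p.u. dip over cycles 4-6. The counted
# duration differs somewhat from the true 3 cycles because the RMS window
# straddles the dip edges -- the detection delay discussed above.

FS, F = 1000, 50                  # sample rate and fundamental frequency (Hz)
N = FS // F                       # samples per cycle

def rms(window):
    return math.sqrt(sum(v * v for v in window) / len(window))

signal = []
for n in range(10 * N):
    amp = 0.5 if 4 * N <= n < 7 * N else 1.0
    signal.append(amp * math.sqrt(2) * math.sin(2 * math.pi * F * n / FS))

in_dip = [rms(signal[n - N:n]) < 0.9 for n in range(N, len(signal))]
duration_cycles = sum(in_dip) / N
print(round(duration_cycles, 2))  # close to, but not exactly, 3 cycles
```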
38

Roos, Jérémy. "Prévision à court terme des flux de voyageurs : une approche par les réseaux bayésiens." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1170/document.

Full text
Abstract:
Dans ces travaux de thèse, nous proposons un modèle de prévision à court terme des flux de voyageurs basé sur les réseaux bayésiens. Ce modèle est destiné à répondre à des besoins opérationnels divers liés à l'information voyageurs, la régulation des flux ou encore la planification de l'offre de transport. Conçu pour s'adapter à tout type de configuration spatiale, il permet de combiner des sources de données hétérogènes (validations des titres de transport, comptages à bord des trains et offre de transport) et fournit une représentation intuitive des relations de causalité spatio-temporelles entre les flux. Sa capacité à gérer les données manquantes lui permet de réaliser des prédictions en temps réel même en cas de défaillances techniques ou d'absences de systèmes de collecte
In this thesis, we propose a Bayesian network model for short-term passenger flow forecasting. This model is intended to cater for various operational needs related to passenger information, passenger flow regulation and operation planning. As well as adapting to any spatial configuration, it is designed to combine heterogeneous data sources (ticket validations, on-board counts and the transport service) and provides an intuitive representation of the causal spatio-temporal relationships between flows. Its ability to deal with missing data allows real-time predictions to be made even in the event of technical failures or absent collection systems.
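The missing-data behaviour can be illustrated on a toy two-node discrete Bayesian network (upstream flow → downstream flow); the probabilities below are hypothetical, not the thesis's model:

```python
# Two-node discrete Bayesian network: upstream flow U -> downstream flow D.
# With U observed, prediction reads the conditional table; with U missing,
# the prior is marginalised out, so a forecast is still produced.

p_u = {"low": 0.6, "high": 0.4}                    # hypothetical prior on upstream flow
p_d_given_u = {"low": {"low": 0.8, "high": 0.2},   # hypothetical conditional table
               "high": {"low": 0.3, "high": 0.7}}

def predict_downstream(u=None):
    if u is not None:
        return p_d_given_u[u]
    # U unobserved: P(D) = sum over u of P(D | u) P(u)
    return {d: sum(p_d_given_u[v][d] * p_u[v] for v in p_u) for d in ("low", "high")}

print(predict_downstream("high"))                                 # {'low': 0.3, 'high': 0.7}
print({d: round(p, 2) for d, p in predict_downstream().items()})  # {'low': 0.6, 'high': 0.4}
```

Real models infer over many such nodes at once, but the principle is the same: unobserved variables are marginalised rather than blocking the prediction.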
39

Lenain, Roland. "Amélioration des méthodes de calcul de cœurs de réacteurs nucléaires dans APOLLO3 : décomposition de domaine en théorie du transport pour des géométries 2D et 3D avec une accélération non linéaire par la diffusion." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112180/document.

Full text
Abstract:
Ce travail de thèse est consacré à la mise en œuvre d’une méthode de décomposition de domaine appliquée à l’équation du transport. L’objectif de ce travail est l’accès à des solutions déterministes haute-fidélité permettant de correctement traiter les hétérogénéités des réacteurs nucléaires, pour des problèmes dont la taille varie d’un motif d’assemblage en 3 dimensions jusqu’à celle d’un grand cœur complet en 3D. L’algorithme novateur développé au cours de la thèse vise à optimiser l’utilisation du parallélisme et celle de la mémoire. La démarche adoptée a aussi pour but la diminution de l’influence de l’implémentation parallèle sur les performances. Ces objectifs répondent aux besoins du projet APOLLO3, développé au CEA et soutenu par EDF et AREVA, qui se doit d’être un code portable (pas d’optimisation sur une architecture particulière) permettant de réaliser des modélisations haute-fidélité (best estimate) avec des ressources allant des machines de bureau aux calculateurs disponibles dans les laboratoires d’études. L’algorithme que nous proposons est un algorithme de Jacobi Parallèle par Bloc Multigroupe. Chaque sous domaine est un problème multigroupe à sources fixes ayant des sources volumiques (fission) et surfaciques (données par les flux d’interface entre les sous domaines). Le problème multigroupe est résolu dans chaque sous domaine et une seule communication des flux d’interface est requise par itération de puissance. Le rayon spectral de l’algorithme de résolution est rendu comparable à celui de l’algorithme de résolution classique grâce à une méthode d’accélération non linéaire par la diffusion bien connue nommée Coarse Mesh Finite Difference. De cette manière une scalabilité idéale est atteignable lors de la parallélisation. L’organisation de la mémoire, tirant parti du parallélisme à mémoire partagée, permet d’optimiser les ressources en évitant les copies de données redondantes entre les sous domaines. 
Les architectures de calcul à mémoire distribuée sont rendues accessibles par un parallélisme hybride qui combine le parallélisme à mémoire partagée et à mémoire distribuée. Pour des problèmes de grande taille, ces architectures permettent d’accéder à un plus grand nombre de processeurs et à la quantité de mémoire nécessaire aux modélisations haute-fidélité. Ainsi, nous avons réalisé plusieurs exercices de modélisation afin de démontrer le potentiel de la réalisation : calcul de cœur et de motifs d’assemblages en 2D et 3D prenant en compte les contraintes de discrétisation spatiales et énergétiques attendues
This thesis is devoted to the implementation of a domain decomposition method applied to the neutron transport equation. The objective of this work is to access high-fidelity deterministic solutions to properly handle heterogeneities located in nuclear reactor cores, for problem sizes ranging from colorsets of assemblies to large reactor core configurations in 2D and 3D. The innovative algorithm developed during the thesis intends to optimize the use of parallelism and memory. The approach also aims to minimize the influence of the parallel implementation on performance. These goals match the needs of the APOLLO3 project, developed at CEA and supported by EDF and AREVA, which must be a portable code (no optimization for a specific architecture) in order to achieve best-estimate modeling with resources ranging from personal computers to the compute clusters available for engineering analyses. The proposed algorithm is a Parallel Multigroup-Block Jacobi one. Each subdomain is considered as a multigroup fixed-source problem with volume sources (fission) and surface sources (interface flux between the subdomains). The multigroup problem is solved in each subdomain, and a single communication of the interface flux is required at each power iteration. The spectral radius of the resolution algorithm is made similar to that of a classical resolution algorithm with a nonlinear diffusion acceleration method: the well-known Coarse Mesh Finite Difference. In this way, ideal scalability is achievable when the calculation is parallelized. The memory organization, taking advantage of shared-memory parallelism, optimizes the resources by avoiding redundant copies of the data shared between the subdomains. Distributed-memory architectures are made available by a hybrid parallel method that combines both paradigms of shared-memory parallelism and distributed-memory parallelism.
For large problems, these architectures provide a greater number of processors and the amount of memory required for high-fidelity modeling. Thus, we have completed several modeling exercises to demonstrate the potential of the method: 2D full-core calculation of a large pressurized water reactor and 3D colorsets of assemblies taking into account the constraints of space and energy discretization expected for high-fidelity modeling.
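The block-Jacobi-with-interface-exchange scheme described in this abstract can be illustrated on a toy linear system. The sketch below is a hypothetical 4x4 example (not the thesis's transport solver): the domain is split into two 2x2 subdomains, each subdomain solves its local block exactly, and only the coupling ("interface") values from the previous iteration are exchanged, in the spirit of a single communication per outer iteration.

```python
# Toy block-Jacobi iteration on a diagonally dominant 4x4 system A x = b.
# Subdomain 1 owns x[0], x[1]; subdomain 2 owns x[2], x[3]. The coupling
# terms across the interface are moved to the right-hand side as sources.
A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 4.0, 1.0, 0.0],
     [0.0, 1.0, 4.0, 1.0],
     [0.0, 0.0, 1.0, 4.0]]
b = [5.0, 6.0, 6.0, 5.0]   # chosen so the exact solution is [1, 1, 1, 1]

def solve2(a11, a12, a21, a22, r1, r2):
    """Direct solve of a 2x2 local system (the per-subdomain solver)."""
    det = a11 * a22 - a12 * a21
    return (a22 * r1 - a12 * r2) / det, (a11 * r2 - a21 * r1) / det

x = [0.0, 0.0, 0.0, 0.0]
for _ in range(50):
    # Subdomain 1: the coupling to x[2] becomes a frozen source term
    r1 = b[0]
    r2 = b[1] - A[1][2] * x[2]
    n0, n1 = solve2(A[0][0], A[0][1], A[1][0], A[1][1], r1, r2)
    # Subdomain 2: the coupling to x[1] becomes a frozen source term
    r3 = b[2] - A[2][1] * x[1]
    r4 = b[3]
    n2, n3 = solve2(A[2][2], A[2][3], A[3][2], A[3][3], r3, r4)
    x = [n0, n1, n2, n3]   # "communicate" interface values once per sweep
```

Because the local blocks are strongly diagonally dominant relative to the interface coupling, the iteration contracts quickly toward the exact solution [1, 1, 1, 1].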
APA, Harvard, Vancouver, ISO, and other styles
40

Kugel, Felix [Verfasser]. "Das Shor-Verfahren als stochastischer Algorithmus / Felix Kugel." 2006. http://d-nb.info/982288166/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Li, Chang-Ting, and 李昌庭. "Short Term Generation Scheduling of Autonomous System Using Genetic Algorithms." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/49532555879343580093.

Full text
Abstract:
Master's
Chung Yuan Christian University
Graduate Institute of Electrical Engineering
93
Renewable energy, including wind and solar power, has advantages such as environmental protection. Developing wind and solar power generation is of great significance for protecting the environment and advancing renewable energy. However, wind and solar generation cause a number of problems once incorporated into the power system, for example in estimating generation benefits, unit commitment, and generation scheduling. This thesis deals with an independent generation system that includes wind power, solar power, diesel units and a rechargeable battery group. Considering the uncertain climate factors (e.g., wind and sunlight), it investigates the short-term generation scheduling of this system: the cost of the diesel units is minimized while all operating constraints are satisfied. The wind and solar power are modeled by fuzzy theory and expressed with Lagrange multipliers and membership functions to reduce the computational complexity of the genetic algorithm combined with a penalty function. Simulations of both non-fuzzy and fuzzy generation scheduling were performed. An independent generation system was used as a test system to show the applicability of the proposed method.
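The genetic-algorithm-with-penalty-function approach described above can be sketched on a toy dispatch problem. All of the cost coefficients, the demand value, and the GA settings below are illustrative assumptions, not the thesis's actual model: two hypothetical diesel units must jointly meet a fixed demand, and the demand-balance constraint is enforced by adding a weighted violation term to the fitness.

```python
import random

# Hypothetical toy problem: schedule two diesel units so their combined
# output meets a demand of 100 units while minimizing quadratic fuel costs.
DEMAND = 100.0

def cost(p1, p2):
    return 0.05 * p1**2 + 2.0 * p1 + 0.08 * p2**2 + 1.5 * p2

def fitness(ind, penalty_weight=1000.0):
    p1, p2 = ind
    # Penalty function: the constraint violation is added to the objective
    violation = abs(p1 + p2 - DEMAND)
    return cost(p1, p2) + penalty_weight * violation

def evolve(pop_size=40, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 80), rng.uniform(0, 80)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]      # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            w = rng.random()                  # arithmetic crossover
            child = [w * a[i] + (1 - w) * b[i] for i in range(2)]
            if rng.random() < 0.2:            # Gaussian mutation, clamped at 0
                j = rng.randrange(2)
                child[j] = max(0.0, child[j] + rng.gauss(0, 2))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
```

With a large penalty weight, feasible schedules dominate the population quickly, and the best individual's outputs sum to (approximately) the demand.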
APA, Harvard, Vancouver, ISO, and other styles
42

Wei, Yang-Ting, and 魏仰廷. "Short Term Load Forecasting Using Machine Learning Algorithms for Building Automation System." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/fj9b92.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Civil Engineering
105
According to UNEP, the energy consumed and greenhouse gases discharged by buildings account for about 40% of global energy use. Energy efficiency is therefore an important means of reducing greenhouse gas emissions. Among the possible improvements, we focus on energy efficiency to cut energy waste, especially electricity consumption. In the past decades, the share of buildings equipped with a Building Automation System (BAS) has been increasing. A BAS integrates electrical consumption, temperature, humidity and other measurements, depending on the building. With these various records, a BAS allows data mining techniques to support decision making. The first part of our research develops a feature extraction approach and a prediction structure to be used in energy forecasting. To begin with, we analyze user behavior through data visualization. Next, we select appropriate sensors to obtain training data, based on the results of the previous step and on literature reviews. Finally, we apply support vector regression (SVR) and weighted linear regression to train a regression model. The second part of this study presents deep learning structures to forecast electricity consumption. The last part of our research combines several ways of selecting proper sensors and uses multiple steps to train a better model; to handle problems whose features are hard to describe, we integrate deep learning in this chapter. To sum up, we build a flexible and accurate architecture into which different BAS data and fields can be incorporated. In addition, we provide a clear method and process as an example, so that the appropriate forecasting architecture can be selected based on the characteristics of the data.
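The weighted linear regression stage mentioned above can be sketched in closed form. This is a minimal one-feature illustration, not the thesis's actual model: the temperature/load pairs and the weights (favoring recent samples) are made-up values, and the fit uses the weighted normal equations directly.

```python
# Weighted linear regression in one variable via closed-form normal equations.
def weighted_linreg(xs, ys, ws):
    """Fit y = a*x + b minimizing sum_i w_i * (y_i - a*x_i - b)^2."""
    W = sum(ws)
    xw = sum(w * x for w, x in zip(ws, xs)) / W     # weighted mean of x
    yw = sum(w * y for w, y in zip(ws, ys)) / W     # weighted mean of y
    cov = sum(w * (x - xw) * (y - yw) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - xw) ** 2 for w, x in zip(ws, xs))
    a = cov / var
    b = yw - a * xw
    return a, b

temps = [20, 24, 28, 32, 36]          # e.g. outdoor temperature (C), made up
loads = [110, 130, 155, 178, 200]     # e.g. building load (kW), made up
weights = [1, 1, 2, 2, 3]             # more recent samples weighted higher
a, b = weighted_linreg(temps, loads, weights)
forecast = a * 30 + b                 # predicted load at 30 C
```

The same closed form generalizes to multiple features via a weighted least-squares solve; SVR would replace this stage with a kernelized, epsilon-insensitive fit.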
APA, Harvard, Vancouver, ISO, and other styles
43

Barnard, S. J. "Short term load forecasting by a modified backpropagation trained neural network." Thesis, 2012. http://hdl.handle.net/10210/5828.

Full text
Abstract:
M. Ing.
This dissertation describes the development of a feedforward neural network, trained by means of an accelerated backpropagation algorithm, used for short-term load forecasting on real-world data. It is argued that the new learning algorithm, I-Prop, is a faster training algorithm because the learning rate is optimally predicted and changed according to a more efficient formula (without the need for extensive memory), which speeds up the training process. The neural network developed was tested for the month of December 1994, specifically to test the artificial neural network's ability to correctly predict the load during a public holiday, as well as the change-over from public holiday to 'normal' working day. In conclusion, suggestions are made towards further research into improving the I-Prop algorithm as well as the load forecasting technique implemented in this dissertation.
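The dissertation's exact I-Prop learning-rate formula is not given in this abstract, but the general idea of backpropagation with an adaptive step size can be sketched. The rule below (grow the rate while the error keeps falling, halve it on a setback, with an assumed cap) and the tiny tanh network and data are illustrative only, not the I-Prop algorithm itself.

```python
import math

def train(samples, epochs=500):
    """Batch gradient descent on a single tanh unit with an adaptive rate."""
    w, b = 0.0, 0.0
    lr, prev_err = 0.1, float("inf")
    for _ in range(epochs):
        err, gw, gb = 0.0, 0.0, 0.0
        for x, t in samples:
            y = math.tanh(w * x + b)       # forward pass
            e = y - t
            err += 0.5 * e * e
            g = e * (1 - y * y)            # backprop through tanh
            gw += g * x
            gb += g
        # Adaptive step: grow while improving, halve after an error increase
        lr = min(lr * 1.05, 1.0) if err < prev_err else lr * 0.5
        prev_err = err
        w -= lr * gw
        b -= lr * gb
    return w, b, prev_err

# Learn a smooth sign-like mapping on a few made-up points
data = [(-2, -0.9), (-1, -0.7), (1, 0.7), (2, 0.9)]
w, b, final_err = train(data)
```

Compared with a fixed learning rate, the adaptive rule takes large steps on the flat parts of the error surface and backs off automatically near curvature, which is the acceleration effect the abstract attributes to I-Prop.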
APA, Harvard, Vancouver, ISO, and other styles
44

Raveendran, Nithin. "A Modified Sum-Product Algorithm over Graphs with Short Cycles." Thesis, 2015. http://etd.iisc.ernet.in/2005/3847.

Full text
Abstract:
We investigate the limitations of the sum-product algorithm for binary low-density parity-check (LDPC) codes having isolated short cycles. The independence assumption among the messages passed, assumed reasonable in all graph configurations, fails most severely in graphical structures with short cycles. This research work is a step forward towards understanding the effect of short cycles on the error floors of the sum-product algorithm. We propose a modified sum-product algorithm that accounts for the statistical dependency of the messages passed around a cycle of length 4. We also formulate the modified algorithm in the log domain, which eliminates the numerical instability and precision issues associated with the probability domain. Simulation results show a signal-to-noise ratio (SNR) improvement for the modified sum-product algorithm compared to the original algorithm. This suggests that modeling the dependency among messages improves the decisions and successfully mitigates the effects of length-4 cycles in the Tanner graph. The improvement is significant in the high-SNR region, suggesting a possible cause of the error-floor effects on such graphs. Using density evolution techniques, we analysed the modified decoding algorithm. The threshold computed for the modified algorithm is higher than the threshold computed for the sum-product algorithm, validating the observed simulation results. We also prove that the conditional entropy of a codeword given the estimate obtained using the modified algorithm is lower than when using the original sum-product algorithm.
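The log-domain formulation mentioned above is built on the standard check-node update of sum-product decoding (the "tanh rule"), which the thesis then modifies for dependent messages. A minimal sketch of the unmodified rule, with made-up log-likelihood-ratio (LLR) inputs, is:

```python
import math

def check_node_update(llrs_in):
    """Standard sum-product check-node update in the log domain.
    For each edge i, combine the LLRs of all *other* edges:
        L_out_i = 2 * atanh( prod_{j != i} tanh(L_j / 2) )
    """
    outs = []
    for i in range(len(llrs_in)):
        prod = 1.0
        for j, l in enumerate(llrs_in):
            if j != i:
                prod *= math.tanh(l / 2.0)
        # Clip so atanh stays finite under floating-point rounding
        prod = max(min(prod, 1 - 1e-12), -1 + 1e-12)
        outs.append(2.0 * math.atanh(prod))
    return outs

msgs = check_node_update([1.2, -0.8, 2.5])   # illustrative LLR values
```

The rule preserves two sanity properties visible in the output: the sign of each outgoing message is the product of the signs of the other incoming messages, and its magnitude never exceeds the smallest of their magnitudes.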
APA, Harvard, Vancouver, ISO, and other styles
45

Lin, Jie-Kai, and 林捷凱. "Multi-Objective Short Term Generation Scheduling of Autonomous System with Fuzzy Parameters Using Immune Algorithms." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/42393501211316495876.

Full text
Abstract:
Master's
Chung Yuan Christian University
Graduate Institute of Electrical Engineering
97
Renewable energies, including wind and solar power, have advantages such as environmental protection. Developing wind and solar power generation is of great significance for protecting the environment and advancing renewable energy. However, wind and solar generation cause a number of problems once incorporated into the power system, for example in estimating generation benefits, unit commitment, and generation scheduling. This thesis deals with an independent generation system that includes wind power, solar power, diesel units and a rechargeable battery group. Considering the uncertain climate factors (e.g., wind and sunlight), it investigates the short-term generation scheduling of this system: the cost of the diesel units and the CO2 emissions are minimized while all operating constraints are satisfied. The wind and solar power were modeled by fuzzy sets. An immune algorithm incorporating a penalty function was used to solve this interactive multi-objective nonlinear programming problem. An independent generation system was used as a test system to show the applicability of the proposed method.
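An immune algorithm of the clonal-selection family, as named above, can be sketched on a toy two-objective problem. Everything below is an illustrative assumption rather than the thesis's model: the two objectives (a fuel-like cost and an emission-like cost) are combined as a weighted sum, and infeasible outputs are discouraged with a penalty term.

```python
import random

def objectives(p):
    fuel = 0.04 * p**2 + 2.0 * p          # hypothetical fuel cost
    co2 = 0.06 * (80 - p)**2              # hypothetical emission cost
    return fuel, co2

def affinity(p, w=0.5):
    """Lower is better: weighted sum of objectives plus a bound penalty."""
    f, c = objectives(p)
    penalty = 1000.0 * max(0.0, -p) + 1000.0 * max(0.0, p - 80)
    return w * f + (1 - w) * c + penalty

def clonal_selection(pop_size=20, generations=100, seed=3):
    rng = random.Random(seed)
    pop = [rng.uniform(0, 80) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=affinity)
        clones = []
        for rank, ab in enumerate(pop[:5]):   # clone the best antibodies
            for _ in range(5 - rank):         # better antibodies clone more
                sigma = 2.0 * (rank + 1)      # worse ones mutate harder
                clones.append(ab + rng.gauss(0, sigma))
        pop = sorted(pop + clones, key=affinity)[:pop_size]
    return pop[0]

best_p = clonal_selection()
```

With equal weights, the combined objective of this toy problem has its minimum near p = 38, which the clonal-selection loop finds by concentrating mutated clones around the best antibodies.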
APA, Harvard, Vancouver, ISO, and other styles
46

Bento, Pedro Miguel Rocha. "Hybrid artificial intelligence algorithms for short-term load and price forecasting in competitive electric markets." Master's thesis, 2017. http://hdl.handle.net/10400.6/7851.

Full text
Abstract:
The liberalization and deregulation of electric markets forced the various participants to accommodate several challenges, including: a considerable accumulation of new generation capacity from renewable sources (fundamentally wind energy), the unpredictability associated with these new forms of generation, and new consumption patterns, contributing to further electricity price volatility (e.g. the Iberian market). Given the competitive framework in which market participants operate, the existence of efficient computational forecasting techniques is a distinguishing factor. Based on these forecasts, a suitable bidding strategy and effective operational planning of the generation systems can be achieved, which, together with improved exploitation of the installed transmission capacity, maximizes profits and contributes to a better use of energy resources. This dissertation presents a new hybrid method for load and electricity price forecasting, for a one-day-ahead time horizon. The optimization scheme presented in this method combines the efforts of different techniques, notably artificial neural networks, several optimization algorithms and the wavelet transform. The method was validated on different real case studies. The subsequent comparison (accuracy-wise) with results published in reference journals confirmed the suitability of the proposed hybrid method.
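The wavelet preprocessing idea in the hybrid method above can be sketched with the simplest case, a one-level Haar decomposition: the load series is split into a smooth approximation (to be forecast with one model) and a detail part (forecast with another), and the component forecasts are recombined. Only the transform is shown; the load values are made up, and the thesis's actual wavelet family and depth are not specified in this abstract.

```python
# One-level Haar wavelet decomposition and its exact inverse.
def haar_step(series):
    """Split an even-length series into pairwise averages and differences."""
    approx = [(series[i] + series[i + 1]) / 2 for i in range(0, len(series), 2)]
    detail = [(series[i] - series[i + 1]) / 2 for i in range(0, len(series), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Reassemble the original series from the two components."""
    out = []
    for ap, d in zip(approx, detail):
        out.extend([ap + d, ap - d])
    return out

load = [100.0, 104.0, 98.0, 102.0, 110.0, 114.0, 108.0, 112.0]  # made up
a, d = haar_step(load)
assert haar_inverse(a, d) == load   # perfect reconstruction
```

Because the decomposition is exactly invertible, forecasting the smoother approximation and the noisier detail separately loses no information when the two forecasts are summed back.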
APA, Harvard, Vancouver, ISO, and other styles
47

Ravi, Kumar B. "Development Of Algorithms For Power Quality Improvements In Distribution Systems." Thesis, 2004. http://etd.iisc.ernet.in/handle/2005/1170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Nien, Chun, and 粘群. "Short-Time and Low-Temperature Development for Electron-Beam Lithography and the Algorithms of Proximity Effect Correction." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/e66nvt.

Full text
Abstract:
Doctoral
National Taiwan University
Graduate Institute of Electronics Engineering
106
Electron-beam lithography (EBL) is one of the most popular and important techniques for manufacturing high-resolution nanopatterns without masks, enabling the fast development of electronic and photonic devices. The proximity effect is one of the most critical issues in EBL, as it can degrade pattern quality and thus greatly impact the performance of the applications. Regarding development as a static result of electron-beam exposure, most studies address the proximity effect by focusing on the spatial distribution of electron intensity. In fact, development is a dynamic process, a function of the development duration and temperature, and its continually changing nature may lead to pattern deviation. This effect becomes more noticeable as the required feature size continues to shrink. Recently, some researchers have started to consider integrating development simulation into proximity effect correction (PEC). However, EBL development is a complicated process, and the conventional methods are very computationally intensive and lengthy. This dissertation focuses on solving the proximity effect of high-voltage EBL using a positive-tone resist. We first use a set of single-spot experiments to categorize the development process and establish a comprehensive differential model of EBL describing the relation among the incident electrons, the resist, and the development conditions such as duration and temperature. This model identifies the location of the exposure point as a singular point of ultra-high development rate, which can thus be considered the beginning point of the development. Further, we verify the characteristic region of each incident spot induced by the point spread function of the electron-beam system.
By effectively eliminating the proximity effect, we further achieve isolated lines with a line width of 8 nm and dense line arrays with a line width of 9 nm at a pitch of 30 nm, utilizing the results from the single-spot experiments at low development temperatures. The insights from the single-spot experiments lead to a novel short-range PEC method. Based on the 2-D development model, we propose the concept of the critical-development path and its use in evaluating the fitness of EBL patterns. For the first time, we transform the search for the critical-development path into the shortest-path problem of graph theory, which opens the way to more efficient algorithms for simulating the development path. We propose a Dijkstra-based algorithm with a priority-queue data structure to guarantee correctness while maintaining efficiency. We also investigate optimization strategies for PEC problems: a swarm-intelligence algorithm is introduced to PEC and compared with a simplex-based method. The numerical analysis demonstrates that choosing a suitable optimization scheme is important, especially when minimizing a complicated function such as the fitness function of EBL with pixel-based fine tuning. The PEC algorithm is applied to the fabrication of U-shaped split-ring resonators and produces an optimized exposure pattern that shows excellent agreement with the targeted design objectives. Our work on the PEC strategy reduces the computational cost significantly and is particularly suitable for the design of complex patterns with various constraints.
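The Dijkstra-with-priority-queue formulation described above can be sketched on a toy grid. The sketch is an illustration of the shortest-path machinery only: each cell's traversal time stands in for a (hypothetical) inverse local development rate, and the rate map below is made up, not derived from the thesis's resist model.

```python
import heapq

def fastest_path_time(time_cost, start, goal):
    """Dijkstra on a 4-connected grid; time_cost[r][c] is the time to
    develop through cell (r, c). Returns the minimum total time."""
    rows, cols = len(time_cost), len(time_cost[0])
    dist = {start: time_cost[start[0]][start[1]]}
    pq = [(dist[start], start)]                 # priority queue of (time, cell)
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue                            # stale queue entry, skip
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + time_cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

grid = [
    [1, 9, 1],   # the slow middle column forces the path around it
    [1, 9, 1],
    [1, 1, 1],
]
t = fastest_path_time(grid, (0, 0), (0, 2))
```

Here the direct route through the slow column costs 11, while the detour through the fast cells costs 7, so the critical (fastest-development) path goes around, exactly the kind of decision the priority queue resolves efficiently.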
APA, Harvard, Vancouver, ISO, and other styles
49

Brandão, Jorge Manuel Tavares. "Ataques quânticos e os criptossistemas de McEliece." Master's thesis, 2018. http://hdl.handle.net/10773/27194.

Full text
Abstract:
Shor's algorithm and the evolution of quantum computing brought serious threats to the security of current cryptosystems, and the need to find new alternatives created what is known as post-quantum cryptography. This dissertation can be split into three main parts. In the first, we present elements of number theory and describe a classical cryptosystem, RSA, whose security can be easily broken using quantum attacks. In the second part, we describe the implementation of Shor's algorithm, which is an attack on RSA but also on all cryptosystems whose security lies in the factorization problem or in the discrete logarithm problem. In implementing this algorithm we minimized the number of qubits required; for this, we used quantum Fourier transforms to perform operations such as addition, multiplication and modular exponentiation. With our approach, only L1 + 2*L2 + 3 qubits are needed to implement Shor's algorithm, that is, to find the period of the function f(x) = a^x mod N, where L1 is the number of qubits needed to represent x such that N^2 <= 2^L1 < 2N^2, and L2 is the number of qubits needed to represent N. Finally, in the last part of this dissertation, we present elements of post-quantum cryptography, in particular a new protocol proposed by us. It is a new variant of the McEliece cryptosystem, in which we propose the use of convolutional codes instead of block codes; the encoding part of our public key, which we denote by G'(D), is thus itself convolutional. By studying several possible attacks on this cryptosystem, we verified that, compared to the other variants, a smaller public key achieves a much larger work factor, which translates into greater security.
Master's in Engineering Physics
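The role of the period r of f(x) = a^x mod N in breaking RSA, as described above, can be illustrated with the classical post-processing of Shor's algorithm. In this sketch the period is found by brute force in place of the quantum order-finding subroutine, which is only tractable for tiny N; the factor extraction via gcd is the standard textbook step.

```python
from math import gcd

def find_period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N) -- the quantum subroutine's job,
    done here classically for a toy-sized N."""
    r, v = 1, a % N
    while v != 1:
        v = (v * a) % N
        r += 1
    return r

def shor_classical(N, a):
    """Classical post-processing: factors from the period of a^x mod N."""
    assert gcd(a, N) == 1
    r = find_period(a, N)
    if r % 2 == 1:
        return None                     # odd period: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None                     # trivial square root: retry
    return gcd(y - 1, N), gcd(y + 1, N)

factors = shor_classical(15, 7)         # period of 7^x mod 15 is 4
```

For N = 15 and a = 7 the period is r = 4, so y = 7^2 mod 15 = 4 and gcd(y - 1, 15), gcd(y + 1, 15) yield the factors 3 and 5; the quantum speed-up lies entirely in computing r for cryptographically sized N.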
APA, Harvard, Vancouver, ISO, and other styles
50

John, Jem Teresa. "Optical Diffraction Tomography for the Refractive Index Profiling of Objects with Large Space-Bandwidth product." Thesis, 2017. http://etd.iisc.ernet.in/2005/3617.

Full text
Abstract:
The primary goal of this work is to arrive at diffraction tomography (DT) algorithms freed from the severe linearization in the formulation, and the assumptions on the variation of the refractive index distribution (RID), involved in the earlier approaches based on the Born and Rytov approximations and the Fourier diffraction theorem (FDT). To start with, a direct single-step recovery of the RID from intensity measurements is demonstrated, replacing the common two-step procedure involving, first, the recovery of phase from intensity, followed by the inversion of the scattered field for the RID. The information loss, unavoidable in a two-step procedure, is thus successfully addressed. Secondly, an iterative method which works with a forward model obtained directly from the Helmholtz equation is developed. This forward model, though it involves simplifying assumptions, is more general and can accommodate larger variations in the RID than allowed by the previous linear models. The iterative procedure has an update step which uses a linearization of the forward model, and a re-linearization step at the updated RID. The procedure, which directly employs the measured intensities, is used as part of a deterministic Gauss-Newton algorithm and of a stochastic optimization algorithm which uses the ensemble Kalman filter to arrive at the recursive update. The stochastic method is found to be more noise-tolerant and better at handling process-model inaccuracies. The proof is seen in better reconstructions from experimental data for two example objects, namely a graded-index optical fibre and a photonic-crystal fibre. It is further observed that the reconstructions of the photonic-crystal fibre are blurred, noisy and less accurate. Identifying the inaccurate implementation of the modified Helmholtz equation for large k values at the current sampling rate as the shortcoming, a new procedure, which splits the bandwidth into smaller components using the short-time Fourier transform, is developed.
The set of equations arrived at, each fit for a narrow frequency band, is solved, and the solutions are reassembled to obtain the scattered field for the original problem. The simulated diffracted intensities so obtained are better matched to their measured experimental counterparts. However, the implementation of the modified procedure is computation-intensive, for which a parallel-processing machine can be a good solution. The recovery of the RID with this modification is not attempted in this work and is left for future implementation.
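The Gauss-Newton update at the heart of the iterative scheme above (linearize the forward model at the current estimate, solve the normal equations, re-linearize) can be sketched on a one-parameter toy problem. The exponential forward model and the data below are illustrative stand-ins, not the thesis's Helmholtz-based model.

```python
import math

def gauss_newton(ts, ys, m0=0.1, iters=20):
    """Fit the toy forward model g(m; t) = exp(m * t) to data by
    repeated linearization (Gauss-Newton) in the single parameter m."""
    m = m0
    for _ in range(iters):
        r = [y - math.exp(m * t) for t, y in zip(ts, ys)]   # residuals
        J = [t * math.exp(m * t) for t in ts]               # Jacobian dg/dm
        num = sum(j * ri for j, ri in zip(J, r))
        den = sum(j * j for j in J)
        m += num / den          # 1-D normal equations: (J^T J) dm = J^T r
    return m

ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.7 * t) for t in ts]    # noise-free synthetic data, m = 0.7
m_est = gauss_newton(ts, ys)
```

On noise-free data the iteration recovers the true parameter to machine precision in a handful of steps; the ensemble Kalman filter variant mentioned above replaces the explicit Jacobian with sample covariances over an ensemble, which is what buys the extra noise tolerance.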
APA, Harvard, Vancouver, ISO, and other styles