Academic literature on the topic 'String algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'String algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "String algorithm"

1

Bhagya Sri, Mukku, Rachita Bhavsar, and Preeti Narooka. "String Matching Algorithms." International Journal Of Engineering And Computer Science 7, no. 03 (March 23, 2018): 23769–72. http://dx.doi.org/10.18535/ijecs/v7i3.19.

Full text
Abstract:
To analyze the content of documents, pattern matching algorithms are used to find all occurrences of a limited set of patterns within an input text or document. To perform this task, this research work used four existing string matching algorithms: the Brute Force algorithm, the Knuth-Morris-Pratt (KMP) algorithm, the Boyer-Moore algorithm and the Rabin-Karp algorithm. This work also proposes three new string matching algorithms: an Enhanced Boyer-Moore algorithm, an Enhanced Rabin-Karp algorithm and an Enhanced Knuth-Morris-Pratt algorithm. Findings: For experimentation, this work used two types of documents, i.e. .txt and .docx. The performance measures used are search time, number of iterations and accuracy. From the experimental results, it is observed that the enhanced KMP algorithm gives better accuracy than the other string matching algorithms. Application/Improvements: These algorithms are typically used in the fields of text mining, document classification, content analysis and plagiarism detection. In future work, these algorithms will be further enhanced to improve their performance, and other types of documents will be used for experimentation.
APA, Harvard, Vancouver, ISO, and other styles
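Editorial note: as a quick illustration of the kind of algorithm surveyed above, here is a minimal Knuth-Morris-Pratt search in Python. It is a generic textbook sketch, not the enhanced variants proposed in the paper.

```python
def kmp_search(text: str, pattern: str) -> list[int]:
    """Return the starting indices of all occurrences of pattern in text."""
    if not pattern:
        return []
    # Failure function: length of the longest proper border of each pattern prefix.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text, reusing the failure function so characters are never re-examined.
    matches, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)
            k = fail[k - 1]
    return matches

print(kmp_search("abracadabra", "abra"))  # [0, 7]
```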
2

Zhang, Zhaoyang. "Review on String-Matching Algorithm." SHS Web of Conferences 144 (2022): 03018. http://dx.doi.org/10.1051/shsconf/202214403018.

Full text
Abstract:
String matching is one of the most researched problems in computer science and has become an important component of many technologies. This field aims at using the least time and resources to find a desired sequence of characters in complex data content. The most classical and famous string-search algorithms are the Knuth-Morris-Pratt (KMP) algorithm and the Boyer-Moore (BM) algorithm. These two algorithms provide efficient heuristic jump rules based on the prefix or suffix of the pattern. The Bitap algorithm was the first to introduce bit-parallelism into the string-matching field. The Backward Non-Deterministic DAWG Matching (BNDM) algorithm is a modern practical algorithm that is an outstanding combination of theoretical research and practical application. These meaningful algorithms play a guiding role in future research on string-search algorithms aimed at improving average performance and reducing resource consumption.
APA, Harvard, Vancouver, ISO, and other styles
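Editorial note: the bit-parallelism mentioned in this review is easy to demonstrate. Below is a minimal Shift-And (Bitap-style) exact matcher in Python, a generic sketch for patterns no longer than a machine word, not code from the reviewed works.

```python
def shift_and_search(text: str, pattern: str) -> list[int]:
    """Exact matching with the Shift-And bit-parallel technique (pattern length <= 64)."""
    m = len(pattern)
    if m == 0 or m > 64:
        raise ValueError("pattern must have length 1..64 for this sketch")
    # Precompute a bit mask per character: bit j is set if pattern[j] == c.
    masks = {}
    for j, c in enumerate(pattern):
        masks[c] = masks.get(c, 0) | (1 << j)
    accept = 1 << (m - 1)   # bit that signals a full match
    state = 0               # bit j set <=> pattern[0..j] matches the text ending here
    matches = []
    for i, c in enumerate(text):
        state = ((state << 1) | 1) & masks.get(c, 0)
        if state & accept:
            matches.append(i - m + 1)
    return matches

print(shift_and_search("gcatcgcagagagtatacagtacg", "gcag"))  # [5]
```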
3

Russo, Luıs, and Alexandre Francisco. "Small Longest Tandem Scattered Subsequences." Scientific Annals of Computer Science 31, no. 1 (August 9, 2021): 79–110. http://dx.doi.org/10.7561/sacs.2021.1.79.

Full text
Abstract:
We consider the problem of identifying tandem scattered subsequences within a string. Our algorithm identifies a longest subsequence which occurs twice without overlap in a string. The algorithm is based on the Hunt-Szymanski algorithm, so its performance improves when the string is not self-similar, which occurs naturally for strings over large alphabets. Our algorithm relies on new results for data structures that support dynamic longest increasing subsequences. In the process we also obtain improved algorithms for the decremental string comparison problem.
APA, Harvard, Vancouver, ISO, and other styles
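Editorial note: the problem itself can be stated very compactly. A longest subsequence occurring twice without overlap is a longest common subsequence between some prefix s[:i] and the matching suffix s[i:]. The cubic-time baseline below (not the paper's Hunt-Szymanski-based algorithm) simply tries every split point; it is a sketch for small inputs only.

```python
def lcs_length(a: str, b: str) -> int:
    """Classic O(|a|*|b|) longest common subsequence length with a rolling row."""
    dp = [0] * (len(b) + 1)
    for x in a:
        prev = 0
        for j, y in enumerate(b, start=1):
            cur = dp[j]
            dp[j] = prev + 1 if x == y else max(dp[j], dp[j - 1])
            prev = cur
    return dp[len(b)]

def longest_tandem_scattered_subsequence(s: str) -> int:
    """Length of a longest subsequence occurring twice without overlap (brute-force baseline)."""
    return max((lcs_length(s[:i], s[i:]) for i in range(1, len(s))), default=0)

print(longest_tandem_scattered_subsequence("abcabc"))  # 3 ("abc" occurs twice disjointly)
```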
4

Jantan, Hamidah, and Nurul Aisyiah Baharudin. "Mobile-Based Word Matching Detection using Intelligent Predictive Algorithm." International Journal of Interactive Mobile Technologies (iJIM) 13, no. 09 (September 5, 2019): 140. http://dx.doi.org/10.3991/ijim.v13i09.10848.

Full text
Abstract:
Word matching is a string searching technique for information retrieval in Natural Language Processing (NLP). Several algorithms have been used for string search and matching, such as Knuth-Morris-Pratt, Boyer-Moore, Horspool, Intelligent Predictive and many others. However, some issues need to be considered when measuring the performance of these algorithms, such as efficiency when searching over small alphabets, the time taken to process the pattern, and the extra space needed to support large tables or state machines. The Intelligent Predictive (IP) algorithm is capable of solving several word matching issues found in other string searching algorithms; in particular, it skips the pre-processing of the pattern, uses simple rules during the matching process and does not involve complex computations. For these reasons, the IP algorithm is used in this study, owing to its ability to produce good results in the string searching process. This article aims to apply the IP algorithm together with an Optical Character Recognition (OCR) tool for mobile-based word matching detection. The study consists of four phases: data preparation, mobile-based system design, algorithm implementation and result analysis. The efficiency of the proposed algorithm was evaluated based on the execution time of the searching process among the selected algorithms. The results show that the IP algorithm is more efficient in execution time than the well-known Boyer-Moore algorithm. In future work, the performance of the string searching process can be enhanced by using other suitable optimization techniques such as Genetic Algorithms, Particle Swarm Optimization, Ant Colony Optimization and many others.
APA, Harvard, Vancouver, ISO, and other styles
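Editorial note: the Intelligent Predictive algorithm itself is not specified in enough detail in the abstract to reproduce, but the Boyer-Moore-style baselines it is compared against are standard. A minimal Boyer-Moore-Horspool matcher (one of the algorithms named above) might look like this sketch.

```python
def horspool_search(text: str, pattern: str) -> list[int]:
    """Boyer-Moore-Horspool: skip ahead using the last character of the current window."""
    m, n = len(pattern), len(text)
    if m == 0 or n < m:
        return []
    # Shift table: distance from the last occurrence of each character to the pattern end.
    shift = {c: m - 1 - j for j, c in enumerate(pattern[:-1])}
    matches, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            matches.append(i)
        # Shift by the table entry for the window's last character (m if the character is unseen).
        i += shift.get(text[i + m - 1], m)
    return matches

print(horspool_search("here is a simple example", "example"))  # [17]
```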
5

Khadiev, Kamil, Artem Ilikaev, and Jevgenijs Vihrovs. "Quantum Algorithms for Some Strings Problems Based on Quantum String Comparator." Mathematics 10, no. 3 (January 26, 2022): 377. http://dx.doi.org/10.3390/math10030377.

Full text
Abstract:
We study algorithms for solving three problems on strings: sorting n strings of length k, the Most Frequent String Search Problem, and searching for the intersection of two sequences of strings. We construct quantum algorithms that are faster than their classical (randomized or deterministic) counterparts for each of these problems. The quantum algorithms are based on a quantum procedure for comparing two strings of length k in O(√k) queries. The first problem is sorting n strings of length k. We show that the classical complexity of the problem is Θ(nk) for a constant-size alphabet, whereas our quantum algorithm has Õ(n√k) complexity. The second is searching for the most frequent string among n strings of length k. We show that the classical complexity of the problem is Θ(nk), whereas our quantum algorithm has Õ(n√k) complexity. The third problem is searching for the intersection of two sequences of strings, where all strings have the same length k, the size of the first set is n, and the size of the second set is m. We show that the classical complexity of the problem is Θ((n+m)k), whereas our quantum algorithm has Õ((n+m)√k) complexity.
APA, Harvard, Vancouver, ISO, and other styles
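Editorial note: the quantum procedures themselves cannot be reproduced in a few lines, but the classical Θ(nk) counterpart for the second problem (the Most Frequent String Search Problem) is simple: hash every string and count. The sketch below is only that classical baseline, shown for contrast.

```python
from collections import Counter

def most_frequent_string(strings: list[str]) -> tuple[str, int]:
    """Classical baseline: read all n strings of length k once, i.e. Theta(nk) work."""
    counts = Counter(strings)                # hashing each string touches all k characters
    winner, freq = counts.most_common(1)[0]  # string with the highest multiplicity
    return winner, freq

print(most_frequent_string(["aba", "abc", "aba", "bbb"]))  # ('aba', 2)
```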
6

Franek, Frantisek, and Michael Liut. "Computing Maximal Lyndon Substrings of a String." Algorithms 13, no. 11 (November 12, 2020): 294. http://dx.doi.org/10.3390/a13110294.

Full text
Abstract:
There are two reasons to have an efficient algorithm for identifying all right-maximal Lyndon substrings of a string: firstly, Bannai et al. introduced in 2015 a linear algorithm to compute all runs of a string that relies on knowing all right-maximal Lyndon substrings of the input string, and secondly, Franek et al. showed in 2017 a linear equivalence of sorting suffixes and sorting right-maximal Lyndon substrings of a string, inspired by a novel suffix sorting algorithm of Baier. In 2016, Franek et al. presented a brief overview of algorithms for computing the Lyndon array that encodes the knowledge of right-maximal Lyndon substrings of the input string. Among those presented were two well-known algorithms for computing the Lyndon array: a quadratic in-place algorithm based on the iterated Duval algorithm for Lyndon factorization and a linear algorithmic scheme based on linear suffix sorting, computing the inverse suffix array, and applying to it the next smaller value algorithm. Duval's algorithm works for strings over any ordered alphabet, while for linear suffix sorting, a constant or an integer alphabet is required. The authors at that time were not aware of Baier's algorithm. In 2017, our research group proposed a novel algorithm for the Lyndon array. Though the proposed algorithm is linear in the average case and has O(n log n) worst-case complexity, it is interesting as it emulates the fast Fourier algorithm's recursive approach and introduces τ-reduction, which might be of independent interest. In 2018, we presented a linear algorithm to compute the Lyndon array of a string inspired by Phase I of Baier's algorithm for suffix sorting. This paper presents the theoretical analysis of these two algorithms and provides empirical comparisons of both of their C++ implementations with respect to the iterated Duval algorithm.
APA, Harvard, Vancouver, ISO, and other styles
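Editorial note: the Lyndon array discussed above has a very direct (if quadratic) construction: the value at position i is the length of the longest Lyndon prefix of the suffix starting at i, which can be computed with Duval's single-factor test. This is a plain sketch of that iterated-Duval idea, not the linear or FFT-style algorithms from the paper.

```python
def lyndon_array(s: str) -> list[int]:
    """lam[i] = length of the longest Lyndon prefix of s[i:] (iterated-Duval sketch, O(n^2) worst case)."""
    n = len(s)
    lam = [1] * n
    for i in range(n):
        # Duval's test: extend j while s[i:j+1] is still a prefix of a power of a Lyndon word.
        j, k = i + 1, i
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        lam[i] = j - k  # length of the first Lyndon factor of s[i:]
    return lam

print(lyndon_array("banana"))  # [1, 2, 1, 2, 1, 1]
```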
7

Tsarev, Roman Yu, Elena A. Tsareva, and Alexey S. Chernigovskiy. "Combined String Searching Algorithm." Journal of Siberian Federal University. Engineering & Technologies 10, no. 1 (February 2017): 126–35. http://dx.doi.org/10.17516/1999-494x-2017-10-1-126-135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Subada, Mhd Ali. "Comparisonal Analysis Of Even-Rodeh Algorithm Code And Fibonacci Code Algorithm For Text File Compression." Journal Basic Science and Technology 11, no. 1 (February 28, 2022): 1–7. http://dx.doi.org/10.35335/jbst.v11i1.1765.

Full text
Abstract:
The growing requirement for more storage space is the main driver behind the development of new compression techniques. Through compression, large chunks of data are reduced in size to save storage space. In this research we use the Even-Rodeh Code and the Fibonacci Code algorithm; their performance is benchmarked with respect to bitrate, compression ratio, space saving, and compression and decompression time on text files. Compression is performed by reading the strings in text files, after which the Even-Rodeh Code and Fibonacci Code algorithms build a string code and perform the compression. The compression results are *.erc and *.fib files containing character information and bit strings which can be decompressed. The decompression result is the original text file, which can be saved using the extensions *.doc, *.docx or *.pdf. In the system test, the samples used are strings that contain one type of character (homogeneous strings) and strings that contain several types of characters (heterogeneous strings), saved as *.doc, *.docx and *.pdf text files. For homogeneous string compression we observe that the Fibonacci Code is more efficient in compression ratio, averaging 0.25, with a faster average decompression time of 0.011 milliseconds compared to the Even-Rodeh Code algorithm. For heterogeneous string compression the Fibonacci Code also performs better than the Even-Rodeh Code, with an average compression ratio of 0.65 and a shorter average decompression time of 0.293 milliseconds.
APA, Harvard, Vancouver, ISO, and other styles
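Editorial note: the Fibonacci code used for comparison above is a universal code that is easy to sketch. A positive integer is written as a sum of non-consecutive Fibonacci numbers (its Zeckendorf representation) and terminated with an extra 1 bit. This is a generic illustration, not the authors' implementation.

```python
def fibonacci_encode(n: int) -> str:
    """Fibonacci (Zeckendorf) codeword for a positive integer n, ending in '11'."""
    if n < 1:
        raise ValueError("Fibonacci coding is defined for positive integers")
    fibs = [1, 2]                       # F(2), F(3), ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = []
    remainder = n
    for f in reversed(fibs[:-1]):       # greedy Zeckendorf decomposition
        if f <= remainder:
            bits.append("1")
            remainder -= f
        else:
            bits.append("0")
    code = "".join(reversed(bits))      # least significant Fibonacci term first
    return code + "1"                   # trailing '1' creates the '11' terminator

# 1 -> '11', 2 -> '011', 3 -> '0011', 4 -> '1011', 11 -> '001011'
print([fibonacci_encode(k) for k in (1, 2, 3, 4, 11)])
```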
9

Ghuman, Sukhpal, Emanuele Giaquinta, and Jorma Tarhio. "Lyndon Factorization Algorithms for Small Alphabets and Run-Length Encoded Strings." Algorithms 12, no. 6 (June 21, 2019): 124. http://dx.doi.org/10.3390/a12060124.

Full text
Abstract:
We present two modifications of Duval's algorithm for computing the Lyndon factorization of a string. One of the algorithms has been designed for strings containing runs of the smallest character. It works best for small alphabets and is able to skip a significant number of characters of the string. Moreover, it can be engineered to have linear time complexity in the worst case. Given a run-length encoded string R of length ρ, the other algorithm computes the Lyndon factorization of R in O(ρ) time and constant space. Experimental results show that the new variations are faster than Duval's original algorithm in many scenarios.
APA, Harvard, Vancouver, ISO, and other styles
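Editorial note: Duval's original algorithm, which both variants above modify, fits in a few lines. This is the textbook linear-time, constant-extra-space version in Python, not the run-length-encoded or small-alphabet variants from the paper.

```python
def duval_factorization(s: str) -> list[str]:
    """Duval's algorithm: factor s into a non-increasing sequence of Lyndon words in O(n) time."""
    factors, i, n = [], 0, len(s)
    while i < n:
        j, k = i + 1, i
        # Grow the window while it is still a prefix of a power of a Lyndon word.
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        # Emit the complete Lyndon factors of length j - k found so far.
        while i <= k:
            factors.append(s[i:i + j - k])
            i += j - k
    return factors

print(duval_factorization("banana"))  # ['b', 'an', 'an', 'a']
```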
10

Markić, Ivan, Maja Štula, Marija Zorić, and Darko Stipaničev. "Entropy-Based Approach in Selection Exact String-Matching Algorithms." Entropy 23, no. 1 (December 28, 2020): 31. http://dx.doi.org/10.3390/e23010031.

Full text
Abstract:
The string-matching paradigm is applied in every branch of computer science, and of science in general. The existence of a plethora of string-matching algorithms makes it hard to choose the best one for any particular case. Expressing, measuring, and testing algorithm efficiency is a challenging task with many potential pitfalls. Algorithm efficiency can be measured based on the usage of different resources. In software engineering, algorithmic productivity is a property of an algorithm execution identified with the computational resources the algorithm consumes. Resource usage in algorithm execution can be determined, and for maximum efficiency the goal is to minimize it. Standard measures of algorithm efficiency, such as execution time, depend directly on the number of executed actions. Without touching on the problematics of power consumption or memory, which also depend on the algorithm type and the techniques used in algorithm development, we have developed a methodology which enables researchers to choose an efficient algorithm for a specific domain. The efficiency of string searching algorithms is usually observed independently of the domain texts being searched. This paper presents the idea that algorithm efficiency depends on the properties of the searched string and of the texts being searched, accompanied by a theoretical analysis of the proposed approach. In the proposed methodology, algorithm efficiency is expressed through a character comparison count metric, a formal quantitative measure independent of algorithm implementation subtleties and computer platform differences. The model is developed for a particular problem domain by using appropriate domain data (patterns and texts) and provides, for that domain, a ranking of algorithms according to the patterns' entropy. The proposed approach is limited to on-line exact string-matching problems and is based on the information entropy of the search pattern. Meticulous empirical testing illustrates the methodology and supports its soundness.
APA, Harvard, Vancouver, ISO, and other styles
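Editorial note: the two quantities the methodology combines, a character comparison count and the Shannon entropy of the search pattern, are both straightforward to instrument. The sketch below counts comparisons for a naive matcher and computes pattern entropy; it illustrates the metric only, not the authors' ranking model.

```python
import math
from collections import Counter

def naive_search_with_count(text: str, pattern: str) -> tuple[list[int], int]:
    """Naive exact matching that also reports the number of character comparisons made."""
    matches, comparisons = [], 0
    for i in range(len(text) - len(pattern) + 1):
        for j, p in enumerate(pattern):
            comparisons += 1
            if text[i + j] != p:
                break
        else:
            matches.append(i)
    return matches, comparisons

def pattern_entropy(pattern: str) -> float:
    """Shannon entropy (bits per character) of the pattern's character distribution."""
    counts = Counter(pattern)
    total = len(pattern)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

positions, cost = naive_search_with_count("abracadabra", "abra")
print(positions, cost, round(pattern_entropy("abra"), 3))  # [0, 7] 16 1.5
```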

Dissertations / Theses on the topic "String algorithm"

1

Berry, Thomas. "Algorithm engineering : string processing." Thesis, Liverpool John Moores University, 2002. http://researchonline.ljmu.ac.uk/4973/.

Full text
Abstract:
The string matching problem has attracted a lot of interest throughout the history of computer science, and is crucial to the computing industry. The theoretical community in computer science has developed a rich literature on the design and analysis of string matching algorithms. To date, most of this work has been based on the asymptotic analysis of the algorithms. This analysis rarely tells us how an algorithm will perform in practice, and considerable experimentation and fine-tuning is typically required to get the most out of a theoretical idea. In this thesis, promising string matching algorithms discovered by the theoretical community are implemented, tested and refined to the point where they can be usefully applied in practice. In the course of this work we present the following new algorithms. We prove that the time complexity of the new algorithms is linear in the average case, and we compare the new algorithms with existing algorithms by experimentation. We implemented the existing one-dimensional string matching algorithms for English texts; from the experimental results we identified the best two algorithms, combined them, and introduce a new algorithm. We developed a new two-dimensional string matching algorithm which uses the structure of the pattern to reduce the number of comparisons required to search for the pattern. We describe a method for efficiently storing text; although this reduces the size of the storage space, it is not a compression method as in the literature, since our aim is to improve both the space and the time taken by a string matching algorithm, and our new algorithm searches for patterns in the efficiently stored text without decompressing it. We illustrate that by pre-processing the text we can improve the speed of the string matching algorithm when searching for a large number of patterns in a given text. Finally, we propose a hardware solution for searching in an efficiently stored DNA text.
APA, Harvard, Vancouver, ISO, and other styles
2

MacLeod, Christopher. "The synthesis of artificial neural networks using single string evolutionary techniques." Thesis, Robert Gordon University, 1999. http://hdl.handle.net/10059/367.

Full text
Abstract:
The research presented in this thesis is concerned with optimising the structure of Artificial Neural Networks. These techniques are based on computer modelling of biological evolution or foetal development. They are known as Evolutionary, Genetic or Embryological methods. Specifically, Embryological techniques are used to grow Artificial Neural Network topologies. The Embryological Algorithm is an alternative to the popular Genetic Algorithm, which is widely used to achieve similar results. The algorithm grows in the sense that the network structure is added to incrementally and thus changes from a simple form to a more complex form. This is unlike the Genetic Algorithm, which causes the structure of the network to evolve in an unstructured or random way. The thesis outlines the following original work: The operation of the Embryological Algorithm is described and compared with the Genetic Algorithm. The results of an exhaustive literature search in the subject area are reported. The growth strategies which may be used to evolve Artificial Neural Network structure are listed. These growth strategies are integrated into an algorithm for network growth. Experimental results obtained from using such a system are described and there is a discussion of the applications of the approach. Consideration is given to the advantages and disadvantages of this technique and suggestions are made for future work in the area. A new learning algorithm based on Taguchi methods is also described. The report concludes that the method of incremental growth is a useful and powerful technique for defining neural network structures and is more efficient than its alternatives. Recommendations are also made with regard to the types of network to which this approach is best suited. Finally, the report contains a discussion of two important aspects of Genetic or Evolutionary techniques related to the above. These are Modular networks (and their synthesis) and the functionality of the network itself.
APA, Harvard, Vancouver, ISO, and other styles
3

Dubois, Simon. "Offline Approximate String Matching forInformation Retrieval : An experiment on technical documentation." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH. Forskningsmiljö Informationsteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-22566.

Full text
Abstract:
Approximate string matching consists in identifying strings as similar even if there is a number of mismatches between them. This technique is one of the solutions to reduce the strictness of exact matching in data comparison. In many cases it is useful to identify stream variation (e.g. audio) or word declension (e.g. prefix, suffix, plural). Approximate string matching can be used to score terms in Information Retrieval (IR) systems. The benefit is to return results even if query terms do not exactly match indexed terms. However, as approximate string matching algorithms only consider characters (neither context nor meaning), there is no guarantee that additional matches are relevant matches. This paper presents the effects of some approximate string matching algorithms on search results in IR systems. An experimental research design has been conducted to evaluate such effects from two perspectives. First, result relevance is analysed with precision and recall. Second, performance is measured through the execution time required to compute matches. Six approximate string matching algorithms are studied. Levenshtein and Damerau-Levenshtein compute the edit distance between two terms. Soundex and Metaphone index terms based on their pronunciation. Jaccard similarity calculates the overlap coefficient between two strings. Tests are performed through IR scenarios with different contexts, information needs and search queries, designed to query technical documentation related to software development (man pages from Ubuntu). A purposive sample is selected to assess document relevance to the IR scenarios and compute IR metrics (precision, recall, F-measure). Experiments reveal that all tested approximate matching methods increase recall on average, but, except for Metaphone, they also decrease precision. Soundex and Jaccard similarity are not advised because they fail on too many IR scenarios. The highest recall is obtained by the edit distance algorithms, which are also the most time consuming. Because Damerau-Levenshtein yields no significant improvement over Levenshtein but costs much more time, the latter is recommended for use with specialised documentation. Finally, some other related recommendations are given to practitioners to implement IR systems on technical documentation.
APA, Harvard, Vancouver, ISO, and other styles
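Editorial note: of the methods compared above, the edit distance family is the easiest to show in code. Here is a standard dynamic-programming Levenshtein distance in Python, a generic sketch rather than the thesis's evaluation harness.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions and substitutions turning a into b."""
    prev = list(range(len(b) + 1))           # distances from "" to prefixes of b
    for i, x in enumerate(a, start=1):
        cur = [i]                            # deleting i characters of a
        for j, y in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution (free on a match)
        prev = cur
    return prev[len(b)]

print(levenshtein("kitten", "sitting"))  # 3
```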
4

Frey, Jeffrey Daniel. "Finding Song Melody Similarities Using a DNA String Matching Algorithm." Kent State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=kent1208961242.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Gundu, Pavan Kumar. "Trajectory Tracking Control of Unmanned Ground Vehicles using an Intermittent Learning Algorithm." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/93213.

Full text
Abstract:
Traffic congestion and safety has become a major issue in the modern world's commute. Congestion has been causing people to travel billions of hours more and to purchase billions of gallons of fuel extra which account to congestion cost of billions of dollars. Autonomous driving vehicles have been one solution to this problem because of their huge impact on efficiency, pollution, and human safety. Also, extensive research has been carried out on control design of vehicular platoons because a further improvement in traffic throughput while not compromising the safety is possible when the vehicles in the platoon are provided with better predictive abilities. Motion control is a key area of autonomous driving research that handles moving parts of vehicles in a deliberate and controlled manner. A widely worked on problem in motion control concerned with time parameterized reference tracking is trajectory tracking. Having an efficient and effective tracking algorithm embedded in the autonomous driving system is the key for better performance in terms of resources consumed and tracking error. Many tracking control algorithms in literature rely on an accurate model of the vehicle and often, it can be an intimidating task to come up with an accurate model taking into consideration various conditions like friction, heat effects, ageing processes etc. And typically, control algorithms rely on periodic execution of the tasks that update the control actions, but such updates might not be required, which result in unnecessary actions that waste resources. The main focus of this work is to design an intermittent model-free optimal control algorithm in order to enable autonomous vehicles to track trajectories at high-speeds. To obtain a solution which is model-free, a Q-learning setup with an actor-network to approximate the optimal intermittent controller and a critic network to approximate the optimal cost, resulting in the appropriate tuning laws is considered.
Master of Science
A risen research effort in the area of autonomous vehicles has been witnessed in the past few decades because these systems improve safety, comfort, transport time and energy consumption which are some of the main issues humans are facing in the modern world’s highway systems. Systems like emergency braking, automatic parking, blind angle vehicle detection are creating a safer driving environment in populated areas. Advanced driver assistance systems (ADAS) are what such kind of systems are known as. An extension of these partially automated ADAS are vehicles with fully automated driving abilities, which are able to drive by themselves without any human involvement. An extensively proposed approach for making traffic throughput more efficient on existing highways is to assemble autonomous vehicles into platoons. Small intervehicle spacing and many vehicles constituting each platoon formation improve the traffic throughput significantly. Lately, the advancements in computational capabilities, in terms of both algorithms and hardware, communications, and navigation and sensing devices contributed a lot to the development of autonomous systems (both single and multiagent) that operate with high reliability in uncertain/dynamic operating conditions and environments. Motion control is an important area in the autonomous vehicles research. Trajectory-tracking is a widely studied motion control scenario which is about designing control laws that force a system to follow some time-dependent reference path and it is important to have an effective and efficient trajectory-tracking control law in an autonomous vehicle to reduce the resources consumed and tracking error. The goal of this work is to design an intermittent model-free trajectory tracking control algorithm where there is no need of any mathematical model of the vehicle system being controlled and which can reduce the controller updates by allowing the system to evolve in an open loop fashion and close the loop only when an user defined triggering condition is satisfied. The approach is energy efficient in that the control updates are limited to instances when they are needed rather than unnecessary periodic updates. Q-learning which is a model-free reinforcement learning technique is used in the trajectory tracking motion control algorithm to make the vehicles track their respective reference trajectories without any requirement of their motion model, the knowledge of which is generally needed when dealing with a motion control problem. The testing of the designed algorithm in simulations and experiments is presented in this work. The study and development of a vehicle platform in order to perform the experiments is also discussed. Different motion control and sensing techniques are presented and used. The vehicle platform is shown to track a reference trajectory autonomously without any human intervention, both in simulations and experiments, proving the effectiveness of the proposed algorithm.
APA, Harvard, Vancouver, ISO, and other styles
6

Momeninasab, Leila. "Design and Implementation of a Name Matching Algorithm for Persian Language." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-102210.

Full text
Abstract:
Name matching plays a vital role in many applications. It is used, for example, in information retrieval or deduplication systems to compare names, match them together, or find names that refer to identical objects, persons or companies. Since names in every application are subject to variations and errors that are unavoidable in any system, and because of the importance of name matching, many algorithms have been developed to handle the matching of names. These algorithms consider the name variations that may arise from spelling, pattern or phonetic modifications. However, most existing methods were developed for use with the English language and cover the characteristics of that language; up to now no specific method has been designed and implemented for the Persian language. The purpose of this thesis is to present a name matching algorithm for Persian. In this project, after considering all major algorithms in this area, we selected one of the basic methods for name matching and expanded it to work particularly well for Persian names. The proposed algorithm, called the Persian Edit Distance Algorithm (PEDA), was built on the characteristics of the Persian language and compares Persian names with each other on three levels: phonetic similarity, character form similarity and keyboard distance, in order to give more accurate results for Persian names. The algorithm takes Persian names as input and reports their similarity as a percentage. Three series of experiments were carried out to evaluate the proposed algorithm. The average f-measure is 0.86 for the first series and 0.80 for the second series. The first series of experiments was also repeated with Levenshtein, which produced 33.9% false negatives on average, while PEDA has an average false negative rate of 6.4%. The third series of experiments shows that PEDA works well for one, two and three edits, with true positive averages of 99%, 81% and 69% respectively.
APA, Harvard, Vancouver, ISO, and other styles
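Editorial note: the exact cost model of PEDA is not given in this abstract, so the sketch below only illustrates the general idea of biasing edit distance with language-specific substitution weights. The cost table here is hypothetical, not PEDA's actual phonetic, character-form or keyboard weights.

```python
# Hypothetical substitution costs: visually or phonetically close pairs are cheaper to swap.
CHEAP_PAIRS = {("c", "k"), ("k", "c"), ("i", "y"), ("y", "i")}

def substitution_cost(x: str, y: str) -> float:
    if x == y:
        return 0.0
    return 0.4 if (x, y) in CHEAP_PAIRS else 1.0

def weighted_edit_distance(a: str, b: str) -> float:
    """Levenshtein-style DP where substitutions use a custom (here hypothetical) cost table."""
    prev = [float(j) for j in range(len(b) + 1)]
    for i, x in enumerate(a, start=1):
        cur = [float(i)]
        for j, y in enumerate(b, start=1):
            cur.append(min(prev[j] + 1.0,
                           cur[j - 1] + 1.0,
                           prev[j - 1] + substitution_cost(x, y)))
        prev = cur
    return prev[len(b)]

def similarity_percent(a: str, b: str) -> float:
    """Report similarity as a percentage, as PEDA does for its own (different) distance."""
    longest = max(len(a), len(b)) or 1
    return 100.0 * (1.0 - weighted_edit_distance(a, b) / longest)

print(round(similarity_percent("cathy", "kathy"), 1))  # 92.0 with the toy costs above
```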
7

BERNARDINI, GIULIA. "COMBINATORIAL METHODS FOR BIOLOGICAL DATA." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2021. http://hdl.handle.net/10281/305220.

Full text
Abstract:
The purpose of this thesis is to develop and analyse mathematically rigorous methods for the analysis of two types of biological data: pan-genomic data and phylogenies. The term "pan-genome" denotes, in general, a set of closely related genomic sequences (typically belonging to individuals of the same species) that are to be used jointly as reference sequences for an entire population. A phylogeny, on the other hand, represents the evolutionary relationships within a group of entities, be they living beings, genes, natural languages, ancient manuscripts or tumour cells. With the exception of one of the results presented in this thesis, related to the analysis of tumour phylogenies, the dissertation is mainly theoretical: the aim is to study the combinatorial aspects of the problems addressed, rather than to provide solutions that are effective in practice. A deep knowledge of the theoretical aspects of a problem, after all, allows a mathematically rigorous analysis of existing solutions, identifying their weak and strong points, providing valuable details on how they work and helping to decide which problems deserve further investigation. Moreover, it is often the case that new theoretical results (algorithms, data structures or reductions to other better-known problems) can be directly applied or adapted as solutions to a practical problem, or at the very least serve to inspire the development of new methods that are effective in practice. The first part of the thesis is devoted to new methods for performing fundamental operations on an elastic-degenerate text, a computational object that compactly encodes a set of similar texts, such as a pan-genome. Specifically, we address the problem of searching for a sequence of letters in an elastic-degenerate text, both exactly and tolerating a fixed number of errors, and the problem of comparing two degenerate texts. In the second part we consider both tumour phylogenies, which reconstruct the evolution of a tumour, and "classical" phylogenies, which represent, for example, the evolutionary history of living species. In particular, we present new techniques for comparing two or more tumour phylogenies, needed to evaluate the results of different methods that reconstruct the phylogenies themselves, and a new, more efficient solution to a longstanding problem on "classical" phylogenies, namely deciding whether, in the presence of missing data, it is possible to arrange a set of species in a phylogenetic tree that has certain properties.
The main goal of this thesis is to develop new algorithmic frameworks to deal with (i) a convenient representation of a set of similar genomes and (ii) phylogenetic data, with particular attention to the increasingly accurate tumor phylogenies. A “pan-genome” is, in general, any collection of genomic sequences to be analyzed jointly or to be used as a reference for a population. A phylogeny, in turn, is meant to describe the evolutionary relationships among a group of items, be they species of living beings, genes, natural languages, ancient manuscripts or cancer cells. With the exception of one of the results included in this thesis, related to the analysis of tumor phylogenies, the focus of the whole work is mainly theoretical, the intent being to lay firm algorithmic foundations for the problems by investigating their combinatorial aspects, rather than to provide practical tools for attacking them. Deep theoretical insights on the problems allow a rigorous analysis of existing methods, identifying their strong and weak points, providing details on how they perform and helping to decide which problems need to be further addressed. In addition, it is often the case where new theoretical results (algorithms, data structures and reductions to other well-studied problems) can either be directly applied or adapted to fit the model of a practical problem, or at least they serve as inspiration for developing new practical tools. The first part of this thesis is devoted to methods for handling an elastic-degenerate text, a computational object that compactly encodes a collection of similar texts, like a pan-genome. Specifically, we attack the problem of matching a sequence in an elastic-degenerate text, both exactly and allowing a certain amount of errors, and the problem of comparing two degenerate texts. In the second part we consider both tumor phylogenies, describing the evolution of a tumor, and “classical” phylogenies, representing, for instance, the evolutionary history of the living beings. In particular, we present new techniques to compare two or more tumor phylogenies, needed to evaluate the results of different inference methods, and we give a new, efficient solution to a longstanding problem on “classical” phylogenies: to decide whether, in the presence of missing data, it is possible to arrange a set of species in a phylogenetic tree that enjoys specific properties.
APA, Harvard, Vancouver, ISO, and other styles
8

Moradi, Arvin. "Smart Clustering System for Filtering and Cleaning User Generated Content : Creating a profanity filter for Truecaller." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-124408.

Full text
Abstract:
This thesis focuses on investigating and creating an application for filtering user-generated content. The method was to examine how profanity and racist expressions are used and manipulated to evade filtering processes in similar systems. Focus was also placed on studying different algorithms to make this process quick and efficient, i.e., able to process as many names as possible in the shortest amount of time. This is because the client needs to filter millions of new uploads every day. The results show that the application detects profanity, including manipulated profanity. Data from the customer's database was also used for testing purposes, and the results showed that the application also works in practice. The performance test shows that the application has a fast execution time. We could see this by approximating it with a linear function with respect to time and the number of names entered. The conclusion is that the filter works and discovers profanity not detected earlier. Future updates to strengthen the decision process could introduce a third-party service, or a web interface where decisions can be controlled manually. Execution time is good and shows that 10 million names can be processed in about 6 hours. In the future, queries to the database could be parallelized so that multiple names can be processed simultaneously.
This thesis focuses on investigating and creating an application for filtering user-generated content. The method was to examine how profanity and racist expressions are used and manipulated to evade filtering processes in similar systems. Focus was also placed on studying different algorithms to make this process fast and efficient, i.e., able to process as many names as possible in the shortest possible time. This is because the client in this context receives millions of new uploads every day, which must be filtered before use. The results show that the application detects profanity in various forms. Data from the customer's database was also used for testing purposes, and the results showed that the application also works in practice. The performance test shows that the application has a fast execution time. We could see this by estimating it as a linear function with respect to time and the number of names entered. The conclusion was that the filter works and discovers profanity not previously detected in the customer's database. To strengthen the decisions in the process, future updates could introduce third-party services, or a web interface where decisions can be controlled manually. The execution time is good and shows that 10 million names can be processed in about 6 hours. In the future, the queries to the database can be parallelized so that several names can be processed simultaneously.
APA, Harvard, Vancouver, ISO, and other styles
9

Alex, Ann Theja. "Local Alignment of Gradient Features for Face Photo and Face Sketch Recognition." University of Dayton / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1353372694.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Pinzon, Yoan Jose. "String algorithms on sequence comparison." Thesis, King's College London (University of London), 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395648.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "String algorithm"

1

Castillo, Oscar, and Luis Rodriguez. A New Meta-heuristic Optimization Algorithm Based on the String Theory Paradigm from Physics. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-82288-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Landau, Gad M. An efficient string matching algorithm with k differences for nucleotide and amino acid sequences. New York: Courant Institute of Mathematical Sciences, New York University, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Landau, Gad M. An efficient string matching algorithm with k differences for nucleotide and amino acid sequences. New York: Courant Institute of Mathematical Sciences, New York University, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

String searching algorithms. Singapore: World Scientific, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mailund, Thomas. String Algorithms in C. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5920-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

1951-, Aoe Jun-ichi, ed. Computer algorithms: String pattern matching strategies. Los Alamitos, Calif: IEEE Computer Society Press, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Smyth, Bill. Computing patterns in strings. Harlow, England: Pearson Addison-Wesley, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

United States. National Aeronautics and Space Administration., ed. An algorithm for unsteady flows with strong convection. [Washington, DC]: National Aeronautics and Space Administration, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

1948-, Apostolico Alberto, and Research Institute for Advanced Computer Science (U.S.), eds. Efficient parallel algorithms for string editing and related problems. [Moffett Field, Calif.?]: Research Institute for Advanced Computer Science, NASA Ames Research Center, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Efficient recovery algorithms with restricted access to strings. [New York, N.Y.?]: [publisher not identified], 2022.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "String algorithm"

1

Martin, Eric, Samuel Kaski, Fei Zheng, Geoffrey I. Webb, Xiaojin Zhu, Ion Muslea, Kai Ming Ting, et al. "String Matching Algorithm." In Encyclopedia of Machine Learning, 929. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Castillo, Oscar, and Luis Rodriguez. "String Theory Algorithm." In A New Meta-heuristic Optimization Algorithm Based on the String Theory Paradigm from Physics, 11–27. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82288-0_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Skiena, Steven S. "Set and String Problems." In The Algorithm Design Manual, 620–56. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-84800-070-4_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Iliopoulos, Costas S., Laurent Mouchard, and Yoan J. Pinzon. "The Max-Shift Algorithm for Approximate String Matching." In Algorithm Engineering, 13–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44688-5_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Allauzen, Cyril, and Mathieu Raffinot. "Simple Optimal String Matching Algorithm." In Combinatorial Pattern Matching, 364–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-45123-4_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bille, Philip, Inge Li Gørtz, Hjalte Wedel Vildhøj, and Søren Vind. "String Indexing for Patterns with Wildcards." In Algorithm Theory – SWAT 2012, 283–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-31155-0_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

He, Longtao, and Binxing Fang. "Linear Nondeterministic Dawg String Matching Algorithm (Abstract)." In String Processing and Information Retrieval, 70–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30213-1_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bruno, Andrea, Franco Maria Nardini, Giulio Ermanno Pibiri, Roberto Trani, and Rossano Venturini. "TSXor: A Simple Time Series Compression Algorithm." In String Processing and Information Retrieval, 217–23. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-86692-1_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zaslavski, Alexander J. "Dynamic String-Averaging Subgradient Projection Algorithm." In Springer Optimization and Its Applications, 243–63. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78849-0_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zaslavski, Alexander J. "Dynamic String-Averaging Proximal Point Algorithm." In Springer Optimization and Its Applications, 255–79. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77437-4_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "String algorithm"

1

Gupta, Aditi, Divyansh Jaiswal, Kartikeya Sinha, and Aman Duggal. "A2KD string pattern Matching Algorithm." In 2015 1st International Conference on Next Generation Computing Technologies (NGCT). IEEE, 2015. http://dx.doi.org/10.1109/ngct.2015.7375141.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Arshad, Kamran. "Intelligent Analytical String Search Algorithm." In 2021 International Conference on Innovative Computing (ICIC). IEEE, 2021. http://dx.doi.org/10.1109/icic53490.2021.9692974.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Vilca, Omar, and Rosiane De Freitas. "An efficient algorithm for the Closest String Problem." In I Encontro de Teoria da Computação. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/etc.2016.9850.

Full text
Abstract:
The closest string problem, which arises in computational molecular biology and coding theory, is to find a string that minimizes the maximum Hamming distance to a given set of strings; the CSP is an NP-hard problem. This article proposes an efficient algorithm for this problem with three strings. The key idea is to apply a normalization to the CSP instance. This enables us to decompose the problem into five different cases corresponding to the positions of the strings. Furthermore, an optimal solution can then be obtained in linear time. A formal proof of the algorithm is presented, and numerical experiments show the effectiveness of the proposed algorithm.
APA, Harvard, Vancouver, ISO, and other styles
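Editorial note: the objective in the closest string problem is easy to state in code. The sketch below evaluates the max-Hamming-distance objective and includes a simple per-position majority heuristic; unlike the paper's normalization-based algorithm for three strings, this heuristic is not guaranteed to be optimal.

```python
def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def max_distance(candidate: str, strings: list[str]) -> int:
    """The closest-string objective: maximum Hamming distance to any input string."""
    return max(hamming(candidate, s) for s in strings)

def majority_vote(strings: list[str]) -> str:
    """Per-position majority heuristic (a baseline, not an optimal closest-string solver)."""
    length = len(strings[0])  # all input strings are assumed to have the same length
    return "".join(
        max(set(s[i] for s in strings), key=lambda c: sum(s[i] == c for s in strings))
        for i in range(length)
    )

inputs = ["ACGT", "AGGT", "ACGA"]
center = majority_vote(inputs)
print(center, max_distance(center, inputs))  # ACGT 1
```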
4

Babaie, Maryam, and Seyed Rasoul Mousavi. "A Memetic Algorithm for closest string problem and farthest string problem." In 2010 18th Iranian Conference on Electrical Engineering (ICEE). IEEE, 2010. http://dx.doi.org/10.1109/iraniancee.2010.5507004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Dai, Liuling, and Yuning Xia. "A Lightweight Multiple String Matching Algorithm." In 2008 International Conference on Computer Science and Information Technology (ICCSIT). IEEE, 2008. http://dx.doi.org/10.1109/iccsit.2008.171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cui, Yanhong, and Renkuan Guo. "A Naïve String Algorithm." In 2008 International Workshop on Geoscience and Remote Sensing (ETT and GRS). IEEE, 2008. http://dx.doi.org/10.1109/ettandgrs.2008.231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Alzoabi, Ubaid S., Naser M. Alosaimi, Abdullah S. Bedaiwi, and Abdullatif M. Alabdullatif. "Parallelization of KMP string matching algorithm." In 2013 World Congress on Computer and Information Technology (WCCIT). IEEE, 2013. http://dx.doi.org/10.1109/wccit.2013.6618720.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wang Wen-jian and Wu Shun-xiang. "A jumping string mode matching algorithm." In Education (ICCSE). IEEE, 2009. http://dx.doi.org/10.1109/iccse.2009.5228461.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Meng, Qingduan, Xiaoling Zhang, and Dongwei Lv. "Improved AC_BMH Algorithm for String Matching." In 2010 International Conference on Internet Technology and Applications (iTAP). IEEE, 2010. http://dx.doi.org/10.1109/itapp.2010.5566604.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Abraham, Dona, and Nisha S. Raj. "Approximate string matching algorithm for phishing detection." In 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI). IEEE, 2014. http://dx.doi.org/10.1109/icacci.2014.6968578.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "String algorithm"

1

Lorenz, Markus. Auswirkungen des Decoy-Effekts auf die Algorithm Aversion. Sonderforschungsgruppe Institutionenanalyse, 2022. http://dx.doi.org/10.46850/sofia.9783947850013.

Full text
Abstract:
Limitations in the human decision-making process restrict the technological potential of algorithms, which is also referred to as "algorithm aversion". This study uses a laboratory experiment with participants to investigate whether a phenomenon known since 1982 as the "decoy effect" is suitable for reducing algorithm aversion. For numerous analogue products, such as cars, drinks or newspaper subscriptions, the Decoy Effect is known to have a strong influence on human decision-making behaviour. Surprisingly, the decisions between forecasts by humans and Robo Advisors (algorithms) investigated in this study are not influenced by the Decoy Effect at all. This is true both a priori and after observing forecast errors.
APA, Harvard, Vancouver, ISO, and other styles
2

Laub, Alan J., and Charles Kenney. Numerically Stable Algorithms in String Dynamics. Fort Belvoir, VA: Defense Technical Information Center, September 1993. http://dx.doi.org/10.21236/ada275898.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Helgason, R. V., J. L. Kennington, and K. H. Lewis. Grid Free Algorithms for Strike Planning for Cruise Missiles. Fort Belvoir, VA: Defense Technical Information Center, February 1998. http://dx.doi.org/10.21236/ada338548.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Belkin, Shimshon, Sylvia Daunert, and Mona Wells. Whole-Cell Biosensor Panel for Agricultural Endocrine Disruptors. United States Department of Agriculture, December 2010. http://dx.doi.org/10.32747/2010.7696542.bard.

Full text
Abstract:
Objectives: The overall objective as defined in the approved proposal was the development of a whole-cell sensor panel for the detection of endocrine disruption activities of agriculturally relevant chemicals. To achieve this goal several specific objectives were outlined: (a) The development of new genetically engineered wholecell sensor strains; (b) the combination of multiple strains into a single sensor panel to effect multiple response modes; (c) development of a computerized algorithm to analyze the panel responses; (d) laboratory testing and calibration; (e) field testing. In the course of the project, mostly due to the change in the US partner, three modifications were introduced to the original objectives: (a) the scope of the project was expanded to include pharmaceuticals (with a focus on antibiotics) in addition to endocrine disrupting chemicals, (b) the computerized algorithm was not fully developed and (c) the field test was not carried out. Background: Chemical agents, such as pesticides applied at inappropriate levels, may compromise water quality or contaminate soils and hence threaten human populations. In recent years, two classes of compounds have been increasingly implicated as emerging risks in agriculturally-related pollution: endocrine disrupting compounds (EDCs) and pharmaceuticals. The latter group may reach the environment by the use of wastewater effluents, whereas many pesticides have been implicated as EDCs. Both groups pose a threat in proportion to their bioavailability, since that which is biounavailable or can be rendered so is a priori not a threat; bioavailability, in turn, is mediated by complex matrices such as soils. Genetically engineered biosensor bacteria hold great promise for sensing bioavailability because the sensor is a live soil- and water-compatible organism with biological response dynamics, and because its response can be genetically “tailored” to report on general toxicity, on bioavailability, and on the presence of specific classes of toxicants. In the present project we have developed a bacterial-based sensor panel incorporating multiple strains of genetically engineered biosensors for the purpose of detecting different types of biological effects. The overall objective as defined in the approved proposal was the development of a whole-cell sensor panel for the detection of endocrine disruption activities of agriculturally relevant chemicals. To achieve this goal several specific objectives were outlined: (a) The development of new genetically engineered wholecell sensor strains; (b) the combination of multiple strains into a single sensor panel to effect multiple response modes; (c) development of a computerized algorithm to analyze the panel responses; (d) laboratory testing and calibration; (e) field testing. In the course of the project, mostly due to the change in the US partner, three modifications were introduced to the original objectives: (a) the scope of the project was expanded to include pharmaceuticals (with a focus on antibiotics) in addition to endocrine disrupting chemicals, (b) the computerized algorithm was not fully developed and (c) the field test was not carried out. 
Major achievements: (a) construction of innovative bacterial sensor strains for accurate and sensitive detection of agriculturally-relevant pollutants, with a focus on endocrine disrupting compounds (UK and HUJ) and antibiotics (HUJ); (b) optimization of methods for long-term preservation of the reporter bacteria, either by direct deposition on solid surfaces (HUJ) or by the construction of spore-forming Bacillus-based sensors (UK); (c) partial development of a computerized algorithm for the analysis of sensor panel responses. Implications: The sensor panel developed in the course of the project was shown to be applicable for the detection of a broad range of antibiotics and EDCs. Following a suitable development phase, the panel will be ready for testing in an agricultural environment, as an innovative tool for assessing the environmental impacts of EDCs and pharmaceuticals. Furthermore, while the current study relates directly to issues of water quality and soil health, its implications are much broader, with potential uses is risk-based assessment related to the clinical, pharmaceutical, and chemical industries as well as to homeland security.
APA, Harvard, Vancouver, ISO, and other styles
5

Hu, Zhengzheng, Ralph C. Smith, and Jon Ernstberger. The Homogenized Energy Model (HEM) for Characterizing Polarization and Strains in Hysteretic Ferroelectric Materials: Implementation Algorithms and Data-Driven Parameter Estimation Techniques. Fort Belvoir, VA: Defense Technical Information Center, January 2012. http://dx.doi.org/10.21236/ada556961.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Irudayaraj, Joseph, Ze'ev Schmilovitch, Amos Mizrach, Giora Kritzman, and Chitrita DebRoy. Rapid detection of food borne pathogens and non-pathogens in fresh produce using FT-IRS and raman spectroscopy. United States Department of Agriculture, October 2004. http://dx.doi.org/10.32747/2004.7587221.bard.

Full text
Abstract:
Rapid detection of pathogens and hazardous elements in fresh fruits and vegetables after harvest requires the use of advanced sensor technology at each step in the farm-to-consumer or farm-to-processing sequence. Fourier-transform infrared (FTIR) spectroscopy and the complementary Raman spectroscopy, an advanced optical technique based on light scattering will be investigated for rapid and on-site assessment of produce safety. Paving the way toward the development of this innovative methodology, specific original objectives were to (1) identify and distinguish different serotypes of Escherichia coli, Listeria monocytogenes, Salmonella typhimurium, and Bacillus cereus by FTIR and Raman spectroscopy, (2) develop spectroscopic fingerprint patterns and detection methodology for fungi such as Aspergillus, Rhizopus, Fusarium, and Penicillium (3) to validate a universal spectroscopic procedure to detect foodborne pathogens and non-pathogens in food systems. The original objectives proposed were very ambitious hence modifications were necessary to fit with the funding. Elaborate experiments were conducted for sensitivity, additionally, testing a wide range of pathogens (more than selected list proposed) was also necessary to demonstrate the robustness of the instruments, most crucially, algorithms for differentiating a specific organism of interest in mixed cultures was conceptualized and validated, and finally neural network and chemometric models were tested on a variety of applications. Food systems tested were apple juice and buffer systems. Pathogens tested include Enterococcus faecium, Salmonella enteritidis, Salmonella typhimurium, Bacillus cereus, Yersinia enterocolitis, Shigella boydii, Staphylococus aureus, Serratiamarcescens, Pseudomonas vulgaris, Vibrio cholerae, Hafniaalvei, Enterobacter cloacae, Enterobacter aerogenes, E. coli (O103, O55, O121, O30 and O26), Aspergillus niger (NRRL 326) and Fusarium verticilliodes (NRRL 13586), Saccharomyces cerevisiae (ATCC 24859), Lactobacillus casei (ATCC 11443), Erwinia carotovora pv. carotovora and Clavibacter michiganense. Sensitivity of the FTIR detection was 103CFU/ml and a clear differentiation was obtained between the different organisms both at the species as well as at the strain level for the tested pathogens. A very crucial step in the direction of analyzing mixed cultures was taken. The vector based algorithm was able to identify a target pathogen of interest in a mixture of up to three organisms. Efforts will be made to extend this to 10-12 key pathogens. The experience gained was very helpful in laying the foundations for extracting the true fingerprint of a specific pathogen irrespective of the background substrate. This is very crucial especially when experimenting with solid samples as well as complex food matrices. Spectroscopic techniques, especially FTIR and Raman methods are being pursued by agencies such as DARPA and Department of Defense to combat homeland security. Through the BARD US-3296-02 feasibility grant, the foundations for detection, sample handling, and the needed algorithms and models were developed. Successive efforts will be made in transferring the methodology to fruit surfaces and to other complex food matrices which can be accomplished with creative sampling methods and experimentation. Even a marginal success in this direction will result in a very significant breakthrough because FTIR and Raman methods, in spite of their limitations are still one of most rapid and nondestructive methods available. 
Continued interest and efforts in improving the components, as well as refinement of the procedures, are bound to result in a significant breakthrough in sensor technology for food safety and biosecurity.
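The "vector-based algorithm" for picking out a target pathogen in a mixed culture is only named, not specified, in this abstract. As a purely illustrative sketch of one way such spectral matching can be set up, the Python snippet below unmixes a composite spectrum against a small reference library using non-negative least squares; the organism names, the toy "fingerprints", and the NNLS unmixing step are assumptions for demonstration, not the authors' method.

```python
# Hypothetical sketch of vector-based fingerprint matching for a mixed-culture
# spectrum; this is NOT the method from the report above, only an illustration.
import numpy as np
from scipy.optimize import nnls

def normalize(spectrum):
    """Scale a spectrum to unit Euclidean norm so comparisons ignore overall intensity."""
    spectrum = np.asarray(spectrum, dtype=float)
    return spectrum / np.linalg.norm(spectrum)

def match_mixture(mixed_spectrum, reference_library):
    """Estimate non-negative contributions of each reference fingerprint to a
    mixed-culture spectrum and rank the organisms by estimated contribution."""
    names = list(reference_library)
    A = np.column_stack([normalize(reference_library[n]) for n in names])
    weights, residual = nnls(A, normalize(mixed_spectrum))
    ranking = sorted(zip(names, weights), key=lambda kv: kv[1], reverse=True)
    return ranking, residual

# Toy usage with made-up 4-point "fingerprints" (real spectra span hundreds of wavenumbers).
library = {
    "E. coli O157": [0.9, 0.2, 0.1, 0.4],
    "L. monocytogenes": [0.1, 0.8, 0.3, 0.2],
    "S. typhimurium": [0.2, 0.1, 0.9, 0.3],
}
mixture = 0.7 * np.array(library["E. coli O157"]) + 0.3 * np.array(library["S. typhimurium"])
print(match_mixture(mixture, library))
```

In practice the reference library would hold baseline-corrected, normalized FTIR or Raman spectra of pure cultures, and the residual could serve as a crude check for organisms missing from the library.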
APA, Harvard, Vancouver, ISO, and other styles
7

Alchanatis, Victor, Stephen W. Searcy, Moshe Meron, W. Lee, G. Y. Li, and A. Ben Porath. Prediction of Nitrogen Stress Using Reflectance Techniques. United States Department of Agriculture, November 2001. http://dx.doi.org/10.32747/2001.7580664.bard.

Full text
Abstract:
Commercial agriculture has come under increasing pressure to reduce nitrogen fertilizer inputs in order to minimize potential nonpoint source pollution of ground and surface waters. This has resulted in increased interest in site-specific fertilizer management. One way to solve pollution problems would be to determine crop nutrient needs in real time, using remote detection, and to regulate the fertilizer dispensed by an applicator. By detecting actual plant needs, only the additional nitrogen necessary to optimize production would be supplied. This research aimed to develop techniques for real-time assessment of the nitrogen status of corn using a mobile sensor with the potential to regulate nitrogen application based on data from that sensor. Specifically, the research first attempted to determine the system parameters necessary to optimize reflectance spectra of corn plants as a function of growth stage, chlorophyll and nitrogen status. In addition, an adaptable multispectral sensor and the signal-processing algorithm to provide real-time, in-field assessment of corn nitrogen status were developed. Spectral characteristics of corn leaf reflectance were investigated in order to estimate the nitrogen status of the plants, using a commercial laboratory spectrometer. Statistical models relating leaf N and reflectance spectra were developed for both greenhouse and field plots. A basis was established for assessing nitrogen status using spectral reflectance from plant canopies. The combined effect of variety and N treatment was studied by measuring the reflectance of three varieties of different characteristic leaf color and five different N treatments. The variety effect on the reflectance at 552 nm was not significant (α = 0.01), while canonical discriminant analysis showed promising results for distinguishing different variety and N treatment using spectral reflectance. Ambient illumination was found inappropriate for reliable, one-beam spectral reflectance measurement of the plant canopy due to the strong spectral lines of sunlight; artificial light was therefore used. For in-field N status measurement, a dark chamber was constructed to house the sensor along with artificial illumination. Two different approaches were tested: (i) use of spatially scattered artificial light, and (ii) use of a collimated artificial light beam. It was found that the collimated beam, along with a proper design of the sensor-beam geometry, yielded the best results in terms of reducing the noise due to variable background and maintaining a constant distance from the sensor to the sample point of the canopy. A multispectral sensor assembly, based on a linear variable filter, was designed, constructed and tested. The sensor assembly combined two sensors to cover the range of 400 to 1100 nm, a mounting frame, and a field data acquisition system. Using the mobile dark chamber and the developed sensor, as well as an off-the-shelf sensor, the in-field nitrogen status of the plant canopy was measured. Statistical analysis of the acquired in-field data showed that the nitrogen status of the corn leaves could be predicted with a SEP (Standard Error of Prediction) of 0.27%. The stage of maturity of the crop affected the relationship between the reflectance spectrum and the nitrogen status of the leaves. Specifically, the best prediction results were obtained when a separate model was used for each maturity stage.
In-field assessment of the nitrogen status of corn leaves was successfully carried out by non-contact measurement of the reflectance spectrum. This technology is now mature enough to be incorporated into field implements for on-line control of fertilizer application.
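The abstract reports a statistical calibration between reflectance spectra and leaf nitrogen with a SEP of 0.27%, but not the model form. The sketch below shows, on synthetic data, how such a calibration and a bias-corrected SEP might be computed; the choice of PLS regression, the number of components, and all numbers are assumptions for illustration, not the study's actual model.

```python
# Illustrative calibration of a reflectance-to-leaf-N model with a reported SEP.
# Synthetic data only; the real study fit separate models per maturity stage.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 50                              # e.g. 50 bands between 400 and 1100 nm
X = rng.uniform(0.05, 0.6, size=(n_samples, n_bands))     # synthetic canopy reflectance
true_coef = rng.normal(size=n_bands)
y = X @ true_coef * 0.02 + 2.5 + rng.normal(scale=0.1, size=n_samples)   # leaf N (%)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

model = PLSRegression(n_components=5).fit(X_cal, y_cal)
y_pred = model.predict(X_val).ravel()

bias = np.mean(y_pred - y_val)
sep = np.sqrt(np.mean((y_pred - y_val - bias) ** 2))       # bias-corrected standard error of prediction
print(f"Bias: {bias:.3f}%  SEP: {sep:.3f}% leaf N")
```

A separate model per growth stage, as the study found best, would simply repeat this calibration on the subset of samples belonging to each maturity stage.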
APA, Harvard, Vancouver, ISO, and other styles
8

PERFORMANCE OPTIMIZATION OF A STEEL-UHPC COMPOSITE ORTHOTROPIC BRIDGE WITH INTELLIGENT ALGORITHM. The Hong Kong Institute of Steel Construction, August 2022. http://dx.doi.org/10.18057/icass2020.p.160.

Full text
Abstract:
To address the problems of pavement damage and fatigue cracking of orthotropic steel decks (OSD) in bridges, an innovative composite bridge deck composed of an OSD with open ribs and an ultra-high performance concrete (UHPC) layer was proposed. Firstly, the stress responses of fatigue-prone details in the composite bridge deck were investigated by refined two-scale finite element analysis. The results show that the rib-to-deck joint can achieve an infinite fatigue life, while the floorbeam detail of the rib-to-floorbeam joint indicates a finite fatigue life. Then, response surface models of the stress ranges of the fatigue details and the structure weight were derived via the central composite design and the response surface method. Finally, to improve fatigue performance and achieve an infinite fatigue life at relatively low structure weight, multi-objective optimization was executed with an improved Non-dominated Sorting Genetic Algorithm (NSGA-II). The obtained Pareto front shows that there is a strong competition between the stress range of the fatigue-prone detail and the structure weight.
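As a rough illustration of the optimization step described above, the sketch below runs NSGA-II (via the pymoo library) over invented quadratic response-surface surrogates for the fatigue stress range and the structural weight; the design variables, coefficients, and bounds are placeholders, not values from the paper.

```python
# Minimal two-objective NSGA-II sketch over placeholder response-surface surrogates.
# Requires the pymoo package; all numbers are illustrative assumptions.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

class DeckProblem(ElementwiseProblem):
    """Objective 1: stress range at the fatigue-prone detail (MPa, surrogate).
    Objective 2: structural weight per unit deck area (kg/m^2, surrogate)."""
    def __init__(self):
        super().__init__(n_var=2, n_obj=2,
                         xl=np.array([12.0, 40.0]), xu=np.array([20.0, 80.0]))

    def _evaluate(self, x, out, *args, **kwargs):
        t_deck, t_uhpc = x                                       # steel deck and UHPC thickness (mm)
        stress = 120.0 - 2.5 * t_deck - 0.8 * t_uhpc + 0.01 * t_deck * t_uhpc  # placeholder RSM
        weight = 7.85 * t_deck + 2.5 * t_uhpc                    # kg/m^2 from nominal densities
        out["F"] = [stress, weight]

res = minimize(DeckProblem(), NSGA2(pop_size=60), ("n_gen", 100), seed=1, verbose=False)
print("Pareto-optimal designs (t_deck, t_uhpc):")
print(res.X[:5])
print("Corresponding (stress range, weight):")
print(res.F[:5])
```

The resulting Pareto front makes the trade-off explicit: thicker plates lower the stress range at the fatigue detail but raise the weight, which is the competition the abstract describes.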
APA, Harvard, Vancouver, ISO, and other styles
9

Payment Systems Report - June of 2021. Banco de la República, February 2022. http://dx.doi.org/10.32468/rept-sist-pag.eng.2021.

Full text
Abstract:
Banco de la República provides a comprehensive overview of Colombia’s financial infrastructure in its Payment Systems Report, which is an important product of the work it does to oversee that infrastructure. The figures published in this edition of the report are for the year 2020, a pandemic period in which the containment measures designed and adopted to alleviate the strain on the health system led to a sharp reduction in economic activity and consumption in Colombia, as was the case in most countries. At the start of the pandemic, the Board of Directors of Banco de la República adopted decisions that were necessary to supply the market with ample liquidity in pesos and US dollars to guarantee market stability, protect the payment system and preserve the supply of credit. The pronounced growth in monetary aggregates reflected an increased preference for liquidity, which Banco de la República addressed at the right time. These decisions were implemented through operations that were cleared and settled via the financial infrastructure. The second section of this report, following the introduction, offers an analysis of how the various financial infrastructures in Colombia have evolved and performed. One of the highlights is the large-value payment system (CUD), which registered more momentum in 2020 than during the previous year, mainly because of an increase in average daily remunerated deposits made with Banco de la República by the General Directorate of Public Credit and the National Treasury (DGCPTN), as well as more activity in the sell/buy-back market with sovereign debt. Consequently, with more activity in the CUD, the Central Securities Depository (DCV) experienced an added impetus sparked by an increase in the money market for bonds and securities placed on the primary market by the national government. The value of operations cleared and settled through the Colombian Central Counterparty (CRCC) continues to grow, propelled largely by peso/dollar non-deliverable forward (NDF) contracts. With respect to the CRCC, it is important to note that this clearing house has been in charge of managing risks and clearing and settling operations in the peso/dollar spot market since the end of last year, following its merger with the Foreign Exchange Clearing House of Colombia (CCDC). Since the final quarter of 2020, the CRCC has also been responsible for clearing and settlement in the equities market, which was formerly done by the Colombian Stock Exchange (BVC). The third section of this report provides an all-inclusive view of payments in the market for goods and services; namely, transactions carried out by members of the public and non-financial institutions. During the pandemic, inter- and intra-bank electronic funds transfers, which originate mostly with companies, increased in both the number and value of transactions with respect to 2019. However, debit and credit card payments, which are made largely by private citizens, declined compared to 2019. The incidence of payment by check continues to drop, exhibiting quite a pronounced downward trend during the past year. To supplement the information on electronic funds transfers, section three includes a segment (Box 4) characterizing the population with savings and checking accounts, based on data from a survey by Banco de la República concerning the perception of the use of payment instruments in 2019.
There is also a segment (Box 2) on the growth in transactions with a mobile wallet provided by a company specialized in electronic deposits and payments (Sedpe). It shows that the number of users and the value of their transactions have increased since the wallet was introduced in late 2017, particularly during the pandemic. In addition, there is a diagnosis of the effects of the pandemic on the payment patterns of the population, based on data related to the use of cash in circulation, payments with electronic instruments, and consumption and consumer confidence. The conclusion is that the collapse in the consumer confidence index and the drop in private consumption led to changes in the public’s payment patterns. Credit and debit card purchases were down, while payments for goods and services through electronic funds transfers increased. These findings, coupled with the considerable increase in cash in circulation, might indicate possible precautionary cash hoarding by individuals and more use of cash as a payment instrument. There is also a segment (in Focus 3) on the major changes introduced in regulations on the retail-value payment system in Colombia, as provided for in Decree 1692 of December 2020. The fourth section of this report refers to the important innovations and technological changes that have occurred in the retail-value payment system. Four themes are highlighted in this respect. The first is a key point in building the financial infrastructure for instant payments. It involves the design and implementation of overlay schemes, a technological development that allows the various participants in the payment chain to communicate openly. The result is a high degree of interoperability among the different payment service providers. The second topic explores developments in the international debate on central bank digital currency (CBDC). The purpose is to understand how it could impact the retail-value payment system and the use of cash if it were to be issued. The third topic is related to new forms of payment initiation, such as QR codes, biometrics or near field communication (NFC) technology. These seemingly small changes can have a major impact on the user’s experience with the retail-value payment system. The fourth theme is the growth in payments via mobile telephone and the internet. The report ends in section five with a review of two papers on applied research done at Banco de la República in 2020. The first analyzes the extent of the CRCC’s capital, acknowledging the relevant role this infrastructure has acquired in providing clearing and settlement services for various financial markets in Colombia. The capital requirements defined for central counterparties in some jurisdictions are explored, and the risks to be hedged are identified from the standpoint of the service these types of institutions offer to the market and those associated with their corporate activity. The CRCC’s capital levels are analyzed in light of what has been observed in the European Union’s regulations, and the conclusion is that the CRCC has a scheme of security rings very similar to those applied internationally and that the extent of its capital exceeds what is stipulated in Colombian regulations, being sufficient to hedge other risks. The second study presents an algorithm used to identify and quantify the liquidity sources that CUD’s participants use under normal conditions to meet their daily obligations in the local financial market.
This algorithm can be used as a tool to monitor intraday liquidity.
Leonardo Villar Gómez, Governor
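The report only summarizes the liquidity-identification algorithm, so the snippet below is a generic, hypothetical illustration of attributing a participant's outgoing payments to intraday funding sources; it is not the algorithm developed at Banco de la República, and all data are invented.

```python
# Purely illustrative decomposition of how a participant's outgoing payments might be
# funded intraday (incoming payments vs. opening balance). Generic sketch, invented data.
from dataclasses import dataclass

@dataclass
class Transaction:
    minute: int      # minute of the operating day
    amount: float    # positive = incoming funds, negative = outgoing payment

def liquidity_sources(opening_balance, transactions):
    """Attribute each outgoing payment to the funds available when it settles:
    first to cumulative incoming payments, then to the opening balance."""
    incoming_pool = 0.0
    used = {"incoming_payments": 0.0, "opening_balance": 0.0}
    for tx in sorted(transactions, key=lambda t: t.minute):
        if tx.amount >= 0:
            incoming_pool += tx.amount
            continue
        need = -tx.amount
        from_incoming = min(need, incoming_pool)
        incoming_pool -= from_incoming
        used["incoming_payments"] += from_incoming
        used["opening_balance"] += need - from_incoming
    return used

txs = [Transaction(9 * 60, 150.0), Transaction(10 * 60, -200.0), Transaction(11 * 60, -50.0)]
print(liquidity_sources(opening_balance=500.0, transactions=txs))
```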
APA, Harvard, Vancouver, ISO, and other styles