Dissertations / Theses on the topic 'Fast retrieval'

To see the other types of publications on this topic, follow the link: Fast retrieval.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 32 dissertations / theses for your research on the topic 'Fast retrieval.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Pesavento, Marius. "Fast algorithms for multidimensional harmonic retrieval." [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=975415328.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cao, Hui, Noboru Ohnishi, Yoshinori Takeuchi, Tetsuya Matsumoto, and Hiroaki Kudo. "Fast Human Pose Retrieval Using Approximate Chamfer Distance." Intelligent Media Integration, Nagoya University / COE, 2006. http://hdl.handle.net/2237/10437.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Perry, S. T. "Fast interactive object delineation in images for content based retrieval and navigation." Thesis, University of Southampton, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286748.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kuan, Joseph. "Image texture analysis and fast similarity search for content based retrieval and navigation." Thesis, University of Southampton, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287321.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lardeux, Florian. "Robust Modelling and Efficient Recognition of Quasi-Flat Objects — Application to Ancient Coins." Thesis, La Rochelle, 2022. http://www.theses.fr/2022LAROS002.

Full text
Abstract:
Quasi-flat objects are obtained from a matrix which defines specific features observable in their engraving. Examples include dry stamps, amphora stamps and ancient coins. Quasi-flat objects are thus understood as very flat shapes onto which a characteristic relief is inscribed. Recognizing such objects is not an easy feat, as many barriers come into play: the relief of quasi-flat objects is prone to non-rigid deformations, the illumination conditions influence how the relief is perceived, and the items may have undergone various deteriorations, leading to the occlusion of parts of their relief. In this thesis, we tackle the problem of recognizing quasi-flat objects. The work is articulated around three major axes. The first aims at creating a model to represent the objects, both by highlighting their main characteristics and by taking the aforementioned barriers into account; to this end, the concept of a multi-light energy map is introduced. The second and third axes introduce recognition strategies. On the one hand, we propose the use of contours as the main features. Contours are described via a signature model from which specific descriptors are calculated. To store, retrieve and match those features, a data structure based on associative arrays, the LACS system, is introduced, enabling fast retrieval of similar contours. On the other hand, the use of textures is investigated. Here the scope is centered on the use of specific 2D regions and their description in order to perform the recognition. A similar angle is taken to store and retrieve the information, as a similar but more complex data structure is introduced.
APA, Harvard, Vancouver, ISO, and other styles
6

Jackson, Natalie Deanne. "Simple arithmetic processing: fact retrieval mechanisms and the influence of individual difference, surface form, problem type and split on processing." Murdoch University, 2006. http://wwwlib.murdoch.edu.au/adt/browse/view/adt-MU20070717.114439.

Full text
Abstract:
Current theorising in the area of cognitive arithmetic suggests that simple arithmetic knowledge is stored in memory and accessed in the same way as word knowledge i.e., it is stored in a network of associations, with simple facts retrieved automatically from memory. However, to date, the main methodologies that have been employed to investigate automaticity in simple arithmetic processing (e.g., production and verification) have produced a wide variety of difficulties in interpretation. In an attempt to address this, the present series of investigations utilised a numerical variant of the well established single word semantic priming paradigm that involved the presentation of problems as primes (e.g., 2 + 3) and solutions as targets (e.g., 5), as they would occur in a natural setting. Adult university students were exposed to both addition and multiplication problems in each of three main prime target relationship conditions, including congruent (e.g., 2 + 3 and 5), incongruent (e.g., 2 + 3 and 13), and neutral conditions (X + Y and 5). When combined with a naming task and the use of short stimulus onset asynchronies (SOAs), this procedure enabled a more valid and reliable investigation into automaticity and the cognitive mechanisms underlying simple arithmetic processing. The first investigation in the present series addressed the question of automaticity in arithmetic fact retrieval, whilst the remaining investigations examined the main factors thought to influence simple arithmetic processing i.e., skill level, surface form, problem type and split. All factors, except for problem type, were found to influence processing in the arithmetic priming paradigm. For example, the results of all five investigations were consistent in revealing significant facilitation in naming congruent targets for skilled participants, following exposure to Arabic digit primes at the short SOA. 
Accordingly, the facilitation was explained in terms of the operation of an automatic spreading activation mechanism. Additionally, significant inhibitory effects in incongruent target naming were identified in skilled performance in all of the studies in the present series of investigations. Throughout the course of these investigations, these effects were found to vary with operation, surface form and SOA, and in the final investigation, the level of inhibition was found to vary with the split between the correct solution and the incongruent target. Consequently, a number of explanations were put forward to account for these effects. In the first two investigations, it was suggested that the inhibitory effects resulted from the use of a response validity checking mechanism, whilst in the final investigation, the results were more consistent with the activation of magnitude representations in memory (this can be likened to Dehaene’s, 1997, ‘number sense’). In contrast, the results of the third investigation led to the proposal that for number word primes, inhibition in processing results from the activation of phonological representations in memory, via a reading based mechanism. The present series of investigations demonstrated the utility of the numerical variant of the single word semantic priming paradigm for the investigation of simple arithmetic processing. Given its capacity to uncover the fundamental cognitive mechanisms at work in simple arithmetic operations, this methodology has many applications in future research.
APA, Harvard, Vancouver, ISO, and other styles
7

Jackson, Natalie Deanne. "Simple arithmetic processing: fact retrieval mechanisms and the influence of individual difference, surface form, problem type and split on processing." PhD thesis, Murdoch University, 2007. http://researchrepository.murdoch.edu.au/108/.

Full text
Abstract:
Current theorising in the area of cognitive arithmetic suggests that simple arithmetic knowledge is stored in memory and accessed in the same way as word knowledge i.e., it is stored in a network of associations, with simple facts retrieved automatically from memory. However, to date, the main methodologies that have been employed to investigate automaticity in simple arithmetic processing (e.g., production and verification) have produced a wide variety of difficulties in interpretation. In an attempt to address this, the present series of investigations utilised a numerical variant of the well established single word semantic priming paradigm that involved the presentation of problems as primes (e.g., 2 + 3) and solutions as targets (e.g., 5), as they would occur in a natural setting. Adult university students were exposed to both addition and multiplication problems in each of three main prime target relationship conditions, including congruent (e.g., 2 + 3 and 5), incongruent (e.g., 2 + 3 and 13), and neutral conditions (X + Y and 5). When combined with a naming task and the use of short stimulus onset asynchronies (SOAs), this procedure enabled a more valid and reliable investigation into automaticity and the cognitive mechanisms underlying simple arithmetic processing. The first investigation in the present series addressed the question of automaticity in arithmetic fact retrieval, whilst the remaining investigations examined the main factors thought to influence simple arithmetic processing i.e., skill level, surface form, problem type and split. All factors, except for problem type, were found to influence processing in the arithmetic priming paradigm. For example, the results of all five investigations were consistent in revealing significant facilitation in naming congruent targets for skilled participants, following exposure to Arabic digit primes at the short SOA. 
Accordingly, the facilitation was explained in terms of the operation of an automatic spreading activation mechanism. Additionally, significant inhibitory effects in incongruent target naming were identified in skilled performance in all of the studies in the present series of investigations. Throughout the course of these investigations, these effects were found to vary with operation, surface form and SOA, and in the final investigation, the level of inhibition was found to vary with the split between the correct solution and the incongruent target. Consequently, a number of explanations were put forward to account for these effects. In the first two investigations, it was suggested that the inhibitory effects resulted from the use of a response validity checking mechanism, whilst in the final investigation, the results were more consistent with the activation of magnitude representations in memory (this can be likened to Dehaene's, 1997, 'number sense'). In contrast, the results of the third investigation led to the proposal that for number word primes, inhibition in processing results from the activation of phonological representations in memory, via a reading based mechanism. The present series of investigations demonstrated the utility of the numerical variant of the single word semantic priming paradigm for the investigation of simple arithmetic processing. Given its capacity to uncover the fundamental cognitive mechanisms at work in simple arithmetic operations, this methodology has many applications in future research.
APA, Harvard, Vancouver, ISO, and other styles
8

Pinheiro, Josiane Melchiori. "A influência das folksonomias na eficiência da fase inicial de modelagem conceitual." Universidade Tecnológica Federal do Paraná, 2016. http://repositorio.utfpr.edu.br/jspui/handle/1/2831.

Full text
Abstract:
Fundação Araucária
This study examines the hypothesis that using folksonomies induced from collaborative tagging systems in conceptual modeling should reduce the number of divergences between the actors involved when they elicit terms to be used in the model, using as a baseline terms extracted from webpages based on term frequency. The number of divergences is used as the efficiency measure, because the fewer the divergences, the less time and effort are required to create the conceptual model. The study describes controlled conceptual modeling experiments performed with experimental groups that received the folksonomy and control groups that received terms extracted from webpages. The results show that the experimental and control groups obtained similar numbers of divergences. Other efficiency measures, such as the reuse of terms in the phases of conceptual modeling and the perceived ease of performing the modeling task, confirmed the results obtained from the number of divergences, with slightly greater efficiency among the experimental groups.
APA, Harvard, Vancouver, ISO, and other styles
9

Ho, Chia-Lin, and 何佳霖. "Compression and Fast Retrieval for Digital Waveform." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/73261928713281426620.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 90 (2001)
In VLSI circuit design, functional verification has become increasingly important due to the rapid growth of circuit functionality in many consumer and industrial products. During simulation of digital circuits, waveforms are stored on disk for later investigation and eventually consume huge amounts of disk space. Besides disk consumption, browsing the waveform becomes difficult because the required data is distributed over a large file. Hence, we developed a set of algorithms and techniques for compressing digital waveforms, and we also define a new waveform data format that provides random access to improve retrieval speed. Experimental results show that retrieval speed can be increased by more than 100 times, and a compression ratio of roughly 10%–35% of the size of the traditional VCD format waveform can be achieved.
APA, Harvard, Vancouver, ISO, and other styles
10

Shieh, Wann-Yun, and 謝萬雲. "Fast Information Retrieval in Incremental Web Databases." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/92235606317310550552.

Full text
Abstract:
Doctoral dissertation
National Chiao Tung University
Department of Computer Science and Information Engineering
Academic year 92 (2003)
This dissertation proposes methodologies to (1) speed up information retrieval, (2) perform most-correlated-document-first retrieval, and (3) perform incremental index updates for dynamically growing Web databases. It focuses on refinements of the most widely used indexing structure, the inverted file. First, to speed up information retrieval, the dissertation applies index compression and query-result caching to the inverted file, with the objective of minimizing query response time for the current database scale and user behavior. Second, to perform most-correlated-document-first retrieval, it applies tree-based index structuring, with the objective of retrieving the documents most correlated with a user query as soon as possible. Finally, to provide incremental index updates, it applies spare-space allocation, with the objective of guaranteeing that the index has sufficient reserved space to amortize update costs while keeping space efficiency high.

The research topics of the dissertation are:

(1) Inverted file compression through document identifier reassignment. Conventionally, the d-gap technique compresses an inverted file by replacing document identifiers with usually much smaller gap values. This topic proposes a document identifier reassignment algorithm, based on document similarity evaluation, that smooths and reduces the gap values in an inverted file. A Web database benefits in terms of saved storage space and fast file look-up time.

(2) Inverted file caching for fast information retrieval. This topic proposes an inverted file caching mechanism that exploits the locality of user queries in a Web database. The mechanism enhances indexing speed with a linked-list-based probing process and memory efficiency with chunk-based space management, yielding fast responses for popular data.

(3) Tree-based inverted file structuring for most-correlated-document-first retrieval. This topic proposes an n-key-heap posting-tree structure that preserves identifier numerical order and ranking information simultaneously in an index file, so that the most important and most correlated data can be stored efficiently and retrieved without time-consuming ranking or sorting.

(4) Statistics-based spare-space allocation for incremental inverted file updates. This topic proposes a statistics-based approach that estimates the space requirements of an inverted file from a small amount of recent statistical data, so that the file can be updated incrementally as the database expands, without complex file reorganization or expensive free-space management.

The results of the dissertation include: (1) for inverted file compression, the proposed approach improves the compression rate by 18% and the query response time by 15% on average; (2) for inverted file caching, the proposed approach takes only about 7% additional space to outperform conventional caching mechanisms by 20% in indexing speed on average; (3) for tree-based inverted file structuring, the time to retrieve the most correlated documents is improved by 8%–45% compared with the conventional linked-list-based index structure; (4) for incremental inverted file updates, the proposed approach outperforms conventional approaches by 16% in space utilization and 15% in index-updating speed on average.
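The d-gap technique mentioned in the first topic can be illustrated with a short sketch (not the dissertation's implementation; the helper names are ours):

```python
def to_dgaps(postings):
    """Convert a sorted posting list of document identifiers into
    d-gaps: keep the first identifier, then store successive differences."""
    gaps = [postings[0]]
    for prev, curr in zip(postings, postings[1:]):
        gaps.append(curr - prev)
    return gaps

def from_dgaps(gaps):
    """Reconstruct the original posting list from its d-gaps."""
    postings = [gaps[0]]
    for g in gaps[1:]:
        postings.append(postings[-1] + g)
    return postings

# Clustered identifiers (e.g. after similarity-based reassignment)
# yield small gaps, which variable-length codes compress well:
assert to_dgaps([1000, 1003, 1007, 1010]) == [1000, 3, 4, 3]
```

Identifier reassignment, as proposed in the dissertation, reorders documents so that similar documents receive nearby identifiers, shrinking these gap values before variable-length encoding.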
APA, Harvard, Vancouver, ISO, and other styles
11

"Fast algorithms for sequence data searching." 1997. http://library.cuhk.edu.hk/record=b5889114.

Full text
Abstract:
by Sze-Kin Lam.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1997.
Includes bibliographical references (leaves 71-76).
Table of contents:
Abstract
Acknowledgement
1 Introduction
2 Related Work
  2.1 Sequence query processing
  2.2 Text sequence searching
  2.3 Numerical sequence searching
  2.4 Indexing schemes
3 Sequence Data Searching using the Projection Algorithm
  3.1 Sequence Similarity
  3.2 Searching Method
    3.2.1 Sequential Algorithm
    3.2.2 Projection Algorithm
  3.3 Handling Scaling Problem by the Projection Algorithm
4 Sequence Data Searching using Hashing Algorithm
  4.1 Sequence Similarity
  4.2 Hashing algorithm
    4.2.1 Motivation of the Algorithm
    4.2.2 Hashing Algorithm using dynamic hash function
    4.2.3 Handling Scaling Problem by the Hashing Algorithm
5 Comparisons between algorithms
  5.1 Performance comparison with the sequence searching algorithms
  5.2 Comparison between indexing structures
  5.3 Comparison between sequence searching algorithms in coping with some deficits
6 Performance Evaluation
  6.1 Performance Evaluation using Projection Algorithm
  6.2 Performance Evaluation using Hashing Algorithm
7 Conclusion
  7.1 Motivation of the thesis
    7.1.1 Insufficiency of Euclidean distance
    7.1.2 Insufficiency of orthonormal transforms
    7.1.3 Insufficiency of multi-dimensional indexing structure
  7.2 Major contribution
    7.2.1 Projection algorithm
    7.2.2 Hashing algorithm
  7.3 Future work
Bibliography
APA, Harvard, Vancouver, ISO, and other styles
12

Tsai, Tienwei, and 蔡殿偉. "Fast Content-Based Image Retrieval in Discrete Cosine Transform Domain." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/28015857696067434299.

Full text
Abstract:
Doctoral dissertation
Tatung University
Department of Computer Science and Engineering
Academic year 94 (2005)
With the vastly increasing number of digital images, the rapidly declining cost of storage, and the explosive growth of the Internet, content-based image retrieval (CBIR) has been intensively studied over the last decades. Although a number of image features based on color, texture, and shape attributes in various domains have been reported in the literature, selecting a good feature set for image classification remains a rigorous challenge. In this thesis, some well-known CBIR systems are reviewed and related issues in the retrieval strategy are addressed. Effective indexing and efficient retrieval are identified as the most important criteria in choosing the feature set. Our work focuses on the use of the discrete cosine transform (DCT) as a contribution to fast indexing and retrieval in a CBIR system. We first show an effective representation of images in the DCT domain. Then, to further improve retrieval speed, a two-stage approach based on the DCT is proposed. Since a character can be regarded as a gray-level image, the two-stage approach is also successfully applied to the recognition of Chinese characters. In addition, a set of weights is used to characterize the relative importance of the features in a query image, which plays an important role in the multiple passes of refining the retrieval. An intensive study of such flexible retrieval, called the fuzzy semantic information retrieval model, is realized in a bird-searching system. Finally, prospects for further work based on the findings of the study are given as a conclusion.
APA, Harvard, Vancouver, ISO, and other styles
13

Xiao, Sheng-wen, and 蕭聖文. "Feature Extraction of Visualized Genomic Sequences for Fast Database Retrieval." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/20071949675011369929.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Master's Program, Department of Electrical Engineering
Academic year 93 (2004)
Genomic signal processing has become a significant research area. Symbolic genomic sequences are translated into numerical sequences in different ways. For visualization purposes, DNA sequences can be mapped to a series of coordinates, so that a sequence can be represented by a 3-D curve depicted by the accumulated 3-D coordinates. In this thesis, we propose two methods to extract features of the 3-D curve, from which we construct a feature database of DNA sequences. For an unknown sequence, we can plot its 3-D curve and extract the corresponding features; by searching the feature database, an identical sequence (if one exists) or similar sequences can be efficiently retrieved. The visualized curve yields features based on the shape and twist points of the curve, and sequences are aligned using the NNIC and RDCSW algorithms. Using this method, we can decrease the size of the database and increase retrieval speed.
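The accumulated-coordinate idea in this abstract can be sketched as follows; the direction assigned to each nucleotide here is a hypothetical choice for illustration and may differ from the thesis's actual mapping:

```python
# Hypothetical direction assignment for illustration only; the
# thesis's exact nucleotide-to-coordinate mapping may differ.
STEPS = {
    'A': (1, 0, 0),
    'C': (0, 1, 0),
    'G': (0, 0, 1),
    'T': (-1, -1, -1),
}

def dna_to_curve(seq):
    """Accumulate per-nucleotide steps into a 3-D coordinate curve."""
    x = y = z = 0
    curve = [(0, 0, 0)]
    for base in seq:
        dx, dy, dz = STEPS[base]
        x, y, z = x + dx, y + dy, z + dz
        curve.append((x, y, z))
    return curve
```

Shape and twist-point features would then be computed from such a curve and stored in the feature database for retrieval.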
APA, Harvard, Vancouver, ISO, and other styles
14

Santos, Joaquim Miguel Nunes dos. "Accelerating Digitisation of Biological Collections for Fast Ecological Information Retrieval." Master's thesis, 2018. http://hdl.handle.net/10316/86223.

Full text
Abstract:
Master's dissertation in Ecology presented to the Faculty of Sciences and Technology
Herbaria are biological collections of preserved plants, algae, fungi and lichens used for scientific purposes. Fast communication and information exchange are fundamental to accelerating research on biodiversity. The major world herbaria are concentrating efforts on digitising their collections and making the information available online. Over the last decade, the Herbarium of the University of Coimbra (COI, its acronym in Index Herbariorum) has made efforts to make the information of its plant collection of c. 800,000 specimens available online. However, only c. 10% has been processed to date, in part due to the slowness of the methods generally used in herbaria. This work aims to accelerate the digitising process, both by improving digitising procedures and by allowing citizens to help populate the COI database. It also aims to accelerate the retrieval of ecologically valuable information from the database. To accomplish that, a new workflow was developed to automatically create records in the database from batches of digital images, a new user-friendly online catalogue was developed, and a collaborative platform was built to allow transcription of specimen labels from digital images in a web environment. It is demonstrated that this work provides a substantial increase in the number of digitised specimens and reduces the time to retrieve precise information so that it can be used not only by scientists but also by decision makers, stakeholders and the general public. Although collateral, there is a major and unique advantage to this project: the collaborative application can be used as a tool to make corrections to the catalogue, easily and directly online. This quickly improves the database, since such an effortless procedure encourages this kind of contribution.
APA, Harvard, Vancouver, ISO, and other styles
15

Chang, Yu-ruey, and 張育瑞. "Fast Cover Song Retrieval in AAC Domain based on Deep Learning." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/16552950178279839832.

Full text
Abstract:
Master's thesis
National Central University
Department of Communication Engineering
Academic year 104 (2015)
With the increase of multimedia data, it becomes more and more important to quickly search for items of interest in large databases. Keyword annotation is the traditional approach, but it requires a large amount of manual effort; as the size of the data increases, it becomes infeasible. Content-based retrieval is more natural: it extracts features from the music content to create a representation that avoids human labeling errors. This thesis focuses on AAC files, which are widely used by internet streaming sources. The proposed system directly maps the modified discrete cosine transform (MDCT) coefficients into a 12-dimensional chroma feature. We combine frames into a segment as the input to deep learning, which can automatically find more meaningful features of the music data. We also apply a sparse autoencoder to reduce the dimensionality of songs. With these efforts, significant matching time can be saved. The experimental results show that the proposed method reaches a mean reciprocal rank (MRR) of 0.505 and saves over 70% of matching time compared with conventional approaches.
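As a rough illustration of the chroma idea in this abstract, folding spectral energy into 12 pitch classes, here is a minimal sketch. The thesis works directly on MDCT coefficients; this toy version folds generic magnitude bins, and all function names are ours:

```python
import math

def bin_to_pitch_class(freq_hz, ref_a4=440.0):
    """Map a frequency in Hz to one of 12 chroma (pitch-class) bins,
    using the MIDI note number relative to A4 = 440 Hz."""
    midi = 69 + 12 * math.log2(freq_hz / ref_a4)
    return int(round(midi)) % 12

def chroma_vector(magnitudes, sample_rate, n_fft):
    """Fold spectral magnitudes (for bins 1..N) into a 12-dimensional
    chroma vector, normalized to be independent of overall loudness."""
    chroma = [0.0] * 12
    for k, mag in enumerate(magnitudes, start=1):
        freq = k * sample_rate / n_fft
        chroma[bin_to_pitch_class(freq)] += mag
    total = sum(chroma) or 1.0
    return [c / total for c in chroma]
```

Octave equivalence (440 Hz and 880 Hz both map to pitch class A) is what makes chroma features robust for cover song matching, where key and instrumentation change but the pitch-class content largely survives.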
APA, Harvard, Vancouver, ISO, and other styles
16

Tsai, Wei-Chang, and 蔡維昌. "Non-linear Motion Blurred Image Reconstruction based on Fast PSF Retrieval." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/7y5fg3.

Full text
Abstract:
Master's thesis
National Formosa University
Institute of Computer Science and Information Engineering
101
In everyday life, when people take photographs with any kind of camera, motion-blurred images often result from camera shake. Because the exposure occurs while the camera shakes, the resulting motion blur is often non-linear. Motion blur degrades image quality, as users of hand-held cameras frequently experience. For this reason, reconstructing a blurred image into a sharp image is the main objective of this thesis. In past studies, non-linear motion blur has been modeled as a point spread function (PSF), also called a blur kernel. This thesis first addresses the reconstruction of globally motion-blurred images caused by a single blur kernel. Second, the proposed method is extended to reconstruct images blurred by multiple kernels. However, reconstructing a motion-blurred image is an ill-posed problem. State-of-the-art motion blur estimation methods usually estimate the blur kernel recursively, which is quite time-consuming. To reduce the execution time, we propose a fast best-kernel retrieval algorithm based on an iterative phase retrieval method and the normalized sparsity measure, which finds the best kernel in a short computing time. Experimental results verify that the proposed method effectively reduces execution time, obtains the best motion blur kernel, and maintains high deblurring quality. Finally, the proposed algorithm is also applied to multiple-blur cases, with acceptable deblurring results.
APA, Harvard, Vancouver, ISO, and other styles
17

Grauman, Kristen, and Trevor Darrell. "Fast Contour Matching Using Approximate Earth Mover's Distance." 2003. http://hdl.handle.net/1721.1/30438.

Full text
Abstract:
Weighted graph matching is a good way to align a pair of shapes represented by a set of descriptive local features; the set of correspondences produced by the minimum cost of matching features from one shape to the features of the other often reveals how similar the two shapes are. However, due to the complexity of computing the exact minimum cost matching, previous algorithms could only run efficiently when using a limited number of features per shape, and could not scale to perform retrievals from large databases. We present a contour matching algorithm that quickly computes the minimum weight matching between sets of descriptive local features using a recently introduced low-distortion embedding of the Earth Mover's Distance (EMD) into a normed space. Given a novel embedded contour, the nearest neighbors in a database of embedded contours are retrieved in sublinear time via approximate nearest neighbors search. We demonstrate our shape matching method on databases of 10,000 images of human figures and 60,000 images of handwritten digits.
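The key idea above, reducing EMD to a distance in a normed space, can be illustrated in one dimension: for two histograms of equal total mass, the EMD equals the L1 distance between their cumulative sums. This is only a toy illustration of that reduction; the paper itself uses a low-distortion embedding for 2-D point sets.

```python
def emd_1d(h1, h2):
    """1-D Earth Mover's Distance between two histograms of equal total
    mass: the minimum work to morph h1 into h2 equals the L1 distance
    between their running (cumulative) sums."""
    c1, c2, total = 0.0, 0.0, 0.0
    for a, b in zip(h1, h2):
        c1 += a
        c2 += b
        total += abs(c1 - c2)
    return total

# Moving one unit of mass across two bins costs 2.
cost = emd_1d([1, 0, 0], [0, 0, 1])
```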
APA, Harvard, Vancouver, ISO, and other styles
18

Lai, Wei-Ta, and 賴威達. "A Study of Typhoon Satellite Image Database Fast Retrieval and Path Reconstruction." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/91190173381521897091.

Full text
Abstract:
Master's thesis
Tatung University
Department of Computer Science and Engineering
96
A Pacific typhoon is a tropical cyclone, a storm system that produces violent winds and flooding rains. Typhoons inflict terrible damage through thunderstorms, violent winds, torrential rain, flooding and extremely high tides. Improving early typhoon forecasting capability is a key to disaster prevention. In this thesis, we implement a system that allows the general public or meteorologists to examine the continuous or cumulative movement of past typhoons. To make this possible, we extract typhoon image features one by one and store them as descriptors in XML syntax; XML is heavily used as a format for document storage and sharing over the internet because of its characteristics. Furthermore, in order to present the typhoon image of the user's selection without transmitting the entire satellite image, we use block patterns to reconstruct the typhoon image, reducing transmission time. Many scholars have worked on locating typhoon centers and have developed reliable fast-search and path-reconstruction mechanisms for typhoons. Since typhoon location is an important influencing factor, we propose a method that applies the MPEG-7 edge histogram descriptor to extract texture features to enhance typhoon locating.
APA, Harvard, Vancouver, ISO, and other styles
19

Wu, Shien-Cheng, and 吳信誠. "A Fast Image Retrieval System Using High Order Fuzzy Statistics Model Parameters." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/88961391965509188780.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering
90
With the growing size of today's digital image databases, fast retrieval methods are mandatory. Though shape and color are the most popular features in many retrieval systems, other abstract representations can also extract additional useful information for this application. For example, quite a few high-order statistics methods have been used successfully in texture image classification. In this thesis, we present a fast retrieval system using a maximally simplified fuzzy parametric statistic representation and a color region representation, combined with a two-level matching strategy based on an input reference image. The proposed statistics model explores the spatial relationship among neighboring image pixels. Among the retrieved images, some are highly related to the input reference image even from a human point of view.
APA, Harvard, Vancouver, ISO, and other styles
20

Tu, Yu-Ming, and 凃昱銘. "Query-by-humming Retrieval of Songs Based on Fast Pitch Sequence Matching." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/4yepbe.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Institute of Computer and Communication Engineering
100
As concrete descriptions such as title, singer or lyrics cannot fully represent the abstract content of music, such as melody or emotion, it is often the case that people know what the song they want sounds like but cannot recall its title or lyrics. To overcome this problem, a promising solution is so-called query-by-singing/humming (QBSH), which allows users to retrieve a song by simply singing or humming a fragment of it. Although QBSH techniques have been studied for more than a decade, they are still far from popular in real applications. This thesis investigates a QBSH method that enables fast melody comparison. The basic idea is to measure the distances between note sequences in the frequency domain instead of the time domain. Thanks to the fast Fourier transform, we can convert note sequences of different lengths into equal-dimension vectors via zero padding. The equal dimensionality allows us to compare the vectors using Euclidean distance directly, which avoids time-consuming alignment between sequences. To take both efficiency and effectiveness into account, the proposed fast melody comparison method is combined with dynamic time warping into a two-stage sequence matching system. Our experiments on the MIREX 2006 database demonstrate the superiority of the proposed system over other existing systems.
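The frequency-domain comparison described above can be sketched as follows. The fixed dimension (64), the use of FFT magnitudes, and the MIDI-style pitch values are illustrative assumptions, not the thesis's exact configuration.

```python
import numpy as np

def note_seq_to_vector(notes, dim=64):
    """Zero-pad a variable-length note (pitch) sequence to a fixed length
    and take its FFT magnitude, yielding an equal-dimension vector that
    can be compared directly with Euclidean distance."""
    padded = np.zeros(dim)
    padded[:len(notes)] = notes
    return np.abs(np.fft.rfft(padded))

def melody_distance(query, candidate, dim=64):
    """Euclidean distance between the FFT-magnitude vectors of two note
    sequences, with no time-domain alignment required."""
    return float(np.linalg.norm(note_seq_to_vector(query, dim)
                                - note_seq_to_vector(candidate, dim)))

# Sequences of different lengths map to vectors of the same dimension.
v1 = note_seq_to_vector([60, 62, 64])
v2 = note_seq_to_vector([60, 62, 64, 65, 67])
```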
APA, Harvard, Vancouver, ISO, and other styles
21

Cakir, Fatih. "Online hashing for fast similarity search." Thesis, 2017. https://hdl.handle.net/2144/27360.

Full text
Abstract:
In this thesis, the problem of online adaptive hashing for fast similarity search is studied. Similarity search is a central problem in many computer vision applications. The ever-growing size of available data collections and the increasing usage of high-dimensional representations in describing data have increased the computational cost of performing similarity search, requiring search strategies that can explore such collections in an efficient and effective manner. One promising family of approaches is based on hashing, in which the goal is to map the data into the Hamming space where fast search mechanisms exist, while preserving the original neighborhood structure of the data. We first present a novel online hashing algorithm in which the hash mapping is updated in an iterative manner with streaming data. Being online, our method is amenable to variations of the data. Moreover, our formulation is orders of magnitude faster to train than state-of-the-art hashing solutions. Secondly, we propose an online supervised hashing framework in which the goal is to map data associated with similar labels to nearby binary representations. For this purpose, we utilize Error Correcting Output Codes (ECOCs) and consider an online boosting formulation in learning the hash mapping. Our formulation does not require any prior assumptions on the label space and is well-suited for expanding datasets that have new label inclusions. We also introduce a flexible framework that allows us to reduce hash table entry updates. This is critical, especially when frequent updates may occur as the hash table grows larger and larger. Thirdly, we propose a novel mutual information measure to efficiently infer the quality of a hash mapping and retrieval performance. This measure has lower complexity than standard retrieval metrics. 
With this measure, we first address a key challenge in online hashing that has often been ignored: the binary representations of the data must be recomputed to keep pace with updates to the hash mapping. Based on our novel mutual information measure, we propose an efficient quality measure for hash functions, and use it to determine when to update the hash table. Next, we show that this mutual information criterion can be used as an objective in learning hash functions, using gradient-based optimization. Experiments on image retrieval benchmarks confirm the effectiveness of our formulation, both in reducing hash table recomputations and in learning high-quality hash functions.
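The payoff of mapping data into Hamming space, as the abstract notes, is that comparisons reduce to bit operations. A minimal sketch of that search mechanism, not tied to any particular hashing method from the thesis; the table layout and radius are illustrative:

```python
def hamming(a, b):
    """Hamming distance between two binary codes stored as ints."""
    return bin(a ^ b).count("1")

def hash_lookup(query_code, table, radius=1):
    """Return the items whose binary code lies within `radius` bits of the
    query code -- the fast Hamming-space search hashing methods rely on."""
    return [item for code, item in table if hamming(query_code, code) <= radius]

table = [(0b1010, "cat"), (0b1011, "cat2"), (0b0101, "dog")]
matches = hash_lookup(0b1010, table)
```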
APA, Harvard, Vancouver, ISO, and other styles
22

Lee, Ping-Huang, and 李炳煌. "Design of An Efficient Object-based Image Retrieval System Using A Fast K-NNR Search Technique." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/10153426625814643471.

Full text
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Institute of Computer and Communication Engineering
90
With the advance of the Internet, the demand for storing multimedia information (such as text, image, audio, and video) has increased, and multimedia retrieval and search have become more and more important. Traditionally, textual features such as filenames, captions, and keywords have been used to annotate and retrieve images. Applied to a large database, keywords become not only cumbersome but also inadequate to represent image content. Therefore, many content-based image retrieval systems have been proposed to solve this problem. In this thesis, we propose an efficient object-based image retrieval method using a fast K-NNR search algorithm designed according to the triangle inequality principle. The computational complexity of traditional histogram-based image retrieval, which is also improved by the fast K-NNR search method, is high due to the use of high-dimensional histograms and the lack of an indexing structure. Furthermore, a new indexing structure for the proposed object-based image retrieval technique is also proposed in this study. Special attention is paid to object segmentation, which combines moment-preserving edge detection and region-growing techniques. Finally, an object-based similarity metric is also proposed for query processing. Experimental results show that the proposed image retrieval method is effective and superior to other methods in terms of overall computational complexity, and its performance can be sustained on very large image databases.
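The triangle-inequality principle behind such fast search can be sketched for the 1-NN case: distances from database items to a pivot are precomputed offline, and any candidate whose lower bound |d(q,p) − d(x,p)| already exceeds the current best is skipped without a full distance computation. This is a generic sketch of the principle, not the thesis's exact K-NNR algorithm.

```python
import math

def knnr_search(query, database, pivot):
    """1-NN search with triangle-inequality pruning. Since
    |d(q,p) - d(x,p)| <= d(q,x), a candidate whose bound exceeds the
    current best distance cannot be the nearest neighbour."""
    pivot_dists = [math.dist(x, pivot) for x in database]  # offline in practice
    dq = math.dist(query, pivot)
    best_idx, best_dist, skipped = -1, float("inf"), 0
    for i, x in enumerate(database):
        if abs(dq - pivot_dists[i]) >= best_dist:
            skipped += 1            # pruned by the lower bound
            continue
        d = math.dist(query, x)
        if d < best_dist:
            best_idx, best_dist = i, d
    return best_idx, best_dist, skipped

db = [(0.0, 0.0), (1.0, 1.0), (10.0, 10.0), (10.5, 9.5)]
idx, dist, skipped = knnr_search((0.9, 1.1), db, pivot=(0.0, 0.0))
```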
APA, Harvard, Vancouver, ISO, and other styles
23

Pesavento, Marius [Verfasser]. "Fast algorithms for multidimensional harmonic retrieval = Schnelle Algorithmen zur Erkennung von mehrdimensionalen Harmonischen / by Marius Pesavento." 2005. http://d-nb.info/975415328/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Smith, Nadia. "Air quality monitoring with polar-orbiting hyperspectral infrared sounders : a fast retrieval scheme for carbon monoxide." Thesis, 2014. http://hdl.handle.net/10210/12282.

Full text
Abstract:
D.Phil. (Geography)
The Infrared Atmospheric Sounding Interferometer (IASI), operational in polar orbit since 2006 on the European MetOp-A satellite, is the most advanced instrument of its kind in space. It has been designed to provide soundings of the troposphere and lower stratosphere at nadir at a spectral interval of 0.25 cm⁻¹ across the range 645–2760 cm⁻¹. Fine spectral sampling such as this is imperative in the sounding of trace gases. Since its launch, routine retrievals of greenhouse species from IASI measurements have made a valuable contribution to atmospheric chemistry studies at a global scale. The main contribution of this thesis is the development of a new trace gas retrieval scheme for IASI measurements. The goal was to improve on the global operational scheme in terms of algorithm complexity, speed of calculation and the spatial resolution achieved in the final solution. This scheme directly retrieves column-integrated trace gas densities at single field-of-view (FOV) from IASI measurements within a 10% accuracy limit. The scheme is built on the Bayesian framework of probability and based on the assumption that the inversion of total column values, as opposed to gas profiles, is a near-linear problem. Performance of the retrieval scheme is demonstrated on simulated noisy measurements for carbon monoxide (CO). Being a linear solution, the scheme is highly dependent on the accuracy of the a priori. A statistical estimate of the a priori was computed using a principal component regression analysis with 50 eigenvectors; the corresponding root-mean-square (RMS) error of the a priori was calculated to be 9.3%. In general terms, the physical retrieval improved on the a priori, and sensitivity studies were performed to demonstrate the accuracy and stability of the retrieval scheme under a number of perturbations. A full system characterization and error analysis is additionally performed to elucidate the nature of this complex problem.
The hyperspectral IASI measurements introduce a significant correlation error in the retrieval. The Absorption Line Cluster (ALC) channel selection method was developed in this thesis to address the correlation error explicitly. When a first-neighbour correlation factor of 0.71 is assumed in the measurement error covariance for the clusters of ALC channels, most of the correlation error is removed in the retrieval. In conclusion, the total column trace gas retrieval scheme developed here is fast, simple, intuitive, transparent and robust. These characteristics together make it highly suitable for implementation in an operational environment intended for air quality monitoring on a regional scale.
APA, Harvard, Vancouver, ISO, and other styles
25

Yen, Shuo-Fu, and 顏碩甫. "A Fast Cloud Large-Scale Image Retrieval System Using Weighted-Inverted Index and Database Filtering Algorithm." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/n5r2t2.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
104
With the advance of multimedia technology and communications, images and videos have become the major streaming information on the Internet. How to quickly and precisely retrieve similar images from Internet-scale image/video databases (big data) is the most important retrieval target. In this thesis, a cloud-based content-based image retrieval (CBIR) scheme is presented. To speed up feature matching for large-scale CBIR, we propose Database Categorizing based on a Weighted Inverted Index (DCWII) and a Database Filtering Algorithm (DFA). The DCWII assigns weights to DCT coefficient histograms and categorizes the database by weighted features, while the DFA filters out irrelevant images to reduce unnecessary computation during feature matching. Experiments show that the proposed CBIR scheme outperforms previous works in precision-recall performance and maintains a mean average precision (mAP) of about 0.678 on a large-scale database of one million images. Our scheme also reduces retrieval time by about 55%–70% by pre-filtering the database, which improves the efficiency of the retrieval system.
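The database-filtering idea can be sketched with a plain inverted index over quantized features: only images sharing enough feature buckets with the query survive to the expensive matching stage. The weighting scheme of the actual DCWII is omitted; the names and the threshold here are illustrative assumptions.

```python
from collections import defaultdict

def build_inverted_index(image_features):
    """Offline: map each quantized feature bucket to the set of images
    that contain it."""
    index = defaultdict(set)
    for img_id, feats in image_features.items():
        for f in feats:
            index[f].add(img_id)
    return index

def filter_candidates(index, query_feats, min_shared=2):
    """Online: count shared buckets per image and keep only images meeting
    the threshold, so full feature matching runs on a small candidate set."""
    hits = defaultdict(int)
    for f in query_feats:
        for img_id in index.get(f, ()):
            hits[img_id] += 1
    return {i for i, c in hits.items() if c >= min_shared}

db = {"a": {1, 5, 9}, "b": {2, 5, 7}, "c": {1, 5, 7}, "d": {3, 8}}
candidates = filter_candidates(build_inverted_index(db), {1, 5, 7})
```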
APA, Harvard, Vancouver, ISO, and other styles
26

Huang, Xin. "Retrieval of Non-Spherical Dust Aerosol Properties from Satellite Observations." Thesis, 2013. http://hdl.handle.net/1969.1/151193.

Full text
Abstract:
An accurate and generalized global retrieval algorithm from satellite observations is a prerequisite to understanding the radiative effect of atmospheric aerosols on the climate system. Current operational aerosol retrieval algorithms are limited by their inversion schemes and suffer from the non-uniqueness problem. To address these issues, a new algorithm is developed for the retrieval of non-spherical dust aerosol over land using multi-angular radiance and polarized measurements from POLDER (POLarization and Directionality of the Earth's Reflectances) and wide-spectral high-resolution measurements from MODIS (MODerate resolution Imaging Spectro-radiometer). As the first step to account for the non-sphericity of irregularly shaped dust aerosols in the light scattering problem, the spheroidal model is introduced. To solve the basic electromagnetic wave scattering problem for a single spheroid, we developed an algorithm, transforming the transcendental infinite-continued-fraction-form eigen equation into a symmetric tri-diagonal linear system, for the calculation of the spheroidal angle function, the radial functions of the first and second kind, and the corresponding first-order derivatives. A database is subsequently developed to calculate the bulk scattering properties of dust aerosols for each channel of the satellite instruments. To simulate satellite observations, a code is developed to solve the vector radiative transfer equation (VRTE) for the coupled atmosphere-surface system using the adding-doubling technique. An alternative fast algorithm, in which all solid-angle integrals are converted to summations on an icosahedral grid, is also proposed to speed up the code. To make the model applicable to various land and ocean surfaces, a surface BRDF (Bidirectional Reflectance Distribution Function) library is embedded in the code.
Considering the complementary features of MODIS and POLDER, the collocated measurements of the two satellites are used in the retrieval process. To reduce the time spent simulating dust aerosol scattering properties, a single-scattering property database of tri-axial ellipsoids is incorporated. In addition, atmospheric molecular correction is performed using the LBLRTM (Line-By-Line Radiative Transfer Model). The Levenberg-Marquardt method is employed to retrieve all dust aerosol parameters and surface parameters of interest simultaneously. As an example, dust aerosol properties retrieved over the Sahara Desert are presented.
APA, Harvard, Vancouver, ISO, and other styles
27

Hsieh, Shang-Wei, and 謝尚偉. "An Information Retrieval System for Fast Exploration of Proprietary Experimental Data via Searching and Mining the Biomedical Entities in Related Public Literatures." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/48936134422468587956.

Full text
Abstract:
Master's thesis
National Taiwan University
Institute of Engineering Science and Ocean Engineering
102
At the beginning of biomedical research work, mapping researchers' proprietary experimental data to public research literature is an important task. In this thesis, a search engine is proposed to efficiently retrieve large-scale biomedical literature collected from PubMed. Moreover, we apply a named entity recognition tool, a text-mining technique, to extract protein names from the biomedical literature. The protein names are then normalized to IDs that can be linked to the researchers' proprietary experiment databases, and web techniques automatically plot charts for the relevant proprietary data. Through these processes, researchers can efficiently relate their proprietary data to public papers, which also helps them find more related research works.
APA, Harvard, Vancouver, ISO, and other styles
28

Wang, Chenxi. "Investigation of Thin Cirrus Cloud Optical and Microphysical Properties on the Basis of Satellite Observations and Fast Radiative Transfer Models." Thesis, 2013. http://hdl.handle.net/1969.1/151213.

Full text
Abstract:
This dissertation focuses on the global investigation of optically thin cirrus cloud optical thickness (tau) and microphysical properties, such as effective particle size (D_(eff)) and ice crystal habits (shapes), based on global satellite observations and fast radiative transfer models (RTMs). In the first part, we develop two computationally efficient RTMs that simulate satellite observations under cloudy-sky conditions in the visible/shortwave infrared (VIS/SWIR) and thermal infrared (IR) spectral regions, respectively. To mitigate the computational burden associated with absorption, thermal emission and multiple scattering, we generate pre-computed lookup tables (LUTs) using two rigorous models, i.e., the line-by-line radiative transfer model (LBLRTM) and the discrete ordinates radiative transfer model (DISORT). The second part introduces two methods (VIS/SWIR- and IR-based) to retrieve tau and D_(eff) from satellite observations in the corresponding spectral regions of the two RTMs. We discuss the advantages and weaknesses of the two methods by estimating the impact of different error sources on the retrievals through sensitivity studies. Finally, we develop a new method to infer the scattering phase functions of optically thin cirrus clouds in a water vapor absorption channel (1.38 µm). We estimate ice crystal habits and surface structures by comparing the inferred scattering phase functions with numerically simulated phase functions calculated using idealized habits.
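The LUT strategy described above trades precomputation for per-pixel speed: rigorous model runs fill a grid once, and retrieval-time radiances are interpolated from it. A minimal 1-D sketch with hypothetical values (the real tables span several dimensions and are filled by LBLRTM/DISORT runs):

```python
import numpy as np

def lut_interpolate(lut, tau_grid, tau):
    """Linear interpolation into a precomputed radiance lookup table over
    optical thickness -- the basic trick fast RTMs use instead of running
    a rigorous solver per pixel. Values here are illustrative only."""
    return float(np.interp(tau, tau_grid, lut))

tau_grid = np.array([0.0, 0.5, 1.0, 2.0])
lut = np.array([10.0, 8.0, 7.0, 6.5])   # hypothetical radiances
radiance = lut_interpolate(lut, tau_grid, 0.25)
```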
APA, Harvard, Vancouver, ISO, and other styles
29

Lin, Ci-Jie, and 林祺傑. "Aspect Retrieval and Integration for News Fact." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/46225273449222520120.

Full text
Abstract:
Master's thesis
National Taiwan Normal University
Department of Computer Science and Information Engineering
104
The Internet speeds up the flow of information, and in recent years online news media have replaced traditional newspapers and magazines. However, users must spend much time and effort to extract exact factual information from news documents, because documents collected from different news media have similar content yet may each provide additional facts. To solve this problem, we propose a method to automatically extract and integrate factual information from news documents. Candidate fact sentences are picked out by extracting topic keywords from news contents; various features of the candidate sentences are then used in classification to identify the fact sentences. To provide fact information, triples consisting of a facet term, a relation term, and a description term are extracted by applying a natural language tool to the topic sentences. The similarity of facet terms between triples is then used to cluster the extracted triples by agglomerative hierarchical clustering. For each cluster, an incremental method combines each pair of triples that have similar facet or description terms, yielding integrated fact information. Performance evaluation shows that the fact sentence extraction, triple extraction and triple combination methods all perform well. The proposed approach effectively integrates facet information from different news documents, providing users a comprehensive understanding of the news.
APA, Harvard, Vancouver, ISO, and other styles
30

Wu, Ying-Hui, and 吳盈慧. "Unsupervised Fact-checking Retrieval Model : A Real Case Study." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/egw43g.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
107
With the rise of the Internet and social media, the way people access information has entirely changed: people now deliver and receive messages through online platforms anytime and anywhere. However, this convenience also causes severe problems with fake news and the rapid spread of misinformation, and such transmission is harmful to society. While inaccurate health-care tips and rumors trouble personal lives, misinterpreted assertions and fabricated claims obstruct communication about public issues, leading to national security risks. In order to reduce the spread of fake news on the Internet and messaging platforms, this study set out to characterize fact-checking and to design a highly accurate unsupervised information retrieval system that can be applied in practice. The dataset comes from two major fact-checking organizations in Taiwan, "Cofacts" and the "Taiwan Fact Check Center". The study used a word embedding model for query expansion, and optimized Chinese text segmentation and keyword weighting by applying named entity recognition to Wikipedia titles. The final fact-checking retrieval model was developed from Okapi BM25, word embeddings and named-entity-based keyword weighting. After experiments and parameter optimization, the results show that the mixture model performs better and the design is practical for real cases.
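A minimal sketch of the Okapi BM25 base ranker the model builds on (the standard formulation; the thesis's embedding-based query expansion and NER keyword weighting are not shown, and the toy documents are illustrative):

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized document against the query with Okapi BM25:
    term-frequency saturation via k1, document-length normalization via b."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    dfs = Counter(t for d in docs for t in set(d))   # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - dfs[t] + 0.5) / (dfs[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [["vaccine", "causes", "illness", "claim"],
        ["weather", "report", "today"],
        ["vaccine", "claim", "fact", "check", "vaccine"]]
scores = bm25_scores(["vaccine", "claim"], docs)
```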
APA, Harvard, Vancouver, ISO, and other styles
31

Walles, Rena L. "Effects of web-based tutoring software on math test performance : a look at gender, math-fact retrieval ability, spatial ability and type of help." 2005. https://scholarworks.umass.edu/theses/2425.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Cuello, Eliana Marysel. "Algoritmos y análisis de imágenes no convencionales de rayos X." Bachelor's thesis, 2011. http://hdl.handle.net/11086/72.

Full text
Abstract:
Thesis (Licentiate in Physics), Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía y Física, 2011.
This work studies different analyzer-based X-ray image reconstruction algorithms, with the goal of separating the effects of absorption, refraction and ultra-small-angle scattering produced by the interaction of the beam with the sample. These methods were applied to images, measured at the Brazilian Synchrotron Light Laboratory in Campinas, Brazil, of a sample specially designed to enhance the aforementioned interaction effects. The results were analyzed qualitatively and quantitatively, showing that, depending on the enhanced effect, one method is more efficient than the others.
Eliana Marysel Cuello.
APA, Harvard, Vancouver, ISO, and other styles