Dissertations on the topic "Extraction de dates"
Browse the top 19 dissertations for research on the topic "Extraction de dates".
Al-Jasser, Mohammed S. "The feasibility of date processing Phoenix dactylifera L. var Sufri components using physical and pectolytic enzyme treatments." Thesis, Loughborough University, 1990. https://dspace.lboro.ac.uk/2134/6928.
Poulain d'Andecy, Vincent. "Système à connaissance incrémentale pour la compréhension de document et la détection de fraude." Thesis, La Rochelle, 2021. http://www.theses.fr/2021LAROS025.
Document understanding is the artificial-intelligence capability that lets machines read documents. In a global vision, it aims at understanding the document's function and class; in a more local vision, at understanding specific details such as entities. The scientific challenge is to recognize more than 90% of the data, while the industrial challenge requires this performance with the least human effort to train the machine. This thesis argues that incremental learning methods can meet both challenges. The proposed approaches enable efficient iterative training with very few document samples. For the classification task, we demonstrate (1) continual learning of textual descriptors, (2) the benefit of the discourse sequence, and (3) the benefit of integrating a "souvenir" of a few samples into the knowledge model. For the data extraction task, we demonstrate an iterative structural model, based on a star-graph representation, enhanced by embedding a small amount of a priori knowledge. Aware of the economic and societal impact of document fraud, the thesis addresses this issue as well; our modest contribution is a study of the different fraud categories to open further research. This research was carried out in a non-classic framework, in conjunction with industrial activities at Yooz and collaborative research projects such as the FEDER Securdoc project supported by the région Nouvelle Aquitaine and the Labcom IDEAS supported by the ANR.
Weingessel, Andreas, Martin Natter, and Kurt Hornik. "Using independent component analysis for feature extraction and multivariate data projection." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 1998. http://epub.wu.ac.at/1424/1/document.pdf.
Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
Akasha, Ibrahim Abdurrhman Mohamed. "Extraction and characterisation of protein fraction from date palm (Phoenix dactylifera L.) seeds." Thesis, Heriot-Watt University, 2014. http://hdl.handle.net/10399/2771.
Al Bulushi, Karima. "Supercritical CO2 extraction of waxes from date palm (Phoenix dactylifera) leaves : optimisation, characterisation, and applications." Thesis, University of York, 2018. http://etheses.whiterose.ac.uk/21257/.
Bosch Vicente, Juan José. "From heuristics-based to data-driven audio melody extraction." Doctoral thesis, Universitat Pompeu Fabra, 2017. http://hdl.handle.net/10803/404678.
Повний текст джерелаLa identificación de la melodía en una grabación musical es una tarea relativamente fácil para seres humanos, pero muy difícil para sistemas computacionales. Esta tarea se conoce como "extracción de melodía", más formalmente definida como la estimación automática de la secuencia de alturas correspondientes a la melodía de una grabación de música polifónica. Esta tesis investiga los beneficios de utilizar conocimiento derivado automáticamente de datos para extracción de melodía, combinando procesado digital de la señal y métodos de aprendizaje automático. Ampliamos el alcance de la investigación en este campo, al trabajar con un conjunto de datos variado y múltiples definiciones de melodía. En primer lugar presentamos un extenso análisis comparativo del estado de la cuestión y realizamos una evaluación en un contexto de música sinfónica. A continuación, proponemos métodos de extracción de melodía basados en modelos de fuente-filtro y la caracterización de contornos tonales, y los evaluamos en varios géneros musicales. Finalmente, investigamos la caracterización de contornos con información de timbre, tonalidad y posición espacial, y proponemos un método para la estimación de múltiples líneas melódicas. La combinación de enfoques supervisados y no supervisados lleva a mejoras en la extracción de melodía y muestra un camino prometedor para futuras investigaciones y aplicaciones.
Rosenthal, Paul, Vladimir Molchanov, and Lars Linsen. "A Narrow Band Level Set Method for Surface Extraction from Unstructured Point-based Volume Data." Universitätsbibliothek Chemnitz, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-70373.
López Massaguer, Oriol. "Development of informatic tools for extracting biomedical data from open and propietary data sources with predictive purposes." Doctoral thesis, Universitat Pompeu Fabra, 2017. http://hdl.handle.net/10803/471540.
We developed new software tools to obtain information from public and private data sources in order to build in silico toxicity models. The first of these tools is Collector, an open-source application that generates "QSAR-ready" series of compounds annotated with bioactivities, extracting the data from the Open PHACTS platform using semantic web technologies. Collector was applied in the framework of the eTOX project to develop predictive models for toxicity endpoints. Additionally, we conceived, designed, implemented, and tested a method to derive toxicity scorings suitable for predictive modelling, starting from in vivo preclinical repeated-dose studies generated by the pharmaceutical industry. This approach was tested by generating scorings for three hepatotoxicity endpoints: "degenerative lesions", "inflammatory liver changes", and "non-neoplasic proliferative lesions". The suitability of these scores was assessed by comparing them with experimentally obtained point-of-departure doses, as well as by developing tentative QSAR models, with acceptable results. Our method relies on ontology-based inference to extract information from ontology-annotated data stored in a relational database. The method as a whole can be applied to other preclinical toxicity databases to generate toxicity scorings, and the ontology-based inference method on its own is applicable to any relational database annotated with ontologies.
Luis, Peña Christian Jair. "Diseño de la arquitectura de un extractor de endmembers de imágenes hiperespectrales sobre un FPGA en tiempo real." Bachelor's thesis, Pontificia Universidad Católica del Perú, 2018. http://tesis.pucp.edu.pe/repositorio/handle/123456789/13046.
Danilova, Vera. "Linguistic support for protest event data collection." Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/374232.
This thesis addresses the quality of automatic protest event data collection and proposes tools for multilingual protest feature extraction to improve the quality of the unit of analysis. The work includes a survey of the state of the art in protest event data collection and multilingual event extraction. In the absence of a multilingual training dataset for supervised learning, we focus on a rule-based approach to multilingual event extraction connected to a domain concept hierarchy. Grammars and gazetteers were elaborated in accordance with the standards of GATE 8.0, and the protest event hierarchy was formalized using Protégé 4.3. The present work contributes to automatic protest event data collection and coding in the following ways: construction of a multilingual corpus of texts related to protest events; a formalized description of the protest event concept based on a detailed examination of a multilingual corpus of news headlines (Bulgarian, French, Polish, Russian, Spanish, Swedish); and elaboration of generic patterns and gazetteers for multilingual text processing, which compensates for the absence of a multilingual training set. The resulting data can be applied, among other uses, to the monitoring and analysis of event-specific social network responses.
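The pattern-plus-gazetteer approach described in this abstract can be sketched in a few lines. The following is a toy illustration in Python only; the actual work uses JAPE grammars and gazetteers inside GATE 8.0 across six languages, and all term lists and patterns below are invented for the example:

```python
import re

# Hypothetical gazetteers: in the thesis these are curated multilingual lists.
EVENT_TERMS = {"strike", "protest", "rally", "demonstration"}
ACTOR_TERMS = {"workers", "students", "farmers", "nurses"}

# Hypothetical headline pattern: <actor> stage/hold/join <event>
PATTERN = re.compile(r"(\w+)\s+(?:stage|hold|join)\s+(?:a\s+)?(\w+)", re.I)

def match_protest_headline(headline):
    """Return {'actor': ..., 'event': ...} if both slots hit a gazetteer."""
    m = PATTERN.search(headline)
    if not m:
        return None
    actor, event = m.group(1).lower(), m.group(2).lower()
    if actor in ACTOR_TERMS and event in EVENT_TERMS:
        return {"actor": actor, "event": event}
    return None

print(match_protest_headline("Nurses stage strike over pay"))
# {'actor': 'nurses', 'event': 'strike'}
```

The combination of a shallow syntactic pattern with domain gazetteers is what lets a rule-based system work without a labelled multilingual training set: porting to a new language mainly means swapping the gazetteers and surface patterns.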
Basurto Contreras, César Marino. "Modelo “Cebaco” aplicado al control de procesos en el circuito de Molienda-Clasificación en una planta concentradora de minerales mediante el software LabVIEW." Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2011. http://cybertesis.unmsm.edu.pe/handle/cybertesis/377.
Process control in a concentrator plant poses an intractable problem from the moment the raw ore is fed into the mill: the process is continuous, and that continuity stops only when the product finally comes to rest in the concentrate storage areas and the tailings ponds. The material in process cannot be weighed without interrupting this continuity; consequently, plant control depends largely on adequate sampling of the treated material, which is handled as pulp. From these samples the essential information for analysis is obtained: metal content, particle-size distribution, and water content or other components of the mineral pulp. With this information at hand, the efficiency and effectiveness of the operation can be calculated by means of formulas and tabulations. But because gathering this information is tedious, above all because of the delay of laboratory tests, decisions to improve the production process cannot be taken immediately.
The present work proposes a process-control methodology called "Cebaco", which consists of three main parts. The first is the proposed control method for the operating variables of the grinding-classification circuit of a concentrator plant, based on wet sieving at the sampling site itself. Only two sieves are needed, No. 60 and No. 200 being the most advisable, together with a densimeter (Marcy balance) with its nomogram and a one-litre pulp container. Knowing the specific gravity of the mineral and the pulp density, the percentage of solids in each stream of the classifier can be calculated immediately, and from it the cumulative passing and retained percentages on the sieves mentioned. With these data and the Gaudin-Schuhmann [64] and Rosin-Rammler [61] mathematical models, as used in mineral processing, the particle-size profiles are obtained almost instantly. The second contribution is a proposed mathematical model based on the pulp density measured in each stream of the classifier and on the densities obtained from the wet sieving on those same sieves. The third is the development of the software Goliat 0.2 in the LabVIEW programming language, implementing the proposed mathematical models so that the results can be visualized on a computer. Application and verification tests were carried out at the concentrator plants of "Austria Duvaz" in Morococha, "Corona" of Chumpe in Yauricocha, Yauyos, and "Huari", located at the UNCP. The results were very encouraging, agreeing almost entirely with the proposed hypothesis.
As a demonstration that the methodology works well both in process control and in simulation, the following results can be verified. The cut size (d50) calculated [51] with the data taken from the book of Juan Rivera Zevallos [58], pages 307-324, is 77.31 microns with a classification efficiency of 42.32; the calculations with the proposed methodology give a d50 of 77.32 microns and a classification efficiency of 42.71, showing that there is practically no difference. In the second case, comparing results from data collected at the "Chumpe" concentrator plant in Yauricocha, the traditional methodology, as used in most plants, gives a d50 of 81.31 microns and a classification efficiency of 57.93%, while the "Cebaco" methodology gives a d50 of 83.56 microns and a classification efficiency of 53.18%; here too the difference is not significant, with a significance level of 0.08 and an agreement of about 92%. In the third case, evaluating another test from the same plant, the traditional method gives a classification efficiency of 48.49% and the "Cebaco" methodology about 48.02%, again nearly identical. In the fourth case, evaluating the results obtained at the "Austria Duvaz" concentrator plant in Morococha, the d50 calculated by the traditional methodology is 88.3061 microns and by the "Cebaco" methodology 85.2559 microns, a difference of only 3.45,
while the classification efficiency is 51.8102 by the traditional methodology and 51.6255 by the "Cebaco" methodology, an insignificant difference of 0.36; the circulating-load ratio is 1.903 by the traditional methodology and 1.887 by the "Cebaco" methodology, a difference of only 0.84%. The most important demonstration that the "Cebaco" method is a suitable proposal is the increase in profit for the concentrator plant, obtained from valuations of the concentrates in both cases: one with the metallurgical balance obtained by the traditional method and the other with the metallurgical balance obtained from the work carried out with the proposed method. This is due to a shorter control time: with the traditional method it is 4 hours in the best cases, whereas with the proposed method it is only 5 minutes, so corrections are made immediately, which improves the grades and recoveries of the concentrates and therefore increases productivity. According to the calculations, the proposed control method yields a profit of 11.53 US dollars more per ton of mineral treated.
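The density-to-percent-solids step at the heart of the method follows the standard two-phase mixing rule for a pulp of solids and water. A minimal sketch, with function name and units our own rather than the thesis's:

```python
def percent_solids(pulp_density, solids_sg, water_density=1000.0):
    """Weight-% solids in a pulp from its measured density (kg/m^3)
    and the solids' specific gravity, via the two-phase mixing rule:
    Cw = rho_s * (rho_p - rho_w) / (rho_p * (rho_s - rho_w))."""
    rho_s = solids_sg * water_density
    return 100.0 * rho_s * (pulp_density - water_density) / (
        pulp_density * (rho_s - water_density))

# e.g. a Marcy-balance reading of 1400 kg/m^3 for an ore of SG 2.65:
print(round(percent_solids(1400.0, 2.65), 1))  # 45.9
```

This is the reading a Marcy balance nomogram gives directly; computing it per classifier stream is what lets the size profiles be updated in minutes rather than waiting hours for laboratory results.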
Diedhiou, Djibril. "Fractionnement analytique de la graine de neem (Azadirachta indica A. Juss.) et de la graine de dattier du désert (Balanites aegyptiaca L.) - Valorisation des constituants de la graine de neem par bioraffinage." Thesis, Toulouse, INPT, 2017. http://www.theses.fr/2017INPT0135/document.
Neem and desert date seeds were characterized and perspectives for their fractionation outlined. A process for fractionating neem seeds in a twin-screw extruder was studied with a view to the production and integrated valorization of its fractions: oil; a co-extract of azadirachtin, proteins, and lipids; and the extrusion raffinate. Using water and water/ethanol mixtures (up to 75% ethanol) with a twin-screw configuration defining four zones (a feed zone, a grinding zone, a solid-liquid extraction zone, and a solid/liquid separation zone) makes it possible to recover in the filtrate 83-86% of the azadirachtin, 86-92% of the lipids, and 44-74% of the proteins of the seed, producing an essentially fibrous raffinate containing at most 8% lipids, 12% proteins, and 0.82 g/kg azadirachtin. One of the best ways of processing the suspension constituted by the crude filtrate is solid-liquid separation by centrifugation. This separation yields a dilute emulsion containing 42-64% of the lipids and up to 41% of the proteins of the seed. Centrifugation achieves it effectively, but can be a drawback when treating large volumes. Considered as a by-product of the treatment of the crude filtrate, the insoluble phase can contain 42-64% of the lipids, 32.9-47% of the proteins, and 10-13% of the azadirachtin of the seed. Water proved to be the best solvent in this fractionation process. Pressing the neem seeds followed by aqueous or hydroalcoholic extraction in the same twin-screw extruder makes it possible to extract up to 32% of the seed oil and to recover 20% of it in clear form, with very little azadirachtin, while ensuring better extraction yields of azadirachtin and proteins in the crude filtrate. Two treatment pathways for the filtrates were studied: one leading to an azadirachtin emulsion and the other to a freeze-dried azadirachtin powder.
The valorization of the fibrous extrusion raffinate was oriented towards the production of agromaterials by thermopressing. A biorefinery scheme for the neem seed, for the valorization of its constituents, has thus been implemented.
Şentürk, Sertan. "Computational analysis of audio recordings and music scores for the description and discovery of Ottoman-Turkish Makam music." Doctoral thesis, Universitat Pompeu Fabra, 2017. http://hdl.handle.net/10803/402102.
This thesis addresses several limitations of state-of-the-art methodologies in music information retrieval (MIR). In particular, it proposes several computational methods for the automatic analysis and description of music scores and audio recordings of Ottoman-Turkish makam music (OTMM). The main contributions of the thesis are the music corpus created to carry out the research and the audio-score alignment methodology developed for the analysis of the corpus. In addition, several novel methodologies are presented for computational analysis in the context of common MIR tasks relevant to OTMM, such as predominant melody extraction, tonic identification, tempo estimation, makam recognition, tuning analysis, structural analysis, and melodic progression analysis. These methodologies form the building blocks of Dunya-makam, a complete system for the exploration of large OTMM corpora. The thesis first presents the CompMusic Ottoman-Turkish makam music corpus, which includes 2200 scores, more than 6500 audio recordings, and the corresponding metadata. The data have been collected, annotated, and curated with the help of experts. Using criteria such as completeness, coverage, and quality, we validate the corpus and show its research potential; indeed, it constitutes the largest and most representative resource available for computational research on OTMM. Several experimental datasets have also been created from the corpus in order to develop and evaluate the specific methodologies proposed for the different computational tasks addressed in the thesis. The part devoted to score analysis focuses on structural analysis at the section and phrase level.
Phrase boundaries are identified automatically using one of the existing state-of-the-art segmentation methods. Section boundaries are extracted using a heuristic specific to the score format. A newly devised method based on graph analysis is then used to establish similarities across these structural elements in terms of melody and lyrics, and to label relations semiotically. The audio analysis part of the thesis reviews the state of the art in the analysis of melodic aspects of OTMM recordings. Adaptations of existing predominant melody extraction methods are proposed to fit OTMM, and improvements are presented both for pitch-distribution-based tonic identification and for makam recognition. The audio-score alignment methodology constitutes the core of the thesis. It addresses the culture-specific challenges posed by the musical characteristics, the music-theoretical representations, and the oral praxis of OTMM. Based on several techniques such as subsequence dynamic time warping, the Hough transform, and variable-length Markov models, the alignment methodology is designed to handle the structural differences between scores and audio recordings. The method is robust to unannotated melodic expressions, tempo deviations in the recordings, and differences in tonic and tuning. The methodology uses the results of the score and audio analysis to link the audio and the symbolic data, and the alignment is further used to obtain a score-informed description of the audio recordings.
Score-informed audio analysis not only simplifies audio feature extraction steps that would otherwise require sophisticated audio processing, but also substantially improves performance compared with state-of-the-art methods relying only on audio data. The analytical methodologies presented in the thesis are applied to the CompMusic Ottoman-Turkish makam music corpus and integrated into a web application dedicated to culture-aware music discovery. Some of the methodologies have already been applied to other music traditions, such as Hindustani, Carnatic, and Greek music. Following open-research best practices, all the created data, software tools, and analysis results are publicly available. The methodologies, the tools, and the corpus itself provide vast opportunities for future research in fields such as music information retrieval, computational musicology, and music education.
This thesis addresses several shortcomings in the current state of Music Information Retrieval (MIR) methodologies. In particular, it proposes several strategies to automatically analyze and describe music scores and recordings of performances of Ottoman-Turkish makam music (OTMM). The main contributions of the thesis are the music corpora created in the context of the thesis to carry out the research, and the audio-to-score alignment methodology developed to analyze these corpora. In addition, the thesis presents several novel computational analysis methodologies for OTMM covering the most common MIR tasks. Examples of these tasks are predominant melody extraction, tonic identification, tempo estimation, makam recognition, tuning analysis, structural analysis, and melodic progression analysis. This set of methodologies forms part of the Dunya-makam system for the exploration of large OTMM corpora. First, the thesis presents the CompMusic Ottoman-Turkish makam music corpus. It includes 2200 music scores, more than 6500 audio recordings, and accompanying metadata. The data were gathered and annotated with the help of experts in this musical repertoire. The corpus has been validated in terms of completeness, coverage, and quality, and its research potential is demonstrated. In fact, this corpus is the largest and most representative resource of OTMM available for computational research. Several data subsets have also been created for the development and evaluation of the specific methodologies proposed for the computational tasks presented in the thesis. The part of the thesis dealing with music-score analysis focuses on structural analysis at the section and phrase level.
Phrase boundaries are automatically identified using a state-of-the-art segmentation methodology. Section boundaries are extracted using a set of heuristic rules determined by the format of the music scores. Subsequently, a novel graph-based analysis method is used to establish similarities among these structural elements in terms of melody and lyrics, and to label the existing semiotic relations. The next part of the thesis deals with audio analysis and, in particular, reviews state-of-the-art technologies for the analysis of melodic aspects of OTMM. Adaptations of existing melody-extraction methods tailored to OTMM are proposed. Improvements to pitch-distribution-based makam recognition and tonic identification methodologies are also presented. The audio-to-score alignment methodology is the core of the thesis. It addresses the culture-specific challenges posed by the particular musical characteristics, music-theory representations, and oral practice of OTMM. Using several techniques such as Dynamic Time Warping, the Hough transform, and variable-duration Markov models, the alignment methodology is designed to cope with the structural differences between music scores and audio recordings. The method is robust even in the presence of musical expressions not notated in the score, tempo deviations occurring in the performances, and differences in tonic and tuning. The methodology exploits the results of the score and audio analyses to link the symbolic information with the audio. In addition, the alignment technique is used to obtain score-informed descriptions of the audio.
Score-informed audio analysis not only simplifies audio feature-extraction steps that would otherwise require sophisticated audio-processing methods, but also substantially improves the results compared with state-of-the-art methods that rely on audio content alone. The analysis methodologies presented have been used to analyze the CompMusic Ottoman-Turkish makam music corpus and have been integrated into a web application aimed at the musical discovery of culture-specific traditions. Some of the methodologies have already been applied to other music traditions such as Hindustani, Carnatic, and Greek music. Following the principles of open research, all the data created, the computational tools, and the analysis results are openly available. The methodologies, the tools, and the corpus itself provide ample opportunities for future research in fields such as computational musicology, music information retrieval, and music education. Translation from English to Catalan by Oriol Romaní Picas.
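The alignment methodology above relies on Dynamic Time Warping among other techniques. As a hedged illustration only, and not the thesis's actual implementation (which also uses Hough transforms and variable-duration Markov models), a minimal pure-Python DTW sketch between two one-dimensional feature sequences:

```python
# Minimal dynamic time warping (DTW) between two 1-D feature
# sequences, e.g. pitch contours from a score and a recording.
# Illustrative sketch only.

def dtw_distance(a, b):
    """Return the cumulative DTW cost between sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# A tempo-stretched copy of a melody aligns with zero cost, which is
# why DTW copes with the tempo deviations mentioned in the abstract.
score = [1, 2, 3, 2, 1]
performance = [1, 1, 2, 2, 3, 3, 2, 2, 1, 1]
print(dtw_distance(score, performance))  # -> 0.0
```

The quadratic table here is the textbook formulation; production alignment systems typically add path constraints and windowing to keep the computation tractable on long recordings.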
Tai, Wen-Hui, and 邰文暉. "The Study of Auto Extraction of Dates from Chinese Web Pages in Taiwan Area." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/68609583336853599211.
Full text source
Fu Jen Catholic University
Department of Library and Information Science
Academic year 98 (ROC calendar, 2009/2010)
With the popularization of Internet services, online resources have become increasingly plentiful. 'Date' is one of the most important metadata fields for web pages. The special date-display formats used in Taiwan make automatic date cataloging of web pages more difficult. The main purpose of this research is to thoroughly analyze the different date-display formats used in Chinese web pages, and to apply these findings to increase the precision of automatic date extraction. The experimental procedure is as follows. First, a sample of web pages is drawn randomly from the Internet. Second, a statistical analysis of the date-display formats of each web page is conducted. Finally, regular expressions are used to extract the dates from each web page and the accuracy is calculated. The difficulties and feasibility of automatic date extraction are discussed at the end of this work. The experimental results show an accuracy of 61% for web pages containing date information, and 62% for web pages without date information. The average error for web pages with date information is 0.62 years. The results suggest that the automatic date-extraction mechanism can improve the efficiency of web-page information retrieval.
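The abstract describes regex-based extraction of Taiwanese date formats, where the Republic of China (ROC) calendar is common (ROC year = Gregorian year - 1911). The patterns below are a hypothetical sketch of the approach, not the thesis's actual expressions:

```python
import re
from datetime import date

# Illustrative patterns: one for ROC-calendar dates such as
# "民國98年7月15日", one for Western numeric dates such as "2010-03-01".
ROC_DATE = re.compile(r"民國\s*(\d{1,3})\s*年\s*(\d{1,2})\s*月\s*(\d{1,2})\s*日")
WESTERN_DATE = re.compile(r"(\d{4})[./-](\d{1,2})[./-](\d{1,2})")

def extract_dates(text):
    """Return all dates found in text as datetime.date objects."""
    found = []
    for y, m, d in ROC_DATE.findall(text):
        found.append(date(int(y) + 1911, int(m), int(d)))  # ROC -> Gregorian
    for y, m, d in WESTERN_DATE.findall(text):
        found.append(date(int(y), int(m), int(d)))
    return found

print(extract_dates("更新日期:民國98年7月15日 / archived 2010-03-01"))
# -> [datetime.date(2009, 7, 15), datetime.date(2010, 3, 1)]
```

A real extractor would need many more patterns (e.g. dates without the 民國 prefix, two-digit years, full-width digits), which is exactly the format inventory the thesis sets out to build.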
Chen, Mei-Fen, and 陳梅芬. "The Extraction of Characters on Dated Color Postcards." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/02708156348565075519.
Full text source
Tamkang University
Department of Computer Science and Information Engineering
Academic year 91 (ROC calendar, 2002/2003)
In this study, we propose a scheme to extract characters from color postcards published around 1930, during the Japanese occupation of Taiwan. These postcards, reproduced in the book “台灣紀行” [1] and possibly the earliest postcards in Taiwan, contain photographs of early Taiwan: beautiful mountains, rivers, farms, country views, famous buildings, etc. As they are precious but fragile historical documents, it is very worthwhile to preserve them in a digital library with a keyword search engine; with the help of character extraction, these documents become available not only to researchers but also to the general public. The captions printed on the postcards are usually located on either the upper or lower part of the postcard, over varied backgrounds. Characters may appear in black, white, or red against backgrounds of blue sky and/or white clouds, dark mountains, brownish rocks, trees with leaves and/or branches, rocky or muddy country roads, ponds, etc. The characters also suffer from uneven illumination. Finding a single global criterion for locating the characters in all postcards would be a very difficult task. To solve this problem, we use a morphological operation, reconstruction by erosion or dilation, followed by image subtraction to remove background regions connected to the postcard borders. As a result, with most of the irrelevant background removed, the characters become the most salient objects in the image. Next, vertical and horizontal projections and connected-component analysis are used to remove noise and find the exact locations of the characters. Finally, returning to the corresponding character positions on the color postcards, color-contrast enhancement and sharpening are performed as post-processing. The proposed algorithm has been run on a set of color postcard images and has proved its efficacy.
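The key step in this abstract is suppressing background regions that touch the image border so that the interior characters stand out. As a hedged, dependency-free stand-in for the morphological reconstruction the thesis uses, a border-connected flood fill on a toy binary image shows the same effect:

```python
from collections import deque

# Toy stand-in for the border-background removal step: any foreground
# component connected to the image border is treated as background and
# cleared; interior components (the "characters") survive.

def clear_border_objects(img):
    """Zero out all foreground pixels 4-connected to the image border."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    # Seed with every foreground pixel lying on the border.
    q = deque((r, c) for r in range(h) for c in range(w)
              if (r in (0, h - 1) or c in (0, w - 1)) and out[r][c])
    while q:
        r, c = q.popleft()
        if 0 <= r < h and 0 <= c < w and out[r][c]:
            out[r][c] = 0
            q.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return out

image = [[1, 1, 0, 0, 0],
         [1, 0, 0, 1, 0],
         [0, 0, 0, 1, 0],
         [0, 0, 0, 0, 0]]
# The border-touching blob (top-left) is removed; the interior pixels
# in column 3 survive.
print(clear_border_objects(image))
# -> [[0, 0, 0, 0, 0], [0, 0, 0, 1, 0], [0, 0, 0, 1, 0], [0, 0, 0, 0, 0]]
```

On grayscale postcards the thesis's reconstruction-by-erosion achieves this without first binarizing, which matters when illumination is uneven; the flood-fill version above only illustrates the border-connectivity idea.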
Montamat, Ignacio Adolfo. "Combinador de información primaria y secundaria para extractor digital de datos de radar en sistemas de vigilancia." Master's thesis, 2015. http://hdl.handle.net/11086/3282.
Full text source
Joint master's program with the Instituto Universitario Aeronáutico
A digital radar data extractor (E.D.D.R., for its Spanish acronym) is a hardware-and-software subsystem of the Radar System that enables the visual representation of detected aircraft. At the software level, the E.D.D.R. can be regarded as a set of interacting applications: the primary and secondary information processors, the primary/secondary association application (or Combiner), and the target-tracking application (Tracker). This work presents the redesign, implementation, and validation of the primary/secondary information association software application of an E.D.D.R. belonging to a long-range military surveillance Radar System of the Argentine Air Force (FAA).
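The Combiner's job, as described above, is to associate primary plots (position only) with secondary plots (position plus transponder data). A minimal sketch of one common association scheme, nearest-neighbor matching within a gating distance; the names, data layout, and threshold are illustrative assumptions, not the thesis's actual design:

```python
import math

def combine(primary, secondary, gate=2.0):
    """Pair each primary plot (x, y) with the transponder code of the
    closest secondary plot (x, y, code) within the gating distance."""
    combined = []
    for px, py in primary:
        best, best_d = None, gate
        for sx, sy, code in secondary:
            d = math.hypot(px - sx, py - sy)
            if d <= best_d:
                best, best_d = code, d
        combined.append(((px, py), best))  # best is None if unpaired
    return combined

primary = [(10.0, 5.0), (40.0, 8.0)]
secondary = [(10.5, 5.2, "A123"), (90.0, 1.0, "B777")]
print(combine(primary, secondary))
# -> [((10.0, 5.0), 'A123'), ((40.0, 8.0), None)]
```

Real combiners work in range/azimuth coordinates, handle one-to-many conflicts, and feed unpaired primary plots to the Tracker as primary-only targets; this sketch only shows the gating idea.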
Ould, Biha Ahmedou. "Le choix de la date optimale des investissements irréversibles dans les projets pétroliers avec asymétrie d'information et incertitude : l'approche des options réelles." Mémoire, 2006. http://www.archipel.uqam.ca/3169/1/M9455.pdf.
Full text source
Iglesias Martínez, Miguel Enrique. "Development of algorithms of statistical signal processing for the detection and pattern recognition in time series. Application to the diagnosis of electrical machines and to the features extraction in Actigraphy signals." Doctoral thesis, 2020. http://hdl.handle.net/10251/145603.
Full text source
[EN] Nowadays, the development and application of algorithms for pattern recognition that improve the levels of performance, detection and data processing in different areas of knowledge is a topic of great interest. In this context, and specifically in relation to the application of these algorithms to the monitoring and diagnosis of electrical machines, the use of stray flux signals is a very interesting alternative to detect the different faults. Likewise, and in relation to the use of biomedical signals, it is of great interest to extract relevant features in actigraphy signals for the identification of patterns that may be associated with a specific pathology. In this thesis, algorithms based on statistical and spectral signal processing have been developed and applied to the detection and diagnosis of failures in electrical machines, as well as to the treatment of actigraphy signals. With the development of the proposed algorithms, it is intended to have a dynamic indication and identification system for detecting the failure or associated pathology that does not depend on parameters or external information that may condition the results, but only rely on the primary information that initially presents the signal to be treated (such as the periodicity, amplitude, frequency and phase of the sample). From the use of the algorithms developed for the detection and diagnosis of failures in electrical machines, based on the statistical and spectral signal processing, it is intended to advance, in relation to the models currently existing, in the identification of failures through the use of stray flux signals. In addition, and on the other hand, through the use of higher order statistics for the extraction of anomalies in actigraphy signals, alternative parameters have been found for the identification of processes that may be related to specific pathologies.
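The higher-order statistics mentioned in the abstract are third- and fourth-order moments of the signal. As an illustrative sketch only (the thesis's feature set is richer), skewness and excess kurtosis computed from a raw sample:

```python
import math

# Skewness (3rd-order) and excess kurtosis (4th-order) of a signal,
# usable as simple features for flagging anomalous segments.

def moments(x):
    """Return (skewness, excess kurtosis) of the sample x."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    std = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in x) / (n * std ** 3)
    kurt = sum((v - mean) ** 4 for v in x) / (n * var ** 2) - 3.0
    return skew, kurt

# A symmetric signal has zero skewness; an isolated spike drives both
# statistics up, which is what makes them useful anomaly indicators.
flat = [0, 1, 0, -1] * 50
spiky = [0.0] * 199 + [10.0]
print(moments(flat))   # skewness is exactly 0 for this symmetric signal
print(moments(spiky))  # large positive skewness and kurtosis
```

These are biased population estimators, which is fine for long windows; for short actigraphy epochs a bias-corrected estimator would be the safer choice.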
Iglesias Martínez, M. E. (2020). Development of algorithms of statistical signal processing for the detection and pattern recognition in time series. Application to the diagnosis of electrical machines and to the features extraction in Actigraphy signals [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/145603