
Theses on the topic "Toric domains"



Consult the 23 best theses for your research on the topic "Toric domains".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Dardennes, Julien. "Non-convexité symplectique des domaines toriques". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES102.

Abstract
Convexity plays a special role in symplectic geometry, but it is not a notion that is invariant under symplectomorphism. In a seminal work, Hofer, Wysocki and Zehnder showed that any strongly convex domain is dynamically convex, a notion that is invariant under symplectomorphism. For more than twenty years, whether there exist dynamically convex domains that are not symplectomorphic to a convex domain has remained an open question. Recently, Chaidez and Edtmair answered this question in dimension 4. They established a "quantitative" criterion of symplectic convexity and constructed dynamically convex domains that do not satisfy this criterion. In this thesis, we use this criterion to construct new examples of such domains in dimension 4, which have the additional property of being toric. Moreover, we estimate the constants involved in this criterion. This work, in collaboration with Jean Gutt and Jun Zhang, was later used by Chaidez and Edtmair to solve the initial question in all dimensions. Furthermore, in collaboration with Jean Gutt, Vinicius G. B. Ramos and Jun Zhang, we study the distance from dynamically convex domains to symplectically convex domains. We show that in dimension 4, this distance is arbitrarily large with respect to a symplectic analogue of the Banach-Mazur distance. Additionally, we independently reprove the existence of dynamically convex domains that are not symplectically convex in dimension 4.
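Since every entry in this list orbits the notion in the page title, it may help to recall the standard definition of a toric domain and the flavor of the distance mentioned in this abstract. The formulas below follow the usual conventions of the quantitative symplectic geometry literature, not the thesis itself, so the exact normalization of the distance is an assumption.

```latex
% A toric domain in R^4 (identified with C^2) is the preimage of a
% region Omega in the nonnegative quadrant under the moment map:
\[
  \mu(z_1,z_2) = \bigl(\pi|z_1|^2,\ \pi|z_2|^2\bigr),
  \qquad
  X_\Omega = \mu^{-1}(\Omega) \subset \mathbb{C}^2 .
\]
% One common normalization of the (coarse) symplectic Banach-Mazur
% distance between star-shaped domains X and Y, where the arrows
% denote symplectic embeddings:
\[
  d(X,Y) = \inf\Bigl\{\, \log\lambda \ \Bigm|\ \lambda \ge 1,\
  \tfrac{1}{\lambda}X \hookrightarrow Y \hookrightarrow \lambda X \Bigr\}.
\]
```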
2

Yang, Li. "Improving Topic Tracking with Domain Chaining". Thesis, University of North Texas, 2003. https://digital.library.unt.edu/ark:/67531/metadc4274/.

Abstract
Topic Detection and Tracking (TDT) research has produced some successful statistical tracking systems. While lexical chaining, a non-statistical approach, has also been applied to the task of tracking by Carthy and Stokes for the 2001 TDT evaluation, an efficient tracking system based on this technology has yet to be developed. In this thesis we investigate two new techniques which can improve Carthy's original design. First, at the core of our system is a semantic domain chainer. This chainer relies not only on the WordNet database for semantic relationships but also on Magnini's semantic domain database, which is an extension of WordNet. The domain-chaining algorithm is a linear algorithm. Second, to handle proper nouns, we gather all of the ones that occur in a news story together in a chain reserved for proper nouns. In this thesis we also discuss the linguistic limitations of lexical chainers in representing textual meaning.
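The chainer described above is not reproduced here, but the general mechanism of lexical chaining over WordNet can be sketched in a few lines of Python. The grouping rule (shared synsets or direct hypernyms) and the example words are simplifications of my own, not Carthy's or Magnini's design, and Magnini's domain database is not consulted.

```python
# Minimal lexical chainer: groups nouns that share a WordNet synset
# or a direct hypernym. Requires: pip install nltk, plus the WordNet
# data (nltk.download("wordnet")).
from nltk.corpus import wordnet as wn

def related_synsets(word):
    """Synsets of the word plus their direct hypernyms."""
    related = set()
    for syn in wn.synsets(word, pos=wn.NOUN):
        related.add(syn)
        related.update(syn.hypernyms())
    return related

def build_chains(words):
    chains = []  # each chain: (set_of_synsets, list_of_member_words)
    for word in words:
        rel = related_synsets(word)
        for syns, members in chains:
            if syns & rel:          # semantic link found -> extend chain
                members.append(word)
                syns.update(rel)
                break
        else:
            chains.append((rel, [word]))
    return [members for _, members in chains]

print(build_chains(["car", "automobile", "banana"]))
# e.g. [['car', 'automobile'], ['banana']]
```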
3

Ahn, Kisuh. "Topic indexing and retrieval for open domain factoid question answering". Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3794.

Abstract
Factoid Question Answering is an exciting area of Natural Language Engineering that has the potential to replace one major use of search engines today. In this dissertation, I introduce a new method of handling factoid questions whose answers are proper names. The method, Topic Indexing and Retrieval, addresses two issues that prevent current factoid QA systems from realising this potential: they can't satisfy users' demand for almost immediate answers, and they can't produce answers based on evidence distributed across a corpus. The first issue arises because the architecture common to QA systems is not easily scaled to heavy use, since so much of the work is done on-line: text retrieved by information retrieval (IR) undergoes expensive and time-consuming answer extraction while the user awaits an answer. If QA systems are to become as heavily used as popular web search engines, this massive processing bottleneck must be overcome. The second issue, how to make use of the distributed evidence in a corpus, is relevant when no single passage in the corpus provides sufficient evidence for an answer to a given question. QA systems commonly look for a text span that contains sufficient evidence to both locate and justify an answer. But this will fail in the case of questions that require evidence from more than one passage in the corpus. The Topic Indexing and Retrieval method developed in this thesis addresses both these issues for factoid questions with proper-name answers by restructuring the corpus in such a way that it enables direct retrieval of answers using off-the-shelf IR. The method has been evaluated on 377 TREC questions with proper-name answers and 41 questions that require multiple pieces of evidence from different parts of the TREC AQUAINT corpus. In the first evaluation, scores of 0.340 in Accuracy and 0.395 in Mean Reciprocal Rank (MRR) show that Topic Indexing and Retrieval performs well for this type of question. A second evaluation compares performance on a corpus of 41 multi-evidence questions by a question-factoring baseline method that can be used with the standard QA architecture and by my Topic Indexing and Retrieval method. The superior performance of the latter (MRR of 0.454 against 0.341) demonstrates its value in answering such questions.
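The core idea, restructuring the corpus so that answer candidates rather than passages become the retrieval units, can be sketched as follows. The `rank_bm25` package and the toy sentences are illustrative assumptions, not the dissertation's actual pipeline.

```python
# Sketch of topic indexing: build one pseudo-document per proper-name
# "topic" by pooling every sentence that mentions it, then answer a
# question by retrieving topics directly with off-the-shelf BM25.
# Requires: pip install rank_bm25
from collections import defaultdict
from rank_bm25 import BM25Okapi

sentences = [
    ("Marie Curie", "Marie Curie won the Nobel Prize in Physics in 1903."),
    ("Marie Curie", "Marie Curie discovered radium and polonium."),
    ("Albert Einstein", "Albert Einstein developed the theory of relativity."),
]

# Pool evidence per topic (answer candidate), even when it is spread
# across the corpus.
pooled = defaultdict(list)
for topic, sent in sentences:
    pooled[topic].extend(sent.lower().split())

topics = list(pooled)
bm25 = BM25Okapi([pooled[t] for t in topics])

query = "who discovered radium".split()
scores = bm25.get_scores(query)
print(topics[max(range(len(topics)), key=scores.__getitem__)])
# -> Marie Curie
```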
4

Selcuk, Dogan Gonca Hulya. "Expert Finding In Domains With Unclear Topics". Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614259/index.pdf.

Abstract
Expert finding is an Information Retrieval (IR) task that is used to find the needed experts. Finding the needed experts is a notable problem in many commercial, educational or governmental organizations. It is highly crucial to find the appropriate experts when seeking referees for a paper submitted to a conference or when looking for a consultant for a software project. It is also important to find similar experts in case of the absence or unavailability of the selected expert. Traditional expert finding methods are modeled on three components: a supporting document collection, a list of candidate experts and a set of pre-defined topics. In reality, pre-defined topics are most of the time not available. In this study, we propose an expert finding system which generates a semantic layer between domains and experts using Latent Dirichlet Allocation (LDA). A traditional expert finding method (a voting approach) is used to match the domains and the experts as the baseline method. In case similar experts are needed, the system recommends experts matching the qualities of the selected experts. The proposed model is applied to a semi-synthetic data set as a proof of concept, and it performs better than the baseline method. The proposed model is also applied to the projects of the Technology and Innovation Funding Programs Directorate (TEYDEB) of the Scientific and Technological Research Council of Turkey (TÜBİTAK).
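A minimal sketch of the LDA "semantic layer" may clarify the setup: each expert is represented by the average topic mixture of their documents and ranked against the query's mixture. sklearn's LDA and the toy corpus stand in for whatever implementation and data the thesis used.

```python
# Rank experts by topic-space similarity between a query and the
# documents each expert authored (illustrative data and parameters).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = {
    "alice": ["neural networks for image recognition", "deep learning models"],
    "bob":   ["database indexing and query optimization", "sql transaction logs"],
}
corpus = [d for ds in docs.values() for d in ds]
vec = CountVectorizer()
X = vec.fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

def topic_mix(texts):
    """Average topic distribution of a set of texts."""
    return lda.transform(vec.transform(texts)).mean(axis=0)

q = topic_mix(["training deep neural models"])
ranking = sorted(docs, key=lambda e: -np.dot(q, topic_mix(docs[e])))
print(ranking)  # e.g. ['alice', 'bob'] when the query shares alice's topics
```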
5

Lane, Ian Richard. "Flexible spoken language understanding based on topic classification and domain detection". 京都大学 (Kyoto University), 2006. http://hdl.handle.net/2433/143888.

6

Arbogast, Matthew S. "Egos Gone Wild: Threat Detection and the Domains Indicative of Toxic Leadership". Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7664.

Abstract
Toxic leaders are a serious problem, but shockingly, there is no standard detection tool that is both efficient and accurate. Compounding the problem are the various definitions and descriptions used to operationalize toxic leadership. This research sought to align the literature, offer a concise definition, and assess the domains indicative of toxic leadership through two conceptually compatible studies. Study 1 involved development of a toxic leader threat detection scale. Results using a variable-centered approach indicated that follower perceptions (n = 357) of leader empathy (4-item scale; α = .93) and the need for achievement recognition (4-item scale; α = .83) significantly predicted the egoistic dominance behaviors (5-item scale; α = .93) employed by toxic leaders (R2 = .647, p < .001). Using a person-centered approach, the scale scores also revealed latent clusters of distinct behavioral patterns, representing significantly different toxic leader threat levels (low, medium, and high). Study 2 assessed whether followers (n = 357), without access to behavioral information, would infer toxic characteristics simply from a leader’s physical appearance. Participants perceived images of male leaders (η2 = .131) with masculine facial structures (η2 = .596) as most likely to behave aggressively, while feminine facial structures (η2 = .400) and female images (η2 = .104) created the highest perceptions of empathy. The subjects also selected male leaders with masculine faces (η2 = .044; η2 = .015) as more likely to desire recognition, but with an inverse relationship (η2 = .073) such that feminine looking males earned the lowest scores. Overall, these results supported the idea that empathy and the need for achievement recognition create an “ego gone wild” condition and, not only can we measure the behavioral tendencies of toxic leaders, but perhaps we can “see” them as well.
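For readers unfamiliar with the α values quoted above, Cronbach's alpha is the standard internal-consistency coefficient for such multi-item scales. The snippet below computes it from first principles on simulated responses; the data is synthetic, not the study's.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                    # shared trait
responses = latent + 0.5 * rng.normal(size=(200, 4))  # 4 correlated items
print(round(cronbach_alpha(responses), 2))            # high alpha, ~0.9
```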
7

Zhang, Tong. "A topic model-based approach for ontology extension in the computational materials science domain". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-172281.

Abstract
With the continuous development and progress of human society, the demand for advanced materials in all walks of life is increasing day by day. Whether in the agrarian age or the information age, human beings have always been tireless in the study of materials science, and the field of computational materials science has been the exploration of computational methods in materials science. However, as research deepens, the scale of research data related to materials science grows larger and larger, and each research institution establishes its own material information management system. The diversity of materials data structures and storage forms causes fuzziness of the data structure and complexity of the integrated data. In order to make data findable and reusable, scientists introduced the philosophical concept of an ontology to generalize the context and structure of data. An ontology mainly consists of the representative concepts of a field and the relationships between those concepts. One of the few ontologies in the computational materials science domain is the Materials Design Ontology (MDO). This thesis mined representative concepts and relations to extend the MDO. In order to achieve this goal, an improved Topmine framework was deployed, containing a new frequent phrase mining algorithm and an improved phrase-based Latent Dirichlet Allocation (LDA) topic model. The improved Topmine framework introduced Part-of-Speech tagging and defined weighted coefficients. The time and space complexity was reduced from quadratic to linear, and the perplexity of the phrase-based LDA was reduced by 26.7%, which means the results are more concentrated and accurate. Meanwhile, a concept lattice is constructed with the idea of formal concept analysis to extend the relations of the domain ontology. In brief, this thesis studied the titles and abstracts of more than 9000 pieces of collected field literature to extend MDO, and demonstrates the practicality of this framework by comparing the experimental results with existing algorithms.
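Perplexity, the figure of merit behind the quoted 26.7% reduction, is the exponentiated negative average per-token log-likelihood on held-out text. A minimal illustration with sklearn's LDA follows; the corpus and parameters are toy assumptions.

```python
# Held-out perplexity of an LDA model: lower is better. A 26.7% drop
# means the topics explain unseen text with much less per-token
# "surprise".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

train = ["alloy phase diagram", "crystal lattice structure",
         "lattice phase transition", "alloy crystal growth"]
test = ["phase diagram of the alloy"]

vec = CountVectorizer()
Xtr, Xte = vec.fit_transform(train), vec.transform(test)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(Xtr)
print(lda.perplexity(Xte))
```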
8

Netterberg, Max and Simon Wahlström. "Design and training of a recommender system on an educational domain using Topic & Term-Frequency modeling". Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-445986.

Abstract
This thesis investigates the possibility of creating a machine-learning-powered recommender system from educational material supplied by a media provider company. By limiting the investigation to a single company's data, the thesis provides insights into how a limited data supply can be utilized in creating a first-iteration recommender system. The methods include semi-structured interviews with system experts, constructing a model-building pipeline, and testing the models on system experts via a web interface. The study paints a good picture of what actions you can take when designing a content-based filtering recommender system and what actions to take when moving on to further iterations. The study showed that user preferences may be decisive for the relevancy of the provided recommendations for a specific media content. Furthermore, the study showed that Term Frequency-Inverse Document Frequency modeling was significantly better than using an Elasticsearch database to serve recommendations. Testing also indicated that term frequency-inverse document frequency created a better model than topic modeling techniques such as Latent Dirichlet Allocation. However, as testing was only conducted on system experts in a controlled environment, further iterations of testing are necessary to statistically conclude that these models would lead to an increase in user experience.
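The comparison the abstract reports (TF-IDF beating both Elasticsearch retrieval and LDA topics) is easy to picture with a minimal content-based recommender. The catalogue entries below are invented for illustration, and the feature pipeline is deliberately bare.

```python
# Content-based recommendation via TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalogue = [
    "introductory algebra video for middle school",
    "advanced algebra exercises and solutions",
    "history documentary on the roman empire",
]
vec = TfidfVectorizer()
M = vec.fit_transform(catalogue)

def recommend(seen_idx, k=1):
    sims = cosine_similarity(M[seen_idx], M).ravel()
    sims[seen_idx] = -1.0                      # never recommend the item itself
    return sims.argsort()[::-1][:k]

print([catalogue[i] for i in recommend(0)])    # -> the other algebra item
```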
9

Ellouze, Nebrasse. "Approche de recherche intelligente fondée sur le modèle des Topic Maps : application au domaine de la construction durable". PhD thesis, Conservatoire national des arts et metiers - CNAM, 2010. http://tel.archives-ouvertes.fr/tel-00555929.

Abstract
This thesis addresses the problems involved in building Topic Maps and in using them for information retrieval within the framework defined by the Semantic Web (SW). The SW aims to structure the information available on the Web. To this end, resources must be semantically tagged with metadata in order to optimize access to them. These metadata are currently specified using two XML-based standards: RDF and Topic Maps. Since the content to be organized is very often voluminous and subject to perpetual enrichment, it is practically impossible to create and manage a Topic Map describing it manually. Several research efforts have addressed the construction of Topic Maps from textual documents [Ellouze et al. 2008a]. However, none of them can handle multilingual content. Moreover, although Topic Maps are, by definition, usage-oriented (information retrieval), few of them take user queries into account. In this thesis, we have therefore designed an approach named ACTOM, for "Approche de Construction d'une TOpic Map Multilingue" (an approach for building a multilingual Topic Map). It serves to organize multilingual content composed of textual documents, with the advantage of facilitating information retrieval in this content. Our approach is incremental and evolutionary; it is based on an automated process that takes into account multilingual documents and the evolution of the Topic Map according to changes in the input content and the usage of the Topic Map. It takes as input a repository of documents, which we build through thematic segmentation and semantic indexing of these documents, and a domain thesaurus for adding ontological links. To enrich the Topic Map, we rely on two general ontologies and we explore all the potential questions related to the source documents. In ACTOM, in addition to the occurrence links connecting a Topic to its resources, we categorize links into two classes: (a) ontological links and (b) usage links. We also propose to extend the Topic Map model defined by ISO by adding to the characteristics of a Topic meta-properties used to measure the relevance of Topics, more precisely for quality assessment and dynamic pruning of the Topic Map.
10

Ellouze, Nebrasse. "Approche de recherche intelligente fondée sur le modèle des Topic Maps : application au domaine de la construction durable". Electronic Thesis or Diss., Paris, CNAM, 2010. http://www.theses.fr/2010CNAM0736.

Abstract
The research work in this thesis is related to Topic Map construction and use in semantic annotation of web resources, in order to help users find relevant information in these resources. The amount of information sources available today is huge and continuously increasing; hence, it is impossible to manually create and maintain a Topic Map to represent and organize all this information. Many Topic Map building approaches can be found in the literature [Ellouze et al. 2008a]. However, none of these approaches takes multilingual document content as input. In addition, although Topic Maps are basically dedicated to user navigation and information search, no approach takes users' requests into consideration in the Topic Map building process. In this context, we have proposed ACTOM, a Topic Map building approach based on an automated process that takes into account multilingual documents and Topic Map evolution according to content and usage changes. To enrich the Topic Map, we rely on a domain thesaurus, and we also propose to explore all potential questions related to the source documents in order to represent usage in the Topic Map. In our approach, we extend the existing Topic Map model by defining usage links and a list of meta-properties associated with each Topic; these meta-properties are used in the Topic Map pruning process. We also propose to refine and enrich the semantics of Topic Map links: apart from occurrence links between Topics and resources, we classify Topic Map links into two classes, those we call "ontological links" and those we call "usage links".
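The Topic-Association-Occurrence (TAO) triad underlying Topic Maps (ISO/IEC 13250), including ACTOM's split into ontological and usage links, can be captured by a very small data model. This sketch is schematic; the field names and sample values are my own, not ACTOM's.

```python
# Minimal TAO (Topic-Association-Occurrence) structures in the spirit
# of ISO/IEC 13250 Topic Maps; ACTOM's ontological/usage link split is
# modeled as an association type.
from dataclasses import dataclass, field

@dataclass
class Topic:
    name: str
    occurrences: list = field(default_factory=list)  # links to resources
    meta: dict = field(default_factory=dict)         # e.g. relevance scores

@dataclass
class Association:
    kind: str          # "ontological" or "usage", as in ACTOM
    members: tuple     # the associated topics

durable = Topic("sustainable construction",
                occurrences=["doc_017.pdf", "doc_233.pdf"],
                meta={"relevance": 0.82})
insulation = Topic("thermal insulation", occurrences=["doc_045.pdf"])
links = [Association("ontological", (durable, insulation))]
print(links[0].kind, "->", [t.name for t in links[0].members])
```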
11

Leshi, Olumide. "An Approach to Extending Ontologies in the Nanomaterials Domain". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170255.

Abstract
As recently as the last decade or two, data-driven science workflows have become increasingly popular, and semantic technology has been relied on to help align often parallel research efforts in different domains and to foster interoperability and data sharing. However, a key challenge is the size of the data and the pace at which it is being generated, so much so that manual procedures lag behind, thus calling for the automation of most workflows. In this study, the effort is to continue investigating ways by which some tasks performed by experts in the nanotechnology domain, specifically in ontology engineering, could benefit from automation. An approach featuring phrase-based topic modelling and formal topical concept analysis is further motivated, together with formal implication rules, to uncover new concepts and axioms relevant to two nanotechnology-related ontologies. A corpus of 2,715 nanotechnology research articles helps showcase that the approach can scale, as seen in a number of experiments conducted. The usefulness of document text ranking as an alternative form of input to topic models is highlighted, as well as the benefit of implication rules for the task of concept discovery. In all, a total of 203 new concepts are uncovered by the approach to extend the referenced ontologies.
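The formal concept analysis machinery mentioned above rests on two derivation operators, and a toy context makes them concrete. The objects and attributes below are invented; the real corpus pairs 2,715 articles with nanotechnology terms.

```python
# Formal concept analysis derivation operators on a tiny context.
context = {                       # object -> set of attributes
    "TiO2_np":  {"nanoparticle", "oxide"},
    "ZnO_np":   {"nanoparticle", "oxide"},
    "CNT":      {"nanotube", "carbon"},
}

def common_attributes(objs):
    """A' : attributes shared by all objects in A."""
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else set()

def objects_having(attrs):
    """B' : objects possessing every attribute in B."""
    return {o for o, a in context.items() if attrs <= a}

# A formal concept is a pair (A, B) with A' = B and B' = A:
A = {"TiO2_np", "ZnO_np"}
B = common_attributes(A)
print(B, objects_having(B) == A)  # {'nanoparticle', 'oxide'} True
```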
12

Herner, William y Edward Leiman. "How does toxicity change depending on rank in League of Legends?" Thesis, Uppsala universitet, Institutionen för speldesign, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-389338.

Abstract
This thesis aims to investigate toxic remarks in three different ranks in League of Legends: Bronze, Gold, and Diamond. The purpose is to understand how toxic communication between players changes depending on rank. A framework from Neto, Alvino and Becker (2018) was adopted to define and count toxic remarks. The method relied on participant observation to gather data; three different ranks were specified for data collection. Fifteen games were played in each of the ranks Bronze, Gold, and Diamond. Each game was recorded, transcribed and analyzed by dividing each registered toxic remark into Neto, Alvino and Becker's predetermined categories. The study concluded that domain language is more often used by players of higher rank, meaning that high-ranked players tend to use toxicity that requires previous game knowledge to understand. In contrast, low-ranked players tend to stick to basic complaints and insults when directing toxic remarks at teammates while playing.
13

Levy-Minzie, Kori. "Authorship attribution in the e-mail domain: a study of the effect of size of author corpus and topic on accuracy of identification". Thesis, Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/5780.

Abstract
Approved for public release; distribution is unlimited.
We determined that it is possible to achieve authorship attribution in the e-mail domain when training on "personal" e-mails and testing on "work" e-mails and vice versa. These results are unique since they simulate two different e-mail addresses belonging to the same person where the topics of the e-mails from the two different addresses do not intersect. As we only used one classification technique, these results are preliminary and may serve as a baseline for future work in this area. The corpus of data was the entirety of the Enron corpus as well as a subsection of hand-annotated work and personal e-mails. We discovered that there is enough author signal in each class to identify an author in a sea of noise. We included suggestions for future work in the areas of expanding feature selection, increasing corpus size, and including more classification methods. Advancement in this area will contribute to increasing cyber security by identifying the senders of anonymous derogatory e-mails and reducing cyber bullying.
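The cross-topic setup, training on one register of e-mails and testing on the other, can be reproduced in miniature with a single classifier. Character n-grams are my choice here for carrying author signal across topics; the abstract does not name the classification technique used, so treat both the features and the classifier as assumptions.

```python
# Cross-topic authorship attribution sketch: train on "personal"
# e-mails, test on "work" e-mails, with character n-gram features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_texts = ["hey, dinner at mine on sat??", "Please find the memo attached."]
train_authors = ["ann", "bob"]
test_texts = ["Budget meeting moved to 3pm!!", "Kindly review the draft report."]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LinearSVC(),
)
clf.fit(train_texts, train_authors)
print(clf.predict(test_texts))   # e.g. ['ann' 'bob']
```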
14

Bäckström, Nils, Hanna Egeman and Hanna Mattsson. "Why do companies produce vegan and vegetarian products imitated with real meat products? : Exploring a virgin topic on the Swedish market". Thesis, Högskolan i Jönköping, Internationella Handelshögskolan, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-40016.

Abstract
With the support of four vegetarian and vegan companies established on the Swedish market (Astrid och Aporna, Ekko Gourmet, Tzay and Quorn), the objective of this study is to understand why companies produce vegan and vegetarian products that imitate real meat products, as well as how these companies market them. The data was collected through interviews with suitable representatives from each company. The empirical data collected from the interviews was then analysed together with theories from past research. The research approach of this study has been a mixture of inductive and deductive. The results of this thesis show that there are contrasting strategies behind the products' visual appearance, chosen target group and marketing among the different vegetarian and vegan companies on the Swedish market. We have discovered patterns between the companies' target audiences and how these companies have designed their products depending on target audience. Due to time limitations and companies' unwillingness to participate in interviews, a broader perspective on the topic could not be given. Also, this study only looks at vegan and vegetarian companies operating in Sweden. A suggestion for future research is to investigate the consumer's perspective and perceptions of vegan and vegetarian products by conducting a quantitative study to determine whether the companies' strategies are consistent with the perceptions of consumers on the Swedish market.
15

Marinone, Emilio. "Evaluation of New Features for Extractive Summarization of Meeting Transcripts : Improvement of meeting summarization based on functional segmentation, introducing topic model, named entities and domain specific frequency measure". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-249560.

Abstract
Automatic summarization of meeting transcripts has been widely studied in the last two decades, achieving continuous improvements in terms of the standard summarization metric (ROUGE). A user study has shown that people noticeably prefer abstractive summarization over the extractive approach. However, a fluent and informative abstract depends heavily on the performance of the Information Extraction method(s) applied. In this work, basic concepts useful for understanding meeting summarization methods, such as Parts-of-Speech (POS), Named Entity Recognition (NER), frequency and similarity measures, and topic models, are introduced together with a broad literature analysis. The proposed method takes inspiration from the current unsupervised extractive state of the art and introduces new features that improve the baseline. It is based on functional segmentation, meaning that it first aims to divide the preprocessed source transcript into monologues and dialogues. Then, two different approaches are used to extract the most important sentences from each segment, whose concatenation together with redundancy reduction creates the final summary. Results show that a topic model trained on an extended corpus, some variations in the proposed parameters, and the consideration of word tags improve the performance in terms of ROUGE Precision, Recall and F-measure. It outperforms the currently best performing unsupervised extractive summarization method in terms of ROUGE-1 Precision and F-measure. A subjective evaluation of the generated summaries demonstrates that the current unsupervised framework is not yet accurate enough for commercial use, but the newly introduced features can help supervised methods achieve acceptable performance. A much larger, non-artificially constructed meeting dataset with reference summaries is also needed for training supervised methods, as well as a more accurate algorithm evaluation. The source code is available on GitHub: https://github.com/marinone94/ThesisMeetingSummarization
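ROUGE-1, the metric used throughout the thesis, reduces to unigram-overlap precision, recall and F-measure. A simplified implementation follows (no stemming or stopword handling, unlike the official ROUGE toolkit).

```python
# ROUGE-1 precision/recall/F from raw unigram overlap.
from collections import Counter

def rouge1(candidate, reference):
    c = Counter(candidate.lower().split())
    r = Counter(reference.lower().split())
    overlap = sum((c & r).values())          # clipped unigram matches
    p = overlap / max(sum(c.values()), 1)
    rec = overlap / max(sum(r.values()), 1)
    f = 2 * p * rec / (p + rec) if p + rec else 0.0
    return p, rec, f

print(rouge1("the board approved the budget",
             "the board approved next year's budget"))
# -> (0.8, 0.666..., 0.727...)
```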
16

Désoyer, Adèle. "Appariement de contenus textuels dans le domaine de la presse en ligne : développement et adaptation d'un système de recherche d'information". Thesis, Paris 10, 2017. http://www.theses.fr/2017PA100119/document.

Abstract
The goal of this thesis, conducted within an industrial framework, is to pair textual media content. Specifically, the aim is to pair on-line news articles to relevant videos for which we have a textual description. The main issue is then a matter of textual analysis; no image or spoken language analysis was undertaken in the present study. The question that arises is how to compare these particular objects, the texts, and also what criteria to use in order to estimate their degree of similarity. We consider that one of these criteria is the topic similarity of their content, in other words, the fact that two documents have to deal with the same topic to form a relevant pair. This problem falls within the field of information retrieval (IR), which is the main strategy called upon in this research. Furthermore, when dealing with news content, the time dimension is of prime importance. To address this aspect, the field of topic detection and tracking (TDT) is also explored. The pairing system developed in this thesis distinguishes different steps which complement one another. In the first step, the system uses natural language processing (NLP) methods to index both articles and videos, in order to go beyond the traditional bag-of-words representation of texts. In the second step, two scores are calculated for an article-video pair: the first one reflects their topical similarity and is based on a vector space model; the second one expresses their proximity in time, based on an empirical function. At the end of the algorithm, a classification model learned from manually annotated document pairs is used to rank the results. Evaluation of the system's performance raised some further questions in this doctoral research. The constraints imposed both by the data and the specific needs of the partner company led us to adapt the evaluation protocol traditionally used in IR, namely the Cranfield paradigm. We therefore propose an alternative solution for evaluating the system that takes all our constraints into account.
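The two signals described, topical similarity from a vector-space model and temporal proximity from an empirical decay, can be sketched as follows. The decay constant, the weights and the toy texts are assumptions for illustration, since the thesis learns the final combination with a classifier rather than a fixed weighted sum.

```python
# Two-signal article/video pairing score: TF-IDF cosine similarity
# plus an exponential temporal-proximity term.
import math
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def time_score(hours_apart, tau=48.0):
    return math.exp(-abs(hours_apart) / tau)   # decays with the time gap

article = "flooding hits the city centre after record rainfall"
video = "record rainfall floods city centre streets"
M = TfidfVectorizer().fit_transform([article, video])
topic = cosine_similarity(M[0], M[1])[0, 0]

score = 0.7 * topic + 0.3 * time_score(hours_apart=6)
print(round(topic, 2), round(score, 2))
```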
17

Bose, Tulika. "Transfer learning for abusive language detection". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0019.

Abstract
The proliferation of social media, despite its multitude of benefits, has led to the increased spread of abusive language. Such language, being typically hurtful, toxic, or prejudiced against individuals or groups, requires timely detection and moderation by online platforms. Deep learning models for detecting abusive language have displayed great levels of in-corpus performance but underperform substantially outside the training distribution. Moreover, they require a considerable amount of expensive labeled data for training. This strongly encourages the effective transfer of knowledge from existing annotated abusive language resources, which may have different distributions, to low-resource corpora. This thesis studies the problem of transfer learning for abusive language detection and explores various solutions to improve knowledge transfer in cross-corpus scenarios. First, we analyze the cross-corpus generalizability of abusive language detection models without accessing the target during training. We investigate whether combining topic model representations with contextual representations can improve generalizability. The association of unseen target comments with abusive language topics in the training corpus is shown to provide complementary information for a better cross-corpus transfer. Secondly, we explore Unsupervised Domain Adaptation (UDA), a type of transductive transfer learning, with access to the unlabeled target corpus. Some popular UDA approaches from sentiment classification are analyzed for cross-corpus abusive language detection. We further adapt a BERT model variant to the unlabeled target using the Masked Language Model (MLM) objective. While the latter improves the cross-corpus performance, the other UDA methods perform sub-optimally. Our analysis reveals their limitations and emphasizes the need for effective adaptation methods suited to this task. As our third contribution, we propose two DA approaches using feature attributions, which are post-hoc model explanations. In particular, the problem of spurious corpus-specific correlations is studied, which restricts the generalizability of classifiers for detecting hate speech, a sub-category of abusive language. While previous approaches rely on a manually curated list of terms, we automatically extract and penalize the terms causing spurious correlations. Our dynamic approaches improve the cross-corpus performance over previous works, both independently and in combination with pre-defined dictionaries. Finally, we consider transferring knowledge from a resource-rich source to a low-resource target with fewer labeled instances, across different online platforms. A novel training strategy is proposed, which allows flexible modeling of the relative proximity of neighbors retrieved from the resource-rich corpus to learn the amount of transfer. We incorporate neighborhood information with Optimal Transport, which permits exploiting the embedding space geometry. By aligning the joint embedding and label distributions of neighbors, substantial improvements are obtained in low-resource hate speech corpora.
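The final contribution aligns distributions of retrieved neighbors with optimal transport. A minimal sketch of the transport step using the POT library follows; the random vectors stand in for encoder embeddings, and the uniform weights are an assumption, so this only illustrates the mechanism, not the thesis's full training strategy.

```python
# Optimal-transport coupling between source neighbors and a target
# batch. Requires: pip install pot numpy
import numpy as np
import ot

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(6, 4))   # retrieved source-neighbor embeddings
Xt = rng.normal(0.5, 1.0, size=(4, 4))   # low-resource target batch

a = np.full(6, 1 / 6)                    # uniform source weights
b = np.full(4, 1 / 4)                    # uniform target weights
M = ot.dist(Xs, Xt)                      # squared Euclidean cost matrix
G = ot.emd(a, b, M)                      # exact optimal coupling

print(G.shape, G.sum())                  # (6, 4) coupling, total mass 1
```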
18

Das, Manirupa. "Neural Methods Towards Concept Discovery from Text via Knowledge Transfer". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1572387318988274.

19

Lee, Sanghoon. "Multi Domain Semantic Information Retrieval Based on Topic Model". 2016. http://scholarworks.gsu.edu/cs_diss/104.

Abstract
Over the last decades, there have been remarkable shifts in the area of Information Retrieval (IR) as a huge amount of information is increasingly accumulated on the Web. The gigantic information explosion increases the need for discovering new tools that retrieve meaningful knowledge from various complex information sources. Thus, techniques primarily used to search and extract important information from numerous database sources have been a key challenge in current IR systems. Topic modeling is one of the most recent techniques that discover hidden thematic structures from large data collections without human supervision. Several topic models have been proposed in various fields of study and have been utilized extensively for many applications. Latent Dirichlet Allocation (LDA) is the most well-known topic model that generates topics from large corpora of resources, such as text, images, and audio. It has been widely used in many areas of information retrieval and data mining, providing an efficient way of identifying latent topics among document collections. However, LDA has a drawback: topic cohesion within a concept is attenuated when estimating infrequently occurring words. Moreover, LDA tends not to consider the meaning of words, but rather to infer hidden topics based on a statistical approach. This can cause either a reduction in the quality of topic words or loose relations between topics. In order to solve these problems, we propose a domain-specific topic model that combines domain concepts with LDA. Two domain-specific algorithms are suggested for solving the difficulties associated with LDA. The main strength of our proposed model comes from the fact that it narrows semantic concepts from broad domain knowledge to a specific one, which solves the unknown-domain problem. Our proposed model is extensively tested on various applications (query expansion, classification, and summarization) to demonstrate its effectiveness. Experimental results show that the proposed model significantly increases the performance of these applications.
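One simple way to make LDA topics respect domain concepts, reading off topic words with a lexicon boost, is sketched below. This mechanism is an illustrative stand-in of my own, not the thesis's two domain-specific algorithms, and the lexicon and corpus are invented.

```python
# Bias LDA topic-word rankings toward a domain lexicon.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["heart attack symptoms treatment", "stock market crash recovery",
        "cardiac arrest emergency care", "market shares investment risk"]
domain_lexicon = {"heart", "cardiac", "emergency"}   # medical concepts

vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

vocab = np.array(vec.get_feature_names_out())
boost = np.array([2.0 if w in domain_lexicon else 1.0 for w in vocab])
for k, row in enumerate(lda.components_):
    top = vocab[np.argsort(-(row * boost))[:3]]       # boosted top words
    print(f"topic {k}:", ", ".join(top))
```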
20

Lin, Shunn-der and 林順得. "A Planning Study Of Applying Topic Maps on Engineering Management Domain-Construction Management". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/h6guk3.

Abstract
Master's thesis
National Taiwan University of Science and Technology
Department of Mechanical Engineering
ROC academic year 94 (2005)
The domain knowledge applied in a new engineering project is quite extensive, and the execution quality and management of the engineering significantly affect the success of the whole project plan. Under their professional disciplines, the knowledge workers in every relevant organizational unit independently produce diverse documents to meet project management needs. In fact, all of these documents are interrelated with others in different departments. Most of the critical points revealed during project execution concern control effectiveness, project progress, quality, and the state of execution. Poor communication channels and weak connections between project files among working partners may add to the undertaker's burden and cause task faults. Enabling efficient collaboration in the management of engineering documents and task connectivity has therefore become an important subject. This research work is inspired by the success stories built on Stanford's Protege project and the TMtab plugin toolkit, a great ontology framework for constructing shareable knowledge skeletons. Topic Maps (TM), represented by the ISO/IEC 13250:1999 (and 2002) standard, describe what an information set is about by formally declaring topics and associations, and by linking the relevant parts of the information set to the appropriate topics. Topic Maps can help organize and retrieve online information in a way that can be mastered by information owners and information users. Our work carefully builds a TAO (Topic-Association-Occurrence) model to simulate most project management activities, including document classifications and man-and-task relationships. Finally, an XTM-based website built under OKS (Ontopia Knowledge Suite), an ontology-driven editor environment, is presented visually to demonstrate the power of Topic Maps applied to the engineering management domain.
21

Jian, Ciao-Ting and 簡巧婷. "Online Movie Recommendation Approach based on Collaborative Topic Modeling and Cross-Domain Analysis". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/atphhc.

Abstract
Master's thesis
National Chiao Tung University
Institute of Information Management
ROC academic year 105 (2016)
With the rapid development of the Internet and the rise of new types of news websites with e-commerce portals, more and more users obtain information on specific topics online. Successfully recommending information to users by analyzing their browsing behaviors and preferences on a web-based platform can attract more users and enhance the information flow of the platform, which is an important trend in current online services. However, the information provided by news websites is exploding and becoming more complicated. Therefore, deploying appropriate online recommendation methods to improve users' click-through rates is an indispensable part of IT technology for e-commerce platforms. In this research, we conduct cross-domain and diversity analysis of user preferences to develop novel online movie recommendation methods and evaluate online recommendation results. Specifically, association rule mining is conducted on user-browsed news and movies to find the latent associations between news and movies. A novel online recommendation approach is proposed to predict user preferences for movies based on Latent Dirichlet Allocation, Collaborative Topic Modeling, and the diversity of recommendations. The experimental results show that the proposed approach can mitigate the cold-start problem and enhance the click-through rate of movies.
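The first step, mining news-to-movie association rules from browsing sessions, can be illustrated with plain support/confidence counting. The sessions and thresholds below are invented for the sketch, not taken from the thesis.

```python
# Mine "news item -> movie" rules from co-occurrence in sessions.
from itertools import combinations
from collections import Counter

sessions = [
    {"news:space_launch", "movie:the_martian"},
    {"news:space_launch", "movie:gravity"},
    {"news:space_launch", "movie:the_martian"},
    {"news:election", "movie:the_martian"},
]
pair_counts = Counter()
item_counts = Counter()
for s in sessions:
    item_counts.update(s)
    pair_counts.update(combinations(sorted(s), 2))

min_support, min_conf = 2, 0.6
for (x, y), n in pair_counts.items():
    for a, b in ((x, y), (y, x)):           # try both rule directions
        conf = n / item_counts[a]
        if n >= min_support and conf >= min_conf and a.startswith("news:"):
            print(f"{a} -> {b}  support={n} confidence={conf:.2f}")
# -> news:space_launch -> movie:the_martian  support=2 confidence=0.67
```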
22

Yang, Ming-te and 楊銘德. "A Planning Study Of Applying Topic Maps on Engineering Design Domain and SWOT Analysis-Steel Tower Project of Transmission Line". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/49681947075388820078.

Abstract
Master's thesis
National Taiwan University of Science and Technology
Department of Mechanical Engineering
ROC academic year 96 (2007)
Engineers need extensive knowledge to plan projects, and engineering design quality affects project results; there is no doubt that a good engineering design promises a successful future. The new era gives us the opportunity to find information through the Internet, but data is not equal to information, nor to knowledge. How managers can help engineers transform data into information, and combine it with the experienced judgment of decision makers to refine that information into useful knowledge, is critical for companies and countries at present. Designing an engineering design management platform to increase work efficiency has become an important lesson. The platform should provide a method that helps analyze strengths, weaknesses, opportunities and threats, and the analysis results should also produce relevant strategies for managers' reference. Information technology and Internet development are advancing rapidly, and using knowledge management to compete with global competitors is popular. Using an IT decision system to establish a well-organized and logical method will certainly improve the efficiency of decision making. This thesis used the MindManager software, developed by Mindjet, to establish engineering design system analysis and planning with topic maps. The thesis also used the OKS samplers tool, developed by Ontopia, to build the topic maps, with the Omnigator knowledge-browsing tool organizing the related documentation to help project design staff navigate and visualize engineering knowledge and documents. The thesis gives a complete example of decision advice for different projects, with the aim of saving project time and cost in order to increase design and construction quality.
23

Mooman, Abdelniser. "Multi-Agent User-Centric Specialization and Collaboration for Information Retrieval". Thesis, 2012. http://hdl.handle.net/10012/6991.

Abstract
The amount of information on the World Wide Web (WWW) is rapidly growing in pace and topic diversity. This has made it increasingly difficult, and often frustrating, for information seekers to retrieve the content they are looking for, as information retrieval systems (e.g., search engines) are unable to decipher the relevance of the retrieved information as it pertains to the information they are searching for. This issue can be decomposed into two aspects: 1) variability of information relevance as it pertains to an information seeker. In other words, different information seekers may enter the same search text, or keywords, but expect completely different results. It is therefore imperative that information retrieval systems possess an ability to incorporate a model of the information seeker in order to estimate the relevance and context of use of information before presenting results. Of course, in this context, by a model we mean the capture of trends in the information seeker's search behaviour. This is what many researchers refer to as personalized search. 2) Information diversity. Information available on the World Wide Web today spans multitudes of inherently overlapping topics, and it is difficult for any information retrieval system to decide effectively on the relevance of the information retrieved in response to an information seeker's query. For example, an information seeker who wishes to use the WWW to learn about a cure for a certain illness would receive a more relevant answer if the search engine were optimized for such domains of topics. This is what is being referred to in the WWW nomenclature as "specialized search". This thesis maintains that the information seeker's search is not intended to be completely random and therefore tends to portray itself as consistent patterns of behaviour. Nonetheless, this behaviour, despite being consistent, can be quite complex to capture. To accomplish this goal, the thesis proposes Multi-Agent Personalized Information Retrieval with Specialization Ontology (MAPIRSO). MAPIRSO offers a complete learning framework that is able to model the end user's search behaviour and interests and to organize information into categorized domains so as to ensure maximum relevance of its responses as they pertain to the end user's queries. Specialization and personalization are accomplished using a group of collaborative agents. Each agent employs a Reinforcement Learning (RL) strategy to capture the end user's behaviour and interests. Reinforcement learning allows the agents to evolve their knowledge of the end user's behaviour and interests as they function to serve him or her. Furthermore, RL allows each agent to adapt to changes in the end user's behaviour and interests. Specialization is the process by which new information domains are created based on existing information topics, allowing new kinds of content to be built exclusively for information seekers. One of the key characteristics of specialization domains is that they are seeker-centric, which allows intelligent agents to create new information based on the information seekers' feedback and their behaviours. Specialized domains are created by intelligent agents that collect information from a specific domain topic. The task of these specialized agents is to map the user's query to a repository of specific domains in order to present users with relevant information.
As a result, mapping users' queries to only relevant information is one of the fundamental challenges in Artificial Intelligence (AI) and machine learning research. Our approach employs intelligent cooperative agents that specialize in building personalized ontology information domains that pertain to each information seeker's specific needs. Specializing and categorizing information into unique domains is one of the challenge areas that have been addressed, and various proposed solutions were evaluated and adopted to handle growing information. However, categorizing information into unique domains does not satisfy each individual information seeker. Information seekers might search for similar topics, but each would have different interests. For example, medical information of a specific medical domain has different importance to doctors and patients. The thesis presents a novel solution that resolves the growing and diverse information by building seeker-centric specialized information domains that are personalized through the information seekers' feedback and behaviours. To address this challenge, the research examines the fundamental components that constitute the specialized agent: an intelligent machine learning system, user input queries, an intelligent agent, and information resources constructed through specialized domains. Experimental work is reported to demonstrate the efficiency of the proposed solution in addressing the overlapping information growth. The experimental work utilizes extensive user-centric specialized domain topics. This work employs personalized and collaborative multi-learning agents and ontology techniques, thereby enriching the queries and domains of the user. Experiments and results have shown that building specialized ontology domains, pertinent to the information seekers' needs, is more precise and efficient compared to other information retrieval applications and existing search engines.
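The reinforcement-learning loop each MAPIRSO agent runs can be caricatured as a bandit over specialized domains: estimate the user's interest per domain, explore occasionally, and update on click feedback. The epsilon-greedy rule and the parameters below are illustrative assumptions; the thesis's RL design is richer than this reduction.

```python
# Epsilon-greedy per-user agent over specialized domains.
import random

class DomainAgent:
    def __init__(self, domains, eps=0.1, lr=0.2):
        self.q = {d: 0.0 for d in domains}   # estimated user interest
        self.eps, self.lr = eps, lr

    def pick_domain(self):
        if random.random() < self.eps:                  # explore
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)              # exploit

    def feedback(self, domain, clicked):
        reward = 1.0 if clicked else 0.0
        self.q[domain] += self.lr * (reward - self.q[domain])

agent = DomainAgent(["medicine", "law", "sports"])
for _ in range(100):                     # simulated user who likes medicine
    d = agent.pick_domain()
    agent.feedback(d, clicked=(d == "medicine"))
print(max(agent.q, key=agent.q.get))     # -> medicine (with high probability)
```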
