Dissertations on the topic "Graphic quality"

To view the other types of publications on this topic, follow this link: Graphic quality.

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Familiarize yourself with the top 50 dissertations for research on the topic "Graphic quality".

Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Johannessen, Lindsey. „Incorporating graphic novels into social studies based instruction an effective means of determining quality graphic novels“. Honors in the Major Thesis, University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/442.

Abstract:
It is becoming increasingly important that teachers teach social studies in a way that keeps students interested in and motivated by what they read. Too often the curriculum relies on physically heavy, hard-to-comprehend, traditional textbooks. Based on the need for extensions to these textbooks, my goal was to establish a guideline for selecting quality graphic novels suited to elementary social studies instruction. My study therefore attempts to answer the question: what is an effective means of determining quality graphic novels? After adapting and creating rubrics for determining the needs and qualities of graphic novels, I was able to identify and analyze several social-studies-related graphic novels appropriate for the elementary curriculum. The investigation yielded 18 graphic novels for possible use in the elementary social studies curriculum, 5 of which were deemed quality according to the established rubrics. Furthermore, the books deemed quality provided more than what the rubrics required; the additional information found within those texts is referred to as a postlude. One strong conclusion of this study is that there is a large void of graphic novels that teachers might link to the social studies curriculum so as to enhance elementary social studies instruction.
B.S.
Bachelors
Education
Elementary Education
2

Young, Jeffry R. (Jeffry Ray). „An Investigation of Young Children's Awareness of Line and Line Quality in Art and Graphic Reproductions“. Thesis, University of North Texas, 1994. https://digital.library.unt.edu/ark:/67531/metadc278901/.

Abstract:
The purpose of this study was to determine whether kindergarten children possess the ability to recognize, match, and discuss lines and line qualities. Using graphics and art reproductions, three matching tasks were constructed which examined young children's awareness of the line qualities of length, width, straightness, direction, movement, and uniformity. Graphics and art reproductions were also used to construct two tracing tasks employed to examine young children's awareness of actual and implied lines. The tasks were administered to 69 kindergarten students from four elementary schools in a public school district in the north central Texas area.
3

Bengtsson, Lisa, und Karin Hägglund. „"Comic Sans might get you killed"– how values are created and used in the evaluation of graphic quality“. Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93172.

Abstract:
This is a bachelor thesis on how a group of Graphic Design and Communication (GDK) students at Linköping University evaluate aesthetic graphic quality. The aim of the thesis is to study how the students express themselves, which values the evaluation is based upon, and where these values originate. The purpose is also to examine, from a sociological perspective, the significance of possible shared values. The essay draws on sociological theories of group socialization and good taste. Further, theory on the subject of aesthetics is presented, along with how the notions of good and bad taste have been used in art and design. The data that make up the result come mainly from two focus groups, but also from a participant observation and follow-up questions sent to the focus-group participants. The result shows that the participants primarily judge aesthetic quality by whether the typography, colours and images of the material send out a clear message; the aesthetic impression is therefore affected by function. This is not the case, however, when the participants leave their role as designers: from a position taken outside the education, they can justify their aesthetic opinions differently. The result also shows that a clean idiom rooted in functionalist aesthetics is a norm, and that this partly descends from an influential guest lecturer. The participants also feel that they, as designers, are expected to appreciate a certain aesthetic but also to know what is "ugly". They believe these opinions are common among designers in general and not only within GDK. The students sometimes consider these opinions on aesthetics an obstacle to developing their own manner, but at the same time they believe they are good for the unity of the group. Being able to motivate an aesthetic position with arguments based on function is also important in contact with clients, which indicates that the dominance of functionalist aesthetics has to do with the economic aspect of graphic design.
Denna kandidatuppsats handlar om hur en grupp studenter på programmet Grafisk Design och Kommunikation (GDK) på Linköpings Universitet bedömer estetisk grafisk kvalitet. Syftet är att undersöka hur studenterna uttrycker sig, vilka värderingar som bedömningen grundar sig i och varifrån dessa kommer. Syftet är också att undersöka vad eventuella kollektiva värderingar har för betydelse ur ett sociologiskt perspektiv. Uppsatsen utgår ifrån sociologiska teorier kring gruppsocialisation och god smak. Vidare presenteras teori kring ämnet estetik, liksom hur begreppen god och dålig smak har används inom konst och design. Det insamlade materialet som utgör resultatet är huvudsakligen hämtat från två fokusgrupper, men också från en deltagande observation samt ett antal följdfrågor till deltagarna i fokusgrupperna. Resultatet visar att deltagarna ofta bedömer den estetiska kvaliteten i första hand efter huruvida det grafiska materialets typografi, färger och bilder gör det lätt att ta till sig budskapet det är avsett att förmedla. Det estetiska intrycket är således påverkat av funktionen. Detta gäller dock inte om deltagarna kliver ur sin roll som designers, då de kan motivera sitt estetiska ställningstagande med en annan position de kan inta utanför utbildningen. Resultatet visar också att deltagarna anser att ett avskalat formspråk med funktionalistiska rötter har blivit norm på utbildningen, och att detta till viss del beror på en inflytelserik gästföreläsare de haft under årskurs ett. Deltagarna anser också att de som designers förväntas uppskatta en viss sorts estetik men också veta vad som är ”fult”. De menar att detta är uppfattningar som finns bland designers i allmänhet och inte bara inom utbildningen. Dessa uppfattningar kring estetik anser studenterna ibland är ett hinder för att utveckla en egen stil som formgivare, samtidigt som de skapar en sammanhållning inom gruppen. Att kunna motivera ett estetiskt ställningstagande med funktionella argument är dessutom viktigt vid kundkontakter, vilket tyder på att den funktionalistiska estetikens dominans kan ha att göra med den ekonomiska aspekten av grafisk design.
4

Rabelo, Emival Borges. „Estudo da formação e implantação de equipes em celulas autogerenciaveis numa industria grafica“. [s.n.], 2004. http://repositorio.unicamp.br/jspui/handle/REPOSIP/264400.

Abstract:
Orientador: Sergio Tonini Button
Dissertação (mestrado profissional) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecanica
Resumo: O presente trabalho é fruto de pesquisa focada na implantação e desenvolvimento de equipe em células autogerenciáveis da Editora Gráfica Terra localizada em Goiânia-GO, a empresa tem uma expressiva representatividade no setor gráfico do centro-oeste brasileiro. A indústria gráfica vem passando por importantes mudanças; crescem os riscos, aumentam os desafios. A indústria gráfica do centro-oeste precisa criar estratégias de competitividade, para permitir ao setor enfrentar a concorrência acirrada de grandes empresas do sul e sudeste do país, utilizando métodos que não sejam intuitivos ou comparativos. A Metodologia utilizada foi uma pesquisa qualitativa e quantitativa, do tipo estudo de caso com apoio da pesquisa documental e bibliográfica. O século XXI começa com novos modelos de gestão de pessoas que passam a ser o ativo de maior valor na sociedade. As equipes em células fazem parte desta nova maneira de gerir a empresa, dando ênfase ao trabalho em equipe, a transferência do poder para os trabalhadores que executam seus próprios trabalhos e a busca contínua da qualidade conquistada através de planejamentos, processos, métodos
Abstract: This work is the result of research focused on the implementation and development of teams in self-managed cells at Editora Gráfica Terra, located in Goiânia-GO, a company with a significant presence in the graphic-arts sector of the Brazilian centre-west. The printing industry has been going through important changes; risks and challenges are growing. The printing industry of the centre-west needs to create competitive strategies that allow the sector to face the fierce competition from large companies in the south and southeast of the country, using methods that are neither intuitive nor merely comparative. The methodology was a qualitative and quantitative case study supported by documentary and bibliographic research. The twenty-first century begins with new models of people management, in which people become the most valuable asset in society. Cell-based teams are part of this new way of managing the company, emphasising teamwork, the transfer of power to the workers who carry out the work, and the continuous pursuit of quality achieved through planning, processes and methods
Mestrado
Gestão da Qualidade Total
Mestre em Engenharia Mecânica
5

Kallioinen, Lundgren Sara. „Cheap Quality & Urban Unrest : The prettiest words are the ones we don't say“. Thesis, Konstfack, Textil, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:konstfack:diva-7816.

Abstract:
With a background in ceramics and graphic design, I have developed my thoughts about craft as a combination of verbal and nonverbal communication, but in textiles. With the written word as one of my main materials, this project looks into class and material hierarchies filtered through autofictive stories from my life. This paper explores themes that impact my decisions in the making process: choosing materials, motifs, texts and words, politics and poetry. It deals with all the information I push into patchworking, shirring, tufting and sculpting textiles, with the goal of painting a picture of an often unwanted section of society. To discuss this I have chosen references dealing with sloppy craft, text-based art, graffiti and craft traditions, mixed with news articles and economics. Through all parts of the project I am on a balancing line between chaos and perfection, truth and fiction.
6

Байдак, С. Н., und Ю. С. Михайленко. „Повышение качества графических изображений, внедренных в документ Microsoft Excel“. Thesis, Сумский государственный университет, 2016. http://essuir.sumdu.edu.ua/handle/123456789/47899.

Abstract:
Illustrations give a general idea of the content of the material and help the reader grasp complex ideas more quickly. The studies carried out showed that most specialists working with large volumes of data use Microsoft Excel to perform calculations, build tables and complex charts, and create and analyse databases.
7

Bothorel, Gwenael. „Algorithmes automatiques pour la fouille visuelle de données et la visualisation de règles d’association : application aux données aéronautiques“. Phd thesis, Toulouse, INPT, 2014. http://oatao.univ-toulouse.fr/13783/1/bothorel.pdf.

Abstract:
For several years we have been witnessing a veritable explosion in data production in many fields, such as social networks and online commerce. This recent phenomenon is reinforced by the spread of connected devices, whose use has become almost permanent. The aeronautical domain is no exception to this trend. Indeed, the growing need for data, driven by the evolution of air traffic management systems and by events, has raised awareness of their importance and of new ways of approaching them, whether in terms of storage, availability or exploitation. Hosting capacities have been adapted and are not a major difficulty; the difficulty lies rather in processing the information and extracting knowledge from it. Within Visual Analytics, an emerging discipline born in the aftermath of the 2001 attacks, this extraction combines algorithmic and visual approaches in order to benefit simultaneously from human flexibility, creativity and knowledge, and from the computing power of computer systems. This thesis addressed the realisation of this combination, keeping the human in a central, decision-making position. On the one hand, the user's visual exploration of the data drives the generation of association rules, which establish relations between data items. On the other hand, these rules are exploited by automatically configuring the visualization of the data they concern, in order to highlight them. To this end, this bidirectional process between data and rules was formalised and then illustrated, using recordings of recent air traffic, on the Videam platform that we developed. In a modular and evolving environment, it integrates several HMI and algorithmic building blocks that allow interactive exploration of the data and of the association rules, while leaving the user in overall control of the process, notably by parameterising and steering the algorithms.
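The abstract above couples visual exploration with association-rule mining. As a rough illustration of the rule-mining side only, not of the Videam platform itself, the following minimal Python sketch computes support and confidence for one-to-one rules over a handful of hypothetical trajectory attributes; the attribute names and thresholds are invented for the example.

```python
from itertools import combinations

# Hypothetical "transactions": sets of attributes observed together in
# aircraft trajectory records; not data from the thesis.
transactions = [
    {"night", "delay", "reroute"},
    {"night", "delay"},
    {"day", "delay", "reroute"},
    {"night", "reroute"},
    {"night", "delay", "reroute"},
]

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent), the usual association-rule confidence."""
    return support(antecedent | consequent) / support(antecedent)

# Enumerate simple one-to-one rules above (arbitrary) minimal thresholds.
items = sorted(set().union(*transactions))
for a, b in combinations(items, 2):
    for lhs, rhs in (({a}, {b}), ({b}, {a})):
        if support(lhs) == 0:
            continue
        s, c = support(lhs | rhs), confidence(lhs, rhs)
        if s >= 0.4 and c >= 0.7:
            print(f"{lhs} -> {rhs}  support={s:.2f} confidence={c:.2f}")
```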
8

Kakvic, Martin. „Možnosti simulace a optimalizace systémů hromadné obsluhy v prostředí MATLAB“. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-221139.

Abstract:
This master's thesis deals with the possibilities of simulation and optimization of Ethernet networks in the MATLAB environment. Ethernet technology, quality of service and the differentiated-services architecture are described in the thesis. Based on these concepts, a toolbox was created that allows the user to build a dynamic network and configure it through a GUI. The end of the thesis presents the results of network simulations run in the created program.
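The thesis work itself is a MATLAB toolbox, so the following is only a language-neutral illustration of the kind of discrete-event queueing simulation such tools are built on: a minimal M/M/1 queue in Python whose simulated mean sojourn time can be checked against the analytic value 1/(mu - lambda). The rates and packet count are arbitrary.

```python
import random

def mm1_mean_delay(arrival_rate, service_rate, n_packets=100_000, seed=1):
    """Simulate an M/M/1 queue and return the mean packet sojourn time."""
    rng = random.Random(seed)
    clock = 0.0           # arrival time of the current packet
    server_free_at = 0.0  # time at which the server finishes the previous packet
    total_delay = 0.0
    for _ in range(n_packets):
        clock += rng.expovariate(arrival_rate)          # next arrival
        start = max(clock, server_free_at)              # wait if server is busy
        server_free_at = start + rng.expovariate(service_rate)
        total_delay += server_free_at - clock           # waiting + service
    return total_delay / n_packets

# With rho = 0.8 the simulated mean delay should approach 1/(mu - lambda) = 2.5 s.
print(mm1_mean_delay(arrival_rate=1.6, service_rate=2.0))
```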
9

Ramousse, Florian. „Contributions à l’utilisation de la réalité virtuelle pour la thérapie des troubles du comportement alimentaire“. Electronic Thesis or Diss., Ecully, Ecole centrale de Lyon, 2024. https://bibli.ec-lyon.fr/exl-doc/TH_2024ECDL0023.pdf.

Abstract:
L’utilisation des technologies immersives destinées à des fins thérapeutiques est pratiquée depuis plusieurs années. Si ces techniques furent d’abord appliquées aux troubles phobiques, elles se sont peu à peu étendues vers d’autres pathologies telles que la schizophrénie et les troubles alimentaires. Les travaux existants concernant l’utilisation de la réalité virtuelle (RV) pour le traitement de troubles alimentaires se concentrent sur deux problématiques : (1) la correction de la distorsion de la représentation que le patient se fait de lui-même et pour laquelle la RV l’aide à corriger cette représentation erronée via l’incarnation ou la visualisation d’un avatar ; (2) l’utilisation de l’environnement avec des éléments déclencheurs de la pathologie (ex., nourriture) dans le but de mieux caractériser les symptômes et de réaliser une thérapie d’exposition aux signaux. Le premier objectif de la thèse est de proposer et d’évaluer un environnement immersif induisant les conditions de craving alimentaire (besoin irrépressible de consommation d’un produit associé à une recherche compulsive), le tout chez les personnes atteintes de bulimia nervosa ou d’hyperphagie boulimique, comparativement à des personnes saines appariées. Le développement de cet environnement repose sur un travail de design collaboratif, dans lequel l’utilisation d’un environnement immersif avec un scénario scripté et semi-guidé, avec des stimuli multi-modaux, constitue un élément novateur. La caractérisation de l’environnement s’effectue sur l’actuelle mesure de référence du craving alimentaire en RV, soit une auto-évaluation par échelle verbale simple dont nous étudions les variations durant l’exploration du scénario avant et après chaque pièce virtuelle d’exposition, ainsi que son association à l’anxiété induite par l’exploration aux mêmes moments. De plus, certains paramètres physiologiques ayant pu être associés au craving des troubles addictifs sont mesurés aux différents moments d’évaluation (variabilité de fréquence cardiaque (variations des intervalles entre chaque battement de coeur) et activité électrodermale (activité bioélectrique cutanée physiologique)). Enfin, nous utilisons également des méthodes de phénotypage, basées sur des questionnaires d’auto-évaluation, qui visent à mettre en évidence des dimensions comportementales et émotionnelles qui peuvent être des facteurs propices au déclenchement des crises. Par ailleurs, dans le cadre des études sur l’envie de manger, la qualité visuelle apparaît comme un paramètre majeur qu’il faut contrôler afin de proposer des environnements adaptés aux contraintes d’expérience utilisateur ainsi qu’aux contraintes techniques. Le second objectif de la thèse est d’étudier comment la qualité visuelle des stimuli alimentaires influence le désir de manger dans un environnement en réalité virtuelle. Cette évaluation s’effectue sur des personnes non pathologiques, avec des visuels alimentaires de qualité graphique variable et classés au préalable selon une métrique entraînée sur un apprentissage profond capable de délivrer un score moyen de qualité graphique.
The use of immersive technologies for therapeutic purposes has been practiced for several years. While these techniques were initially applied to phobic disorders, they have gradually expanded to other disorders such as anxiety, schizophrenia and eating disorders. Existing research on the use of virtual reality (VR) for the treatment of eating disorders focuses on two issues: (1) correcting the distortion of the patient’s self-representation, where VR helps correct this erroneous representation through embodiment or visualization of an avatar; (2) using the environment with triggering elements of the pathology (e.g., food) to better characterize symptoms and conduct exposure therapy to these cues. The first objective of the thesis is to propose and evaluate an immersive environment inducing conditions of food craving (an irresistible urge to consume a product associated with compulsive seeking) in individuals with bulimia nervosa or binge-eating disorder, compared to matched healthy subjects. The development of this environment is based on collaborative design work, in which the use of multi-modal stimuli is an innovative element. The characterization of the environment is based on the current reference measure of food craving in VR, which is a self-assessment using a simple verbal scale. We study its variations during the exploration of the scenario before and after each virtual exposure, as well as its association with anxiety induced by the exploration at the same moments. Additionally, certain physiological parameters previously associated with cravings in addictive disorders are measured at different evaluation points (heart rate variability and electrodermal activity). Finally, we also use phenotyping methods based on self-assessment questionnaires to highlight behavioral and emotional dimensions that may contribute to triggering episodes. Moreover, in the context of studies on the desire to eat, visual quality emerges as a major parameter that needs to be controlled in order to offer environments suitable for user experience constraints and technical limitations. The second objective of the thesis is to study how the visual quality of food stimuli influences the desire to eat in a virtual reality environment. This evaluation is performed on non-pathological individuals, with food visuals of varying graphic quality, pre-classified according to a deep-learning-trained metric capable of delivering an average graphic quality score.
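The abstract mentions heart rate variability computed from beat-to-beat intervals as one of the physiological measures. As a small, generic illustration, not necessarily the index used in the thesis, the sketch below computes RMSSD, a common time-domain HRV measure, on hypothetical RR-interval data.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals (ms),
    a common time-domain index of heart rate variability."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical beat-to-beat intervals recorded before and during exposure.
baseline = [812, 790, 805, 798, 820, 801]
exposure = [640, 655, 630, 660, 642, 648]
print(rmssd(baseline), rmssd(exposure))
```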
10

Santos, Sergio Rodrigues dos. „Proposta metodologica utilizando ferramentas de qualidade na avaliação do processo de pulverização“. [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/257202.

Abstract:
Orientador: Antonio Jose da Silva Maciel
Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Agricola
Resumo: O objetivo deste trabalho foi gerar uma metodologia para avaliar o processo de pulverização com a utilização das ferramentas da qualidade. Para tanto foram listados os fatores primários, secundários e terciários e com auxílio da ferramenta ¿check list¿ foram elaboradas as listas de checagem. Foram avaliados os fatores mão-de-obra, máquina, material, meio e método de 32 processos de pulverização antes da aplicação de defensivos agrícolas. Nesta avaliação cada fator recebeu uma pontuação onde a somatória foi de 750 pontos. Desta amostra foi avaliado a aplicação de herbicida de dez processos participantes do Programa Agrária de Qualidade Total Rural ¿ PAQTRural. Os itens de controle avaliados nos dez processos foram a qualidade de distribuição das gotas, o controle das plantas daninhas, falhas entre os rastros de pulverização e fitotoxidez causado às culturas. A qualidade de distribuição das gotas foi avaliada no momento da execução da aplicação do defensivo onde se posicionou vinte papéis hidrossensíveis na superfície do solo. Para avaliar a qualidade distribuição de gotas foi considerado o potencial risco de deriva (PRD), a densidade de gotas (N cm-2), o diâmetro mediano volumétrico (DMV) e a amplitude relativa (AR). Considerou-se também um total de 750 pontos para os itens de controle se todos estivessem em conformidade. Os resultados mostram que a pontuação média dos fatores mão-de-obra, máquina, material, meio e método foram 78, 211, 49, 20 e 94 pontos, respectivamente. Considerando a somatória dos pontos dos fatores para os 32 processos, o valor mínimo encontrado foi de 230 e o máximo de 620 pontos. Para os processos participantes do programa de qualidade pode-se notar uma menor amplitude onde a variação foi entre 410 e 620 pontos. A somatória dos fatores avaliados dos dez processos com os pontos obtidos nos itens de controle variou de 812 até 1263 pontos de um total de 1500. Com a metodologia pode-se identificar quais as causas comuns dos processos que podem afetar o seu resultado
Abstract: The purpose of this work was to develop a methodology to evaluate the spraying process using quality tools. Primary, secondary and tertiary factors were listed and, with the support of the check-list tool, checklists were drawn up. The factors labour, machine, material, environment and method were evaluated for 32 spraying processes before the application of agricultural pesticides; each factor received a score, and the maximum total was 750 points. From this sample, the herbicide application of ten processes taking part in the Programa Agrária de Qualidade Total Rural (PAQTRural) was evaluated. The control items assessed in the ten processes were the quality of droplet distribution, weed control, gaps between spray passes and phytotoxicity caused to the crops. The quality of droplet distribution was evaluated at the moment of pesticide application by placing twenty water-sensitive papers on the soil surface. To evaluate the droplet distribution, the potential drift risk (PRD), the droplet density (N cm-2), the volume median diameter (VMD) and the relative amplitude (RA) were considered. A total of 750 points was also assigned to the control items if all of them were in conformity. The results show that the average scores of the factors labour, machine, material, environment and method were 78, 211, 49, 20 and 94 points, respectively. Considering the sum of the factor scores for the 32 processes, the minimum value found was 230 points and the maximum 620 points. For the processes taking part in the quality programme a smaller range can be observed, with values between 410 and 620 points. The sum of the factors evaluated for the ten processes plus the points obtained in the control items ranged from 812 to 1263 points out of a total of 1500. With this methodology it is possible to identify the common causes in the processes that can affect their result
Doutorado
Maquinas Agricolas
Doutor em Engenharia Agrícola
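The abstract above evaluates droplet distribution through droplet density and the volume median diameter (VMD). The following sketch shows, under simplified assumptions and with invented readings, how these two quantities can be computed from droplet diameters measured on a water-sensitive paper.

```python
def volume_median_diameter(diameters_um):
    """Diameter below which half of the total spray volume is contained (VMD)."""
    drops = sorted(diameters_um)
    volumes = [d ** 3 for d in drops]   # volume ~ d^3 (the constant factor cancels)
    half = sum(volumes) / 2
    acc = 0.0
    for d, v in zip(drops, volumes):
        acc += v
        if acc >= half:
            return d

def droplet_density(n_droplets, paper_area_cm2):
    """Droplets per square centimetre on a water-sensitive paper."""
    return n_droplets / paper_area_cm2

# Hypothetical readings: measured diameters (micrometres) and a droplet count
# over one 19.5 cm2 water-sensitive paper.
sample = [100, 120, 140, 160, 180, 200, 220, 240, 260, 280]
print(volume_median_diameter(sample), droplet_density(585, paper_area_cm2=19.5))
```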
11

Tian, Chao. „Towards effective analysis of big graphs : from scalability to quality“. Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/29578.

Abstract:
This thesis investigates the central issues underlying graph analysis, namely, scalability and quality. We first study the incremental problems for graph queries, which aim to compute the changes to the old query answer, in response to the updates to the input graph. The incremental problem is called bounded if its cost is decided by the sizes of the query and the changes only. No matter how desirable, however, our first results are negative: for common graph queries such as graph traversal, connectivity, keyword search and pattern matching, their incremental problems are unbounded. In light of the negative results, we propose two new characterizations for the effectiveness of incremental computation, and show that the incremental computations above can still be effectively conducted, by either reducing the computations on big graphs to small data, or incrementalizing batch algorithms by minimizing unnecessary recomputation. We next study the problems with regards to improving the quality of the graphs. To uniquely identify entities represented by vertices in a graph, we propose a class of keys that are recursively defined in terms of graph patterns, and are interpreted with subgraph isomorphism. As an application, we study the entity matching problem, which is to find all pairs of entities in a graph that are identified by a given set of keys. Although the problem is proved to be intractable, and cannot be parallelized in logarithmic rounds, we provide two parallel scalable algorithms for it. In addition, to catch numeric inconsistencies in real-life graphs, we extend graph functional dependencies with linear arithmetic expressions and comparison predicates, referred to as NGDs. Indeed, NGDs strike a balance between expressivity and complexity, since if we allow non-linear arithmetic expressions, even of degree at most 2, the satisfiability and implication problems become undecidable. A localizable incremental algorithm is developed to detect errors using NGDs, where the cost is determined by small neighbors of nodes in the updates instead of the entire graph. Finally, a rule-based method to clean graphs is proposed. We extend graph entity dependencies (GEDs) as data quality rules. Given a graph, a set of GEDs and a block of ground truth, we fix violations of GEDs in the graph by combining data repairing and object identification. The method finds certain fixes to errors detected by GEDs, i.e., as long as the GEDs and the ground truth are correct, the fixes are assured correct as their logical consequences. Several fundamental results underlying the method are established, and an algorithm is developed to implement the method. We also parallelize the method and guarantee to reduce its running time with the increase of processors.
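The abstract introduces NGDs, graph dependencies extended with linear arithmetic expressions and comparison predicates. The sketch below is only a toy illustration of that idea: a hypothetical property graph and a single linear comparison constraint whose violations are reported. It is not the formalism or the algorithms of the thesis.

```python
# A tiny property graph: node attributes plus labelled edges (all hypothetical).
nodes = {
    "p1": {"kind": "product", "price": 9.99},
    "p2": {"kind": "product", "price": 12.50},
    "o1": {"kind": "offer", "price": 13.80},
}
edges = [("o1", "advertises", "p2"), ("o1", "advertises", "p1")]

def violations(max_markup=0.2):
    """Report edges violating the linear comparison
    offer.price <= (1 + max_markup) * product.price."""
    bad = []
    for src, label, dst in edges:
        if label == "advertises":
            offer, product = nodes[src], nodes[dst]
            if offer["price"] > (1 + max_markup) * product["price"]:
                bad.append((src, dst))
    return bad

print(violations())   # [('o1', 'p1')] since 13.80 > 1.2 * 9.99
```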
12

Zellagui, Soumia. „Reengineering Object Oriented Software Systems for a better Maintainability“. Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS010/document.

Abstract:
Les systèmes logiciels existants représentent souvent des investissements importants pour les entreprises qui les développent avec l’intention de les utiliser pendant une longue période de temps. La qualité de ces systèmes peut être dégradée avec le temps en raison des modifications complexes qui leur sont incorporées. Pour faire face à une telle dégradation lorsque elle dépasse un seuil critique, plusieurs stratégies peuvent être utilisées. Ces stratégies peuvent se résumer en: 1) remplaçant le système par un autre développé à partir de zéro, 2) poursuivant la maintenance(massive) du système malgré son coût ou 3) en faisant une réingénierie du système. Le remplacement et la maintenance massive ne sont pas des solutions adaptées lorsque le coût et le temps doivent être pris en compte, car elles nécessitent un effort considérable et du personnel pour assurer la mise en œuvre du système dans un délai raisonnable. Dans cette thèse, nous nous intéressons à la solution de réingénierie. En général, la réingénierie d’un système logiciel inclut toutes les activités après la livraison à l’utilisateur pour améliorer sa qualité. Cette dernière est souvent caractérisé par un ensemble d’attributs de qualité. Nous proposons trois contributions pour améliorer les attributs de qualité spécifiques, que soient:la maintenabilité, la compréhensibilité et la modularité. Afin d’améliorer la maintenabilité, nous proposons de migrer les systèmes logiciels orientés objets vers des systèmes orientés composants. Contrairement aux approches existantes qui considèrent un descripteur de composant comme un cluster des classes, chaque classe du système existant sera migré en un descripteur de composant. Afin d’améliorer la compréhensibilité, nous proposons une approche pour la reconstruction de modèles d’architecture d’exécution des systèmes orientés objet et de gérer la complexité des modèles résultants. Les modèles, graphes, générés avec notre approche ont les caractéristiques suivantes: les nœuds sont étiquetés avec des durées de vie et des probabilités d’existence permettant 1) une visualisation des modèles avec un niveau de détail. 2) de cacher/montrer la structure interne des nœuds. Afin d’améliorer la modularité des systèmes logiciels orientés objets, nous proposons une approche d’identification des modules et des services dans le code source de ces systèmes.Dans cette approche, nous croyons que la structure composite est la structure principale du système qui doit être conservée lors du processus de modularisation, le composant et ses composites doivent être dans le même module. Les travaux de modularisation existants qui ont cette même vision, supposent que les relations de composition entre les éléments du code source sont déjà disponibles ce qui n’est pas toujours évident. Dans notre approche, l’identification des modules commence par une étape de reconstruction de modèles d’architecture d’exécution du système étudié. Ces modèles sont exploités pour d’identification de relations de composition entre les éléments du code source du système étudié. Une fois ces relations ont été identifiées, un algorithme génétique conservatif aux relations de composition est appliqué sur le système pour identifier des modules. En dernier, les services fournis par les modules sont identifiés à l’aide des modèles de l’architecture d’exécution du système logiciel analysé. 
Quelques expérimentations et études de cas ont été réalisées pour montrer la faisabilité et le gain en maintenabilité, compréhensibilité et modularité des logiciels traités avec nos propositions
Legacy software systems often represent significant investments for the companies that develop them with the intention of using them for a long period of time. The quality of these systems can degrade over time due to the complex changes incorporated into them. In order to deal with these systems when their quality degradation exceeds a critical threshold, a number of strategies can be used. These strategies can be summarized as: 1) discarding the system and developing another one from scratch, 2) carrying on the (massive) maintenance of the system despite its cost, or 3) reengineering the system. Replacement and massive maintenance are not suitable solutions when cost and time are to be taken into account, since they require considerable effort and staff to complete the system in a moderate time. In this thesis, we are interested in the reengineering solution. In general, software reengineering includes all activities following the delivery to the user that improve the software system's quality. This quality is often characterized by a set of quality attributes. We propose three contributions to improve specific quality attributes, namely: maintainability, understandability and modularity. In order to improve maintainability, we propose to migrate object-oriented legacy software systems into equivalent component-based ones. Contrary to existing approaches that consider a component descriptor as a cluster of classes, each class in the legacy system is migrated into a component descriptor. In order to improve understandability, we propose an approach for recovering runtime architecture models of object-oriented legacy systems and managing the complexity of the resulting models. The models recovered by our approach have the following distinguishing features: nodes are labeled with lifespans and empirical probabilities of existence that enable 1) visualization with a level of detail, and 2) collapsing/expanding of objects to hide/show their internal structure. In order to improve the modularity of object-oriented software systems, we propose an approach for identifying modules and services in the source code. In this approach, we believe that the composite structure is the main structure of the system and must be retained during the modularization process: a component and its composites must be in the same module. Existing modularization works that share this vision assume that the composition relationships between the elements of the source code are already available, which is not always obvious. In our approach, module identification starts with a step of runtime architecture model recovery. These models are exploited to identify composition relationships between the elements of the source code. Once these relationships have been identified, a composition-conservative genetic algorithm is applied to the system to identify modules. Lastly, the services provided by the modules are identified using the runtime architecture models of the software system. Some experimentations and case studies have been performed to show the feasibility and the gain in maintainability, understandability and modularity of the software systems studied with our proposals
13

Lankes, Franz. „Increasing the quality of real-time rendering in driving simulation by means of programmable graphics hardware = Qualitätssteigerung der Echtzeitvisualisierung in der Fahrsimulation mittels programmierbarer Graphik-Hardware“. kostenfrei, 2010. http://d-nb.info/100061722X/34.

14

Free, Frank Borrego Jaime. „Porting high quality graphics simulations to a low-cost computer architecture /“. Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1996. http://handle.dtic.mil/100.2/ADA319381.

Abstract:
Thesis (M.S. in Computer Science) Naval Postgraduate School, September 1996.
Thesis advisor(s): David R. Pratt, J.S. Falby. "September 1996." Includes bibliographical references (p. 251-252). Also available online.
15

Free, Frank, und Jaime Borrego. „Porting high quality graphics simulations to a low-cost computer architecture“. Thesis, Monterey, California. Naval Postgraduate School, 1996. http://hdl.handle.net/10945/32240.

Abstract:
Two disadvantages of using Silicon Graphics, Inc. (SGI) computers and SGI's IRIS Performer application programming interface (API) in NPSNET are the current inability to run the graphic simulations on more popular environments, such as personal computer (PC) operating systems, and the increased expense associated with the alternative of choosing graphics specific hardware over lower cost PCs. Work detailed in this thesis addresses these problems by porting the graphics code from NPSNET to relatively inexpensive PC hardware running the Microsoft Windows NT OS. Two independent approaches were taken. The first created a library of graphics calls which simulate the syntax and functionality of Performer calls, but which have been redefined in terms of the Gemini Technology Corporation's OpenGVSTM API, which is capable of running on the NT platform. The second proposed and implemented a prototype graphics display manager coded using only OpenGVS, rather than Performer, for a proposed platform-independent redesign of NPSNET. As a result of this effort, the goal of porting IRIS Performer graphics simulations to the PC has been accomplished, and a new architecture for NPSNET display managers has been validated.
16

Chan, Ming-Yuen. „Quality enhancement and relation-aware exploration pipeline for volume visualization /“. View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?CSED%202009%20CHANM.

17

Cook, Adrian Roger. „Neural network evaluation of the effectiveness of rendering algorithms for real-time 3D computer graphics“. Thesis, Nottingham Trent University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302404.

18

Torminato, Silvio Miotta. „Analise da utilização da ferramenta CEP = um estudo de caso na manufatura de autopeças“. [s.n.], 2004. http://repositorio.unicamp.br/jspui/handle/REPOSIP/264246.

Abstract:
Orientador: Olivio Novaski
Dissertação (mestrado profissional) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecanica
Resumo: O objetivo principal desta dissertação é apresentar uma aplicação da ferramenta CEP, quando essa aplicação se desenvolve em um ambiente onde a preocupação maior é focada na qualidade, deslocando o conceito de volume de controle para a especificidade do mesmo. Mostra-se que apesar das limitações de determinados processos, a ferramenta pode auxiliar os mesmos controlando e reduzindo sua variabilidade dentro dos parâmetros tecnológicos conhecidos. O estudo se iniciou objetivando a redução de cartas de controle, já que este excesso de cartas, desmotivava os profissionais envolvidos, colocando a ferramenta CEP em descrédito. Durante o estudo de caso, identificou-se uma oportunidade de utilização eficaz dos recursos disponíveis e da aplicação da ferramenta em duas áreas distintas da empresa. Nessas, modo conseguiu-se um controle mais eficaz do processo, resultando na redução de horas paradas de máquina, eliminação de risco de acidente, volta da confiança na ferramenta CEP obteve-se estabilidade em dois processos controlando somente um deles
Abstract: The main goal of this dissertation is to present an application of the SPC tool in an environment where the main concern is quality, shifting the emphasis from the volume of control to its specificity. It shows that, despite the limitations of certain processes, the tool can help control them and reduce their variability within known technological parameters. The study began with the objective of reducing the number of control charts, since the excess of charts demotivated the professionals involved and discredited the SPC tool. During the case study, an opportunity was identified to make effective use of the available resources and to apply the tool in two distinct areas of the company. In this way a more effective control of the process was achieved, resulting in a reduction of machine downtime, the elimination of an accident risk and renewed confidence in the SPC tool; stability was obtained in two processes while controlling only one of them
Mestrado
Planejamento e Gestão Estrategica da Manufatura
Mestre Profissional em Engenharia Mecanica
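The abstract above concerns SPC control charts. As a generic illustration of the underlying computation, not of the company's charts or data, the sketch below derives the control limits of an individuals chart from the average moving range, using the standard 2.66 constant, and flags points outside the limits; the measurements are invented.

```python
def individuals_chart_limits(measurements):
    """Control limits for an individuals (X) chart based on the average moving range.
    UCL/LCL = mean +/- 2.66 * MRbar, the usual Shewhart constant for n = 2."""
    mean = sum(measurements) / len(measurements)
    moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

# Hypothetical dimension readings from one machining process (mm).
data = [25.02, 24.98, 25.05, 24.97, 25.01, 25.08, 24.95, 25.03]
lcl, centre, ucl = individuals_chart_limits(data)
out_of_control = [x for x in data if not lcl <= x <= ucl]
print(round(lcl, 3), round(centre, 3), round(ucl, 3), out_of_control)
```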
19

Garduno, Barrera David Rafael Diaz Michel. „A differentiated quality of service oriented multimedia multicast protocol Un protocole multimedia multipoint à qualité de service différenciée /“. Toulouse : INP Toulouse, 2005. http://ethesis.inp-toulouse.fr/archive/00000081.

20

Kite, Thomas David. „Design and quality assessment of forward and inverse error diffusion halftoning algorithms /“. Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

21

Davot, Tom. „A la recherche de l’échafaudage parfait : efficace, de qualité et garanti“. Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTS030.

Abstract:
Le séquençage est un processus en biologie qui permet de déterminer l'ordre des nucléotides au sein de la molécule d'ADN. Le séquençage produit un ensemble de fragments, appelés lectures, dans lesquels l'information génétique est connue. Seulement, la séquence génomique n'est connue que de façon parcellaire, pour pouvoir faire son analyse, il convient alors de la reconstituer à l'aide d'un certain nombre de traitements informatiques. Dans cette thèse, nous avons étudié deux problèmes mathématiques issus de ce séquençage : l'échafaudage et la linéarisation.L'échafaudage est un processus qui intervient après l'assemblage des lectures en contigs. Il consiste en la recherche de chemins et de cycles dans un graphe particulier appelé graphe d'échafaudage. Ces chemins et cycles représentent les chromosomes linéaires et circulaires de l'organisme dont l'ADN a été séquencée. La linéarisation est un problème annexe à l'échafaudage : quand on prend en compte le fait que les contigs puissent apparaitre plusieurs fois dans la séquence génomique, des ambiguïtés surviennent dans le calcul d'une solution. Celles-ci, si elles ne sont pas traitées, peuvent entrainer la production d'une séquence chimérique lors de l'échafaudage. Pour résoudre ce problème, il convient alors de dégrader de façon parcimonieuse une solution calculée par l'échafaudage. Dans tous les cas, ces deux problèmes peuvent être modélisés comme des problèmes d'optimisation dans un graphe.Dans ce document, nous ferons l'étude de ces deux problèmes en se concentrant sur trois axes. Le premier axe consiste à classifier ces problèmes au sens de la complexité. Le deuxième axe porte sur le développement d'algorithmes, exacts ou approchés, pour résoudre ces problèmes. Enfin, le dernier axe consiste à implémenter et tester ces algorithmes pour observer leurs comportements sur des instances réelles
Sequencing is a process in biology that determines the order of nucleotides in the DNA. It produces a set of fragments, called reads, in which the genetic information is known. Unfortunately, the genomic sequence is only known in small pieces; in order to analyse it, it must be reconstructed using a number of computational steps. In this thesis we studied two mathematical problems arising from sequencing: scaffolding and linearization. Scaffolding takes place after the reads have been assembled into larger subsequences called contigs. It consists in searching for paths and cycles in a particular graph called the scaffold graph; these paths and cycles represent the linear and circular chromosomes of the organism whose DNA was sequenced. Linearization is a problem related to scaffolding: when we take into account that contigs may appear several times in the genomic sequence, ambiguities can arise in the computed solution, and if they are not resolved, a chimeric sequence may be produced by the scaffolding. To solve this problem, a solution computed by the scaffolding step has to be degraded in a parsimonious way. In both cases, the two problems can be modelled as optimization problems in a graph. In this document we study both problems along three axes. The first axis consists in classifying the problems in terms of computational complexity. The second concerns the development of exact or approximation algorithms to solve them. Finally, the last axis consists in implementing and testing these algorithms to observe their behaviour on real instances
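The abstract describes scaffolding as a search for paths and cycles in a scaffold graph. The sketch below is a deliberately simplified greedy heuristic, not the algorithms studied in the thesis: it chains contigs along their highest-scoring links while keeping each contig's degree at most two and rejecting links that would close a cycle, which yields linear scaffolds. Contig names and link scores are hypothetical.

```python
# Hypothetical contig links with support scores (e.g. number of mate pairs).
links = [("c1", "c2", 12), ("c2", "c3", 9), ("c3", "c1", 2), ("c3", "c4", 7)]

def greedy_scaffolds(links):
    """Greedily chain contigs into linear scaffolds, keeping degree <= 2
    per contig and rejecting links that would close a cycle."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    degree, kept = {}, []
    for u, v, w in sorted(links, key=lambda l: -l[2]):
        if degree.get(u, 0) < 2 and degree.get(v, 0) < 2 and find(u) != find(v):
            parent[find(u)] = find(v)       # union: u and v now lie in one chain
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
            kept.append((u, v, w))
    return kept

print(greedy_scaffolds(links))   # the link c3-c1 is rejected: it would close a cycle
```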
22

Zachrisson, Mikael. „High Quality Shadows for Real-time Surface Visualization“. Thesis, Linköpings universitet, Medie- och Informationsteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-133214.

Abstract:
This thesis describes the implementation of a shadowing system able to produce hard shadows. Shadow mapping is the most common real-time shadowing algorithm, but it suffers from severe aliasing artifacts and self-shadowing effects. Different advanced techniques based on shadow mapping are implemented in this thesis with the objective of creating accurate hard shadows. First, an implementation based on Cascaded Shadow Maps is presented. This technique improves the visual quality of shadow mapping by using multiple smaller shadow maps instead of a single large one, addressing the fact that objects near the viewer require a higher shadow-map resolution than objects far away. The second technique presented is Sub-pixel Shadow Mapping. By storing information about occluding triangles in the shadow map, this technique is able to produce accurate hard shadows with sub-pixel precision. Both methods can be combined in order to improve the resulting shadow quality. Finally, a collection of advanced biasing techniques that minimize the self-shadowing artifacts generated by shadow mapping is presented. The final implementation achieves real-time performance with considerably improved quality compared to standard shadow mapping.
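The abstract builds on shadow mapping with biasing to suppress self-shadowing. The following NumPy sketch shows only the core depth-comparison step of a basic shadow-map lookup with a constant bias; the map, coordinates and bias value are toy data, and real implementations run this per fragment on the GPU.

```python
import numpy as np

def shadow_test(shadow_map, light_uv, light_depth, bias=0.005):
    """Basic shadow-map lookup: a fragment is lit if its depth in light space
    is not farther than the stored occluder depth plus a small bias that
    suppresses self-shadowing acne."""
    h, w = shadow_map.shape
    x = np.clip((light_uv[..., 0] * (w - 1)).astype(int), 0, w - 1)
    y = np.clip((light_uv[..., 1] * (h - 1)).astype(int), 0, h - 1)
    return light_depth <= shadow_map[y, x] + bias     # True where lit

# Toy 4x4 shadow map and two fragments projected into light space.
depth_map = np.full((4, 4), 1.0)
depth_map[1, 2] = 0.3                                  # an occluder
uv = np.array([[0.7, 0.4], [0.1, 0.1]])                # (u, v) in [0, 1]
frag_depth = np.array([0.6, 0.6])
print(shadow_test(depth_map, uv, frag_depth))          # [False  True]
```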
23

Schulz, Christian [Verfasser], und P. [Akademischer Betreuer] Sanders. „High Quality Graph Partitioning / Christian Schulz. Betreuer: P. Sanders“. Karlsruhe : KIT-Bibliothek, 2013. http://d-nb.info/1037154363/34.

24

Tiano, Donato. „Learning models on healthcare data with quality indicators“. Electronic Thesis or Diss., Lyon 1, 2022. http://www.theses.fr/2022LYO10182.

Abstract:
Les séries temporelles sont des collections de données obtenues par des mesures dans le temps. Cette données vise à fournir des éléments de réflexion pour l'extraction d'événements et à les représenter dans une configuration compréhensible pour une utilisation ultérieure. L'ensemble du processus de découverte et d'extraction de modèles à partir de l'ensemble de données s'effectue avec plusieurs techniques d'extraction, notamment l'apprentissage automatique, les statistiques et les clusters. Ce domaine est ensuite divisé par le nombre de sources adoptées pour surveiller un phénomène. Les séries temporelles univariées lorsque la source de données est unique, et les séries temporelles multivariées lorsque la source de données est multiple. La série chronologique n'est pas une structure simple. Chaque observation de la série a une relation forte avec les autres observations. Cette interrelation est la caractéristique principale des séries temporelles, et toute opération d'extraction de séries temporelles doit y faire face. La solution adoptée pour gérer l'interrelation est liée aux opérations d'extraction. Le principal problème de ces techniques est de ne pas adopter d'opération de prétraitement sur les séries temporelles. Les séries temporelles brutes comportent de nombreux effets indésirables, tels que des points bruyants ou l'énorme espace mémoire requis pour les longues séries. Nous proposons de nouvelles techniques d'exploration de données basées sur l'adoption des caractéristiques plus représentatives des séries temporelles pour obtenir de nouveaux modèles à partir des données. L'adoption des caractéristiques a un impact profond sur la scalabilité des systèmes. En effet, l'extraction d'une caractéristique de la série temporelle permet de réduire une série entière en une seule valeur. Par conséquent, cela permet d'améliorer la gestion des séries temporelles, en réduisant la complexité des solutions en termes de temps et d'espace. FeatTS propose une méthode de clustering pour les séries temporelles univariées qui extrait les caractéristiques les plus représentatives de la série. FeatTS vise à adopter les particularités en les convertissant en réseaux de graphes pour extraire les interrelations entre les signaux. Une matrice de cooccurrence fusionne toutes les communautés détectées. L'intuition est que si deux séries temporelles sont similaires, elles appartiennent souvent à la même communauté, et la matrice de cooccurrence permet de le révéler. Dans Time2Feat, nous créons un nouveau clustering de séries temporelles multivariées. Time2Feat propose deux extractions différentes pour améliorer la qualité des caractéristiques. Le premier type d'extraction est appelé extraction de caractéristiques intra-signal et permet d'obtenir des caractéristiques à partir de chaque signal de la série temporelle multivariée. Inter-Signal Features Extraction permet d'obtenir des caractéristiques en considérant des couples de signaux appartenant à la même série temporelle multivariée. Les deux méthodes fournissent des caractéristiques interprétables, ce qui rend possible une analyse ultérieure. L'ensemble du processus de clustering des séries temporelles est plus léger, ce qui réduit le temps nécessaire pour obtenir le cluster final. Les deux solutions représentent l'état de l'art dans leur domaine. Dans AnomalyFeat, nous proposons un algorithme pour révéler des anomalies à partir de séries temporelles univariées. 
La caractéristique de cet algorithme est la capacité de travailler parmi des séries temporelles en ligne, c'est-à-dire que chaque valeur de la série est obtenue en streaming. Dans la continuité des solutions précédentes, nous adoptons les fonctionnalités de révélation des anomalies dans les séries. Avec AnomalyFeat, nous unifions les deux algorithmes les plus populaires pour la détection des anomalies : le clustering et le réseau neuronal récurrent. Nous cherchons à découvrir la zone de densité du nouveau point obtenu avec le clustering
Time series are collections of data obtained through measurements over time. The purpose of this data is to support event extraction and to represent events in an understandable pattern for later use. The whole process of discovering and extracting patterns from a dataset is carried out with several mining techniques, including machine learning, statistics and clustering. The domain is further divided according to the number of sources used to monitor a phenomenon: univariate time series when there is a single data source, and multivariate time series when there are several. A time series is not a simple structure: each observation in the series has a strong relationship with the other observations. This interrelationship is the main characteristic of time series, and any time-series mining operation has to deal with it; the way it is handled depends on the mining operation. The main problem with existing techniques is that they do not apply any pre-processing to the time series, and raw time series have many undesirable properties, such as noisy points or the huge memory space required for long series. We propose new data-mining techniques based on the most representative features of the time series in order to obtain new models from the data. Adopting features has a profound impact on the scalability of systems: extracting a feature from a time series reduces an entire series to a single value, which improves the management of time series and reduces the complexity of solutions in terms of time and space. FeatTS is a clustering method for univariate time series that extracts the most representative features of the series. FeatTS converts the features into graph networks to extract interrelationships between signals, and a co-occurrence matrix merges all detected communities. The intuition is that if two time series are similar, they often belong to the same community, and the co-occurrence matrix reveals this. In Time2Feat, we create a new multivariate time-series clustering. Time2Feat offers two different extractions to improve the quality of the features. The first, Intra-Signal Features Extraction, obtains features from each signal of the multivariate time series; the second, Inter-Signal Features Extraction, obtains features from pairs of signals belonging to the same multivariate time series. Both methods provide interpretable features, which makes further analysis possible. The whole time-series clustering process is lighter, which reduces the time needed to obtain the final clustering. Both solutions represent the state of the art in their field. In AnomalyFeat, we propose an algorithm to reveal anomalies in univariate time series. The characteristic of this algorithm is its ability to work on online time series, i.e. where each value of the series arrives as a stream. In continuity with the previous solutions, we rely on features to reveal anomalies in the series. With AnomalyFeat, we unify the two most popular approaches to anomaly detection: clustering and recurrent neural networks. We seek to discover the density area of each new point obtained with clustering
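FeatTS, as summarised above, relies on per-feature groupings merged through a co-occurrence matrix. The sketch below illustrates only that co-occurrence idea on toy series with three simple features (mean, standard deviation, trend slope); the grouping rule and data are invented and do not reproduce the method of the thesis.

```python
import numpy as np

# Hypothetical univariate series; the goal is a co-occurrence matrix that
# counts how often two series end up grouped together across features.
series = [np.sin(np.linspace(0, 6, 50)),
          np.sin(np.linspace(0, 6, 50)) + 0.1,
          np.linspace(0, 3, 50),
          np.linspace(0, 3, 50) + 0.2]

def feature_groups(values, n_bins=2):
    """Group series by rank-binning one scalar feature."""
    order = np.argsort(values)
    groups = np.empty(len(values), dtype=int)
    for rank, idx in enumerate(order):
        groups[idx] = rank * n_bins // len(values)
    return groups

features = {
    "mean":  [s.mean() for s in series],
    "std":   [s.std() for s in series],
    "trend": [np.polyfit(np.arange(len(s)), s, 1)[0] for s in series],
}

n = len(series)
cooc = np.zeros((n, n), dtype=int)
for values in features.values():
    g = feature_groups(values)
    for i in range(n):
        for j in range(n):
            cooc[i, j] += int(g[i] == g[j])

print(cooc)   # high counts for pairs that co-cluster under most features
```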
25

Mehdi, Wasan. „Structure evaluation of computer human animation quality“. Thesis, University of Bedfordshire, 2013. http://hdl.handle.net/10547/322822.

Der volle Inhalt der Quelle
Annotation:
This work gives a wide survey of the various techniques present in the field of character computer animation, concentrating particularly on those techniques and problems involved in the production of realistic character synthesis and motion. A preliminary user study (including a questionnaire, online publishing such as flicker.com, interviews, multiple-choice questions, publishing on Android mobile phones, and questionnaire analysis, validation, statistical evaluation, design steps and character animation observation) was conducted to explore design questions, identify users' needs, and obtain a "true story" of quality character animation and the effect of using animation as a useful tool in education. The first set of questionnaires was designed to accommodate the evaluation of animation by candidates from different walks of life, ranging from animators, gamers, teaching assistants (TAs), students, teachers, professionals and researchers, using and evaluating pre-prepared animated character video scenarios; the study outcomes review recent advanced techniques for character animation and motion editing that enable the control of complex animations by interactively blending, improving and tuning artificial or captured motions. The goal of this work was to augment students' learning intuition by providing ways to make education and learning more interesting, useful and fun, in order to improve students' response to and understanding of any subject area through the use of animation, and by producing the required high-quality motion, reaction, interaction and storyboard for viewers of the motion. We present a variety of different evaluations of motion quality, measuring user sensitivity, observation of any noticeable artefact, usability, usefulness, etc., to derive clear, useful guidelines from the results, and discuss several interesting systematic trends we have uncovered in the experimental data. We also present an efficient technique for evaluating the capability of animation to influence education and fulfil the requirements of a given scenario, along with the advantages and deficiencies of some methods commonly used to improve animation quality to serve the learning process. Finally, we propose a wide range of extensions and statistical calculations enabled by these evaluation tools, such as the Wilcoxon test, F-test, T-test, Wondershare Quiz Creator (WQC), Chi-square and many others, explained in full detail.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Schmitz, Leonardo Augusto. „Analysis and acceleration of high quality isosurface contouring“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2009. http://hdl.handle.net/10183/151064.

Der volle Inhalt der Quelle
Annotation:
This work presents an analysis of the main isosurface polygonization algorithms on the GPU. The result of this analysis shows both how the GPU can be modified to support this type of algorithm and how the algorithms can be modified to adapt to the characteristics of current GPUs. The techniques used in GPU versions of Marching Cubes are extended and a polygonization with fewer artifacts is generated. Parallel versions of Dual Contouring and Macet are proposed, algorithms that improve the approximation and the shape of the triangle meshes, respectively. Both techniques extract isosurfaces from large volumes of data in less than a second, outperforming CPU versions by up to two orders of magnitude. The contributions of this work include a table-driven version of Dual Contouring (DC) for structured grids. The table is used to specify the topology of the quadrilaterals, which helps implementation and cache efficiency in parallel scenarios. The table is suitable for stream expansion on the GPU using both geometry shaders and Histogram Pyramids. Moreover, our version of isosurface feature approximation is simpler than Singular Value Decomposition and also than QR Decomposition. Vertex positioning does not require matrix diagonalization; instead, a simple trilinear interpolation is used. In order to evaluate the efficiency of the techniques presented in this work, we compare them with state-of-the-art GPU versions of Marching Cubes. We also include a detailed analysis of the GPU architecture for isosurface extraction, using industry performance evaluation tools. This analysis presents the graphics-card bottlenecks in isosurface extraction and helps evaluate possible solutions for next-generation GPUs.
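The abstract above notes that vertex positioning relies on simple trilinear interpolation rather than SVD or QR decompositions. The sketch below shows generic trilinear interpolation inside one grid cell; the cell layout and sample values are illustrative and not taken from the thesis.

```python
# Minimal sketch of trilinear interpolation inside one grid cell, the kind of
# operation the abstract contrasts with SVD/QR-based vertex placement.
# `corners` holds the scalar field at the 8 cell corners indexed by (i, j, k)
# in {0, 1}; (x, y, z) are local coordinates in [0, 1]^3. Purely illustrative.
import numpy as np

def trilinear(corners: np.ndarray, x: float, y: float, z: float) -> float:
    """Interpolate a scalar value inside a cell from its 8 corner samples."""
    c00 = corners[0, 0, 0] * (1 - x) + corners[1, 0, 0] * x
    c10 = corners[0, 1, 0] * (1 - x) + corners[1, 1, 0] * x
    c01 = corners[0, 0, 1] * (1 - x) + corners[1, 0, 1] * x
    c11 = corners[0, 1, 1] * (1 - x) + corners[1, 1, 1] * x
    c0 = c00 * (1 - y) + c10 * y
    c1 = c01 * (1 - y) + c11 * y
    return c0 * (1 - z) + c1 * z

if __name__ == "__main__":
    cell = np.arange(8, dtype=float).reshape(2, 2, 2)
    print(trilinear(cell, 0.5, 0.5, 0.5))  # value at the centre of the cell
```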
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

McNaughton, Ross. „Inference graphs : a structural model and measures for evaluating knowledge-based systems“. Thesis, London South Bank University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260994.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Baalbaki, Hussein. „Designing Big Data Frameworks for Quality-of-Data Controlling in Large-Scale Knowledge Graphs“. Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS697.

Der volle Inhalt der Quelle
Annotation:
Knowledge Graphs (KGs) are the most used representation of structured information about a particular domain, consisting of billions of facts in the form of entities (nodes) and relations (edges) between them. Additionally, the semantic type information of the entities is also contained in the KGs. The number of KGs has steadily increased over the past 20 years in a variety of fields, including government, academic research, the biomedical fields, etc. Applications based on machine learning that use KGs include entity linking, question-answering systems, recommender systems, etc. Open KGs are typically produced heuristically and automatically from a variety of sources, including text, photos, and other resources, or are hand-curated. However, these KGs are often incomplete, i.e., there are missing links between the entities and missing links between the entities and their corresponding entity types. In this thesis, we address one of the most challenging issues facing Knowledge Graph Completion (KGC), which is link prediction. General link prediction in KGs includes head and tail prediction and triple classification. In recent years, KG embeddings (KGE) have been trained to represent the entities and relations in the KG in a low-dimensional vector space preserving the graph structure. In most published works, such as the translational models, neural network models and others, the triple information is used to generate the latent representation of the entities and relations. In this dissertation, several methods have been proposed for KGC and their effectiveness is shown empirically. Firstly, a novel KG embedding model, TransModE, is proposed for link prediction. TransModE projects the contextual information of the entities to modular space, while considering the relation as a transition vector that guides the head to the tail entity. Secondly, we worked on building a simple, low-complexity KGE model while preserving its efficiency. KEMA is a novel KGE model that is among the lowest KGE models in terms of complexity, while obtaining promising results. Finally, KEMA++ is proposed as an upgrade of KEMA to predict the missing triples in KGs using the product arithmetic operation in modular space. The extensive experiments and ablation studies show the efficiency of the proposed models, which compete with the current state-of-the-art models and set new baselines for KGC. The proposed models establish a new way of solving the KGC problem other than translational, neural network, or tensor-factorization-based approaches. The promising results and observations open up interesting scopes for future research involving exploiting the proposed models in domain-specific KGs such as scholarly data, biomedical data, etc. Furthermore, the link prediction model can be exploited as a base model for the entity alignment task, as it considers the neighborhood information of the entities
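The abstract groups TransModE with translational models, where the relation acts as a vector carrying the head embedding toward the tail. The sketch below shows only that generic TransE-style scoring idea with made-up embeddings; it is not the TransModE or KEMA formulation, which operate in a modular space.

```python
# Minimal sketch of the translational scoring idea: a relation should carry the
# head embedding onto the tail embedding, so ||h + r - t|| is small for true
# triples. Generic TransE-style formulation, not the thesis's models.
import numpy as np

def score_triple(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    """Lower score = more plausible triple under a translational model."""
    return float(np.linalg.norm(h + r - t))

def rank_tails(h: np.ndarray, r: np.ndarray, entity_embeddings: np.ndarray) -> np.ndarray:
    """Rank all candidate tail entities for (h, r, ?) by translational score."""
    scores = np.linalg.norm(entity_embeddings - (h + r), axis=1)
    return np.argsort(scores)  # best candidates first

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    entities = rng.normal(size=(100, 32))  # toy embeddings
    relation = rng.normal(size=32)
    head = entities[0]
    print(rank_tails(head, relation, entities)[:5])
```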
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Jeunesse, Jean-Paul. „Measuring Interactive Narrative Quality with Experience Management as Story Graph Pruning“. ScholarWorks@UNO, 2019. https://scholarworks.uno.edu/honors_theses/129.

Der volle Inhalt der Quelle
Annotation:
An interactive narrative in a virtual environment is created through player and system interaction, often through an experience manager controlling the actions of all non-player characters (NPCs). Thus, the narrative (and its quality) is entirely dependent on a conflicting combination of unpredictability from the player and a controlled environment that must react to this unpredictability. Ideally, the experience manager should decide NPC actions in a way that never limits player freedom and shows the NPCs acting in believable manners to create a story that can be meaningfully affected by the player and feels organic. One solution to this is to view experience management as a story graph pruning problem. Nodes in the graph represent all the states that the virtual environment could possibly represent. These nodes are then connected by edges, which represent the actions that change one state to another. This graph is then intelligently pruned until NPCs have believable, unambiguous actions to take in every state, while never pruning player actions, with the intention of offering a more meaningful narrative.
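A minimal sketch of the pruning idea described above: NPC action edges are reduced until each state offers an unambiguous NPC choice, while player edges are never removed. The graph layout and the preference policy are invented for illustration and are not the procedure evaluated in the thesis.

```python
# Minimal sketch of experience management as story-graph pruning: NPC action
# edges are pruned until each state leaves the NPCs at most one action, while
# player action edges are never removed. Data layout and policy are illustrative.
# state -> list of (action, next_state, actor); actor is "player" or "npc"
story_graph = {
    "start": [("ask_guard", "gate", "player"), ("bribe", "gate", "npc"),
              ("attack", "fight", "npc")],
    "gate":  [("enter", "castle", "player")],
    "fight": [("flee", "start", "player")],
    "castle": [],
}

def prune_npc_ambiguity(graph, prefer=lambda edges: edges[:1]):
    """Keep all player edges; reduce NPC edges per state to one unambiguous choice."""
    pruned = {}
    for state, edges in graph.items():
        player_edges = [e for e in edges if e[2] == "player"]
        npc_edges = [e for e in edges if e[2] == "npc"]
        pruned[state] = player_edges + prefer(npc_edges)
    return pruned

print(prune_npc_ambiguity(story_graph))
```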
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Cordeiro, de Lemos Fernando. „Infrastructure and algorithms for information quality analysis and process discovery“. Versailles-St Quentin en Yvelines, 2013. http://www.theses.fr/2013VERS0014.

Der volle Inhalt der Quelle
Annotation:
Ces dernières années, des stratégies pour la conduite de projets visant à assurer la qualité dans les systèmes d'information ont été proposées. Dans la pratique, cependant, chaque domaine d'application a développé sa propre procédure ainsi qu'une série d'outils pour résoudre les problèmes de qualité. Toutefois, les solutions fourbies par ces approches ne sont pas suffisantes pour répondre aux plus larges exigences des utilisateurs. En outre, ils ont leurs propres modèles de qualité, terminologies et modèles d'accès, ce qui pose des défis techniques pour les rendre interopérables. Visant à combler cette lacune, nous proposons une approchhe dont l'objectif est de faciliter la définition de métriques de qualité et de méthodes de mesure appropriées et adaptées aux besoins de qualité spécifiques à une organisation. La qualité dans les systèmes d’information désigne aussi l'exploitation des attributs de qualité pour des tâches spécifiques telles que la recherche d'objets remplissant un ensemble de critères dont certains sont relatifs à la qualité. Dans ce contexte, nous snous sommes intéressés à untype particulier d'objets que sont les processus métiers (généralement formulés sous forme de graphes). La plupart des travaux adressant le problème de la découverte de processus sont basés sur l'appariement approximatif de graphes. Toutefois, ces approches ont encore un taux de sélectivité faible, renvoyant un nombre conséquent de processus offrant les mêmes fonctionnalités, mais à des niveaux de qualité différents. Motivé par ce contexte, cette thèse propose également une approche pour l'évaluation des préférences de qualité dans l'appariement de modéles de processus
In the last years, strategies to improve or assure the quality in information systems have been addressed by several approaches. In practice, however, each application domain developed its own quality management procedure providing a specific vision of quality as well as a suite of tools to solve quality problems. Still, the solutions provided by such approaches are not sufficient to deal with broader user's requirements. Moreover, they have their own quality models, terminology and access patterns, which makes interoperability between ther a technical challenge. Aiming at filling this gap, this thesis proposes an approach whose main objective is to facilitate the definition of appropriate quality metrics and measurement methods tailored to specific quality organization. The quality in information systems also includes the exploitation of the quality information in specific tasks, e. G. , the retieval of objects satisfying a set of criteria (some of them related to quality attributes). In this context, the problem of process retrieval gained special attention due to the investment of organizations on process management practices and due to the consequent growth of the repositories of business processes. Current approaches for process retrieval propose strategies for handling processes with reasonable size and define ranking metrics that improve the relevance of the answers. However, these works address only the structural representation of the process and do not properly deal with the non-functional aspects at the different granularity levels. Motivated by these problems, this thesis also proposes an approach for evaluating quality preferences in process model matching
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Meidiana, Amyra. „Sublinear-time Algorithms and Faithfulness Metrics for Big Complex Graph Visualisation“. Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/27429.

Der volle Inhalt der Quelle
Annotation:
With the continuing ability to gather and store increasingly large amounts of data, the size of the network data being collected has also grown. The networks, which can be represented as graphs, are not only big in scale, but also complex, introducing issues of scalability and complexity in visualising and analysing them. With the rate of growth in the size of these graphs, traditional graph drawing algorithms have failed to scale when visualising big, complex graphs. Furthermore, evaluation of graph drawing algorithms is important to ensure that the algorithms are not only efficient, but also effective. Yet, traditional quality metrics have been shown to be less effective in evaluating drawings of big complex graphs. In this thesis, we first present new sublinear-time graph drawing algorithms to efficiently visualise and analyse big complex graphs. More precisely, we present the following algorithms for drawing big graphs: topological spectral sparsification for fast, good quality graph sampling, as well as sublinear-time force-directed algorithms and sublinear-time stress-based algorithms for graph drawing. Our algorithms run faster than the current state-of-the-art linear-time graph drawing algorithms, while obtaining similar or even better quality drawings. We then present new faithfulness metrics for the evaluation of complex graph drawing, where faithfulness measures how well the drawing depicts the ground truth information of the underlying graph. Specifically, we present the cluster faithfulness metric, symmetry quality metric, and change faithfulness metrics. Our metrics have been validated to effectively evaluate the quality of drawings of complex graphs, based on how well they depict the ground truth information of the graphs.
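As a hedged illustration of what a cluster-faithfulness-style measure can look like, the sketch below recovers clusters from the 2D node positions of a drawing and scores their agreement with the ground-truth clustering using the Adjusted Rand Index. The exact metrics defined in the thesis are not reproduced here.

```python
# Illustrative sketch of a cluster-faithfulness-style check: cluster the 2D
# node positions of a drawing and compare against the graph's ground-truth
# clusters with the Adjusted Rand Index. Not the thesis's exact metric.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def cluster_faithfulness(positions: np.ndarray, ground_truth: np.ndarray) -> float:
    k = len(set(ground_truth))
    detected = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(positions)
    return adjusted_rand_score(ground_truth, detected)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Two well-separated groups of node positions -> faithfulness close to 1.
    pos = np.vstack([rng.normal(0, 0.2, (20, 2)), rng.normal(5, 0.2, (20, 2))])
    labels = np.array([0] * 20 + [1] * 20)
    print(cluster_faithfulness(pos, labels))
```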
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Gawalpanchi, Sheetal. „DEVELOPMENT OF A GRAPHICAL USER INTERFACE FOR CAL3QHC CALLED CALQCAD“. Master's thesis, University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2874.

Der volle Inhalt der Quelle
Annotation:
One of the major sources of air pollution in United States metropolitan areas is automobiles. With the huge growth in motor vehicles and greater dependence on them, air pollution problems have been aggravated. According to the EPA, nearly 95% of carbon monoxide (CO) (EPA 1999) in urban areas comes from mobile sources, of which 51% is contributed by on-road vehicles. It is a well-known fact that carbon monoxide is one of the major mobile source pollutants, and CO has detrimental effects on human health. Carbon monoxide results mainly from incomplete combustion of gasoline in motor vehicles (FDOT 1996). The National Environmental Policy Act (NEPA) gives important consideration to the actions to be taken, including transportation conformity. The Clean Air Act Amendments (CAAA, 1970) were an important step toward meeting the National Ambient Air Quality Standards (NAAQS). In order to evaluate CO and particulate matter (PM) impacts against the NAAQS criteria, it is necessary to conduct dispersion modeling of mobile source emissions. The design of transportation engineering systems (roadway design) should take care of both the flow of traffic and the air pollution aspects involved. Roadway projects need to conform to the State Implementation Plan (SIP) and meet the NAAQS. EPA guidelines for air quality modeling of such roadway intersections recommend the use of CAL3QHC. The model has embedded in it CALINE 3.0 (Benson 1979), a line source dispersion model based on the Gaussian equation. The model requires parameters with respect to the roadway geometry, fleet volume, averaging time, surface roughness, emission factors, etc. The CAL3QHC model is a DOS-based model which requires the modeling parameters to be fed into an input file. The creation of the input file is a tedious job. Previous work at UCF resulted in the development of CALQVIEW, which expedites this process of creating input files, but the task of extracting the coordinates still has to be done manually. The main aim of the thesis is to reduce the analysis time for modeling emissions from roadway intersections by expediting the process of extracting the coordinates required for the CAL3QHC model. Normally, transportation engineers design and model intersections for traffic flow utilizing tools such as AutoCAD, Microstation, etc. This thesis developed software allowing graphical editing and coordinate capture from an AutoCAD file. This software was named CALQCAD. It enables the air quality analyst to capture the coordinates from an AutoCAD 2004 file. This should expedite the process of modeling intersections and decrease analyst time from a few days to a few hours. The model helps the air quality analyst retain accuracy during the modeling process. The idea behind creating the standalone interface was to give the AutoCAD user the full functionality of AutoCAD tools in case editing of the main drawing is required. It also provides the modeler with a separate graphical user interface (GUI).
M.S.Env.E.
Department of Civil and Environmental Engineering
Engineering and Computer Science
Environmental Engineering
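Relating to the CAL3QHC entry above, which builds on the Gaussian line source model CALINE 3.0, the following sketch evaluates the textbook Gaussian plume formula for a single point source with ground reflection. It is only a generic illustration of the model family; the dispersion coefficients and source parameters are placeholder values, not CAL3QHC's actual line-source computation.

```python
# Textbook Gaussian plume estimate of pollutant concentration, the family of
# models underlying CALINE 3.0 / CAL3QHC. Generic point-source form with ground
# reflection; the parameter values below are placeholders for illustration.
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Concentration (g/m^3) at crosswind offset y and height z.

    Q: emission rate (g/s), u: wind speed (m/s), H: effective source height (m),
    sigma_y / sigma_z: horizontal / vertical dispersion coefficients (m).
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example: 10 g/s source, 2 m/s wind, receptor 5 m off-axis at breathing height.
print(gaussian_plume(Q=10.0, u=2.0, y=5.0, z=1.5, H=0.5, sigma_y=20.0, sigma_z=10.0))
```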
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Akhremtsev, Yaroslav [Verfasser], und P. [Akademischer Betreuer] Sanders. „Parallel and External High Quality Graph Partitioning / Yaroslav Akhremtsev ; Betreuer: P. Sanders“. Karlsruhe : KIT-Bibliothek, 2019. http://d-nb.info/1198310022/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Howarth, Michael Saville. „Children and computers : the development of graphical user interfaces to improve the quality of interaction“. Thesis, Middlesex University, 2003. http://eprints.mdx.ac.uk/13487/.

Der volle Inhalt der Quelle
Annotation:
The development of educational multimedia since 1994 has been characterised by a rapid expansion of new technologies. In the context of an exciting and controversial exploration of techniques, research into how children used computers in the classroom had been limited. The thesis therefore included a wide-ranging study into factors informing a deeper understanding of the way 5- to 7-year-old school children use interactive computer programs. The thesis originated in contextual studies undertaken by the researcher in classrooms. The contextual research raised issues that are not the common ground of educational multimedia practitioners. These issues were explored in depth in the literature review. The thesis tested potential improvements in interface design through an interactive educational CD-ROM using audio and visual resources from a BBC School Radio music series. The focus was not the music content or the teaching of the subject. The results of testing the research tool, which used observation of groups of three children and interviews with individual children and teachers, were summarised and improvements identified. The aim was to seek answers to the question 'How can the quality of computer interface interaction be improved?' Improvements were considered by enhancing the quality of interaction through greater depth of engagement, using the computer mouse to move icons on the computer screen. In the process of contextual research the following issues were raised: the need for teachers to have a method of mediating the content of educational CD-ROMs; the physiological demands made on children in terms of eye search; the difficulties they encountered using navigation metaphors; and the potential of pseudo 3-D perspective interfaces. The research re-evaluates the relationship between children and computers in the familiar context of groups of three children using computers in the primary classroom, and resulted in a coherent set of suggestions for a more effective holistic paradigm for the design of multimedia programs that takes into account practical realities in classroom environments.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Wikander, Daniel. „Exploring the quality attribute and performance implications of using GraphQL in a data-fetching API“. Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20867.

Der volle Inhalt der Quelle
Annotation:
The dynamic query language GraphQL is gaining popularity within the field as more and more software architects choose it as their architectural model when designing an API. The dynamic nature of GraphQL queries provides a different way of thinking about data fetching, focusing more on the experience for the API consumer. The language provides many exciting features for the field, but not much is known about the implications of implementing them. This thesis analyzes the architecture of GraphQL and explores its attributes in order to understand the tradeoffs and performance implications of implementing a GraphQL architecture in a data-fetching API, as opposed to a conventional REST architecture. The results from the architectural analysis suggest that the GraphQL architecture values the usability and supportability attributes higher than its REST counterpart. A performance experiment was performed, testing the internal performance of GraphQL versus REST in a use case where its dynamic functionality is not utilized (returning the same static response as its REST equivalent). The results indicate that the performance of GraphQL implementations is lower than that of their REST equivalents in use cases where the dynamic functionality is not utilized.
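As a toy illustration of the data-fetching difference discussed above, the sketch below contrasts a REST-style handler that always returns the full representation with a GraphQL-style resolver that returns only the fields the caller names. It is hand-rolled for illustration and does not use an actual GraphQL library.

```python
# Toy contrast between a REST-style handler (fixed response shape) and a
# GraphQL-style resolver (caller chooses the fields). Hand-rolled illustration
# of the data-fetching difference, not code from a real GraphQL implementation.
USER = {"id": 1, "name": "Ada", "email": "ada@example.com", "posts": [101, 102]}

def rest_get_user(user_id: int) -> dict:
    """REST: the server decides the shape; the client may over-fetch."""
    return dict(USER)

def graphql_get_user(user_id: int, fields: list[str]) -> dict:
    """GraphQL-like: the client names exactly the fields it needs."""
    return {f: USER[f] for f in fields if f in USER}

print(rest_get_user(1))                        # full object, whether needed or not
print(graphql_get_user(1, ["name", "posts"]))  # only what the consumer asked for
```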
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Zhang, Qing. „HIGH QUALITY HUMAN 3D BODY MODELING, TRACKING AND APPLICATION“. UKnowledge, 2015. http://uknowledge.uky.edu/cs_etds/39.

Der volle Inhalt der Quelle
Annotation:
Geometric reconstruction of dynamic objects is a fundamental task of computer vision and graphics, and modeling the human body with high fidelity is considered a core part of this problem. Traditional human shape and motion capture techniques require an array of surrounding cameras or require subjects to wear reflective markers, resulting in limited working space and portability. In this dissertation, a complete process is designed, from geometric modeling of the detailed 3D human full body and capturing shape dynamics over time using a flexible setup, to guiding clothes/person re-targeting with such data-driven models. Since the mechanical movement of the human body can be considered an articulated motion, which is easy to use to drive skin animation but difficult to invert in order to find parameters from images without manual intervention, we present a novel parametric model, GMM-BlendSCAPE, jointly taking both the linear skinning model and the prior art of BlendSCAPE (Blend Shape Completion and Animation for PEople) into consideration, and develop a Gaussian Mixture Model (GMM) to infer both body shape and pose from incomplete observations. We show the increased accuracy of joint and skin surface estimation using our model compared to skeleton-based motion tracking. To model the detailed body, we start by capturing high-quality partial 3D scans using a single-view commercial depth camera. Based on GMM-BlendSCAPE, we can then reconstruct multiple complete static models with large pose differences via our novel non-rigid registration algorithm. With vertex correspondences established, these models can be further converted into a personalized drivable template and used for robust pose tracking in a similar GMM framework. Moreover, we design a general-purpose real-time non-rigid deformation algorithm to accelerate this registration. Last but not least, we demonstrate a novel virtual clothes try-on application based on our personalized model, utilizing both image and depth cues to synthesize and re-target clothes for single-view videos of different people.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Phillips, Mark Edward. „Exploring the Use of Metadata Record Graphs for Metadata Assessment“. Thesis, University of North Texas, 2020. https://digital.library.unt.edu/ark:/67531/metadc1707350/.

Der volle Inhalt der Quelle
Annotation:
Cultural heritage institutions, including galleries, libraries, museums, and archives, are increasingly digitizing physical items, collecting born-digital items, and making these resources available on the Web. Metadata plays a vital role in the discovery and management of these collections. Existing frameworks to identify and address deficiencies in metadata rely heavily on count- and data-value-based metrics that are calculated over aggregations of descriptive metadata. There has been little research into the use of traditional network analysis to investigate the connections between metadata records based on shared data values in metadata fields such as subject or creator. This study introduces metadata record graphs as a mechanism to generate network-based statistics to support analysis of metadata. These graphs are constructed with the metadata records as the nodes and shared metadata field values as the edges in the network. By analyzing metadata record graphs with algorithms and tools common to the field of network analysis, metadata managers can develop a new understanding of their metadata that is often impossible to generate from count- and data-value-based statistics alone. This study tested the application of metadata record graphs to the analysis of metadata collections of increasing size, complexity, and interconnectedness in a series of three related stages. The findings of this research indicate the effectiveness of this new method, identify network algorithms that are useful for analyzing descriptive metadata, and suggest methods and practices for future implementations of this technique.
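A minimal sketch of how a metadata record graph can be assembled, assuming networkx is available: records become nodes, and an edge is added whenever two records share a value in a chosen field. The sample records and the statistics printed at the end are illustrative only.

```python
# Minimal sketch of a metadata record graph: records are nodes and an edge is
# added whenever two records share a value in a chosen field (here "subject").
# Built with networkx so standard network statistics become available.
from itertools import combinations
import networkx as nx

records = {
    "rec1": {"subject": {"maps", "texas"}},
    "rec2": {"subject": {"texas", "railroads"}},
    "rec3": {"subject": {"railroads"}},
    "rec4": {"subject": {"photography"}},
}

G = nx.Graph()
G.add_nodes_from(records)
for a, b in combinations(records, 2):
    if records[a]["subject"] & records[b]["subject"]:
        G.add_edge(a, b)

# Network-based views of the metadata that counts alone cannot give.
print(nx.degree_centrality(G))
print([sorted(c) for c in nx.connected_components(G)])
```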
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Ruiz-Laverde, Manuel Fabián. „Image quality analysis of the reproductions of black and white photographs obtained from a desktop publishing system /“. Online version of thesis, 1989. http://ritdml.rit.edu/handle/1850/11485.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

James, Andrew Michael. „A link-quality-aware graph model for cognitive radio network routing topology management /“. Online version of thesis, 2007. http://hdl.handle.net/1850/5209.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Karlsson, Linus. „Optimering av sampling quality-parametrar för Mental Ray“. Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-10099.

Der volle Inhalt der Quelle
Annotation:
Photorealistic 3D images are used today across a wide range of industries. Producing this type of graphics often requires a great deal of computing power. When rendering with rendering engines that use ray tracing algorithms, aliasing is an inherent problem. The solution is anti-aliasing, which works to avoid aliasing artifacts such as jagged edges or Moiré effects, among others. One part of the anti-aliasing process is supersampling, which often requires a lot of computing power. Optimizing the parameters for supersampling is therefore very important. Through optimization it is possible to save a great deal of computing power and thus time. This work contains results from experiments in which participants assess images with different levels of anti-aliasing quality. The results of these experiments can be used as a reference when optimizing rendering parameters for anti-aliasing when rendering with Mental Ray.
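As a small illustration of the supersampling step discussed above, the sketch below averages several jittered samples per pixel over a toy scene with a hard edge. The scene function stands in for a real renderer such as Mental Ray, and the sample counts are arbitrary.

```python
# Minimal sketch of supersampling for anti-aliasing: each pixel is estimated by
# averaging several jittered sub-pixel samples, trading render time for smoother
# edges. The scene function is a stand-in for a real renderer.
import random

def scene(x: float, y: float) -> float:
    """Toy scene: white above the diagonal, black below (a hard, aliasing-prone edge)."""
    return 1.0 if y > x else 0.0

def render_pixel(px: int, py: int, samples: int) -> float:
    """Average `samples` jittered samples taken inside pixel (px, py)."""
    total = 0.0
    for _ in range(samples):
        total += scene(px + random.random(), py + random.random())
    return total / samples

random.seed(0)
for n in (1, 4, 16, 64):
    print(n, round(render_pixel(10, 10, n), 3))  # pixel straddling the edge
```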
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Lelli, leitao Valeria. „Testing and maintenance of graphical user interfaces“. Thesis, Rennes, INSA, 2015. http://www.theses.fr/2015ISAR0022/document.

Der volle Inhalt der Quelle
Annotation:
The software engineering community pays special attention to the quality and the reliability of software systems. Software testing techniques have been developed to find errors in code. Software quality criteria and measurement techniques have also been assessed to detect error-prone code. In this thesis, we argue that the same attention has to be paid to the quality and reliability of GUIs, from a software engineering point of view. We specifically make two contributions on this topic. First, GUIs can be affected by errors stemming from development mistakes. The first contribution of this thesis is a fault model that identifies and classifies GUI faults. We show that GUI faults are diverse and imply different testing techniques to be detected. Second, like any code artifact, GUI code should be analyzed statically to detect implementation defects and design smells. As for the second contribution, we focus on design smells that can affect GUIs specifically. We identify and characterize a new type of design smell, called Blob listener. It occurs when a GUI listener, which gathers events to treat and transform into commands, can produce more than one command. We propose a systematic static code analysis procedure that searches for Blob listeners, which we implement in a tool called InspectorGuidget. The experiments we conducted exhibit positive results regarding the ability of InspectorGuidget to detect Blob listeners. To counteract the use of Blob listeners, we propose good coding practices regarding the development of GUI listeners
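As a very rough stand-in for the kind of analysis InspectorGuidget performs on the Java AST, the sketch below flags a listener whose body dispatches on more than one widget, i.e. one listener producing several commands. The regex heuristic and the sample listener are illustrative assumptions, not the tool's actual procedure.

```python
# Rough illustration of hunting for Blob-listener-like code: a listener method
# that dispatches on several different widgets (several getSource() comparisons)
# is flagged as producing more than one command. A simplified textual heuristic,
# not the AST-based analysis of InspectorGuidget.
import re

LISTENER_SRC = """
public void actionPerformed(ActionEvent e) {
    if (e.getSource() == saveButton) { save(); }
    else if (e.getSource() == openButton) { open(); }
    else if (e.getSource() == quitItem) { quit(); }
}
"""

def looks_like_blob_listener(java_method_source: str) -> bool:
    widgets = set(re.findall(r"getSource\(\)\s*==\s*(\w+)", java_method_source))
    return len(widgets) > 1  # one listener handling several widgets / commands

print(looks_like_blob_listener(LISTENER_SRC))  # True
```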
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Garzone, Guillaume. „Approche de gestion orientée service pour l'Internet des objets (IoT) considérant la Qualité de Service (QoS)“. Thesis, Toulouse, INSA, 2018. http://www.theses.fr/2018ISAT0027/document.

Der volle Inhalt der Quelle
Annotation:
The Internet of Things (IoT) is already everywhere today: home automation, connected buildings, smart cities; many initiatives and innovations are ongoing and yet to come. The number of connected objects continues to grow, to the point that billions of objects are expected in the near future. The approach of this thesis sets up an autonomic management architecture for systems based on connected objects, combining them with other services such as weather services accessible on the Internet. The proposed models enable autonomous decision making based on the analysis of events and the planning of actions executed automatically. Parameters such as execution time or consumed energy are also considered in order to optimize the choice of actions to be performed and of services used. A concrete prototype was realized in a smart city scenario with connected buses in the "investment for the future" project S2C2
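A minimal sketch of the kind of decision described above: among the candidate actions or services able to handle an event, the manager picks the one with the best weighted trade-off between execution time and consumed energy. The candidate list and weights are invented for illustration.

```python
# Minimal sketch of an autonomic manager's action selection: among candidate
# actions/services able to handle an event, pick the one minimising a weighted
# cost of execution time and energy. Candidates and weights are illustrative.
candidates = [
    {"name": "local_rule",    "exec_time_s": 0.2, "energy_j": 5.0},
    {"name": "cloud_service", "exec_time_s": 1.5, "energy_j": 1.0},
    {"name": "edge_service",  "exec_time_s": 0.6, "energy_j": 2.5},
]

def choose_action(options, w_time=0.5, w_energy=0.5):
    """Return the option minimising a weighted cost of time and energy."""
    return min(options, key=lambda o: w_time * o["exec_time_s"] + w_energy * o["energy_j"])

print(choose_action(candidates)["name"])
```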
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Garduno, Barrera David Rafael. „A differentiated quality of service oriented multimedia multicast protocol“. Phd thesis, Toulouse, INPT, 2005. http://oatao.univ-toulouse.fr/7383/1/gardunobarrera.pdf.

Der volle Inhalt der Quelle
Annotation:
Modern multimedia (MM) communication systems aim to provide new services such as multicast (MC) communication. But the rise of new, very different MM-capable devices and the growing number of clients lead to new requirements for mechanisms and protocols. In an MM communication, some flows have constraints different from others, and the required QoS for each flow is not the same. Furthermore, in MC communications, not all users want, or are able, to receive the same QoS. These constraints imply that new communication mechanisms have to take into account the user requirements in order to provide an ad hoc service to each user and to avoid wasting network resources. This dissertation proposes a new differentiated QoS multicast architecture, based on client/server proxies, called M-FPTP, which relays many MC LANs by single partially reliable links. This architecture provides a different QoS to each LAN depending on the users' requirements. To do so, a network model called Hierarchized Graph (HG) is also provided, which represents at the same time the network performance and the users' QoS constraints. Nevertheless, the application of standard tree creation methods on an HG can lead to source overloading problems. A new algorithm called Degree-Bounded Shortest-Path-Tree (DgB-SPT) is then proposed to solve this problem. However, the deployment of such a service needs a new protocol in order to collect user requirements and correctly deploy the proxies. This protocol is called Simple Session Protocol for QoS MC (SSP-QoM). The proposed solutions have been modeled, verified, validated and tested using UML 2.0 and the TAU G2 CASE tool.
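The following sketch illustrates, under simplifying assumptions, the idea of a degree-bounded shortest-path tree: a Dijkstra-style expansion in which a node stops adopting children once its degree budget is spent, so that the source is not overloaded. The real DgB-SPT algorithm in the thesis may handle bounded nodes differently.

```python
# Simplified greedy sketch of building a shortest-path tree whose nodes have a
# bounded out-degree, the idea behind limiting source overload in DgB-SPT.
import heapq

def degree_bounded_spt(graph, source, max_children):
    """graph: {u: {v: weight}}. Returns the parent map of a degree-bounded tree."""
    dist = {source: 0.0}
    parent = {source: None}
    children = {u: 0 for u in graph}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            if children[u] >= max_children:
                break  # u has spent its degree budget; stop adopting children
            if d + w < dist.get(v, float("inf")):
                old = parent.get(v)
                if old is not None:
                    children[old] -= 1  # v switches to a better parent
                dist[v] = d + w
                parent[v] = u
                children[u] += 1
                heapq.heappush(heap, (dist[v], v))
    return parent

net = {"s": {"a": 1, "b": 1, "c": 4}, "a": {"c": 1}, "b": {"c": 1}, "c": {}}
# With a budget of 2 children, the source keeps a and b, and c is reached via a.
print(degree_bounded_spt(net, "s", max_children=2))
```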
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Garduno, Barrera David Rafael. „A differentiated quality of service oriented multimedia multicast protocol“. Phd thesis, Toulouse, INPT, 2005. http://hal.science/tel-00009582.

Der volle Inhalt der Quelle
Annotation:
Modern multimedia (MM) communication systems aim to provide new services such as multicast (MC) communication. But the rise of new, very different MM-capable devices and the growing number of clients lead to new requirements for mechanisms and protocols. In an MM communication, some flows have constraints different from others, and the required QoS for each flow is not the same. Furthermore, in MC communications, not all users want, or are able, to receive the same QoS. These constraints imply that new communication mechanisms have to take into account the user requirements in order to provide an ad hoc service to each user and to avoid wasting network resources. This dissertation proposes a new differentiated QoS multicast architecture, based on client/server proxies, called M-FPTP, which relays many MC LANs by single partially reliable links. This architecture provides a different QoS to each LAN depending on the users' requirements. To do so, a network model called Hierarchized Graph (HG) is also provided, which represents at the same time the network performance and the users' QoS constraints. Nevertheless, the application of standard tree creation methods on an HG can lead to source overloading problems. A new algorithm called Degree-Bounded Shortest-Path-Tree (DgB-SPT) is then proposed to solve this problem. However, the deployment of such a service needs a new protocol in order to collect user requirements and correctly deploy the proxies. This protocol is called Simple Session Protocol for QoS MC (SSP-QoM). The proposed solutions have been modeled, verified, validated and tested using UML 2.0 and the TAU G2 CASE tool.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Guo, Jinjiang. „Contributions to objective and subjective visual quality assessment of 3d models“. Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI099.

Der volle Inhalt der Quelle
Annotation:
In the computer graphics realm, three-dimensional graphical data, generally represented by triangular meshes, have become commonplace and are deployed in a variety of application processes (e.g., smoothing, compression, remeshing, simplification, rendering, etc.). However, these processes inevitably introduce artifacts, altering the visual quality of the rendered 3D data. Thus, in order to perceptually drive the processing algorithms, there is an increasing need for efficient and effective subjective and objective visual quality assessments to evaluate and predict the visual artifacts. In this thesis, we first present a comprehensive survey on different sources of artifacts in digital graphics, and on current objective and subjective visual quality assessments of the artifacts. Then, we introduce a newly designed subjective quality study based on evaluations of the local visibility of geometric artifacts, in which observers were asked to mark areas of 3D meshes that contain noticeable distortions. The collected perceived distortion maps are used to illustrate several perceptual functionalities of the human visual system (HVS), and serve as ground truth to evaluate the performance of well-known geometric attributes and metrics for predicting the local visibility of distortions. Our second study aims to evaluate the visual quality of texture-mapped 3D models subjectively and objectively. To achieve these goals, we introduced 136 processed models with both geometric and texture distortions, conducted a paired-comparison subjective experiment, and invited 101 subjects to evaluate the visual qualities of the models under two rendering protocols. Driven by the collected subjective opinions, we propose two objective visual quality metrics for textured meshes, relying on the optimal combinations of geometry and texture quality measures. These proposed perceptual metrics outperform their counterparts in terms of correlation with human judgment
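The abstract above describes metrics built from optimal combinations of geometry and texture quality measures, selected for correlation with human judgment. The sketch below shows one simple way such a combination could be fitted, by least squares against mean opinion scores; the numbers are fabricated purely to show the mechanics, and the thesis's actual combination may differ.

```python
# Sketch of the "optimal combination" idea: given per-model geometry and texture
# distortion measures plus mean subjective opinion scores, fit linear weights by
# least squares and use the fitted combination as a quality predictor.
import numpy as np

# Columns: geometry distortion measure, texture distortion measure, bias term.
measures = np.array([
    [0.10, 0.05, 1.0],
    [0.40, 0.10, 1.0],
    [0.20, 0.60, 1.0],
    [0.70, 0.70, 1.0],
])
mos = np.array([4.5, 3.2, 2.8, 1.5])  # toy mean opinion scores from a subjective test

weights, *_ = np.linalg.lstsq(measures, mos, rcond=None)

def predicted_quality(geom: float, tex: float) -> float:
    """Predict a subjective-style score from the two distortion measures."""
    return float(np.array([geom, tex, 1.0]) @ weights)

print(weights)
print(predicted_quality(0.15, 0.20))
```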
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Abrahamsson, Petter. „User Interface Design for Quality Control : Development of a user interface for quality control of industrial manufactured parts“. Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-79724.

Der volle Inhalt der Quelle
Annotation:
The expected quality of manufactured components in the automotive industry is high, often with an accuracy of tenths of a millimeter. The conventional methods used to inspect the manufactured components are very accurate, but they are both time-consuming and insufficient, and only a small part of the produced series is analyzed today. The measurement is performed manually in so-called measurement fixtures, where each component is fixed and predetermined points of investigation are controlled with a dial indicator. These fixtures are very expensive to manufacture and they are only compatible with one specific kind of component. Nowadays, great volumes of material are scrapped as a result of these procedures in the automotive industry. Hence, there is a great need to increase the number of controlled components without affecting the production rate negatively. This project was carried out for the relatively new company Viospatia, which is a spin-off company based on research from Luleå University of Technology. They have developed a system that automatically measures each component directly at the production line with the use of photogrammetry technology. This makes it possible to discover erroneous components almost immediately, and the manufacturer gets a more distinct view of their production and its capability. The aim of this thesis has been to investigate how a user interface should be developed to be as user-friendly as possible without limiting the system's functions. The objective has been to design a proposal for a user interface that is adapted to the intended user, creates value and is easy to use. The progression has been structured around a human-centered approach suited to interaction design, where the development phase, consisting of analysis, design and validation, is performed through iterations with continuous feedback from users and the project's employer. The context where the intended solution is supposed to be used was investigated through interviews and observations at the involved companies. Three factories were involved in the project: Gestamp Hardtech and Scania Ferruform in Luleå and Volvo Cars in Olofström. These factories use similar production methods, sheet metal stamping, so their prerequisites and needs are similar for this type of quality control system. Creative methods have been applied throughout the project to generate as many ideas as possible while trying to satisfy all the important aspects. Initially, analog prototypes were created, but they were soon developed into digital interactive prototypes. A larger usability test was conducted with seven participants using a web link to the digital prototype. With support from the feedback these tests generated, some adjustments were made and the final user interface was designed, separated into two levels: Supervisor and Operator. Through an extensive literature study and user testing it became clear that the operator needs to get an unmistakable message from the user interface. There should not be any doubts whatsoever, and the operator should be able to react immediately. This message is delivered with the use of colors that have an established meaning. By identifying the needs of the different actors, the system's functions can be separated and made accessible only to the intended user. The functions can then be developed more specifically for the intended user instead of being modified into a compromise that fits everybody. This separation of functions is not anything the user has to do actively; it is performed automatically by the user interface when the user signs in.
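As a small sketch of the role separation described above, the snippet below resolves the functions visible to a signed-in user automatically from the account's level. The two role names follow the abstract; the function lists are invented for illustration.

```python
# Tiny sketch of separating interface functions by role: the functions a
# signed-in user can reach are resolved automatically from the account's level
# rather than chosen manually. Function names are illustrative placeholders.
FUNCTIONS_BY_ROLE = {
    "Operator":   {"view_measurements", "acknowledge_alert"},
    "Supervisor": {"view_measurements", "acknowledge_alert",
                   "edit_tolerances", "export_reports"},
}

def functions_for(user: dict) -> set[str]:
    """Resolve the visible functions from the user's role at sign-in."""
    return FUNCTIONS_BY_ROLE.get(user["role"], set())

print(sorted(functions_for({"name": "kim", "role": "Operator"})))
```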
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Pérez, Cazorla Frederic. „Global illumination techniques for the computation of hight quality images in general environments“. Doctoral thesis, Universitat Politècnica de Catalunya, 2003. http://hdl.handle.net/10803/6640.

Der volle Inhalt der Quelle
Annotation:
The objective of this thesis is the development of algorithms for the simulation of the light transport in general environments to render high quality still images. To this end, first we have analyzed the existing methods able to render participating media, more concretely those that account for multiple scattering within the media. Next, we have devised a couple of two pass methods for the computation of those images. For the first step we have proposed algorithms to cope with the scenes we want to deal with. The second step uses the coarse solution of the first step to obtain the final rendered image.
The structure of the dissertation is briefly presented below.
In the first chapter the motivation of the thesis and its objectives are discussed. It also summarizes the contributions of the thesis and its organization.
In the second chapter the principles of global illumination for general environments are reviewed, with the most important equations (the rendering equation and the transport equation) whose solution constitutes the global illumination problem. In order to solve the global illumination problem, a certain number of multi-pass methods exist. Their objective is to overcome restrictions on the types of light paths that can be dealt with by a single technique, or to increase efficiency and/or accuracy. We have opted to follow this philosophy, and a pair of two pass methods has been developed for general environments.
The third chapter includes the study of the methods that perform the single scattering approximation, and also the study of the ones that take into account multiple scattering.
The fourth chapter is devoted to our first pass method, which computes a rough estimate of the global illumination. Knowing the benefits of hierarchical approaches, two concrete algorithms based on hierarchies have been extended to be more generic: Hierarchical Radiosity with Clustering and Hierarchical Monte Carlo Radiosity.
Our second pass is considered in the next chapter. Using the coarse solution obtained by the first pass, our second pass computes a high quality solution from a given viewpoint. Radiances and source radiances are estimated using Monte Carlo processes in the context of path tracing acceleration and also for final gather. Probability density functions (PDFs) are created at ray intersection points. For such a task, we initially used constant basis functions for the directional domain. After realizing their limitations, we proposed the Link Probabilities (LPs), which are objects with adaptive PDFs in link space.
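As a rough illustration of the idea behind the LPs (not the adaptive scheme described in the thesis), the sketch below builds a discrete PDF over the links arriving at an intersection point and importance-samples a direction from it; the Link class and the contribution values are hypothetical.

    # Hypothetical sketch: importance-sample a direction from a discrete PDF over links.
    import random
    from dataclasses import dataclass

    @dataclass
    class Link:
        direction: tuple               # unit vector towards the linked cluster/patch
        estimated_contribution: float  # coarse radiance estimate from the first pass

    def build_pdf(links):
        total = sum(l.estimated_contribution for l in links)
        return [l.estimated_contribution / total for l in links]

    def sample_link(links, pdf):
        u, acc = random.random(), 0.0
        for link, p in zip(links, pdf):
            acc += p
            if u <= acc:
                return link, p   # p is needed for the Monte Carlo weight 1/p
        return links[-1], pdf[-1]

    links = [Link((0.0, 0.0, 1.0), 0.7), Link((0.0, 1.0, 0.0), 0.2), Link((1.0, 0.0, 0.0), 0.1)]
    pdf = build_pdf(links)
    chosen, p = sample_link(links, pdf)
    print(chosen.direction, p)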
In order to take advantage of the effort invested in the construction of the LPs, we have devised two closely related progressive sampling strategies. In the second pass, instead of sampling each pixel individually, only a subset of samples is progressively estimated across the image plane. Our algorithms are inspired by the work of Michael D. McCool on anisotropic diffusion using conductance maps.
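The sketch below illustrates the general idea of progressive sampling over the image plane with a plain adaptive-subdivision scheme: pixels are only traced where already-computed samples disagree, and smooth regions are interpolated. It is an assumption-laden stand-in for, not a reproduction of, the conductance-map strategies; render_pixel (here the radial test function) is a hypothetical callback.

    # Hypothetical sketch: adaptive image-plane sampling by recursive subdivision.
    def adaptive_fill(image, render_pixel, x0, y0, x1, y1, threshold=0.05):
        corners = [(x0, y0), (x1, y0), (x0, y1), (x1, y1)]
        values = []
        for x, y in corners:
            if (x, y) not in image:
                image[(x, y)] = render_pixel(x, y)
            values.append(image[(x, y)])
        if max(values) - min(values) <= threshold or (x1 - x0 <= 1 and y1 - y0 <= 1):
            # Smooth (or tiny) region: interpolate instead of tracing more rays.
            avg = sum(values) / 4.0
            for y in range(y0, y1 + 1):
                for x in range(x0, x1 + 1):
                    image.setdefault((x, y), avg)
        else:
            # Detailed region: subdivide and keep sampling.
            mx, my = (x0 + x1) // 2, (y0 + y1) // 2
            adaptive_fill(image, render_pixel, x0, y0, mx, my, threshold)
            adaptive_fill(image, render_pixel, mx, y0, x1, my, threshold)
            adaptive_fill(image, render_pixel, x0, my, mx, y1, threshold)
            adaptive_fill(image, render_pixel, mx, my, x1, y1, threshold)

    traced = {"n": 0}
    def radial(x, y):                     # stand-in for an expensive radiance estimate
        traced["n"] += 1
        return ((x - 32) ** 2 + (y - 32) ** 2) ** 0.5 / 64.0

    image = {}
    adaptive_fill(image, radial, 0, 0, 63, 63)
    print(f"{traced['n']} pixels traced, {len(image)} pixels filled")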
The final chapter presents the conclusions of the thesis. Also possible lines of further research are suggested.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Royer, Loic. „Unraveling the Structure and Assessing the Quality of Protein Interaction Networks with Power Graph Analysis“. Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-62562.

Der volle Inhalt der Quelle
Annotation:
Molecular biology has entered an era of systematic and automated experimentation. High-throughput techniques have moved biology from small-scale experiments focused on specific genes and proteins to genome- and proteome-wide screens. One result of this endeavor is the compilation of complex networks of interacting proteins. Molecular biologists hope to understand life's complex molecular machines by studying these networks. This thesis addresses three open problems centered upon their analysis and quality assessment.
First, we introduce power graph analysis as a novel approach to the representation and visualization of biological networks. Power graphs are a graph-theoretic approach to the lossless and compact representation of complex networks: edges are grouped into cliques and bicliques, and nodes into a neighborhood hierarchy. We demonstrate power graph analysis on five examples and show its advantages over traditional network representations. Moreover, we evaluate the algorithm's performance on a benchmark, test its robustness to noise, and measure its empirical time complexity at O(e^1.71), sub-quadratic in the number of edges e.
Second, we tackle the difficult and controversial problem of data quality in protein interaction networks. We propose a novel measure for the accuracy and completeness of genome-wide protein interaction networks based on network compressibility. We validate this new measure by i) verifying the detrimental effect of false positives and false negatives, ii) showing that gold-standard networks are highly compressible, iii) showing that authors' choice of confidence thresholds is consistent with high network compressibility, iv) presenting evidence that compressibility is correlated with co-expression, co-localization and shared function, and v) showing that complete and accurate networks of complex systems in other domains exhibit levels of compressibility similar to those of current high-quality interactomes.
Third, we apply power graph analysis to networks derived from text mining as well as to gene expression microarray data. In particular, we present i) the network-based analysis of genome-wide expression profiles of the neuroectodermal conversion of mesenchymal stem cells, ii) the analysis of regulatory modules in a rare mitochondrial cytopathy, Mitochondrial Encephalomyopathy, Lactic Acidosis and Stroke-like episodes (MELAS), and iii) an investigation of the biochemical causes behind the enhanced biocompatibility of tantalum compared with titanium.
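A small, self-contained sketch of the core power-graph idea on a toy graph: a complete biclique between two node sets can be replaced by a single power edge, and the resulting edge reduction hints at how compressibility can be quantified. The example graph and helper names are illustrative assumptions, not the thesis's algorithm.

    # Hypothetical toy example: one power edge replaces all edges of a biclique.
    from itertools import product

    edges = {("a", "x"), ("a", "y"), ("a", "z"),
             ("b", "x"), ("b", "y"), ("b", "z"),
             ("c", "x"), ("c", "y"), ("c", "z")}

    def is_biclique(left, right, edge_set):
        # True if every node in `left` connects to every node in `right`.
        return all((u, v) in edge_set for u, v in product(left, right))

    def edges_saved(left, right, edge_set):
        # One power edge stands in for |left| * |right| plain edges.
        return len(left) * len(right) - 1 if is_biclique(left, right, edge_set) else 0

    left, right = {"a", "b", "c"}, {"x", "y", "z"}
    saved = edges_saved(left, right, edges)
    print(f"edges replaced by one power edge: {saved}")            # 8
    print(f"fraction of edges removed: {saved / len(edges):.2f}")  # 0.89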
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Singh, Maninder. „Using Machine Learning and Graph Mining Approaches to Improve Software Requirements Quality: An Empirical Investigation“. Diss., North Dakota State University, 2019. https://hdl.handle.net/10365/29803.

Der volle Inhalt der Quelle
Annotation:
Software development is prone to software faults due to the involvement of multiple stakeholders, especially during the fuzzy phases (requirements and design). Software inspections are commonly used in industry to detect and fix problems in requirements and design artifacts, thereby mitigating the propagation of faults to later phases, where the same faults are harder to find and fix. The output of an inspection process is a list of faults present in the software requirements specification (SRS) document. The artifact author must manually read through the reviews and differentiate between true faults and false positives before fixing the faults. The first goal of this research is to automate the detection of useful vs. non-useful reviews. Next, post-inspection, the requirements author has to manually extract key problematic topics from useful reviews and map them to individual requirements in the SRS in order to identify fault-prone requirements. The second goal of this research is to automate this mapping by employing key phrase extraction (KPE) algorithms and semantic analysis (SA) approaches. During fault fixation, the author has to manually verify which requirements could have been impacted by a fix. The third goal of this research is to assist authors post-inspection in handling change impact analysis (CIA) during fault fixation, using natural language processing with semantic analysis and mining solutions from graph theory. Selecting skilled inspectors is also essential for carrying out post-inspection tasks accurately. The fourth goal of this research is to identify skilled inspectors using various classification and feature selection approaches. The dissertation has led to the development of an automated solution that can identify useful reviews, help identify skilled inspectors, extract the most prominent topics/key phrases from fault logs, and help the requirements author with post-inspection fault fixation.
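As an illustration of the first goal only, the sketch below trains a toy classifier that separates useful from non-useful inspection reviews with a bag-of-words model; the miniature training set and the TF-IDF plus logistic-regression pipeline are assumptions made for the example, not the features or classifiers evaluated in the dissertation.

    # Hypothetical toy example: classify inspection reviews as useful (1) or non-useful (0).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    reviews = [
        "Requirement R12 contradicts R7 on the maximum response time",    # useful
        "Section 3.2 omits error handling for invalid sensor input",      # useful
        "Looks fine to me",                                                # non-useful
        "Nice document, well written",                                     # non-useful
    ]
    labels = [1, 1, 0, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(reviews, labels)
    print(model.predict(["The units in requirement R3 are inconsistent with R9"]))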
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

GOMES, JUNIOR ALCIDES. „Determinacao de selenio em agua subterranea utilizando a espectrometria de absorcao atomica com atomizacao eletrotermica em forno de grafita (GFAAS) e geracao de hidretos (HGAAS)“. Repositório Institucional do IPEN, 2008. http://repositorio.ipen.br:8080/xmlui/handle/123456789/9378.

Der volle Inhalt der Quelle
Annotation:
Master's dissertation
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
APA, Harvard, Vancouver, ISO und andere Zitierweisen
