
Dissertations on the topic "Abstract information"


Consult the top 50 dissertations for research on the topic "Abstract information".


1

Janakiraman, Muralidharan. "Abstract Index Interfaces." PDXScholar, 1996. https://pdxscholar.library.pdx.edu/open_access_etds/5288.

Abstract:
An index in a database system interacts with many of the software modules in the system. For systems supporting a wide range of index structures, interfacing the index code with the rest of the system poses a great problem. The problems are an order of magnitude greater when adding new access methods to the system. These problems could be reduced manifold if common interfaces could be specified for different access methods. It would be even better if these interfaces could be made database-system independent. This thesis addresses the problem of defining generic index interfaces for access methods in database systems. It concentrates on two specific issues: first, the specification of a complete set of abstract interfaces that would work for all access methods and for all database systems; second, optimized query processing for all data types, including user-defined data types. An access method in a database system can be considered to be made up of three specific parts: upper interfaces, lower interfaces, and type interfaces, and it interacts with the database system through all three. Upper interfaces consist of the functions an index provides to a database system. Lower interfaces are the database-system-dependent software modules an index has to interact with to accomplish any system-related functions. Type interfaces consist of the set of functions an index uses which interpret the data type. These three parts together characterize an access method in a database system. This splitting of an access method makes it possible to define generic interfaces. In this thesis, we discuss each of these three interfaces in detail, identify functionalities, and design clear interfaces. The design of these interfaces promotes the development of type-independent and database-system-independent access methods.
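The three-part split lends itself to a small illustration. The sketch below (Python, chosen only for brevity; all class and method names are hypothetical, not the thesis's actual interfaces) shows an access method parameterized by a type interface and a lower interface:

```python
from abc import ABC, abstractmethod
from typing import Any, Callable, Iterator

# Type interface: the only place the indexed data type is interpreted.
# A comparator returning <0, 0, or >0 keeps the index itself type-agnostic.
Comparator = Callable[[Any, Any], int]

class Storage(ABC):
    """Lower interface: system-dependent services (pages, buffers, logging)."""

    @abstractmethod
    def read_page(self, page_no: int) -> bytes: ...

    @abstractmethod
    def write_page(self, page_no: int, data: bytes) -> None: ...

class AccessMethod(ABC):
    """Upper interface: the functions an index offers to the database system."""

    def __init__(self, compare: Comparator, storage: Storage):
        self.compare = compare  # type interface, supplied per data type
        self.storage = storage  # lower interface, supplied per DBMS

    @abstractmethod
    def insert(self, key: Any, rid: int) -> None: ...

    @abstractmethod
    def search(self, key: Any) -> Iterator[int]: ...
```

Because the comparator and the storage services are injected rather than hard-coded, the same index implementation can serve user-defined data types and different database systems unchanged, which is the portability argument the abstract makes.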
2

Ledezma, Carlos. "Static analysis of multi-threaded applications by abstract interpretation." Thesis, KTH, Programvaruteknik och Datorsystem, SCS, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-143029.

Abstract:
There currently exist in production an immense number of applications that are considered safety-critical, meaning that their execution is directly related to the well-being of people. A domain where these applications are particularly present is the aeronautics industry. A piece of critical software embedded in an airplane's calculator cannot, under any circumstance, fail while the aircraft is in flight, and this restriction becomes more severe as the priority of the application escalates. This situation also poses an inconvenience when testing software. Since applications can only be tested in their real environment (flight test) once there are certain guarantees that they will not fail, other methods such as unit tests and simulations have to be used. But none of these methods is sound, meaning that if some particular case is unintentionally left out of the executions, then the behavior of the program in that scenario is not covered by the performed analysis. And when we are talking about safety-critical applications, these small cases can make a very big difference. This is why more and more companies that produce this kind of software are starting to include in their verification process sound techniques to validate the absence of run-time errors in their programs. In particular Airbus, one of the main aircraft manufacturers of the world, uses AstréeA, a static analyzer based on abstract interpretation, to prove that the programs embedded in its calculators cannot possibly fail. The following report presents an investigation in which AstréeA was used at Airbus to prove the absence of run-time errors on the ATSU. The introductory chapter presents a description of the software analyzed and an explanation of the objectives set for the project and its scope. Chapter 2 presents the necessary theoretical concepts: sections 2.1-2.3 give an overview of the basics of abstract interpretation, while section 2.4 presents the analyzer used. Chapters 3 and 4 describe in depth the solution given and how the investigation was carried out. Finally, chapters 5 and 6 present and analyze the results obtained in the period of study and the current state of the solution.
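For readers new to the technique, a minimal sketch of abstract interpretation over an interval domain, the flavor of reasoning analyzers in this family build on (an illustration only, not AstréeA's implementation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """Abstract value: every concrete number in [lo, hi]."""
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        # Sound over-approximation of concrete addition.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def join(self, other: "Interval") -> "Interval":
        # Least upper bound: used where control-flow paths merge.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

def div_is_safe(num: Interval, den: Interval) -> bool:
    # A run-time error (division by zero) is provably absent
    # whenever the abstract denominator excludes 0.
    return den.lo > 0 or den.hi < 0

x = Interval(1, 10)
y = Interval(2, 5).join(Interval(3, 8))  # two paths merge into [2, 8]
print(div_is_safe(x, y))                 # True: y can never be 0
```

Because every abstract step over-approximates the concrete behaviors, a property proved on the abstract side (here: no division by zero) holds for all concrete executions, which is the soundness the abstract contrasts with testing.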
3

Shovman, Mark. "Measuring comprehension of abstract data visualisations." Thesis, Abertay University, 2011. https://rke.abertay.ac.uk/en/studentTheses/4cfbdab1-0f91-4886-8b02-a4a8da48aa72.

Abstract:
Common visualisation techniques such as bar-charts and scatter-plots are not sufficient for visual analysis of large sets of complex multidimensional data. Technological advancements have led to a proliferation of novel visualisation tools and techniques that attempt to meet this need. A crucial requirement for efficient visualisation tool design is the development of objective criteria for visualisation quality, informed by research in human perception and cognition. This thesis presents a multidisciplinary approach to address this requirement, underpinning the design and implementation of visualisation software with the theory and methodology of cognitive science. An opening survey of visualisation practices in the research environment identifies three primary uses of visualisations: the detection of outliers, the detection of clusters and the detection of trends. This finding, in turn, leads to a formulation of a cognitive account of the visualisation comprehension processes, founded upon established theories of visual perception and reading comprehension. Finally, a psychophysical methodology for objectively assessing visualisation efficiency is developed and used to test the efficiency of a specific visualisation technique, namely an interactive three-dimensional scatterplot, in a series of four experiments. The outcomes of the empirical study are three-fold. On a concrete applicable level, three-dimensional scatterplots are found to be efficient in trend detection but not in outlier detection. On a methodological level, ‘pop-out’ methodology is shown to be suitable for assessing visualisation efficiency. On a theoretical level, the cognitive account of visualisation comprehension processes is enhanced by empirical findings, e.g. the significance of the learning curve parameters. All these provide a contribution to a ‘science of visualisation’ as a coherent scientific paradigm, both benefiting fundamental science and meeting an applied need.
4

Okhravi, Christopher. "Markup has resolution: In search of a more abstract language." Thesis, Uppsala universitet, Institutionen för informatik och media, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-236664.

5

Bylund, Johanna, and Josefine Nåvik. "Allt är inte sagt bara för att en lag har talat : En kvalitativ dokumentstudie om hur insiderlagen i praktiken kan ses som en spelregel." Thesis, Mittuniversitetet, Institutionen för ekonomi, geografi, juridik och turism, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-37009.

Abstract:
Insider trading regulation is a highly debated topic which often leads to dark headlines in the media. Attempts to counteract insider trading are based on the asymmetric information that is common in the securities market and that seems to be the reason why the market can be perceived as unfair and immoral. Regulation of insider trading has thus been considered necessary, but the law itself has been questioned when it comes to its real function and efficiency. Earlier research shows that the law may be better understood as a game rule where the game is amoral. How the law can be understood as a game rule in practice, however, seems to be lacking in previous research. A study comparing the insider law with economic crimes, such as fraud offences, has thus been considered necessary to determine in which ways the insider law can be likened to an amoral game rule. This qualitative document study therefore aims, from a social constructionist perspective, to analyze and describe the insider law and the ways in which it can be understood as a game rule in practice. The importance of maintaining trust in the securities market has also been emphasized by the supervisory authorities Ekobrottsmyndigheten and Finansinspektionen, so it was also considered interesting to study documents from their respective websites. The documents were analyzed through a thematic analysis in which the results were linked to securities regulation and insider law, trust in its abstract form, and insider law and/or game rules. The results of this study indicate that the insider law is in practice best understood as a game rule, since insider trading is a victimless crime where it appears difficult to prove that a crime has actually been committed.

Keywords: securities regulation, asymmetric information, abstract systems, abstract trust, game rule, game.
6

Tennis, Joseph T. "The economic and aesthetic axis of information organization frameworks (extended abstract)." dLIST, 2007. http://hdl.handle.net/10150/106120.

Abstract:
When we examine how and why decisions get made in the indexing enterprise writ large, we see that two factors shape the outcome: economics and aesthetics. For example, the Library of Congress has reduced the time and effort it spends on creating bibliographic records, while Library and Archives Canada has begun coordinating the work of librarians and archivists in describing the documentary heritage of Canada (Oda and Wilson, 2006; LAC, 2006). Both of these initiatives aim at reducing the costs of the work of description. They are decisions based on economic considerations. When deciding what fields, tags, and indicators to use in cataloguing, librarians consider the cost of labour and whether or not the system will use that work for display and retrieval. On the other hand, international bodies craft standards that are designed to shape the indexing enterprise. For example, we see the form of controlled vocabularies in ANSI/NISO Z39.19-2005. We then evaluate such vocabularies as to whether or not they comport with that form. This is one interpretation of the aesthetic consideration of indexing. We can take this further. We can look at indexing theory, for example the work of Ranganathan and the CRG, and compare instantiations of classification schemes as to whether or not they are truly faceted. These examples result from designers and implementers of description and identification systems asking: what is good enough? When is my framework for information organization good enough? Though each of these acts is governed by a different purpose (sometimes pragmatic, sometimes artistic), the acts involved, the identification and description of resources, are measured against both economic and aesthetic concerns: how much does it cost, how well does it comply with an abstract form, and how is it evocative of our human urge to name and organize? Information organization frameworks, like those discussed above, comprise structures, work practices, and discourses. Examples of structures are the bibliographic record, the archival description, and the list developed by the patrons of an art installation. Work practices enable, result in, and evaluate structures, and discourse shapes how priorities and purposes are aligned in both work practices and structures. Key to all examples and components of information organization frameworks are considerations of cost and compliance with abstract form (standardization or design). This paper explores the diversity of information organization frameworks, looking specifically at how aesthetic and economic concerns manifest in their work practices, structures, and discourse. In order to do this I examine the manuals and policies that shape work practice, the structures and their paratextual material (introductions, how-to-use guides, etc.), and the literature that references these practices and structures. I take the position that we need to move to a more descriptive stance on practices of knowledge organization, not only in documentary heritage institutions (libraries, archives, and museums), but also in the cultural and artistic realms. By expanding the scope of inquiry we can interrogate the integrity of my assertion above that information organization frameworks wrestle with, and manifest along, a spectrum drawn from economic to aesthetic decision-making.

This project, investigating the economic-aesthetic axis of information organization frameworks, follows the recent development in knowledge organization research, which is moving from a prescriptive (how to design systems) to a descriptive (what systems are being built, how and why) approach (Beghtol, 2003; Andersen, 2005). By engaging in this work, we not only grow more familiar with the professional concerns of knowledge organization, but also expand the scope of our inquiry into knowledge organization practices for various purposes, and develop a deeper understanding of the human urge to name and organize.
7

Ménard, Elaine. "Indexing and retrieving images in a multilingual world (extended abstract)." dLIST, 2007. http://hdl.handle.net/10150/105900.

Abstract:
The Internet constitutes a vast universe of knowledge and human culture, allowing the dissemination of ideas and information without borders. The Web has also become an important medium for the diffusion of multilingual resources. However, linguistic differences still form a major obstacle to scientific, cultural, and educational exchange. With the ever-increasing size of the Web and the availability of more and more documents in various languages, this problem becomes all the more pervasive. Besides this linguistic diversity, a multitude of databases and collections now contain documents in various formats, which may also adversely affect the retrieval process. This paper presents the context, the problem statement, and the experiment carried out in a research project aiming to verify the relations between two different indexing approaches: (1) traditional image indexing recommending the use of controlled vocabularies, and (2) free image indexing using uncontrolled vocabulary, and their respective performance for image retrieval in a multilingual context. The use of controlled or uncontrolled vocabularies raises a certain number of difficulties for the indexing process. These difficulties will necessarily entail consequences at the time of image retrieval. Indexing with controlled or uncontrolled vocabularies is a question extensively discussed in the literature. However, it is clear that many researchers recognize the advantages of either form of vocabulary according to circumstances (Arsenault, 2006). It appears that the many difficulties associated with free indexing using uncontrolled vocabularies can only be understood via a comparative analysis with controlled vocabulary indexing (Macgregor & McCulloch, 2006). This research compares image retrieval within two contexts: a monolingual context, where the language of the query is the same as the indexing language; and a multilingual context, where the language of the query is different from the indexing language. This research will indicate whether one of these indexing approaches surpasses the other in terms of effectiveness, efficiency, and satisfaction of the image searchers. For this research, three data collection methods are used: (1) the analysis of the vocabularies used for image indexing, in order to examine the multiplicity of term types applied to images (generic description, identification, and interpretation) and the degree of indexing difficulty due to the subject and the nature of the image; (2) the simulation of the retrieval process with a subset of images indexed according to each indexing approach studied; and finally, (3) the administration of a questionnaire to gather information on searcher satisfaction during and after the retrieval process. The quantification of the retrieval performance of each indexing approach is based on the usability measures recommended by the standard ISO 9241-11, i.e. effectiveness, efficiency, and satisfaction of the user (AFNOR, 1998). The need to retrieve a particular image from a collection is shared by several user communities, including teachers, artists, journalists, scientists, historians, filmmakers and librarians, all over the world. Image collections also have many areas of application: commercial, scientific, educational, and cultural. Until recently, image collections were difficult to access due to limitations in dissemination and duplication procedures.

This research underlines the pressing necessity to optimize the methods used for image processing, in order to facilitate the images' retrieval and their dissemination in multilingual environments. The results of this study will offer preliminary information to deepen our understanding of the influence of the vocabulary used in image indexing. In turn, these results can be used to enhance access to digital collections of visual material in multilingual environments.
8

Lidén, Alice, and Victoria Nyberg. "Vem vet och vem bryr sig? : En kvalitativ studie om generation Y:s medvetenhet om produktplacering och övervakning på internet och i sociala medier." Thesis, Mittuniversitetet, Avdelningen för ekonomivetenskap och juridik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-34481.

Abstract:
Although media and academic interest in online and social media surveillance has increased, previous research shows that younger users of these technologies appear to remain relatively unconcerned about surveillance. By conducting five semi-structured interviews and two focus group interviews with respondents from generation Y, the authors of this study aim to get a picture of the respondents' awareness of and attitude towards surveillance and product placement on the internet and in social media. The study, a qualitative case study with a social constructivist approach, aims at expanding previous research on surveillance as well as contributing increased knowledge for young consumers who are active in the digital market on a daily basis. The empirical data of the study are presented through a thematic analysis, which links the results to selected theoretical models of surveillance, trust and knowledge. The results and analysis indicate that some awareness of product placement and surveillance exists and that the attitude towards both is relatively indifferent or negative. Respondents are perceived to accept the trade-off of greater usability for decreased control, which, among other things, may be due to their strong need to interact with each other and their trust in abstract systems. The study concludes with a final discussion highlighting the potential consequences of trusting the systems, where a possible negative consequence may be a reduced ability to gain new ontological knowledge.
9

Krieglstein, Daniel. "Rethinking the Scientific Database: Exploring the Feasibility of Building a New Scientific Abstract Database." Thesis, Illinois Institute of Technology, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10827846.

Abstract:

Abstract databases are essential for literature reviews, and in turn for the scientific process. Research into user interface designs and their impact on scientific article discovery is limited. The following study details the process of building a new abstract database and explores several user interface design elements that should be tested in the future.

The initial goal of this study was to test the feasibility of building a new abstract database. Using Crossref metadata, we concluded that the cost to produce parsing code for the entire data set proved prohibitive for a volunteer team. The legal, production, and design elements necessary to build a new abstract database are discussed in detail. This study should serve as a baseline for future abstract database testing.
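The Crossref parsing cost mentioned above is easy to see in miniature. A hedged sketch against the public Crossref REST API (the DOI below is a placeholder, and the cost observation in the comments is an assumption about where parsing effort goes, not a claim from the study):

```python
import json
import urllib.request

def fetch_crossref_work(doi: str) -> dict:
    """Fetch one work's metadata from the public Crossref REST API."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["message"]

# Replace with a real DOI; this placeholder will return 404.
meta = fetch_crossref_work("10.1000/example")
# Fields are irregular across publishers: titles arrive as lists, and
# abstracts, when present at all, are JATS-tagged XML fragments, the
# kind of variation that makes parsing a full corpus costly.
print(meta.get("title"), "abstract" in meta)
```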

10

Geng, Zhao. "Visual analysis of abstract multi-dimensional data with parallel coordinates." Thesis, Swansea University, 2013. https://cronfa.swan.ac.uk/Record/cronfa43002.

11

Herbert, George D. "Compiling Unit Clauses for the Warren Abstract Machine." UNF Digital Commons, 1987. http://digitalcommons.unf.edu/etd/571.

Abstract:
This thesis describes the design, development, and installation of a computer program which compiles unit clauses generated in a Prolog-based environment at Argonne National Laboratories into Warren Abstract Machine (WAM) code. The program enhances the capabilities of the environment by providing rapid unification and subsumption tests for the very significant class of unit clauses. This should improve performance substantially for large programs that generate and use many unit clauses.
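The operation being compiled, unification of terms, can be sketched briefly (an ad hoc Python representation purely for illustration; the WAM compiles this into register-based instructions rather than interpreting structures, and, like standard Prolog, omits the occurs-check):

```python
def is_var(t):
    # Convention for this sketch: variables are capitalized strings.
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    # Follow variable bindings to their current value.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(t1, t2, s=None):
    """Return a substitution unifying t1 and t2, or None.
    Terms are variables ('X'), atoms ('foo'), or tuples (functor, args...)."""
    s = {} if s is None else s
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        return {**s, t1: t2}
    if is_var(t2):
        return {**s, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

# p(f(X), Y) unifies with p(f(a), b) under {X: 'a', Y: 'b'}
print(unify(("p", ("f", "X"), "Y"), ("p", ("f", "a"), "b")))
```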
12

Bruhl, Analee. "The role of situations and thematic reorganization in the conceptual processing of abstract concepts." Thesis, Lyon 2, 2014. http://www.theses.fr/2014LYO20040/document.

Abstract:
The human conceptual system is known to contain two main types of concepts: concrete and abstract. Abstract concepts such as opinion or determination express the sequences of relations between different entities. They also manifest the internal and introspective states of existence that characterize human consciousness. The semantic representation and organization of abstract concepts have received very little attention in the cognitive psychology literature over the past decades, whereas the vast majority of studies have been dedicated to concrete concepts. Previous research on abstract concepts has explained how they are conceptually represented by focusing on their differences from concrete concepts, i.e., the concreteness effect. Current theories of grounded cognition such as the Perceptual Symbol Systems Theory propose that situational knowledge and experiences could play a key role in how people simulate, understand and use abstract concepts. Our aim was to assess the principles that underlie the conceptual structure, organization and representation of abstract concepts within the cognitive system. Four series of behavioural experiments using categorization and similarity judgment tasks were designed to investigate the role of situational information and thematic organization in the processing of abstract concepts. The results indicated that the co-occurrence and experiencing of unrelated abstract concepts in relevant situations significantly influenced the emergence of novel thematic reorganizations between the concepts compared to baseline, suggesting the central role that thematic reorganization and situational information play in the conceptual representation of abstract concepts.
13

Wagner, Filho Jorge Alberto. "Evaluating immersive approaches to multidimensional information visualization." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/175082.

Abstract:
The use of novel displays and interaction resources to support immersive data visualization and improve analytical reasoning is a research trend in Information Visualization. In this work, we evaluate the use of HMD-based environments for the exploration of multidimensional data, represented in 3D scatterplots as a result of dimensionality reduction. We present a new modelling of the evaluation problem in such a context, accounting for the two factors whose interplay determines the impact on overall task performance: the difference in errors introduced by performing dimensionality reduction to 2D or 3D, and the difference in human perception errors under different visualization conditions. This two-step framework offers a simple approach to estimate the benefits of using an immersive 3D setup for a particular dataset. As a use case, the dimensionality reduction errors for a series of roll-call datasets when using two or three dimensions are evaluated through an empirical task-based approach. The perception error and overall task performance are assessed through controlled comparative user studies. When comparing desktop-based (2D and 3D) with HMD-based (3D) visualization, initial results indicated that perception errors were low and similar in all approaches, resulting in overall performance benefits for both 3D techniques. The immersive condition, however, was found to require less effort to find information and less navigation, besides providing a much larger subjective perception of accuracy and engagement. Nonetheless, the use of flying navigation resulted in inefficient times and frequent user discomfort. Subsequently, we implemented and evaluated an alternative data exploration approach where the user remains seated and viewpoint change is only realisable through physical movements. All manipulation is done directly by natural mid-air gestures, with the data being rendered at arm's reach. The virtual reproduction of an exact copy of the analyst's desk aims to increase immersion and enable tangible interaction with controls and associated two-dimensional information. A second user study was carried out comparing this scenario to a desktop-based equivalent, exploring a set of 9 representative perception and interaction tasks based on previous literature. We demonstrate that our prototype setup, named VirtualDesk, presents excellent results regarding user comfort and immersion, and performs equally well or better in all analytical tasks, while adding minimal or no time overhead and amplifying data exploration.
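The first factor in the two-step framework, the error introduced by reducing to two rather than three dimensions, can be approximated in a few lines (PCA reconstruction loss is used here as a hedged stand-in for whichever projection technique and error measure the thesis actually employs):

```python
import numpy as np

def pca_residual(X: np.ndarray, k: int) -> float:
    """Fraction of variance lost when projecting X onto its top-k components."""
    Xc = X - X.mean(axis=0)
    # Squared singular values are proportional to per-component variance.
    s = np.linalg.svd(Xc, compute_uv=False)
    var = s ** 2
    return float(var[k:].sum() / var.sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))  # correlated columns

loss_2d = pca_residual(X, 2)
loss_3d = pca_residual(X, 3)
# The gap between the two losses estimates how much structure an
# immersive 3D view could recover relative to a flat 2D projection.
print(f"2D loss: {loss_2d:.2%}, 3D loss: {loss_3d:.2%}")
```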
14

Mittelbach, Martin. "Coding Theorem and Memory Conditions for Abstract Channels with Time Structure." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-168302.

Abstract:
In the first part of this thesis, we generalize a coding theorem and a converse of Kadota and Wyner (1972) to abstract channels with time structure. As a main contribution we prove the coding theorem for a significantly weaker condition on the channel output memory, called total ergodicity for block-i.i.d. inputs. We achieve this result mainly by introducing an alternative characterization of information rate capacity. We show that the ψ-mixing condition (asymptotic output-memorylessness), used by Kadota and Wyner, is quite restrictive, in particular for the important class of Gaussian channels. In fact, we prove that for Gaussian channels the ψ-mixing condition is equivalent to finite output memory. Moreover, we derive a weak converse for all stationary channels with time structure. Intersymbol interference as well as input constraints are taken into account in a flexible way. Due to the direct use of outer measures and a derivation of an adequate version of Feinstein's lemma, we are able to avoid the standard extension of the channel input σ-algebra and obtain a more transparent derivation. We aim at a presentation from an operational perspective and consider an abstract framework, which enables us to treat discrete- and continuous-time channels in a unified way. In the second part, we systematically analyze infinite output memory conditions for abstract channels with time structure. We exploit the connections to the rich field of strongly mixing random processes to derive a hierarchy for the nonequivalent infinite channel output memory conditions in terms of a sequence of implications. The ergodic-theoretic memory condition used in the proof of the coding theorem and the ψ-mixing condition employed by Kadota and Wyner (1972) are shown to be part of this taxonomy. In addition, we specify conditions for the channel under which memory properties of a random process are invariant when the process is passed through the channel. In the last part, we investigate cascade and integration channels with regard to mixing conditions as well as properties required in the context of the coding theorem. The results are useful to study many physically relevant channel models and allow a component-based analysis of the overall channel. We consider a number of examples including composed models and deterministic as well as random filter channels. Finally, an application of strong mixing conditions from statistical signal processing involving the Fourier transform of stationary random sequences is discussed and a list of further applications is given.
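For orientation, the ψ-mixing condition being relaxed is usually defined via a coefficient of the following form (standard textbook version for a stationary process; the thesis's channel-specific variant may differ in detail):

```latex
\psi(n) \;=\; \sup
  \frac{\bigl|P(A \cap B) - P(A)\,P(B)\bigr|}{P(A)\,P(B)},
\qquad
A \in \sigma(X_k : k \le 0),\quad
B \in \sigma(X_k : k \ge n),
```

with the supremum taken over events of positive probability; the process is ψ-mixing when ψ(n) → 0 as n → ∞. Total ergodicity for block-i.i.d. inputs, the condition used above, is significantly weaker, which is what widens the scope of the coding theorem.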
15

Bugajska, Malgorzata. "Spatial visualization of abstract information : a classification model for visual design guidelines in the digital domain /." [S.l.] : [s.n.], 2003. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=14903.

16

Chowdhury, Ziaul Islam. "Implementation of an abstract module for entity resolution to combine data sources with the same domain information." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-87294.

Abstract:
Increasing digitalization is creating a lot of data every day. Sometimes the same real-world entity is stored in multiple data sources but lacks a common reference. This creates a significant challenge for the integration of data sources and may cause duplicates and inconsistencies if not resolved correctly. The core idea of this thesis is to implement an abstract module for entity resolution to combine multiple data sources with similar domain information. The CRISP-DM process was used as the methodology in this thesis, starting with an understanding of the business and the data. Two open datasets containing product details from e-commerce sites, Abt-Buy and Amazon-Google, are used to conduct the research. The datasets have similar structures and contain product name, description, manufacturer's name, and price. Both datasets contain gold-standard data to evaluate the performance of the model. In the data exploration phase, various aspects of the datasets are explored, such as word clouds of important words in the product name and description, bigrams and trigrams of the product name, and histograms and the standard deviation, mean, and minimum and maximum length of the product name. The data preparation phase consists of an NLP-based preprocessing pipeline: case normalization, removal of special characters and stop words, tokenization, and lemmatization. In the modeling phase of the CRISP-DM process, various similarity and distance measures are applied to the product name and/or description, and the weighted scores are summed to form the total score of the fuzzy matching. A set of threshold values is applied to the total score and the performance of the model is evaluated against the ground truth. The implemented model scored an F1-score above 60% on both datasets, and the abstract module can be applied to various datasets with similar domain information. The model has not been deployed to a production environment, which remains future work. Blocking or indexing techniques could also be applied in the future, together with big data technologies, to reduce the quadratic nature of the entity resolution problem.
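A minimal sketch of the fuzzy-matching step described above (stdlib-only Python; the two measures, the 0.7/0.3 weights, and the 0.6 threshold are illustrative assumptions, not the tuned values from the thesis):

```python
import re
from difflib import SequenceMatcher

def normalize(text: str) -> list[str]:
    # Preprocessing pipeline: lowercase, strip special characters, tokenize.
    return re.sub(r"[^a-z0-9 ]", " ", text.lower()).split()

def jaccard(a: list[str], b: list[str]) -> float:
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def total_score(p1: dict, p2: dict) -> float:
    """Weighted sum of similarity measures over name and description."""
    name_sim = SequenceMatcher(None, p1["name"], p2["name"]).ratio()
    desc_sim = jaccard(normalize(p1.get("desc", "")), normalize(p2.get("desc", "")))
    return 0.7 * name_sim + 0.3 * desc_sim

a = {"name": "Sony Cyber-shot DSC-W120", "desc": "7.2 MP digital camera"}
b = {"name": "Sony Cybershot W120 Camera", "desc": "digital camera 7.2 megapixel"}
print(total_score(a, b) > 0.6)  # True: treat as the same real-world product
```

Evaluating such a threshold against the gold standard yields the precision/recall pair behind the reported F1-score; comparing every record pair is what makes the problem quadratic and motivates the blocking techniques mentioned as future work.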
17

La, Barre Kathryn. "Faceted navigation and browsing features in new OPACs: A more robust solution to problems of information seekers? (extended abstract)." dLIST, 2007. http://hdl.handle.net/10150/106157.

Abstract:
In November 2005, James Billington, the Librarian of Congress, proposed the creation of a "World Digital Library" of manuscripts and multimedia materials in order to "bring together online, rare and unique cultural materials." Google became the first private-sector partner for this project with a pledge of 3 million dollars (http://www.loc.gov/today/pr/2005/05-250.html). One month later, the Bibliographic Services Task Force of the University of California Libraries released a report, Rethinking how we provide bibliographic services for the University of California (Bibliographic Services Task Force, 2005). Key proposals included the necessity of enhancing search and retrieval, redesigning the library catalog or OPAC (Online Public Access Catalog), encouraging the adoption of new cataloguing practices, and supporting continuous improvements to digital access. By mid-January 2006, the tenor of discussion had reached fever pitch. On January 12, 2006, the North Carolina State University (NCSU) Library announced the deployment of a revolutionary implementation for their OPAC of Endeca's ProFind™, which until then had only been used in commercial e-commerce or other business applications. NCSU made the bold claim that "the speed and flexibility of popular online search engines" had now entered the world of the online catalog through the use of faceted navigation and browsing (NCSU, online). A few days later, Indiana University posted A White Paper on the Future of Cataloging at Indiana University, which served to identify current trends with direct impact on cataloging operations and defined possible new roles for the online catalog and cataloging staff at Indiana University (Byrd et al., 2006). The Indiana report was a response to an earlier discussion regarding The Future of Cataloging put forth by Deanna Marcum, Director of Public Service and Collection Management at the Library of Congress (Marcum, 2005). Marcum posed a provocative series of questions and assertions based in part on the Pew Internet and American Life Project study Counting on the Internet (Horrigan and Rainey, 2005): "[D]o we need to provide detailed cataloging information for digitized materials? Or can we think of Google as the catalog?" Following Marcum's comments and the announcement of the "World Digital Library", the Library of Congress released a commissioned report in March 2006, The changing nature of the catalog and its integration with other discovery tools (Calhoun, 2006). This report contained blueprints for change to Library of Congress cataloguing processes, advocated integration of the catalog with other discovery tools, included suggestions that the Library of Congress Subject Headings (LCSH), long used to support subject access to a variety of cultural objects, be dismantled, and argued that fast access to materials should replace the current standard of full bibliographic records for materials. These arguments were supported by assertions that users seem to prefer the ease of Google over the catalog, and that the proposed changes would place the Library of Congress in a better market position to provide users with the services they want most (Fast and Campbell, 2004; OCLC, 2002). The ensuing debates served to crystallize the intersection and convergence of the traditional missions of the libraries, archives, and museums (LAM) communities to provide description, control and access to informational and cultural objects.

One consistent theme emerged: what competencies and roles can each community bring to bear upon discussions of digitization, access and discovery, and provide solutions for user needs? The library community had a ready answer. Originally designed to provide inventory, acquisitions and circulation support for library staff, the modern library catalog was designed according to a set of principles and objectives as described by Charles Ammi Cutter in 1876. These principles and objectives underpin the core competency of the library community to create bibliographic records designed to assist users in the following tasks: to find (by author, title and subject), and to identify, select and obtain material that is of interest to them. Discussions about the aims of the catalog are not new and have been ongoing since the early 1970s, when the earliest forays of the catalog into the digital age began (Cochrane, 1978). The role played by metadata (i.e. bibliographic records assembled in catalogs), as well as the central importance of search and retrieval mechanisms, have long been central to proposed solutions for providing better services to users. Thus, the suggestions of staff at the Library of Congress that digitization is tantamount to access, and that search engines like Google may supplant the catalog as the chief means of access to cultural and informational materials, have galvanized action throughout the library and information science community. It is critical that any discussions and recommended solutions maintain a holistic view of the principles and objectives of the catalog. The actions and continuing discussions that resulted from these developments drew heavily from several sources, including the experiences of the LAM community with the creation of metadata standards, Web 2.0 applications that make data work harder, more accessible and consolidated, the appeal of folksonomy and social classification, and the importance of leveraging rather than abandoning legacy access systems in a time of spiraling costs and decreasing budgets. For archived discussions of these issues see the NGC4LIB listserv (Next Generation Catalogs for Libraries, http://listserv.nd.edu/archives/ngc4lib.html) and the Web4LIB discussion list (http://lists.webjunction.org/web4lib/). Another valuable source is Lorcan Dempsey's blog, Of libraries, services and networks (http://orweblog.oclc.org/). To leverage some legacy subject access systems, it is proposed that more (not less) should be done to process these data, and the corresponding authority files (e.g. thesaurus files), in order to use the faceted navigation and browsing features of new online search engines to best advantage. An ongoing research proposal will be described in brief, concentrating on the second goal of a project which plans to develop an integrated conceptual framework that could serve all designers working on information access and discovery systems. A framework for critical analysis of needed and missing features, grounded in traditional principles, borne out by practice (Cutter, 1976; La Barre, 2006; Ranganathan, 1962), and building on feature analysis protocols for early OPACs, is urgently needed (Cochrane, 1978; Hildreth, 1995). Further, another analysis of the sufficiency of current data preparation is long overdue (Anderson and Peréz-Carballo, 2005). This position paper builds on La Barre (2006, unpublished dissertation), which studied faceted browsing and navigation in websites using wireframe analysis.

This research uncovered features needed for digital library OPAC design. Building on JISC and Sparks work, a future study will focus on the information seeking of research academics, rather than the general public or the overstudied undergraduate user, thus rounding out the work of others cited by Marcum, Kuhlthau, etc.
18

Ahrsjö, Carl. "Real-time event based visualization of multivariate abstract datasets : Implementing and evaluating a dashboard visualization prototype." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-170395.

Abstract:
As datasets in general grow in size and complexity over time while the human cognitive ability to interpret them essentially stays the same, it becomes important to enable intuitive visualization methods for analysis. Based on previous research in the field of information visualization and visual analytics, a dashboard visualization prototype handling real-time event-based traffic was implemented and evaluated. The real-time data is collected by a script and sent to a self-implemented web server that opens a websocket connection with the dashboard client, where the data is then visualized. The data consisted of transactions and related metadata from an e-commerce retail site, applied to a real customer scenario. The dashboard was developed using an agile method, continuously involving the thesis supervisor in the design and functionality process. The final design also depended on the results of an interview with a representative from one of the two target groups. The two target groups consisted of 5 novice and 5 expert users in the field of information visualization and visual analytics. The intuitiveness of the dashboard visualization prototype was evaluated by conducting two user studies, one for each target group, where the test subjects were asked to interact with the dashboard visualization, answer some questions and, lastly, solve a predefined set of tasks. The time spent solving the tasks, the number of serious misinterpretations, and the number of wrong answers were recorded and evaluated. The results from the user studies showed that the use of colors, icons, level of animation, the choice of visualization method and the level of interaction were the most important aspects for carrying out an efficient analytical process, according to the test subjects. The test subjects desired to zoom in on each component, to filter the contents of the dashboard, and to get additional information about the components on demand. The most important result produced from developing the dashboard was how to handle the scalability of the application: it is highly important that the websocket connection remain stable when scaling out to handle more concurrent HTTP requests. The research also concludes that the dashboard should use visualization methods that are intuitive for all users, that real-time data needs to be put in relation to historical data if one wishes to carry out a valid analytical process, and that real-time data can be used to discover trends and patterns at as early a stage as possible. Lastly, the research provides a set of guidelines for scalability, modularity, intuitiveness and relations between datasets.
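The event transport described, a server pushing each incoming event to the dashboard over a persistent websocket rather than the client polling, can be sketched as follows (using the third-party Python websockets package, recent versions of which pass a single connection argument to the handler; the event shape and port are assumptions, not the thesis's actual protocol):

```python
import asyncio
import json
import random
import websockets  # third-party: pip install websockets

async def push_events(ws):
    """Relay one transaction-like event per second to a dashboard client."""
    while True:
        event = {  # hypothetical e-commerce event shape
            "type": "transaction",
            "amount": round(random.uniform(5, 500), 2),
            "currency": "SEK",
        }
        await ws.send(json.dumps(event))
        await asyncio.sleep(1)

async def main():
    # The dashboard opens ws://localhost:8765 and renders each event as
    # it arrives, instead of polling over repeated HTTP requests.
    async with websockets.serve(push_events, "localhost", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```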
19

Muñoz, Álvaro Aranda. "Comparing 3D interfaces of virtual factories : an iconic 3D interface against an abstract 3D visualisation." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4115.

Abstract:
Context. 3D visualisations are in high demand in different industries, such as virtual factories. However, the benefits that 3D representations can bring to this industry have not been fully explored, with most representations being either photorealistic or abstract visualisations. Objectives. This thesis explores and compares two prototypes that present a visualisation of the process state of a factory. The first prototype presents a generic interface in which primitive 3D shapes convey the information about the factory status. The second prototype is complemented with specific and iconic 3D models of the factory that help the users associate the conveyed information with the factory flow. The motivation behind this dissertation is that the type of generic interface presented can lead to more reusable interfaces in the future. Methods. For the creation and development of the prototypes, a user-centered design process was followed, in which the designs were iterated with users of the factory. Based on the two prototypes, a usability evaluation is conducted to analyse the perceived usability and the usability performance. This is complemented with post-interviews with all the participants. The results are presented following a triangulation methodology to support the strength of the qualitative findings. Conclusions. The results show that both interfaces are perceived as highly usable. However, the 3D iconic interface seemed to help the users more in forming a better mental model of the factory flow, helping them complete most of the tasks in faster times.
Стилі APA, Harvard, Vancouver, ISO та ін.
20

Lakshminarayanan, R. "TriSL: A Software Architecture Description Language and Environment." Thesis, Indian Institute of Science, 1999. http://hdl.handle.net/2005/87.

Abstract:
As the size and complexity of a software system increase, the design problem goes beyond the algorithms and data structures of the computation. Designing and specifying the overall system structure -- or software architecture -- becomes the central problem. A system's architecture provides a model of the system that hides implementation detail, allowing the architect to concentrate on the analyses and decisions that are most crucial to structuring the system to satisfy its requirements. Unfortunately, with few exceptions, current exploitation of software architecture and architectural style is informal and ad hoc. The lack of an explicit, independent characterization of architecture and architectural style significantly limits the extent to which software architecture can be exploited using current practices. Architecture Description Languages (ADLs) result from a linguistic approach to the formal description of software architectures. ADLs should facilitate the building of architectures, not just their specification. Further, they should address compositionality, substitutability, and reusability, the three keys to successful large-scale software development; an architecture description language with a well-defined type system can facilitate all three. Our contribution is a new software architecture description language, TriSL, which supports these features. In this thesis we describe the design and implementation of TriSL and its type system, and we demonstrate the power and expressiveness of our language through case studies of real-world applications.
21

Tran, David. "Investigating the applicability of Software Metrics and Technical Debt on X++ Abstract Syntax Tree in XML format : calculations using XQuery expressions." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162719.

Abstract:
This thesis investigates how an XML representation of X++ abstract syntax trees (ASTs) residing in an XML database can be subjected to static code analysis. Microsoft Dynamics 365 for Finance & Operations comprises a large and complex corpus of X++ source code, and intuitive ways of visualizing and analysing the state of the code base in terms of software metrics and technical debt are non-existent. A solution is to extend an internal web application and semantic search tool called SocrateX to calculate software metrics and technical debt. This is done by creating a web service that constructs XQuery and XPath code to be run against the XML database. The values are stored in a relational database and imported into Power BI for intuitive visualization. The software metrics have been chosen based on the amount of previous research and their compatibility with the X++ AST, whereas technical debt has been estimated using the SQALE method. This thesis concludes that XML representations of X++ abstract syntax trees are viable candidates for measuring source-code quality with functional query languages.
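As an illustration of the kind of query involved, the following Python sketch evaluates an XPath expression over a toy XML-encoded AST to compute a crude complexity metric. The element names (Method, IfStatement, ...) are hypothetical, since the real X++ AST schema is internal to Dynamics 365, and the thesis itself issues XQuery/XPath from a web service rather than from Python.

from lxml import etree

XML = """
<Class name="SalesOrder">
  <Method name="post">
    <IfStatement/><IfStatement/><WhileStatement/>
  </Method>
  <Method name="validate">
    <IfStatement/>
  </Method>
</Class>
"""

tree = etree.fromstring(XML)
for method in tree.xpath("//Method"):
    # A crude cyclomatic-complexity proxy: 1 + number of branching nodes.
    branches = method.xpath("count(.//IfStatement | .//WhileStatement)")
    print(method.get("name"), "complexity ~", 1 + int(branches))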
22

Francis, Caroline M. "How Information Retrieval Systems Impact on Designers' Searching Strategies Within the Early Stages of the Design Process." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16280/.

Abstract:
The purpose of this research is to investigate the influences that Information Retrieval Systems, such as online search engines and databases, have on designers' early searching strategies. The study involves observing designers transforming early design language into query 'keyword' language for the operation of Information Retrieval Systems, and how this transition causes a shift in early design exploration. This transformation is referred to in this research as the CLASS activity: Converting Language from Abstract Searching to Specific. Findings show a common pattern across the activity of both professional and advanced student designers. Information Retrieval Systems are seen to drive the searching process into specific, explored domains rather than stimulate an 'abstract', broad investigation. The IR systems are built upon categories created to manage the information content, and it is these categories that require a person to use defined keywords and query sentences to operate the systems. The findings suggest that using Information Retrieval Systems prior to defining the scope of a design problem causes designers to prematurely focus on specific searching.
23

Jilläng, Emil. "Making ASN.1 (Abstract Syntax Notation One) human-readable : Investigative and practical study to generalize decoding and manual validation of ASN.1 from the cellular network during run time." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-67595.

Abstract:
ASN.1 is a powerful formal notation divided into two parts: a specification of the data, and the data itself in binary form. Creating decoders for these files can often be tedious. The purpose of this degree project is to extend current tools at Arctic Group into an application that decodes a range of different ASN.1 specifications and data. This should be done at runtime, without needing to rebuild the application for each specification, while generating human-readable data and abstracting away unwanted information. Two ways to create ASN.1 decoders were identified, and the application was designed taking heavy inspiration from a solution that stores intermediate data in a list. By not including encoding as a feature of the application, a few shortcuts could be made and the desired result could be achieved at runtime. The application was designed in three parts. The first part was an ASN.1 parser using the Java-based tool ANTLR4. The second part matched the binary data to the information in the specification. The final part was an output formatter that abstracts and prettifies the output data into text files. The result was an application that parses at least three of the employer's most commonly used specifications and only has to be rebuilt when a new data type appears in the specifications. Problems arose when matching the data to the ASN.1 specifications, so the matching and output formatting were only partially implemented. The application was evaluated by testing many different ASN.1 specifications, making sure everything was generated correctly at runtime, and extending the parser to support more syntax as it was introduced in new specifications. Although the application does not yet support arbitrary ASN.1 specifications, it can serve as a foundation for further development towards a truly generalized solution.
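The binary side of the problem can be made concrete in a few lines of Python: BER-encoded ASN.1 data is a nested tag-length-value (TLV) stream, and a runtime decoder repeatedly peels off one TLV before consulting the parsed specification. This sketch is a simplification (definite lengths and low tag numbers only) and is not the Arctic Group application.

def decode_tlv(data: bytes, offset: int = 0):
    # Returns (tag, value bytes, constructed?, offset past this element).
    tag = data[offset]
    length = data[offset + 1]
    offset += 2
    if length & 0x80:  # long form: low 7 bits give the number of length octets
        n = length & 0x7F
        length = int.from_bytes(data[offset:offset + n], "big")
        offset += n
    value = data[offset:offset + length]
    constructed = bool(tag & 0x20)  # constructed elements contain nested TLVs
    return tag, value, constructed, offset + length

# Example: a SEQUENCE (0x30) wrapping one INTEGER (0x02) with value 5.
buf = bytes([0x30, 0x03, 0x02, 0x01, 0x05])
tag, value, constructed, end = decode_tlv(buf)
print(hex(tag), constructed, value.hex())  # 0x30 True 020105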
24

Rahman, Mohammad Hafijur. "Designing Framework for Web Development." Thesis, Uppsala universitet, Institutionen för informatik och media, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-168362.

25

Zimic, Sheila. "Internetgenerationen bit för bit : Representationer av IT och ungdom i ett informationssamhälle." Doctoral thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-21926.

Abstract:
The aim of this thesis is to gain a deeper understanding of the construction of a 'Net Generation'. Within the idea of an information society, technologies and young people are given certain positions, which are not in any sense natural but are socially constructed. This thesis explores these socially given meanings and shows which types of meaning are prioritized and legitimized. The exploration is conducted by examining, both externally and internally, the given meanings of a generation identity. The external (nominal identification) in this study is understood as the construction of an abstract user and is studied by means of academic texts concerning the 'Net Generation'. The internal (virtual identification) involves young people's construction of their generation identity and is studied by means of collage. The collages are used to understand how the young participants position themselves in contemporary society and how they, as concrete users, articulate their relationship with information technologies. The findings show that the 'type of behavior' articulated in the signifying practice of constructing the abstract user, the 'Net Generation', reduces users and technology to a marketing/economic discourse. In addition, the idea of the abstract user implies that all users have the same possibilities to achieve 'success' in the information society by being active 'prosumers'. The concrete users articulate that they feel stressed and pressured by all the choices they are expected to make. In this sense, the participants do not articulate the (economic) interests assumed of the 'Net Generation', but rather articulate interests in playing, having a hobby and being social when using information technologies. What this thesis thus proposes is to critically explore the taken-for-granted notions of a technological order in society as it pertains to young people. Only if we understand how socially given meaning is constructed can we break loose from the temporarily prioritized values to which the positions of technology and users are fixed.
26

Camporesi, Ferdinanda. "Formal and exact reduction for differential models of signalling pathways in rule-based languages." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE005/document.

Abstract:
The behaviour of a cell is driven by its capability to receive, propagate and communicate signals. Proteins can bind together on binding sites, and post-translational modifications can reveal or hide sites, so new interactions can be allowed or existing ones inhibited. Due to the huge number of different bio-molecular complexes, we can no longer derive or integrate ODE models. A compact way to describe these systems is supplied by rule-based languages. However, combinatorial complexity arises again when one attempts to describe the behaviour of the models formally. This motivates the use of abstractions. We propose two methods to reduce the size of the models, exploiting respectively the presence of symmetries between sites and the lack of correlation between different parts of the system. The symmetry relation links pairs of sites having the same interaction capabilities. We show that this relation induces a bisimulation which can be used to reduce the size of the original model. The information flow analysis detects, for each site, which parts of the system influence its behaviour. This allows us to cut the molecular species into smaller pieces and to write a new system. Moreover, we show how this analysis can be tuned with respect to a context. Both approaches can be combined. The analytical solution of the reduced model is the exact projection of the original one, and the computation of the reduced model is performed at the level of rules, without the need to execute the original model.
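The symmetry argument can be pictured with a toy computation: if two sites are interchangeable, states that differ only by swapping them fall into one equivalence class, and counting canonical representatives instead of raw states is exactly the reduction. The two-site agent below is a made-up Python example, not a Kappa model.

from itertools import product

SITE_STATES = ("free", "bound")

def canonical(state):
    # Sites s1 and s2 are symmetric, so sorting the pair collapses
    # ("bound", "free") and ("free", "bound") onto one representative.
    return tuple(sorted(state))

raw = set(product(SITE_STATES, repeat=2))
lumped = {canonical(s) for s in raw}
print(len(raw), "raw states ->", len(lumped), "equivalence classes")  # 4 -> 3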
27

Петрасова, Світлана Валентинівна. "Інформаційна технологія ідентифікації знань у наукометричних системах на основі інтелектуального аналізу слабоформалізованих даних". Thesis, НТУ "ХПІ", 2016. http://repository.kpi.kharkov.ua/handle/KhPI-Press/28125.

Abstract:
Thesis for a candidate degree in technical sciences, speciality 05.13.06 – Information Technologies. – National Technical University "Kharkiv Polytechnic Institute". – Kharkiv, 2017. The objective of the thesis is to increase the effectiveness of knowledge identification in scientometric systems by designing models and methods for the intelligent analysis of weakly formalized data. The main results are as follows. The current state of the knowledge-identification problem in scientometric systems has been analysed, and existing methods for the intelligent analysis of weakly formalized data have been systematized; on this basis, the requirements for designing the information technology of knowledge identification have been formulated. The use of finite predicate algebra in the information and logical models of knowledge identification in Ukrainian and English abstract data of scientometric systems has been justified. A logical-linguistic model for identifying semantically connected fragments in weakly formalized abstract information has been developed. The model is based on algebraic-predicate operations, which allows knowledge to be extracted effectively from abstract information. The method for formalizing semantic relations between entities has been improved. The method is based on a semantic similarity measure and intelligent analysis for the identification of equivalence and tolerance classes, which allows implicit relations of similarity and relations of taxonomy to be defined. The method of comparator identification has been developed further; it is used to classify abstract fragments in scientometric systems, which allows common information spaces of scientific interaction to be determined by modelling the intelligence functions of understanding and classifying sense. The information technology of knowledge identification in scientometric systems has been improved: it identifies common research fronts by dynamically detecting implicit connections between abstracts of scientometric systems. The research results have been implemented in systems for processing summaries and abstracts. Using the developed information technology improves the effectiveness of knowledge identification in weakly formalized data by increasing the average precision and recall of semantically similar text information. The practical results can be used in information retrieval, expert, and general-purpose information-analytical systems for building electronic catalogues of semantically connected texts in scientometric, library, and abstract systems.
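One building block mentioned above, grouping abstracts by a semantic similarity measure into tolerance classes, can be sketched in a few lines of Python with scikit-learn; the TF-IDF vectorizer and the threshold are illustrative stand-ins, not the dissertation's predicate-algebra formalization. Note that the resulting relation is reflexive and symmetric but not necessarily transitive, which is precisely what distinguishes tolerance from equivalence.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "knowledge identification in scientometric systems",
    "identifying knowledge in scientometric abstract databases",
    "cache coherence protocols for multiprocessor systems",
]

tfidf = TfidfVectorizer().fit_transform(abstracts)
sim = cosine_similarity(tfidf)

THRESHOLD = 0.3  # pairs at or above this similarity are declared "tolerant"
for i in range(len(abstracts)):
    for j in range(i + 1, len(abstracts)):
        if sim[i, j] >= THRESHOLD:
            print(f"abstracts {i} and {j}: similarity {sim[i, j]:.2f}")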
28

Петрасова, Світлана Валентинівна. "Інформаційна технологія ідентифікації знань у наукометричних системах на основі інтелектуального аналізу слабоформалізованих даних". Thesis, НТУ "ХПІ", 2017. http://repository.kpi.kharkov.ua/handle/KhPI-Press/28123.

Abstract:
Thesis for a candidate degree in technical sciences, speciality 05.13.06 – Information Technologies. – National Technical University "Kharkiv Polytechnic Institute". – Kharkiv, 2017. The objective of the thesis is to increase the effectiveness of knowledge identification in scientometric systems by designing models and methods for the intelligent analysis of weakly formalized data. The main results are as follows. The current state of the knowledge-identification problem in scientometric systems has been analysed, and existing methods for the intelligent analysis of weakly formalized data have been systematized; on this basis, the requirements for designing the information technology of knowledge identification have been formulated. The use of finite predicate algebra in the information and logical models of knowledge identification in Ukrainian and English abstract data of scientometric systems has been justified. A logical-linguistic model for identifying semantically connected fragments in weakly formalized abstract information has been developed. The model is based on algebraic-predicate operations, which allows knowledge to be extracted effectively from abstract information. The method for formalizing semantic relations between entities has been improved. The method is based on a semantic similarity measure and intelligent analysis for the identification of equivalence and tolerance classes, which allows implicit relations of similarity and relations of taxonomy to be defined. The method of comparator identification has been developed further; it is used to classify abstract fragments in scientometric systems, which allows common information spaces of scientific interaction to be determined by modelling the intelligence functions of understanding and classifying sense. The information technology of knowledge identification in scientometric systems has been improved: it identifies common research fronts by dynamically detecting implicit connections between abstracts of scientometric systems. The research results have been implemented in systems for processing summaries and abstracts. Using the developed information technology improves the effectiveness of knowledge identification in weakly formalized data by increasing the average precision and recall of semantically similar text information. The practical results can be used in information retrieval, expert, and general-purpose information-analytical systems for building electronic catalogues of semantically connected texts in scientometric, library, and abstract systems.
29

Real, Lucas Correia Villa. "Uma arquitetura para análise de fluxo de dados estruturados aplicada ao sistema brasileiro de TV digital." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-01092009-151152/.

Abstract:
Various computing systems transfer data in structured data streams which are sometimes also hierarchically organized. Such a data stream model is characterized by the density of the information transmitted, which requires the receiver to process the elements extracted from the communication channel immediately. The high rate at which data flows also makes it hard, if not impossible, for the receiver to store the desired information in its memory, which makes data flow analysis especially challenging. This work presents a novel architecture for structured data flow analysis, applied to the logical hierarchy defined by the Brazilian Digital TV System for the transmission of television programs and validated by means of a fully functional reference implementation.
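For the Brazilian system, the continuous stream in question is an MPEG-2 transport stream: fixed 188-byte packets whose header carries a PID identifying where a payload sits in the logical hierarchy. A hedged Python sketch of that first parsing step, with a made-up sample packet, could look as follows (field layout per the MPEG-2 systems standard; this is not the thesis's reference implementation).

def parse_ts_header(packet: bytes):
    assert packet[0] == 0x47, "lost sync: every TS packet starts with 0x47"
    payload_unit_start = bool(packet[1] & 0x40)   # a new section/PES begins here
    pid = ((packet[1] & 0x1F) << 8) | packet[2]   # 13-bit packet identifier
    continuity_counter = packet[3] & 0x0F
    return pid, payload_unit_start, continuity_counter

packet = bytes([0x47, 0x40, 0x11, 0x17]) + bytes(184)
print(parse_ts_header(packet))  # (17, True, 7)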
30

Kissinger, Susan M. "Development of an instructional natural resources information model /." Link to abstract, 2002. http://epapers.uwsp.edu/abstracts/2002/Kissinger.pdf.

31

Colledan, Andrea. "Abstract Machine Semantics for Quipper." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22835/.

Abstract:
Quipper is a domain-specific programming language for the description of quantum circuits. Because it is implemented as an embedded language in Haskell, Quipper is a very practical functional language; however, for the same reason, it lacks a formal semantics and is limited by Haskell's type system. In particular, because Haskell lacks linear types, it is easy to write Quipper programs that violate the non-cloning property of quantum states. In order to formalize relevant fragments of Quipper in a type-safe way, the Proto-Quipper family of research languages has been introduced over the last years. In this thesis we first introduce Quipper and Proto-Quipper-M. Proto-Quipper-M is an instance of the Proto-Quipper family based on a categorical model for quantum circuits, featuring a linear type system that guarantees at compile time that the non-cloning property holds. We then derive a tentative small-step operational semantics from the big-step semantics of Proto-Quipper-M and prove that the two are equivalent. After proving subject reduction and progress results for the tentative semantics, we build upon it to obtain a truly small-step semantics in the style of an abstract machine, which we eventually prove to be equivalent to the original semantics.
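"Semantics in the style of an abstract machine" means that evaluation becomes a loop over small machine states rather than a recursive definition. As a toy illustration only (ordinary lambda calculus, nothing quantum or linear), here is a Krivine-style call-by-name machine in Python whose states are (control term, environment, stack) triples.

# Terms: ("var", x) | ("lam", x, body) | ("app", f, a)

def step(state):
    control, env, stack = state
    kind = control[0]
    if kind == "var":
        term, term_env = env[control[1]]      # look up the bound closure
        return term, term_env, stack
    if kind == "app":
        # Push the (unevaluated) argument closure; evaluate the function.
        return control[1], env, [(control[2], env)] + stack
    if kind == "lam" and stack:
        arg, arg_env = stack[0]               # bind the top of the stack
        x, body = control[1], control[2]
        return body, {**env, x: (arg, arg_env)}, stack[1:]
    return None  # final state: a lambda with an empty stack

# (\x. x) (\y. y) reduces to \y. y
state = (("app", ("lam", "x", ("var", "x")), ("lam", "y", ("var", "y"))), {}, [])
while (nxt := step(state)) is not None:
    state = nxt
print(state[0])  # ('lam', 'y', ('var', 'y'))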
32

Palczynski, Jacob [Verfasser]. "Time-continuous behaviour comparison based on abstract models / Jacob Palczynski." Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2014. http://d-nb.info/1051895839/34.

33

Schmid, Joachim [Verfasser]. "Refinement and implementation techniques for Abstract State Machines / Joachim Schmid." Ulm : Universität Ulm. Fakultät für Informatik, 2002. http://d-nb.info/1015323995/34.

34

Kiselman, Vanda. "Kvalitetsutvärdering av den bibliografiska databasen Historical Abstracts." Thesis, Uppsala University, Department of ALM, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-101672.

35

Tsoupidi, Rodothea Myrsini. "Two-phase WCET analysis for cache-based symmetric multiprocessor systems." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-222362.

Abstract:
The estimation of the worst-case execution time (WCET) of a task is a problem that concerns the field of embedded systems and, especially, real-time systems. Estimating a safe WCET for single-core architectures without speculative mechanisms is a challenging task and an active research topic, and the advent of advanced hardware mechanisms, which often lack predictability, further complicates current WCET analysis methods. The field of embedded systems has high safety requirements and is therefore conservative about speculative mechanisms; nevertheless, even safety-critical applications are moving in the direction of multiprocessor systems. In a multiprocessor system, each task that runs on a processing unit might affect the execution time of the tasks running on other processing units. In shared-memory symmetric multiprocessor systems, this interference occurs through the shared memory and the common bus, and the presence of private caches introduces cache-coherence issues that result in further dependencies between the tasks. The purpose of this thesis is twofold: (1) to evaluate the feasibility of an existing one-pass WCET analysis method with an integrated cache analysis, and (2) to design and implement a cache-based multiprocessor WCET analysis by extending the single-core method. The single-core analysis is part of the KTH Timing Analysis (KTA) tool, whose WCET analysis uses Abstract Search-based WCET Analysis, a one-pass technique based on abstract interpretation. The feasibility evaluation includes the integration of microarchitectural features, such as the cache and pipeline, into KTA; these features are necessary for extending the analysis to hardware models of modern embedded systems. The multiprocessor analysis of this work applies the single-core analysis in two stages to estimate the WCET of a task running in the presence of temporally and spatially interfering tasks. The first phase records the memory accesses of all temporally interfering tasks, and the second phase uses this information to perform the multiprocessor WCET analysis. The multiprocessor analysis assumes private caches and a shared communication bus and implements the MESI protocol to maintain cache coherence.
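The coherence traffic that the second phase must account for can be illustrated with a toy MESI model in Python: two private caches tracking the state of one line, with reads and writes driving the Modified/Exclusive/Shared/Invalid transitions. This is a deliberately small sketch (no write-backs, evictions or bus arbitration) and not the KTA implementation.

def local_write(caches, who, line):
    # A write invalidates every other copy and makes the writer Modified.
    for cache in caches:
        cache[line] = "I"
    caches[who][line] = "M"

def local_read(caches, who, line):
    others = [c for i, c in enumerate(caches)
              if i != who and c.get(line, "I") != "I"]
    if others:
        # Another cache holds the line: all copies involved become Shared.
        for cache in others:
            cache[line] = "S"
        caches[who][line] = "S"
    else:
        caches[who][line] = "E"  # exclusive clean copy

c0, c1 = {}, {}
local_read((c0, c1), 0, 0x40)    # c0 holds the line Exclusive
local_read((c0, c1), 1, 0x40)    # both caches now Shared
local_write((c0, c1), 1, 0x40)   # c1 Modified, c0 Invalidated
print(c0[0x40], c1[0x40])        # I M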
36

Pellitta, Giulio <1984&gt. "Extending Implicit Computational Complexity and Abstract Machines to Languages with Control." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6558/.

Abstract:
The Curry-Howard isomorphism is the idea that proofs in natural deduction can be put in correspondence with lambda terms in such a way that this correspondence is preserved by normalization. The concept can be extended from Intuitionistic Logic to other systems, such as Linear Logic. One of the nice consequences of this isomorphism is that we can reason about functional programs with formal tools typical of proof systems; such analysis can also cover quantitative properties of programs, such as the number of steps a program takes to terminate. Another is the possibility to describe the execution of these programs in terms of abstract machines. In 1990 Griffin proved that the correspondence can be extended to Classical Logic and control operators; that is, Classical Logic adds the possibility to manipulate continuations. In this thesis we see how the ideas described above work in this larger context.
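What "manipulating continuations" buys can be shown with continuation-passing style, where the rest of the computation is an explicit argument that a program may invoke, discard or duplicate. The Python toy below (an illustration of the general idea, not the thesis's formal account) multiplies a list and aborts through a second continuation when it meets a zero, discarding all pending multiplications, which is the operational flavour of classical control operators.

def mult_k(xs, k, abort):
    # k is the normal continuation; abort jumps out past all pending work.
    if not xs:
        return k(1)
    if xs[0] == 0:
        return abort(0)
    return mult_k(xs[1:], lambda r: k(xs[0] * r), abort)

identity = lambda r: r
print(mult_k([2, 3, 4], k=identity, abort=identity))  # 24
print(mult_k([2, 0, 4], k=identity, abort=identity))  # 0, skipping the rest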
37

Steinhöfel, Dominic [Verfasser], Reiner [Akademischer Betreuer] Hähnle, and Gilles [Akademischer Betreuer] Barthe. "Abstract Execution: Automatically Proving Infinitely Many Programs / Dominic Steinhöfel ; Reiner Hähnle, Gilles Barthe." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2020. http://d-nb.info/1212584007/34.

38

Stahlbauer, Andreas [Verfasser], Sven [Akademischer Betreuer] Apel, and Willem [Akademischer Betreuer] Visser. "Abstract Transducers for Software Analysis and Verification / Andreas Stahlbauer ; Sven Apel, Willem Visser." Passau : Universität Passau, 2020. http://d-nb.info/1219730890/34.

39

Berger, Josef. "An infinitesimal approach to stochastic analysis on abstract Wiener spaces." Diss., [S.l.] : [s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=965761444.

40

Alba, Castro Mauricio Fernando. "Abstract Certification of Java Programs in Rewriting Logic." Doctoral thesis, Universitat Politècnica de València, 2011. http://hdl.handle.net/10251/13617.

Abstract:
In this thesis we propose an abstraction-based certification technique for Java programs which is based on rewriting logic, a very general logical and semantic framework efficiently implemented in the functional programming language Maude. We focus on safety properties, i.e. properties of a system that are defined in terms of certain events not happening, which we characterize as unreachability problems in rewriting logic. The safety policy is expressed in the style of JML, a standard property specification language for Java modules. In order to provide a decision procedure, we enforce finite-state models of programs by using abstract interpretation. Starting from a specification of the Java semantics written in Maude, we develop an abstraction-based, finite-state operational semantics, also written in Maude, which is appropriate for program verification. As a by-product of the abstraction-based verification, a dependable safety certificate is delivered, consisting of a set of rewriting proofs that can easily be checked by the code consumer using a standard rewriting logic engine. The abstraction-based proof-carrying code technique, called JavaPCC, has been implemented and successfully tested on several examples, which demonstrate the feasibility of our approach. We analyse local properties of Java methods, i.e. properties of methods regarding their parameters and results. We also study global confidentiality properties of complete Java classes, by initially considering non-interference and, then, erasure with and without non-interference. Non-interference is a semantic program property that assigns confidentiality levels to data objects and prevents illicit information flows from high to low security levels. In this thesis, we present a novel security model for global non-interference which approximates non-interference as a safety property.
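The policy side of non-interference is easy to state in code: label variables with confidentiality levels and reject any flow from a higher level into a lower one. The Python fragment below shows only that policy check on made-up labels; the thesis itself certifies the property through rewriting-logic proofs over the Maude Java semantics, which this sketch does not attempt.

LEVELS = {"low": 0, "high": 1}

def flow_allowed(target_level, source_levels):
    # An assignment is safe when no source is strictly above the target.
    return all(LEVELS[s] <= LEVELS[target_level] for s in source_levels)

# public := secret + 1 leaks; secret := public does not.
print(flow_allowed("low", ["high"]))   # False: illicit flow from high to low
print(flow_allowed("high", ["low"]))   # True: allowed upward flow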
Alba Castro, MF. (2011). Abstract Certification of Java Programs in Rewriting Logic [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/13617
41

Steup, Christoph [Verfasser]. "Abstract sensor event processing to achieve dynamic composition of cyber-physical systems / Christoph Steup." Magdeburg : Universitätsbibliothek, 2018. http://d-nb.info/1154485676/34.

42

Cezar, Ivo Martins. "A participatory knowledge information system for beef farmers : a case applied to the State of Mato Grosso do Sul, Brazil." Thesis, University of Edinburgh, 1999. http://webex.lib.ed.ac.uk/abstracts/cezar01.pdf.

43

Stapel, Florian [Verfasser]. "Ontology-based representation of abstract optimization models for model formulation and system generation / Florian Stapel." Paderborn : Universitätsbibliothek, 2016. http://d-nb.info/1108389333/34.

44

Busatto, Giorgio [Verfasser]. "An abstract model of hierarchical graphs and hierarchical graph transformation / von Giorgio Busatto." Oldenburg : Univ., Fachbereich Informatik, 2002. http://d-nb.info/967851955/34.

45

Fröhling, Judith [Verfasser], Michael [Akademischer Betreuer] Sonnenschein, Oliver [Akademischer Betreuer] Kramer, and Wei Lee [Akademischer Betreuer] Woon. "Abstract flexibility description for virtual power plant scheduling / Judith Fröhling ; Michael Sonnenschein, Oliver Kramer, Wei Lee Woon." Oldenburg : BIS der Universität Oldenburg, 2017. http://d-nb.info/1141904462/34.

46

Fröhling, Judith [Verfasser], Michael [Akademischer Betreuer] Sonnenschein, Oliver [Akademischer Betreuer] Kramer, and Wei Lee [Akademischer Betreuer] Woon. "Abstract flexibility description for virtual power plant scheduling / Judith Fröhling ; Michael Sonnenschein, Oliver Kramer, Wei Lee Woon." Oldenburg : BIS der Universität Oldenburg, 2017. http://d-nb.info/1141904462/34.

47

Perrin, Olivier. "Un modèle d'intégration d'outils dans les environnements de développement de logiciels." Nancy 1, 1994. http://www.theses.fr/1994NAN10355.

Abstract:
Integration in environments (whether or not dedicated to software development) is a concept denoting the ability to establish relationships between several components (tools, people, ...). It is necessary to represent the cooperation between several tools. Data exchange often requires a commonly accepted data format, together with mechanisms for transforming one data representation into another. Static approaches advocate adopting a canonical representation or ad-hoc data converters. This thesis proposes a dynamic approach based on an abstract data-representation model that requires only a single converter. We propose a data-representation model that captures both the semantics of the data and their constraints, and that includes a set of constructors for defining complex structures. The approach aims to approximate the subtype relation between two structures, so that the objects they define can be used interchangeably. Compatibility levels between two structures determine which transformations are needed to pass from one to the other. A set of operators performs this transformation: a first subset defines structural rules, while the second deals with the transformation of instances. We have defined a generic converter capable of transforming instances as automatically as possible, given that this process is undecidable (and therefore impossible to automate fully). The proposed axiomatization makes it possible to know precisely which structures can be expressed and which transformations can be performed on them. Finally, the implementation of the operators that make two structures and their instances compatible allowed us to validate the various proposals made in this thesis.
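The subtype-approximation idea reads naturally as a width-subtyping check plus an instance projection. A minimal Python sketch under that reading follows (invented field names; the thesis's operator set and generic converter are far richer).

def compatible(expected: dict, actual: dict) -> bool:
    # 'actual' can stand in for 'expected' if it offers at least the
    # expected fields with the expected types.
    return all(f in actual and actual[f] is t for f, t in expected.items())

def convert(expected: dict, instance: dict) -> dict:
    # Instance transformation: project away the fields the target ignores.
    return {f: instance[f] for f in expected}

Person = {"name": str, "age": int}
Employee = {"name": str, "age": int, "salary": float}

print(compatible(Person, Employee))  # True: Employee can stand in for Person
print(convert(Person, {"name": "Ada", "age": 36, "salary": 900.0}))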
48

Baumann, Ringo [Verfasser], Gerhard [Akademischer Betreuer] Brewka, Gerhard [Gutachter] Brewka, and Pietro [Gutachter] Baroni. "Metalogical Contributions to the Nonmonotonic Theory of Abstract Argumentation / Ringo Baumann ; Gutachter: Gerhard Brewka, Pietro Baroni ; Betreuer: Gerhard Brewka." Leipzig : Universitätsbibliothek Leipzig, 2014. http://d-nb.info/1238600093/34.

49

Poguntke, Mark [Verfasser]. "Abstrakte Interaktionsmodelle für die Integration in bestehende Benutzerschnittstellen / Mark Poguntke." Ulm : Universität Ulm, 2016. http://d-nb.info/1096473909/34.

50

Semmelrock, Nils [Verfasser], and Mila [Akademischer Betreuer] Majster-Cederbaum. "Complexity Results for Reachability in Cooperating Systems and Approximated Reachability by Abstract Over-Approximations / Nils Semmelrock. Betreuer: Mila Majster-Cederbaum." Mannheim : Universitätsbibliothek Mannheim, 2013. http://d-nb.info/1037076699/34.
