
Theses / dissertations on the topic "Electronic digital computers Evaluation"



See the 50 best works (theses / dissertations) for research on the topic "Electronic digital computers Evaluation".


You can also download the full text of each scientific publication in .pdf format and read its abstract online whenever it is available in the metadata.

Browse theses / dissertations from a wide range of scientific fields and compile an accurate bibliography.

1

Fernandez, Sepulveda Antonio. "Evaluation of digital identity using Windows CardSpace". Thesis, Edinburgh Napier University, 2008. http://researchrepository.napier.ac.uk/output/4032/.

Full text of the source
Abstract:
The Internet was initially created for academic purposes and, due to its success, has been extended to commercial environments such as e-commerce, banking, and email. As a result, Internet crime has also increased. This can take many forms, such as personal data theft, identity impersonation, and network intrusions. Authentication systems such as username and password are often insecure and difficult to handle when the user has access to a multitude of services, as the user has to remember many different credentials. Other, more secure systems, such as security certificates and biometrics, can also be difficult for many users to use. This is further compounded by the fact that the user often does not have control over their personal information, as it is stored on external systems (such as a service provider's site). The aim of this thesis is to present a review and a prototype of a Federated Identity Management system, which returns control of the user's identity information to the user. In this system the user has control over their identity information and can decide whether to provide specific information to external systems. The user can also manage their identity information easily with Information Cards. These Information Cards contain a number of claims that represent the user's personal information, and the user can use them for a number of different services. The Federated Identity Management system also introduces the concept of the Identity Provider, which handles the user's identity information, verifies that the user's credentials are valid, and issues a token to the service provider. The prototype has been developed using a number of different technologies such as the .NET Framework 3.0, CardSpace, C#, and ASP.NET. In order to obtain a clear result from this model of authentication, the work created a website prototype that provides user authentication by means of Information Cards, and another, for evaluation purposes, using a username and password. The evaluation includes a timing test (which checks the time taken by the authentication process), a functionality test, and quantitative and qualitative evaluation with 13 different users. The results obtained show that the use of Information Cards seems to improve the user experience in the authentication process and to increase the security level compared with username and password authentication. This thesis concludes that the Federated Identity Management model provides a strong solution to the problem of user authentication, could protect the privacy rights of the user, and returns control of the user's identity information to the user.
2

Grove, Duncan A. "Performance modelling of message-passing parallel programs". Title page, contents and abstract only, 2003. http://web4.library.adelaide.edu.au/theses/09PH/09phg8832.pdf.

Full text of the source
Abstract:
This dissertation describes a new performance modelling system, called the Performance Evaluating Virtual Parallel Machine (PEVPM). It uses a novel bottom-up approach, where submodels of individual computation and communication events are dynamically constructed from data-dependencies, current contention levels and the performance distributions of low-level operations, which define performance variability in the face of contention.
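The bottom-up idea of composing a program's predicted runtime from distributions of individual event times can be illustrated with a small Monte Carlo sketch; the distributions, parameters and event structure below are invented for illustration and are not the PEVPM submodels themselves.

```python
import random

# Hypothetical per-event time distributions (seconds). Costs are sampled
# rather than fixed, so contention-induced variability propagates into the
# predicted total runtime -- the general idea behind distribution-based
# bottom-up models, not the PEVPM submodels themselves.
def compute_time():
    return random.gauss(1.0e-3, 1.0e-4)            # local computation event

def message_time(contention=0.5):
    return 5.0e-5 * (1.0 + random.expovariate(1.0 / contention))  # send/receive event

def predict_runtime(n_events=1000, trials=500):
    """Monte Carlo estimate of total runtime for a chain of events."""
    totals = sorted(
        sum(compute_time() + message_time() for _ in range(n_events))
        for _ in range(trials)
    )
    return totals[trials // 2], totals[int(0.95 * trials)]  # median, 95th percentile

median, p95 = predict_runtime()
print(f"predicted runtime: median={median:.3f}s, 95th percentile={p95:.3f}s")
```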
3

Surma, David Ray 1963. "Design and performance evaluation of parallel architectures for image segmentation processing". Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/277042.

Full text of the source
Abstract:
The design of parallel architectures to perform image segmentation processing is given. In addition, the various designs are evaluated as to their performance, and a discussion of an optimal design is given. In this thesis, a set of eight segmentation algorithms has been provided as a starting point. Four of these algorithms will be evaluated and partitioned using two techniques. From this study of partitioning and considering the data flow through the total system, architectures utilizing parallel techniques will be derived. Timing analysis using pen and paper techniques will be given on the architectures using three of today's current technologies. Next, NETWORK II.5 simulations will be run to provide performance measures. Finally, evaluations of the various architectures will be made as well as the applicability of using NETWORK II.5 as a simulation language.
4

Parker, Brandon S. "CLUE: A Cluster Evaluation Tool". Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5444/.

Full text of the source
Abstract:
Modern high performance computing is dependent on parallel processing systems. Most current benchmarks reveal only the high level computational throughput metrics, which may be sufficient for single processor systems, but can lead to a misrepresentation of true system capability for parallel systems. A new benchmark is therefore proposed. CLUE (Cluster Evaluator) uses a cellular automata algorithm to evaluate the scalability of parallel processing machines. The benchmark also uses algorithmic variations to evaluate individual system components' impact on the overall serial fraction and efficiency. CLUE is not a replacement for other performance-centric benchmarks, but rather shows the scalability of a system and provides metrics to reveal where one can improve overall performance. CLUE is a new benchmark which demonstrates a better comparison among different parallel systems than existing benchmarks and can diagnose where a particular parallel system can be optimized.
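The scalability quantities the abstract refers to (speedup, efficiency and the experimentally observed serial fraction) follow directly from measured runtimes; below is a minimal sketch using the Karp-Flatt formulation with made-up timings, not CLUE's own code.

```python
def scalability_metrics(t1, tp, p):
    """Speedup, efficiency and Karp-Flatt serial fraction from measured runtimes.

    t1 -- runtime on one processor, tp -- runtime on p processors."""
    speedup = t1 / tp
    efficiency = speedup / p
    serial_fraction = (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)
    return speedup, efficiency, serial_fraction

# Hypothetical measurements: 100 s serial, 18 s on 8 processors.
s, e, f = scalability_metrics(100.0, 18.0, 8)
print(f"speedup={s:.2f}, efficiency={e:.2f}, serial fraction={f:.3f}")
```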
5

Shultes, Bruce Chase. "Regenerative techniques for estimating performance measures of highly dependable systems with repairs". Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/25035.

Full text of the source
6

Demir, Ali. "Real-time 2d/3d Display Of Dted Maps And Evaluation Of Interpolation Algorithms". Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/3/12611729/index.pdf.

Full text of the source
Abstract:
In Geographic Information System (GIS) applications, raster data constitutes one of the major data types, and its display plays an important part in GIS applications. Digital Terrain Elevation Data (DTED) is one of the raster data types, and it is used as the main data source in this thesis. DTED data is displayed on the screen as a digital image in which each pixel value, represented in gray scale, corresponds to an elevation (texel). To draw the images, the texel values are mostly interpolated in order to perform zoom-in and/or zoom-out operations on the area concerned. We implement and compare four interpolation methods: nearest neighbor, bilinear interpolation, and two newly proposed methods, (1) 4-texel weighted average and (2) 8-texel weighted average. A real-time graphical display with zoom-in/zoom-out capabilities has also been implemented by buffering DTED data in memory and using a C++ class that manages the graphical operations (zoom-in, zoom-out, and 2D/3D display) through the Windows GDI+ and OpenGL graphics libraries, resulting in 30-40 frames per second for one grid of DTED Level 0 data.
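As a point of reference, the bilinear interpolation compared in the thesis can be written in a few lines over a small elevation grid; this is a generic sketch, not the thesis's GDI+/OpenGL implementation, and the sample grid values are invented.

```python
def bilinear(grid, x, y):
    """Interpolate an elevation at fractional position (x, y) in a 2D grid
    (list of rows) from the four surrounding texels."""
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, len(grid[0]) - 1), min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bottom = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

# Invented 3x3 elevation grid (metres).
dted = [[100, 110, 120],
        [105, 115, 125],
        [110, 120, 130]]
print(bilinear(dted, 0.5, 0.5))   # 107.5 -- midway between the four corner texels
```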
7

Codrescu, Lucian. "An evaluation of the Pica architecture for an object recognition application". Thesis, Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/15483.

Full text of the source
8

Weingartner, Stephan G. "System development : an algorithmic approach". Virtual Press, 1987. http://liblink.bsu.edu/uhtbin/catkey/483077.

Full text of the source
Abstract:
The subject chosen for this thesis project is the development of an algorithm or methodology for system selection. The specific problem studied involves a procedure to determine which computer system alternative is the best choice for a given user situation. The general problem to be addressed is the need to choose computing hardware, software, systems, or services in a logical approach from a user perspective, considering cost, performance and human factors. Most existing methods consider only cost and performance factors, combining these factors in ad hoc, subjective fashions to reach a selection decision. By not considering factors that measure the effectiveness and functionality of computer services for a user, existing methods ignore some of the most important measures of value to the user. In this work, a systematic and comprehensive approach to computer system selection has been developed. Methods for selecting and organizing various criteria were also developed, and ways to assess the importance and value of different service attributes to an end-user are discussed. Finally, the feasibility of a systematic approach to computer system selection has been proven by establishing a general methodology and by proving it through the demonstration of a specific application.
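The kind of selection procedure described, weighing cost, performance and human factors together, amounts to a weighted multi-criteria score; the sketch below is a hedged illustration with invented criteria weights and candidate ratings, not the thesis's methodology.

```python
# Hypothetical criteria weights reflecting a given user's priorities.
weights = {"cost": 0.30, "performance": 0.30, "effectiveness": 0.25, "human_factors": 0.15}

# Hypothetical 0-10 ratings of each candidate system against each criterion.
candidates = {
    "System A": {"cost": 7, "performance": 8, "effectiveness": 6, "human_factors": 5},
    "System B": {"cost": 5, "performance": 9, "effectiveness": 8, "human_factors": 8},
}

def weighted_score(ratings):
    return sum(weights[c] * ratings[c] for c in weights)

for name, ratings in candidates.items():
    print(f"{name}: {weighted_score(ratings):.2f}")
print("best choice:", max(candidates, key=lambda n: weighted_score(candidates[n])))
```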
9

Ali, Muhammad Usman, and Muhammad Aasim. "Usability Evaluation of Digital Library BTH a case study". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-6140.

Full text of the source
Abstract:
Libraries have for hundreds of years been an important entity for every kind of institute, especially in the educational sector. Now it is the age of computers and the internet. People use electronic resources to fulfill the needs and requirements of their lives, and libraries have therefore also converted to computerized systems. People can access and use library resources just sitting at their computers by using the internet. This modern way of running a library has been given the name of the digital library. Digital libraries are becoming popular for their flexibility of use and because more users can be served at a time. As the number of users increases, issues relevant to interaction also arise when using a digital library's interface and utilizing its e-resources. In this thesis the authors evaluate usability factors and issues in digital libraries, taking as a case study the real, existing system of the digital library at BTH. This thesis report describes digital libraries and how users are being facilitated by them. Usability issues relevant to digital libraries are also discussed. Users have been the main source for evaluating and judging usability issues while interacting with and using this digital library. The results obtained showed dissatisfaction of users regarding the usability of the BTH digital library. The authors used usability evaluation techniques to evaluate the functionality and services provided by the BTH digital library system interface. Moreover, based on the results of our case study, suggestions for improvement of the BTH digital library are presented. Hopefully, these suggestions will help to make the BTH digital library system more usable in an efficient and effective manner for users.
10

Piotrowski, Kathleen Ann. "A Feasibility Evaluation of a Digital Pen and Paper System for Accomplishing Electronic Anesthesia Record-keeping". Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/202988.

Full text of the source
Abstract:
In 2001, the Institute of Medicine stated that one of the parameters needing to be addressed to improve health care was the creation of electronic health records for all patients. This goal has proven to be very challenging to health care providers. Many barriers exist that prevent the goal of computerizing health records, such as high costs, usability problems, interface incompatibility, and fear of change. The purpose of this feasibility project was to evaluate the usefulness and acceptability of a digital pen and paper (DPP) system for anesthesia documentation. The specific DPP technology used in this evaluation was a product developed by Shareable Ink®. Seven certified registered nurse anesthetists (CRNAs) evaluated the DPP system through a cognitive walkthrough procedure. During the cognitive walkthrough, the participants talked aloud as they carried out a series of anesthesia documentation tasks. Just prior to the cognitive walkthrough, participants were given a questionnaire that measured their perceived computer knowledge, attitudes and skills. After the cognitive walkthrough, a second questionnaire was used to determine their satisfaction with the DPP and their opinions about its usefulness in multiple anesthesia work settings. In the second phase of the project, I interviewed other stakeholders in the hospital environment who would also be affected by implementation of a DPP system. This portion of the study was conducted at a community hospital without electronic record-keeping capability. Participation from several departments was sought via contact with hospital administration and department heads. Among the departments targeted for interviews were Information Technology, Chief of Anesthesia, Anesthesia Billing, Medical Records and Nursing. Semi-structured interviews were conducted and the responses of the participants were recorded both as field notes and via audio recording. The intent of this study was to test the feasibility of the digital pen and paper system for various types of anesthesia work environments by means of descriptive, survey and qualitative data analysis. Overall, the device was not only found to be usable by providers but also acceptable to stakeholders. Therefore, this device could be deemed a feasible solution toward implementing and adopting electronic documentation in some anesthesia work settings.
11

De, Pellegrini Martin. "Mobile-based 3D modeling : An indepth evaluation for the application to maintenance and supervision". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298024.

Full text of the source
Abstract:
Indoor environment modeling has become a relevant topic in several application fields, including Augmented, Virtual and Mixed Reality. Furthermore, with the Digital Transformation, many industries have moved toward this technology, trying to generate detailed models of an environment that allow viewers to navigate through it, or mapping surfaces to insert virtual elements in a real scene. Therefore, this thesis project has been conducted with the purpose of reviewing well-established deterministic methods for 3D scene reconstruction and researching the state-of-the-art, such as machine learning-based approaches, and a possible implementation on mobile devices. Initially, we focused on well-established methods such as Structure from Motion (SfM), which uses photogrammetry to estimate camera poses and depth using only RGB images. Lastly, the research has been centered on the most innovative methods that make use of machine learning to predict depth maps and camera poses from a video stream. Most of the methods reviewed are completely unsupervised and are based on a combination of two subnetworks: the disparity network (DispNet) for depth estimation and the pose network (PoseNet) for camera pose estimation. Despite the fact that the results in outdoor applications show high-quality depth maps and reliable odometry, there are still some limitations to the deployment of this technology in indoor environments. Overall, the results are promising.
12

Thakar, Aniruddha. "Visualization feedback from informal specifications". Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-03242009-040810/.

Full text of the source
13

Khayeat, Ali. "Copy-move forgery detection in digital images". Thesis, Cardiff University, 2017. http://orca.cf.ac.uk/107043/.

Full text of the source
Abstract:
The ready availability of image-editing software makes it important to ensure the authenticity of images. This thesis concerns the detection and localization of cloning, or Copy-Move Forgery (CMF), which is the most common type of image tampering, in which part(s) of the image are copied and pasted back somewhere else in the same image. Post-processing can be used to produce more realistic doctored images and thus can increase the difficulty of detecting forgery. This thesis presents three novel methods for CMF detection, using feature extraction, surface fitting and segmentation. The Dense Scale Invariant Feature Transform (DSIFT) has been improved by using a different method to estimate the canonical orientation of each circular block. The Fitting Function Rotation Invariant Descriptor (FFRID) has been developed by using the least squares method to fit the parameters of a quadratic function to each block's curvatures. In the segmentation approach, three different methods were tested: SLIC superpixels, the Bag of Words Image and the Rolling Guidance filter with the multi-thresholding method. We also developed the Segment Gradient Orientation Histogram (SGOH) to describe the gradient of irregularly shaped blocks (segments). The experimental results illustrate that our proposed algorithms can detect forgery in images containing copy-move objects with different types of transformation (translation, rotation, scaling, distortion and combined transformation). Moreover, the proposed methods are robust to post-processing (i.e. blurring, brightness change, colour reduction, JPEG compression, variations in contrast and added noise) and can detect multiple duplicated objects. In addition, we developed a new method to estimate the similarity threshold for each image by optimizing a cost function based on a probability distribution. This method can detect CMF better than using a fixed threshold for all the test images, because our proposed method reduces the false positives and the time required to estimate a threshold for the different images in the dataset. Finally, we used hysteresis to decrease the number of false matches and produce the best possible result.
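The underlying idea of block-based copy-move detection, flagging image regions whose descriptors match, can be sketched with exact matching of raw pixel blocks; this toy example (synthetic image, naive exact matching) stands in for, and is much weaker than, the DSIFT/FFRID/SGOH descriptors developed in the thesis.

```python
from collections import defaultdict
import numpy as np

def naive_copy_move(image, block=8, min_shift=16):
    """Return pairs of block positions with identical pixel content,
    provided the two blocks are at least min_shift pixels apart."""
    seen = defaultdict(list)
    h, w = image.shape
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            seen[image[y:y + block, x:x + block].tobytes()].append((y, x))
    matches = []
    for positions in seen.values():
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):
                dy = positions[i][0] - positions[j][0]
                dx = positions[i][1] - positions[j][1]
                if dy * dy + dx * dx >= min_shift * min_shift:
                    matches.append((positions[i], positions[j]))
    return matches

# Tiny synthetic forgery: copy an 8x8 patch elsewhere in a random image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
img[40:48, 40:48] = img[0:8, 0:8]
print(len(naive_copy_move(img)), "candidate copy-move block pair(s) found")
```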
14

Groves, Michael Peter. "A soliton circuit design system /". Title page, contents and summary only, 1987. http://web4.library.adelaide.edu.au/theses/09PH/09phg884.pdf.

Full text of the source
15

Ghahroodi, Massoud. "Variation and reliability in digital CMOS circuit design". Thesis, University of Southampton, 2014. https://eprints.soton.ac.uk/365136/.

Full text of the source
Abstract:
The silicon chip industry continues to provide devices with feature sizes at Ultra-Deep-Sub-Micron (UDSM) dimensions. This results in higher device density and lower power and cost per function. While this trend is positive, there are a number of negative side effects, including the increased device parameter variation, increased sensitivity to soft errors, and lower device yields. The lifetime of next-generation devices is also decreasing due to lower reliability margins and shorter product lifetimes. This thesis presents an investigation into the challenges of UDSM CMOS circuit design, with a review of the research conducted in this field. This investigation has led to the development of a methodology to determine the timing vulnerability factors of UDSM CMOS that leads to a more realistic definition of the Window of Vulnerability (WOV) for Soft-Error-Rate (SER) computation. We present an implementation of a Radiation-Hardened 32-bit Pipe-lined Processor as well as two novel radiation hardening techniques at Gate-level. We present a Single Event-Upset (SEU) tolerant Flip-Flop design with 38% less power overhead and 25% less area overhead at 65nm technology, compared to the conventional Triple Modular Redundancy (TMR) technique for Flip-Flop design. We also propose an approach for in-field repair (IFR) by trading area for reliability. In the case of permanent faults, spare logic blocks will replace the faulty blocks on the fly. The simulation results show that by tolerating approximately 70% area overhead and less than 18% power overhead, the reliability is increased by a factor of x10 to x100 for various component failure rates.
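For context, the conventional TMR baseline against which the proposed flip-flop is compared stores three copies of each bit and votes on the outputs; the 2-of-3 majority vote reduces to a small Boolean expression (a generic sketch, not the 65nm cell design).

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote over three redundant copies of a value."""
    return (a & b) | (b & c) | (a & c)

# A single upset in one copy is masked by the other two.
stored = 0b1011
upset = stored ^ 0b0100                        # bit flip in one replica
print(bin(tmr_vote(stored, upset, stored)))    # 0b1011 -- the flip is voted out
```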
16

Brower, Bernard V. "Evaluation of digital image compression algorithms for use on laptop computers /". Online version of thesis, 1992. http://hdl.handle.net/1850/11893.

Full text of the source
17

Williams, Dewi L. (Dewi Lloyd) Carleton University Dissertation Engineering Electrical. "A Functional-test specification language". Ottawa, 1988.

Find the full text of the source
18

Lima, Antonio. "Digital traces of human mobility and interaction : models and applications". Thesis, University of Birmingham, 2016. http://etheses.bham.ac.uk//id/eprint/6833/.

Full text of the source
Abstract:
In the last decade digital devices and services have permeated many aspects of everyday life. They generate massive amounts of data that provide insightful information about how people move across geographic areas and how they interact with others. By analysing this detailed information, it is possible to investigate aspects of human mobility and interaction. Therefore, the thesis of this dissertation is that the analysis of mobility and interaction traces generated by digital devices and services, at different timescales and spatial granularity, can be used to gain a better understanding of human behaviour, build new applications and improve existing services. In order to substantiate this statement I develop analytical models and applications supported by three sources of mobility and interaction data: online social networks, mobile phone networks and GPS traces. First, I present three applications related to data gathered from online social networks, namely the analysis of a global rumour spreading in Twitter, the definition of spatial dissemination measures in a social graph and the analysis of collaboration between developers in GitHub. Then I describe two applications of the analysis of country-wide data of cellular phone networks: the modelling of epidemic containment strategies, with the goal of assessing their efficacy in curbing infectious diseases; the definition of a mobility-based measure of individual risk, which can be used to identify who needs targeted treatment. Finally, I present two applications based on GPS traces: the estimation of trajectories from spatially-coarse temporally-sparse location traces and the analysis of routing behaviour in urban settings.
19

Betts, Thomas. "An investigation of the digital sublime in video game production". Thesis, University of Huddersfield, 2014. http://eprints.hud.ac.uk/id/eprint/25020/.

Full text of the source
Abstract:
This research project examines how video games can be programmed to generate the sense of the digital sublime. The digital sublime is a term proposed by this research to describe experiences where the combination of code and art produces games that appear boundless and autonomous. The definition of this term is arrived at by building on various texts and literature such as the work of Kant, Deleuze and Wark and on video games such as Proteus, Minecraft and Love. The research is based on the investigative practice of my work as an artist-programmer and demonstrates how games can be produced to encourage digitally sublime scenarios. In the three games developed for this thesis I employ computer code as an artistic medium, to generate games that explore permutational complexity and present experiences that walk the margins between confusion and control. The structure of this thesis begins with a reading of the Kantian sublime, which I introduce as the foundation for my definition of the digital sublime. I then combine this reading with elements of contemporary philosophy and computational theory to establish a definition applicable to the medium of digital games. This definition is used to guide my art practice in the development of three games that examine different aspects of the digital sublime such as autonomy, abstraction, complexity and permutation. The production of these games is at the core of my research methodology and their development and analysis is used to produce contributions in the following areas. 1. New models for artist-led game design. This includes methods that re-contextualise existing aesthetic forms such as futurism, synaesthesia and romantic landscape through game design and coding. It also presents techniques that merge visuals and mechanics into a format developed for artistic and philosophical enquiry. 2. The development of new procedural and generative techniques in the programming of video games. This includes the implementation of a realtime marching cubes algorithm that generates fractal noise filtered terrain. It also includes a versatile three-dimensional space packing architectural construction algorithm. 3. A new reading of the digital sublime. This reading draws from the Kantian sublime and the writings of Deleuze, Wark and De Landa in order to present an understanding of the digital sublime specific to the domain of art practice within video games. These contributions are evidenced in the writing of this thesis and in the construction of the associated portfolio of games.
20

Ligon, Walter Batchelor III. "An empirical evaluation of architectural reconfigurability". Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/8204.

Full text of the source
21

Jayasinghe, U. U. K. "Trust evaluation in the IoT environment". Thesis, Liverpool John Moores University, 2018. http://researchonline.ljmu.ac.uk/9451/.

Full text of the source
Abstract:
Along with the many benefits of IoT, its heterogeneity brings a new challenge in establishing a trustworthy environment among the objects, due to the absence of proper enforcement mechanisms. Further, it can be observed that these challenges are often addressed only in terms of the security and privacy matters involved. However, such common network security measures are not adequate to preserve the integrity of information and services exchanged over the internet. Hence, they remain vulnerable to threats ranging from the risks of data management at the cyber-physical layers to potential discrimination at the social layer. Therefore, trust in the IoT can be considered a key property to enforce among objects in order to guarantee trustworthy services. Typically, trust revolves around assurance and confidence that people, data, entities, information, or processes will function or behave in expected ways. However, trust enforcement in an artificial society like the IoT is far more difficult, as things do not have an inherent judgmental ability to assess risks and other influencing factors to evaluate trust as humans do. Hence, it is important to quantify the perception of trust such that it can be understood by artificial agents. In computer science, trust is considered a computational value depicted by a relationship between trustor and trustee, described in a specific context, measured by trust metrics, and evaluated by a mechanism. Several trust evaluation mechanisms can be found in the literature. Among them, most of the work has deviated towards security and privacy issues instead of considering the universal meaning of trust and its dynamic nature. Furthermore, they lack a proper trust evaluation model and management platform that addresses all aspects of trust establishment. Hence, it is almost impossible to bring all these solutions to one place and develop a common platform that resolves end-to-end trust issues in a digital environment. Therefore, this thesis attempts to fill these gaps through the following research work. First, this work proposes concrete definitions to formally identify trust as a computational concept, along with its characteristics. Next, a well-defined trust evaluation model is proposed to identify, evaluate and create trust relationships among objects for calculating trust. Then a trust management platform is presented, identifying the major tasks of the trust enforcement process, including trust data collection, trust data management, trust information analysis, dissemination of trust information and trust information lifecycle management. Next, the thesis proposes several approaches to assess trust attributes and thereby the trust metrics of the above model for trust evaluation. Further, to minimize dependence on human interaction in evaluating trust, an adaptive trust evaluation model based on machine learning techniques is presented. From a standardization point of view, the scope of the current standards on network security and cybersecurity needs to be expanded to take trust issues into consideration. Hence, this thesis has provided several inputs towards standardization on trust, including a computational definition of trust, a trust evaluation model targeting both object and data trust, and a platform to manage the trust evaluation process.
22

Pattanaphanchai, Jarutas. "Trustworthiness of Web information evaluation framework". Thesis, University of Southampton, 2014. https://eprints.soton.ac.uk/370596/.

Full text of the source
Abstract:
Assessing the quality of information on the Web is a challenging issue for at least two reasons. Firstly, there is little control over publishing quality. Secondly, when assessing the trustworthiness of Web pages, users tend to base their judgements upon subjective criteria such as the visual presentation of the website, rather than rigorous criteria such as the author's qualifications or the source's review process. As a result, Web users tend to make incorrect assessments of the trustworthiness of the Web information they are consuming. Also, they are uncertain of their ability to make a decision whether to trust information they are not familiar with. This research addresses this problem by collecting and presenting metadata based on useful practice trustworthiness criteria, in order to support the users' evaluation process for assessing the trustworthiness of Web information during their information seeking processes. In this thesis, we propose the Trustworthiness of Web Information Evaluation (TWINE) application framework, and present a prototype tool that employs this framework for a case study of academic publications. The framework gathers and provides useful information that can support users' judgments of the trustworthiness of Web information. The framework consists of two layers: the presentation layer and the logic layer. The presentation layer is composed of input and output modules, which are the modules that interface with the users. The logic layer consists of the trustworthiness criteria and metadata creation modules. The trustworthiness criteria module is composed of four basic criteria, namely authority, accuracy, recency and relevance. Each criterion consists of items, called indicators, used to indicate the trustworthiness of Web information against that criterion. The metadata creation module gathers and integrates metadata based on the proposed criteria, which are then used in the output module in order to generate the supportive information for users. The framework was evaluated, based on the tool, using an empirical study. The study set a scenario in which new postgraduate students search for publications to use in their reports using the developed tool. The students were then asked to complete a questionnaire, which was then analysed using quantitative and qualitative methods. The results from the questionnaire show that the confidence level of users when evaluating the trustworthiness of Web information does increase if they obtain useful supportive information about that Web information. The mean of the confidence level of their judgments increases by 12.51 percentage points. Additionally, the number of selected pieces of Web information used in their work does increase when supportive information is provided. The number of pieces of Web information selected by the users increases on average by less than one percentage point. Participating users were satisfied with the supportive information, insofar as it helps them to evaluate the trustworthiness of Web information, with a mean satisfaction level of 3.69 out of 5 points. Overall, the supportive information provided, based on and provided by the framework, can help users to adequately evaluate the trustworthiness of Web information.
23

Ho, Chun-yin. "Group-based checkpoint/rollback recovery for large scale message-passing systems". Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/B39794052.

Full text of the source
24

Vasudevan, Vivek. "Evaluation of the separation involved in recycling end-of-life (EOL) electronic equipment". Morgantown, W. Va. : [West Virginia University Libraries], 2004. https://etd.wvu.edu/etd/controller.jsp?moduleName=documentdata&jsp%5FetdId=45.

Full text of the source
Abstract:
Thesis (M.S.)--West Virginia University, 2004.
Title from document title page. Document formatted into pages; contains xi, 92 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 74-76).
25

Saliou, Lionel. "Network firewall dynamic performance evaluation and formalisation". Thesis, Edinburgh Napier University, 2009. http://researchrepository.napier.ac.uk/Output/2599.

Full text of the source
Abstract:
Computer network security is key to the daily operations of an organisation, its growth and its future. It is unrealistic for an organisation to devote all of its resources to computer network security, but equally an organisation must be able to determine whether its security policy is achievable and under which criteria. Yet, it is not often possible for an organisation: to define its security policy, especially to fully comply with the laws of the land; ensure the actual implementation on network devices; and finally audit the overall system for compliance. This thesis argues that one of the obstacles to the complete realisation of such an Integrated Security Framework is the lack of deep understanding, in particular in terms of dynamic performance, of the network devices on which the security policy will be deployed. Thus, one novelty of this research is a Dynamic Evaluation Environment for Network Security that allows the identification of the strengths and weaknesses of networked security devices, such as in network firewalls. In turn, it enables organisations to model the dynamic performance impact of security policies deployed on these devices, as well as identifying the benefit of various implementation choices, or prioritisations. Hence, this novel evaluation environment allows the creation of instances of a network firewall dynamic performance model, and this modelling is part of the Integrated Security Framework, thus enabling it to highlight when particular security requirements cannot be met by the underlying systems, or how best to achieve the objectives. More importantly, perhaps, the evaluation environment enables organisations to comply with up-coming legislation that increases an organisation's legal cover, which demands consistent and scientific evidence of fitness prior to security incidents. Dynamic evaluations produce a large amount of raw data and this often does not allow for a comprehensive analysis and interpretation of the results obtained. Along with this, it is necessary to relate the data collected to a dynamic firewall performance model. To overcome this, this research proposes a unique formalisation of the inputs and outputs of the proposed model, and this, in turn, allows for performance analysis from multiple view-points, such as: the increase security requirements in the form of larger rule-set sizes; effects of changes in terms of the underlying network equipment; or the complexity of filtering. These view-points are considered as evaluation scenarios and also have unique formalisations. Evaluations focused on two types of network firewalls and key findings include the fact that strong security policy overhead can be kept acceptable on embedded firewalls provided that out-going filtering is used. Along with this, dynamic evaluation allows the identification of the additional performance impact of unoptimised configurations, and such findings complement work that focuses on the logical properties of network firewalls. Also, these evaluations demonstrate the need for scientific rigour as the data show that the embedded and software network firewalls evaluated have different areas of strengths and weaknesses. Indeed, it appears that software firewalls are not as affected as embedded firewalls by the complexity of filtering. On the other hand, the number of rules software firewalls enforce is the main performance factor, especially for high network speeds.
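The effect highlighted by the evaluations, that rule-set size and where traffic matches in the list dominate filtering cost on some firewalls, can be illustrated with a toy first-match linear-scan model; the rules and traffic below are invented and this is not the evaluation environment itself.

```python
def build_ruleset(n):
    """n hypothetical accept rules keyed on (protocol, destination port)."""
    return [("tcp", port) for port in range(1024, 1024 + n)]

def filter_packet(rules, packet):
    """Linear first-match scan; returns (decision, comparisons made)."""
    for i, rule in enumerate(rules, start=1):
        if rule == packet:
            return "accept", i
    return "drop", len(rules)

for n in (100, 1000, 5000):
    rules = build_ruleset(n)
    first = filter_packet(rules, rules[0])[1]      # traffic matching the first rule
    last = filter_packet(rules, rules[-1])[1]      # traffic matching the last rule
    miss = filter_packet(rules, ("udp", 53))[1]    # traffic matching no rule
    print(f"{n:5d} rules: first={first}, last={last}, no match={miss} comparisons")
```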
26

Pelletingeas, Christophe. "Performance evaluation of virtualization with cloud computing". Thesis, Edinburgh Napier University, 2010. http://researchrepository.napier.ac.uk/Output/4010.

Full text of the source
Abstract:
Cloud computing has been the subject of much research. Research shows that cloud computing permits a reduction in hardware costs, reduces energy consumption and allows a more efficient use of servers. Nowadays many servers are used inefficiently because they are underutilized. The use of cloud computing associated with virtualization has been a solution to the underutilisation of those servers. However, virtualization with cloud computing cannot offer performance equal to native performance. The aim of this project was to study the performance of virtualization with cloud computing. To meet this aim, previous research in this area was first reviewed. The different types of cloud toolkits were outlined, as well as the different ways available to virtualize machines. In addition, open source solutions available to implement a private cloud were examined. The findings of the literature review were used in the design of the different experiments and also in the choice of the tools used to implement a private cloud. In the design and implementation phase, experiments were set up to evaluate the performance of public and private clouds. The results obtained through those experiments outline the performance of the public cloud and show that the virtualization of Linux gives better performance than the virtualization of Windows. This is explained by the fact that Linux uses paravirtualization while Windows uses HVM. The evaluation of performance on the private cloud permitted the comparison of native performance with paravirtualization and HVM. It was seen that paravirtualization has performance really close to native performance, contrary to HVM. Finally, the cost of the different solutions and their advantages are presented.
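The comparison reported, native versus paravirtualized versus HVM, comes down to overhead relative to the native baseline; here is a small sketch with made-up benchmark scores (assuming a higher-is-better metric), not the thesis's measurements.

```python
def overhead(native, virtualized):
    """Percentage slowdown relative to the native baseline
    (a higher-is-better benchmark score is assumed)."""
    return 100.0 * (native - virtualized) / native

# Hypothetical throughput scores from the same benchmark on one host.
results = {"native": 1000.0, "paravirtualization": 950.0, "HVM": 780.0}

for mode in ("paravirtualization", "HVM"):
    print(f"{mode}: {overhead(results['native'], results[mode]):.1f}% overhead")
```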
27

Celestini, Alessandro. "On the analysis and evaluation of trust and reputation systems". Thesis, IMT Alti Studi Lucca, 2013. http://e-theses.imtlucca.it/114/1/Celestini_phdthesis.pdf.

Full text of the source
Abstract:
In recent years, we have witnessed an increasing use of trust and reputation systems in different areas of ICT. The idea at the base of trust and reputation systems is to let users rate the provided services after each interaction. Other users may use the aggregate ratings to compute reputation scores for a given party. The computed reputation scores are a collective measure of parties' trustworthiness and are used to drive parties' interactions. Due to the widespread use of reputation systems, research work on them is intensifying and several models have been proposed. This calls for a methodology for the analysis and evaluation of trust and reputation systems that can help researchers and developers in studying, designing and implementing such systems. In this thesis we propose different kinds of theoretical results and software tools that could be useful means for researchers and developers in the area of trust and reputation systems. Our work addresses the three main stages of trust and reputation systems development, namely study, design and implementation. We provide: 1) a general framework based on Bayesian decision theory for the assessment of trust and reputation models, 2) an analysis methodology for reputation systems based on a coordination language, 3) a software tool for network-aware evaluation of reputation systems and their rapid prototyping.
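A standard Bayesian building block for this kind of assessment is the Beta reputation model, in which binary ratings update a Beta posterior whose mean is taken as the reputation score; the sketch below is a generic illustration, not the framework or tool proposed in the thesis.

```python
def beta_reputation(positive, negative, prior_a=1.0, prior_b=1.0):
    """Expected trustworthiness under a Beta(prior_a + r, prior_b + s) posterior,
    where r and s count positive and negative ratings."""
    a = prior_a + positive
    b = prior_b + negative
    return a / (a + b)

# Hypothetical rating histories for two service providers.
print(f"provider A: {beta_reputation(90, 10):.3f}")   # many, mostly positive ratings
print(f"provider B: {beta_reputation(3, 0):.3f}")     # few ratings, weaker evidence
```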
28

Rudman, Hannah. "A framework for the transformation of the creative industries in a digital age". Thesis, Edinburgh Napier University, 2015. http://researchrepository.napier.ac.uk/Output/8866.

Full text of the source
Abstract:
The creative industries sector faces a constantly changing context characterised by the speed of the development and deployment of digital information systems and Information Communications Technologies (ICT) on a global scale. This continuous digital disruption has had significant impact on the whole value chain of the sector: creation and production; discovery and distribution; and consumption of cultural goods and services. As a result, creative enterprises must evolve business and operational models and practices to be sustainable. Enterprises of all scales, types, and operational models are affected, and all sectors face ongoing digital disruption. Management consultancy practitioners and business strategy academics have called for new strategy development frameworks and toolkits, fit for a continuously changing world. This thesis investigates a novel approach to organisational change appropriate to the digital age, in the context of the creative sector in Scotland. A set of concepts, methods, tools, and processes to generate theoretical learning and practical knowing was created to support enterprises to digitally adapt through undertaking journeys of change and organisational development. The framework is called The AmbITion Approach. It was developed by blending participatory action research (PAR) methods and modern management consultancy, design, and creative practices. Empirical work also introduced Coghlan and Rashford's change categories to the framework. These enabled the definition and description of the extent to which organisations developed: whether they experienced first order (change), second order (adaptation) or third order (transformation) change. Digital research tools for inquiry were tested in a pilot study, and then embedded in a longitudinal study over two years of twenty-one participant organisations from Scotland's creative sector. The author applied and investigated the novel approach in a national digital development programme for Scotland's creative industries. The programme was designed and delivered by the author and ran nationally between 2012 and 2014. Detailed grounded thematic analysis of the data corpus was undertaken, along with analysis of rich media case studies produced by the organisations about their change journeys. The results of studies on participants, and validation criteria applied to the results, demonstrated that the framework triggers second order (adaptation) and third order (transformation) change in creative industry enterprises. The AmbITion Approach framework is suitable for the continuing landscape of digital disruption within the creative sector. The thesis contributes to practice the concepts, methods, tools, and processes of The AmbITion Approach, which have been empirically tested in the field and validated as a new framework for business transformation in a digital age. The thesis contributes to knowledge a theoretical and conceptual framework with a specific set of constructs and criteria that define first, second, and third order change in creative enterprises, and a robust research and action framework for the analysis of the quality, validity and change achieved by action research based development programmes. The thesis additionally contributes to the practice of research, adding to our understanding of the value of PAR and design thinking approaches and creative practices as methods for change.
29

Kwon, Hyosun. "From ephemerality to delicacy : applying delicacy in the design space of digital gifting". Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/46705/.

Full text of the source
Abstract:
We encounter uncountable ephemeral phenomena in everyday life. Some of them are particularly appreciated for their ungraspable beauty and limited availability. From the outset, one strand of computing technology has evolved to encapsulate and preserve this transient experience. A myriad of digital devices has been developed to capture fleeting moments and to store them as digital files for later use, editing, sharing, and distribution. On the other hand, a portion of Human-Computer Interaction (HCI) research has engaged in adopting the transience of temporal phenomena in the design of interactive computing systems. Some computer and mobile applications metaphorically adopt ephemerality in graphical elements or functions that resemble real-world experiences, such as forgetting, or real-time conversation that naturally fades away immediately. Interactive artefacts or installations often incorporate ephemeral materials for abstract and artistic expression. Therefore, ephemeral artefacts or phenomena are often employed as a passive design element in ambient and peripheral interactions rather than in applications for practical purposes. However, ephemeral materials also engender experiences of a non-ambient nature. Some materials are physically fragile, only lasting for a brief moment, and therefore require constant care to retain their status, which might lead to highly focused attention, delicate interaction, and even a tense experience. This thesis aims to investigate how to harness the fleeting and irreversible features of ephemeral artefacts in the design of practical products and services. This PhD builds on the methods of design-oriented HCI research. Thus, this thesis presents a research process that involves a series of challenges: to initially frame a design problem in a fertile area for exploration; speculate a preferred situation; develop proof-of-concept prototypes to demonstrate the potential solution; and evaluate the prototypes through a user study. Contributions of this PhD have been visualised by the outputs from multiple design studies. First, this thesis illustrates how the concept of ephemerality is currently understood in HCI. It then proposes a different approach to the use of ephemeral materials by shifting the focus to delicacy. The first design study introduces FugaciousFilm, a soap-film-based interactive touch display that shifted ephemerality from a user's periphery to the focal point of interaction. The prototype is a platform for manifesting ephemeral interactions by inducing subtly delicate experiences. By demonstrating that ephemeral interactions reinforce the user's attention, delicacy was identified as an attribute of user experience. Through this understanding of the use of delicacy, the research focus moved from exploring how an individual ephemeral material can be utilised in interaction design, to harnessing the delicacy of such materials in experience design that benefits Human-Computer Interaction. Thus, this thesis recaptures digital gift wrapping as a context by reviewing the current state of affairs in digital gifting in the field of HCI and design. A 5-stage gifting framework has been synthesised from the literature review and guided this PhD throughout the studies. The framework ought to be seen as a significant contribution in its own right. Based on this framework, a series of interviews was conducted to identify any weaknesses that reside in current media platforms, digital devices, and different modes of interaction.
Hence, 'unwrapping a digital gift' has been captured as a gap in the design space that could be reinforced by a delicate, ephemeral interaction. Therefore, this PhD proposes Hybrid Gift, a series of proof-of-concept prototypes that demonstrates digital gift wrappings. Hybrid Gift has been probed in a semi-structured design workshop to examine the use of delicacy and ephemerality in the design of digital gifting practices. The prototypes were designed to retrieve not only the unwrapping experience but also rituals around gift exchange. Therefore, this thesis discusses design implications of the findings that emerged throughout the study. Digital gifting is still an under-explored research area that is worthwhile to investigate through field work. Thus, the design implications and the framework are proposed to researchers and designers who wish to engage in the arena of digital gifting, and more broadly in social user experience, and communication service and system design. From a macroscopic perspective, we are experiencing fleeting moments every second, minute, and day. However, they are rarely noticed unless we recognise that time passes irreversibly. This thesis extracted delicacy as a feature of ephemeral interactions and argued that it holds the potential to augment and enhance mundane experiences mediated by digital technology. In so doing, the series of design studies has conceptually influenced the design perspective to shift from material-oriented design to experience-focused design research. The design space of digital gifting would not have been recognised without the hands-on design practices in the process of this PhD. Finally, the proof-of-concept prototypes, framework, and design implications are thought to be of significance and value to design students, researchers, and designers who want to employ similar methods and approaches in design research.
30

Blair, Stuart Andrew. "On the classification and evaluation of prefetching schemes". Thesis, University of Glasgow, 2003. http://theses.gla.ac.uk/2274/.

Full text of the source
31

Miehling, Mathew J. "Correlation of affiliate performance against web evaluation metrics". Thesis, Edinburgh Napier University, 2014. http://researchrepository.napier.ac.uk/Output/7250.

Full text of the source
Abstract:
Affiliate advertising is changing the way that people do business online. Retailers are now offering incentives to third-party publishers for advertising goods and services on their behalf in order to capture more of the market. Online advertising spending has already overtaken that of traditional advertising in all other channels in the UK and is slated to do so worldwide as well [1]. In this highly competitive industry, the livelihood of a publisher is intrinsically linked to their web site performance. Understanding the strengths and weaknesses of a web site is fundamental to improving its quality and performance. However, the definition of performance may vary between different business sectors or even different sites in the same sector. In the affiliate advertising industry, the measure of performance is generally linked to the fulfilment of advertising campaign goals, which often equates to the ability to generate revenue or brand awareness for the retailer. This thesis aims to explore the correlation of web site evaluation metrics to the business performance of a company within an affiliate advertising programme. In order to explore this correlation, an automated evaluation framework was built to examine a set of web sites from an active online advertising campaign. A purpose-built web crawler examined over 4,000 sites from the advertising campaign in approximately 260 hours, gathering data to be used in the examination of URL similarity, URL relevance, search engine visibility, broken links, broken images and presence on a blacklist. The gathered data was used to calculate a score for each of the features, which were then combined to create an overall HealthScore for each publisher. The evaluated metrics focus on the categories of domain and content analysis. From the performance data available, it was possible to calculate the business performance for the 234 active publishers using the number of sales and click-throughs they achieved. When the HealthScores and performance data were compared, the HealthScore was able to predict the publisher's performance with 59% accuracy.
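Combining per-feature scores into a HealthScore and checking how it tracks business performance can be sketched as a weighted sum followed by a correlation; the weights, feature scores and sales figures below are invented, and the thesis's actual weighting scheme is not reproduced here.

```python
# Hypothetical feature weights for an overall HealthScore in the range 0-1.
weights = {"url_relevance": 0.25, "search_visibility": 0.30,
           "broken_links": 0.20, "broken_images": 0.10, "blacklisted": 0.15}

def health_score(features):
    """Weighted combination of per-feature scores (each already in 0-1)."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical publishers: feature scores plus the sales they achieved.
publishers = {
    "site-a": ({"url_relevance": 0.9, "search_visibility": 0.8, "broken_links": 0.95,
                "broken_images": 1.0, "blacklisted": 1.0}, 120),
    "site-b": ({"url_relevance": 0.4, "search_visibility": 0.3, "broken_links": 0.6,
                "broken_images": 0.8, "blacklisted": 1.0}, 15),
    "site-c": ({"url_relevance": 0.7, "search_visibility": 0.6, "broken_links": 0.8,
                "broken_images": 0.9, "blacklisted": 0.0}, 40),
}

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

scores = [health_score(f) for f, _ in publishers.values()]
sales = [s for _, s in publishers.values()]
print("HealthScores:", [round(s, 2) for s in scores])
print("correlation with sales:", round(pearson(scores, sales), 2))
```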
32

Maguire, Joseph Noel. "An ecologically valid evaluation of an observation-resilient graphical authentication mechanism". Thesis, University of Glasgow, 2013. http://theses.gla.ac.uk/4708/.

Full text of the source
Abstract:
Alphanumeric authentication, by means of a secret, is not only a powerful mechanism, in theory, but prevails over all its competitors in reality. Passwords, as they are more commonly known, have the potential to act as a fairly strong gateway. In practice, though, password usage is problematic. They are (1) easily shared, (2) trivial to observe and (3) maddeningly elusive when forgotten. Moreover, modern consumer devices only exacerbate the problems of passwords as users enter them in shared spaces, in plain view, on television screens, on smartphones and on tablets. Asterisks may obfuscate alphanumeric characters on entry, but popular systems, e.g. the Apple iPhone and Nintendo Wii, require the use of an on-screen keyboard for character input. A number of alternatives to passwords have been proposed but none, as yet, have been adopted widely. There seems to be a reluctance to switch from tried and tested passwords to novel alternatives, even if the most glaring flaws of passwords can be mitigated. One argument is that there has not been sufficient investigation into the feasibility of the password alternatives and thus no convincing evidence that they can indeed act as a viable alternative. Graphical authentication mechanisms, solutions that rely on images rather than characters, are a case in point. Pictures are more memorable than the words that name them, meaning that graphical authentication mitigates one of the major problems with passwords. This dissertation sets out to investigate the feasibility of one particular observation-resilient graphical authentication mechanism called Tetrad. The authentication mechanism attempted to address two of the core problems with passwords: improved memorability and resistance to observation (with on-screen entry). Tetrad was tested in a controlled lab study that delivered promising results and was well received by the evaluators. It was then deployed in a realistic context and its viability tested in three separate field tests. The unfortunate conclusion was that Tetrad, while novel and viable in a lab setting, failed to deliver a usable and acceptable experience to the end users. This thorough testing of an alternative authentication mechanism is unusual in this research field and the outcome is disappointing. Nevertheless, it acts to inform inventors of other authentication mechanisms of the problems that can manifest when a seemingly viable authentication mechanism is tested in the wild.
33

Almutiq, Mutiq Mohammed. "An evaluation model for information security strategies in healthcare data systems". Thesis, Keele University, 2018. http://eprints.keele.ac.uk/5595/.

Full text of the source
Abstract:
This thesis presents a newly developed evaluation model, EMISHD (an "Evaluation Model for Information Security Strategies in Healthcare Data Systems"), which addresses the specific requirements of information security in the healthcare sector. Based on a systematic literature review and a case study, the information security requirements and the existing evaluation models used to examine the information security strategies of healthcare data systems have been analysed. The requirements of information security in any sector generally vary in line with changes in laws and regulations, and the emergence of new technologies and threats, which require existing information security strategies to be strengthened to deal with new challenges. The systematic review of the existing evaluation models identified from previous research resulted in the development of a new evaluation model (EMISHD) specifically designed to examine the information security strategies in healthcare data systems according to these specific requirements. A case study of a healthcare organisation in Saudi Arabia was conducted in order to apply the newly developed evaluation model (EMISHD) in a real-life case and to validate the evaluation results through observation.
34

Ho, Chun-yin, and 何俊賢. "Group-based checkpoint/rollback recovery for large scale message-passing systems". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B39794052.

Full text of the source
35

Leelanupab, Teerapong. "A ranking framework and evaluation for diversity-based retrieval". Thesis, University of Glasgow, 2012. http://theses.gla.ac.uk/3442/.

Full text of the source
Abstract:
There has been growing momentum in building information retrieval (IR) systems that consider both relevance and diversity of retrieved information, which together improve the usefulness of search results as perceived by users. Some users may genuinely require a set of multiple results to satisfy their information need as there is no single result that completely fulfils the need. Others may be uncertain about their information need and they may submit ambiguous or broad (faceted) queries, either intentionally or unintentionally. A sensible approach to tackle these problems is to diversify search results to address all possible senses underlying those queries or all possible answers satisfying the information need. In this thesis, we explore three aspects of diversity-based document retrieval: 1) recommender systems, 2) retrieval algorithms, and 3) evaluation measures. The first goal of this thesis is to provide an understanding of the need for diversity in search results from the users’ perspective. We develop an interactive recommender system for the purpose of a user study. Designed to facilitate users engaged in exploratory search, the system features content-based browsing, aspectual interfaces, and diverse recommendations. While the diverse recommendations allow users to discover more and different aspects of a search topic, the aspectual interfaces allow users to manage and structure their own search process and results regarding aspects found during browsing. The recommendation feature mines implicit relevance feedback information extracted from a user’s browsing trails and diversifies recommended results with respect to document contents. The result of our user-centred experiment shows that result diversity is needed in realistic retrieval scenarios. Next, we propose a new ranking framework for promoting diversity in a ranked list. We combine two distinct result diversification patterns; this leads to a general framework that enables the development of a variety of ranking algorithms for diversifying documents. To validate our proposal and to gain more insights into approaches for diversifying documents, we empirically compare our integration framework against a common ranking approach (i.e. the probability ranking principle) as well as several diversity-based ranking strategies. These include maximal marginal relevance, modern portfolio theory, and sub-topic-aware diversification based on sub-topic modelling techniques, e.g. clustering, latent Dirichlet allocation, and probabilistic latent semantic analysis. Our findings show that the two diversification patterns can be employed together to improve the effectiveness of ranking diversification. Furthermore, we find that the effectiveness of our framework mainly depends on the effectiveness of the underlying sub-topic modelling techniques. Finally, we examine evaluation measures for diversity retrieval. We analytically identify an issue affecting the de facto standard measure, novelty-biased discounted cumulative gain (α-nDCG). This issue prevents the measure from behaving as desired, i.e. assessing the effectiveness of systems that provide complete coverage of sub-topics by avoiding excessive redundancy. We show that this issue is of importance as it significantly affects the evaluation of retrieval systems, specifically by overrating top-ranked systems that repeatedly retrieve redundant information. To overcome this issue, we derive a theoretically sound solution by defining a safe threshold on a per-query basis.
We examine the impact of arbitrary settings of the α-nDCG parameter. We evaluate the intuitiveness and reliability of α-nDCG when using our proposed setting on both real and synthetic rankings. We demonstrate that the diversity of document rankings can be intuitively measured by employing the safe threshold. Moreover, our proposal does not harm, but instead increases the reliability of the measure in terms of discriminative power, stability, and sensitivity.
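For reference, below is a minimal sketch of the standard α-nDCG computation that the abstract analyses: a document's gain for a sub-topic is discounted by (1-α) for every higher-ranked document already covering that sub-topic, and the result is normalised by an ideal ranking's score. The thesis's per-query safe threshold is not reproduced here, and the example judgments are invented.

```python
import math

def alpha_dcg(ranking, alpha=0.5):
    """ranking: list of sets; each set holds the sub-topic ids a document covers."""
    seen = {}          # sub-topic id -> number of earlier documents covering it
    score = 0.0
    for k, subtopics in enumerate(ranking, start=1):
        gain = sum((1 - alpha) ** seen.get(s, 0) for s in subtopics)
        score += gain / math.log2(k + 1)
        for s in subtopics:
            seen[s] = seen.get(s, 0) + 1
    return score

def alpha_ndcg(ranking, ideal_ranking, alpha=0.5):
    ideal = alpha_dcg(ideal_ranking, alpha)
    return alpha_dcg(ranking, alpha) / ideal if ideal > 0 else 0.0

# A ranking that repeats sub-topic 1 before covering sub-topic 3 scores below 1.
ideal = [{1, 2}, {3}, {1}]
redundant = [{1, 2}, {1}, {3}]
print(alpha_ndcg(redundant, ideal), alpha_ndcg(ideal, ideal))
```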
36

Song, Chunlin. "A robust region-adaptive digital image watermarking system". Thesis, Liverpool John Moores University, 2012. http://researchonline.ljmu.ac.uk/6122/.

Full text of the source
Abstract:
Digital image watermarking techniques have drawn the attention of researchers and practitioners as a means of protecting copyright in digital images. The technique involves a subset of information-hiding technologies, which work by embedding information into a host image without perceptually altering the appearance of the host image. Despite progress in digital image watermarking technology, the main objectives of the majority of research in this area remain improvements in the imperceptibility and robustness of the watermark to attacks. Watermark attacks are often deliberately applied to a watermarked image in order to remove or destroy any watermark signals in the host data. The purpose of the attack is to disable the copyright protection system offered by watermarking technology. Our research in the area of watermark attacks found a number of different types, which can be classified into a number of categories including removal attacks, geometry attacks, cryptographic attacks and protocol attacks. Our research also found that both pixel domain and transform domain watermarking techniques share similar levels of sensitivity to these attacks. The experiment conducted to analyse the effects of different attacks on watermarked data led us to the conclusion that each attack affects the high and low frequency parts of the watermarked image spectrum differently. Furthermore, the findings also showed that the effects of an attack can be alleviated by using a watermark image with a similar frequency spectrum to that of the host image. The results of this experiment led us to a hypothesis that would be proven by applying a watermark embedding technique which takes into account all of the above phenomena. We call this technique 'region-adaptive watermarking'. Region-adaptive watermarking is a novel embedding technique where the watermark data is embedded in different regions of the host image. The embedding algorithms use discrete wavelet transforms and a combination of discrete wavelet transforms and singular value decomposition, respectively. This technique is derived from the earlier hypothesis that the robustness of a watermarking process can be improved by using watermark data whose frequency spectrum is not too dissimilar to that of the host data. To facilitate this, the technique utilises dual watermarking technologies and embeds parts of the watermark images into selected regions of the host image. Our experiment shows that our technique improves the robustness of the watermark data to image processing and geometric attacks, thus validating the earlier hypothesis. In addition to improving the robustness of the watermark to attacks, we can also show a novel use for the region-adaptive watermarking technique as a means of detecting whether certain types of attack have occurred. This is a unique feature of our watermarking algorithm, which separates it from other state-of-the-art techniques. The watermark detection process uses coefficients derived from the region-adaptive watermarking algorithm in a linear classifier. The experiment conducted to validate this feature shows that, on average, 94.5% of all watermark attacks can be correctly detected and identified.
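The abstract embeds watermark data in wavelet-domain regions of the host image. The sketch below shows one plausible building block: quantisation-based embedding of watermark bits into a single Haar DWT detail sub-band using PyWavelets. The sub-band choice, the quantisation step and the omission of the SVD stage and region selection are simplifying assumptions, not the thesis's algorithm.

```python
# Minimal sketch of wavelet-domain watermark embedding via coefficient
# quantisation (parity of the quantised multiple encodes each bit).
# A real system would also round/clip the stego image to valid pixel values.
import numpy as np
import pywt

def embed(host: np.ndarray, bits, step: float = 8.0) -> np.ndarray:
    cA, (cH, cV, cD) = pywt.dwt2(host.astype(float), "haar")
    flat = cH.flatten()
    for i, b in enumerate(bits):
        q = np.round(flat[i] / step)          # nearest multiple of the step
        if int(q) % 2 != b:                   # adjust parity to encode the bit
            q += 1
        flat[i] = q * step
    return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), "haar")

def extract(stego: np.ndarray, n_bits: int, step: float = 8.0):
    _, (cH, _, _) = pywt.dwt2(stego.astype(float), "haar")
    return [int(np.round(c / step)) % 2 for c in cH.flatten()[:n_bits]]

host = np.random.randint(0, 256, (64, 64))
bits = [1, 0, 1, 1, 0, 0, 1, 0]
print(extract(embed(host, bits), len(bits)) == bits)
```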
37

Chen-Wilson, Lisha. "eCert : a secure and user centric edocument transmission protocol : solving the digital signing practical issues". Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/369983/.

Full text of the source
Abstract:
Whilst our paper-based records and documents are gradually being digitized, security concerns about how such electronic data is stored, transmitted, and accessed have increased rapidly. Although the traditional digital signing method can be used to provide integrity, authentication, and non-repudiation for signed eDocuments, this method does not address all requirements, such as fine-grained access control and content status validation. What is more, information owners have increasing demands regarding their rights of ownership. Therefore, a secure user-centric eDocument management system is essential. Through a case study of a secure and user-centric electronic qualification certificate (eCertificate) system, this dissertation explores the issues and the technology gaps; it identifies existing services that can be re-used and the services that require further development; it proposes a new signing method and the corresponding system framework which solves the problems identified. In addition to tests that have been carried out for the newly designed eCertificate system to be employed under the selected ePortfolio environments, the abstract protocol (named eCert protocol) has also been applied and evaluated in two other eDocument transmitting situations, Mobile eID and eHealthcare patient data. Preliminary results indicate that the recommendation from this research meets the design requirements, and could form the foundation of future eDocument transmitting research and development.
38

Lee, Sae Hun. "A unified approach to optimal multiprocessor implementations from non-parallel algorithm specifications". Diss., Georgia Institute of Technology, 1986. http://hdl.handle.net/1853/16745.

Full text of the source
39

Blanke, Tobias. "Theoretical evaluation of XML retrieval". Thesis, University of Glasgow, 2011. http://theses.gla.ac.uk/2828/.

Full text of the source
Abstract:
This thesis develops a theoretical framework to evaluate XML retrieval. XML retrieval deals with retrieving those document parts that specifically answer a query. It is concerned with using the document structure to improve the retrieval of information from documents by only delivering those parts of a document an information need is about. We define a theoretical evaluation methodology based on the idea of 'aboutness' and apply it to XML retrieval models. Situation Theory is used to express the aboutness properties of XML retrieval models. We develop a dedicated methodology for the evaluation of XML retrieval and apply this methodology to five XML retrieval models and other XML retrieval topics such as evaluation methodologies, filters and experimental results.
40

Suh, Taeweon. "Integration and Evaluation of Cache Coherence Protocols for Multiprocessor SoCs". Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14065.

Full text of the source
Abstract:
System-on-a-chip (SoC) design is characterized by heavy reuse of IP blocks to satisfy specific computing needs for target applications, reduce overall design cost, and expedite time-to-market. To meet their performance goals and cost constraints, SoC designers integrate multiple, sometimes heterogeneous, processor IPs to perform particular functions. This design approach is called Multiprocessor SoC (MPSoC). In this thesis, I investigated generic methodologies for enabling efficient communication among heterogeneous processors and quantified the efficiency of coherence traffic. Hardware techniques for two main MPSoC architectures were studied: integration of cache coherence protocols for shared-bus-based MPSoCs, and cache coherence support for non-shared-bus-based MPSoCs. In the shared-bus-based MPSoCs, the integration techniques guarantee data consistency among incompatible coherence protocols. An integrated protocol will contain common states from these coherence protocols. A snoop-hit buffer and region-based cache coherence were also proposed to further enhance the coherence performance. For the non-shared-bus-based MPSoCs, bypass and bookkeeping approaches were proposed to maintain coherence in a new cache coherence-enforced memory controller. Simulations based on micro-benchmarks and an RTOS kernel showed the benefits of my methodologies over a generic software solution. This thesis also evaluated and quantified the efficiency of coherence traffic based on a novel FPGA-based emulation platform. The proposed technique can completely isolate the intrinsic delay of the coherence traffic to demonstrate the impact of coherence traffic on system performance. Unlike previous evaluation methods, this technique eliminated non-deterministic factors in measurements such as bus arbitration delay and stalls in the pipelined bus. The experimental results showed that cache-to-cache transfer in the Intel server system is less efficient than main memory access.
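The integration approach described above keeps the states common to otherwise incompatible protocols. The toy sketch below illustrates that idea for MESI and MSI caches sharing a snooping bus; the state sets and the transition table are simplified assumptions rather than the thesis's integrated protocol.

```python
# Toy illustration: an integrated protocol restricted to the states that both
# MESI and MSI caches understand, with a minimal snooping transition table.
MESI = {"M", "E", "S", "I"}
MSI = {"M", "S", "I"}
COMMON = MESI & MSI            # states shared by both protocols

TRANSITIONS = {
    ("I", "local_read"): "S",
    ("I", "local_write"): "M",
    ("S", "local_write"): "M",
    ("S", "remote_write"): "I",   # snooped invalidation
    ("M", "remote_read"): "S",    # snooped read forces write-back and downgrade
    ("M", "remote_write"): "I",
}

def next_state(state: str, event: str) -> str:
    assert state in COMMON
    return TRANSITIONS.get((state, event), state)

line = "I"
for event in ["local_read", "local_write", "remote_read"]:
    line = next_state(line, event)
    print(event, "->", line)
```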
41

Corus, Dogan. "Runtime analysis of evolutionary algorithms with complex fitness evaluation mechanisms". Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/48421/.

Full text of the source
Abstract:
Evolutionary algorithms (EAs) are bio-inspired general-purpose optimisation methods which are applicable to a wide range of problems. The performance of an EA can vary considerably according to the problem it tackles. Runtime analyses of EAs rigorously prove bounds on the expected computational resources required by the EA to solve a given problem. A crucial component of an EA is the way it evaluates the quality (i.e. fitness) of candidate solutions. Different fitness evaluation methods may drastically change the efficiency of a given EA. In this thesis, the effects of different fitness evaluation methods on the performance of evolutionary algorithms are investigated. A major contribution of this thesis is the first runtime analyses of EAs on bi-level optimisation problems. The performances of different EAs on the Generalised Minimum Spanning Tree Problem and the Generalised Travelling Salesperson Problem are analysed to illustrate how bi-level problem structures can be exploited to delegate part of the optimisation effort to problem-specific deterministic algorithms. Different bi-level representations are considered and it is proved that one of them leads to fixed-parameter evolutionary algorithms for both problems with respect to the number of clusters. Secondly, a new mathematical tool called the level-based theorem is presented. The theorem is a high-level analytical tool which provides upper bounds on the runtime of a wide range of non-elitist population-based algorithms with independent sampling and using sophisticated high-arity variation operators such as crossover. The independence of this new tool from the objective function allows runtime analyses of EAs which use complicated fitness evaluation methods. As an application of the level-based theorem, we conduct, for the first time, runtime analyses of non-elitist genetic algorithms on pseudo-Boolean test functions and also on three classical combinatorial optimisation problems. The last major contribution of this thesis is the illustration of how the level-based theorem can be used to design genetic algorithms with guaranteed runtime bounds. The well-known graph problems Single Source Shortest Path and All-Pairs Shortest Path are used as test beds. The fitness evaluation method used is tailored to incorporate the optimisation approach of a well-known problem-specific algorithm, and it is rigorously proved that the presented EA optimises both problems efficiently. The thesis concludes with a discussion of the wider implications of the presented work, and future research directions are explored.
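The level-based theorem mentioned above applies to non-elitist population-based algorithms with independent sampling and crossover. Below is a minimal sketch of such an algorithm on the OneMax pseudo-Boolean test function; the population size, tournament selection, one-point crossover and mutation rate are illustrative choices, not parameters analysed in the thesis.

```python
# Minimal sketch of a non-elitist genetic algorithm on OneMax, counting
# fitness evaluations. All parameter choices are illustrative assumptions.
import random

def onemax(x):
    return sum(x)

def non_elitist_ga(n=50, pop_size=40, generations=200, chi=0.6):
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    evaluations = 0
    for _ in range(generations):
        fitness = [onemax(x) for x in pop]
        evaluations += pop_size
        if max(fitness) == n:
            break
        def select():                              # binary tournament selection
            a, b = random.randrange(pop_size), random.randrange(pop_size)
            return pop[a] if fitness[a] >= fitness[b] else pop[b]
        new_pop = []
        for _ in range(pop_size):                  # whole population replaced: non-elitist
            p1, p2 = select(), select()
            cut = random.randrange(1, n)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (random.random() < chi / n) for bit in child]  # bit-flip mutation
            new_pop.append(child)
        pop = new_pop
    return max(onemax(x) for x in pop), evaluations

print(non_elitist_ga())
```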
42

Abdulla, Alan Anwer. "Exploiting similarities between secret and cover images for improved embedding efficiency and security in digital steganography". Thesis, University of Buckingham, 2015. http://bear.buckingham.ac.uk/149/.

Full text of the source
Abstract:
The rapid advancements in digital communication technology and the huge increase in computer power have generated an exponential growth in the use of the Internet for various commercial, governmental and social interactions that involve transmission of a variety of complex data and multimedia objects. Securing the content of sensitive as well as personal transactions over open networks while ensuring the privacy of information has become essential but increasingly challenging. Therefore, the information and multimedia security research area attracts more and more interest, and its scope of applications expands significantly. Communication security mechanisms have been investigated and developed to protect information privacy, with Encryption and Steganography providing the two most obvious solutions. Encrypting a secret message transforms it into noise-like data which is observable but meaningless, while Steganography conceals the very existence of secret information by hiding it in mundane communication that does not attract unwelcome snooping. Digital steganography is concerned with using images, videos and audio signals as cover objects for hiding secret bit-streams. The suitability of media files for such purposes is due to their high degree of redundancy as well as being the most widely exchanged digital data. Over the last two decades, there has been a plethora of research that aims to develop new hiding schemes to overcome the variety of challenges relating to imperceptibility of the hidden secrets, payload capacity, efficiency of embedding and robustness against steganalysis attacks. Most existing techniques treat secrets as random bit-streams, even when dealing with non-random signals such as images, which may add to the toughness of the challenges. This thesis is devoted to investigating and developing steganography schemes for embedding secret images in image files. While many existing schemes have been developed to perform well with respect to one or more of the above objectives, we aim to achieve optimal performance in terms of all these objectives. We shall only be concerned with embedding secret images in the spatial domain of cover images. The main difficulty in addressing the different challenges stems from the fact that the act of embedding results in changes to cover image pixel values that cannot be avoided, although these changes may not be easy to detect by the human eye. These pixel changes are a consequence of dissimilarity between the cover LSB plane and the secret-image bit-stream, and result in changes to the statistical parameters of stego-image bit-planes as well as to local image features. Steganalysis tools exploit these effects to model targeted as well as blind attacks. These challenges are usually dealt with by randomising the changes to the LSB, using different/multiple bit-planes to embed one or more secret bits using elaborate schemes, or embedding in certain regions that are noise-tolerant. Our innovative approach to dealing with these challenges is first to develop some image procedures and models that increase the similarity between the cover image LSB plane and the secret image bit-stream. This will be achieved in two novel steps involving manipulation of both the secret image and the cover image, prior to embedding, that result in a higher 0:1 ratio in both the secret bit-stream and the cover pixels' LSB plane.
For the secret images, we exploit the fact that image pixel values are in general neither uniformly distributed, as is the case for random secrets, nor spatially stationary. We shall develop three secret image pre-processing algorithms to transform the secret image bit-stream for an increased 0:1 ratio. Two of these are similar, but one operates in the spatial domain and the other in the Wavelet domain. In both cases, the most frequent pixels are mapped onto bytes with more 0s. The third method processes blocks by subtracting their means from their pixel values, hence reducing the required number of bits to represent these blocks. In other words, this third algorithm also reduces the length of the secret image bit-stream without loss of information. We shall demonstrate that these algorithms yield a significant increase in the secret image bit-stream 0:1 ratio, with the one based on the Wavelet domain being the best-performing, with an 80% ratio. For the cover images, we exploit the fact that pixel value decomposition schemes, based on Fibonacci or other defining sequences that differ from the usual binary scheme, expand the number of bit-planes and thereby may help increase the 0:1 ratio in the cover image LSB plane. We investigate some such existing techniques and demonstrate that these schemes indeed lead to an increased 0:1 ratio in the corresponding cover image LSB plane. We also develop a new extension of the binary decomposition scheme, which is the best-performing one with a 77% ratio. We exploit the above two-step strategy to propose a bit-plane(s) mapping embedding technique, instead of bit-plane(s) replacement, to make each cover pixel usable for secret embedding. This is motivated by the observation that non-binary pixel decomposition schemes also reduce the number of possible patterns for the first three bit-planes to 4 or 5 instead of 8. We shall demonstrate that the combination of the mapping-based embedding scheme and the two-step strategy produces stego-images that have minimal distortion, i.e. it reduces the number of cover pixel changes after message embedding and increases embedding efficiency. We shall also demonstrate that these schemes result in reasonable stego-image quality and are robust against all the targeted steganalysis tools but not against the blind SRM tool. We shall finally identify possible future work to achieve robustness against SRM at some payload rates and further improve stego-image quality.
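Two of the ingredients discussed above are the 0:1 ratio of a bit-plane and non-binary pixel decompositions such as the Fibonacci scheme. The sketch below computes a greedy Zeckendorf (Fibonacci) decomposition and compares the zero ratio of its least significant plane with that of the ordinary binary LSB plane for random pixel values; the 12-term basis is an illustrative choice, and the thesis's extended binary decomposition is not reproduced.

```python
# Minimal sketch: Fibonacci (Zeckendorf) pixel decomposition and the 0:1 ratio
# of the resulting least-significant plane versus the binary LSB plane.
import random

FIBS = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]   # covers the 8-bit range 0..255

def zeckendorf_bits(value: int):
    """Greedy Zeckendorf decomposition: no two consecutive Fibonacci terms are used."""
    bits = [0] * len(FIBS)
    for i in reversed(range(len(FIBS))):
        if FIBS[i] <= value:
            bits[i] = 1
            value -= FIBS[i]
    return bits                       # bits[0] is the least significant plane

def zero_ratio(plane):
    return plane.count(0) / len(plane)

pixels = [random.randint(0, 255) for _ in range(10_000)]
binary_lsb = [p & 1 for p in pixels]
fib_lsb = [zeckendorf_bits(p)[0] for p in pixels]
print("binary LSB zero ratio:   ", zero_ratio(binary_lsb))   # about 0.5
print("Fibonacci LSB zero ratio:", zero_ratio(fib_lsb))      # noticeably higher
```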
43

Cooper, Simon. "DISE : a game technology-based digital interactive storytelling framework". Thesis, Liverpool John Moores University, 2011. http://researchonline.ljmu.ac.uk/6101/.

Full text of the source
Abstract:
This thesis details the design and implementation of an Interactive Storytelling Framework. Using software engineering methodology and framework development methods, we aim to design a full Interactive Storytelling system involving a story manager, a character engine, an action engine, a planner, a 3D game engine and a set of editors for story data, world environment modelling and real-time character animation. The framework is described in detail and specified to meet the requirement of bringing a more dynamic real-time interactive story experience to the medium of computer games. Its core concepts borrow from work done in the fields of narrative theory, software engineering, computer games technology, HCI, 3D character animation and artificial intelligence. The contributions of our research and its novelties lie in the data design of the story, which allows a modular approach to building reusable resources such as actions, objects, animated characters and whole story 'levels'; a switchable story planner and re-planning system implementation, allowing many planners, heuristics and schedulers that are compatible with PDDL (the "Planning Domain Definition Language") to be easily integrated with minor changes to the main classes; a 3D game engine and framework for web-launched or in-browser deployment of the finished product; and a user-friendly story and world/environment editor, so story authors do not need advanced knowledge of PDDL syntax, games programming or 3D modelling to design and author a basic story. As far as we know, our Interactive Storytelling Framework is the only one to include a full 3D cross-platform game engine, procedural and manual modelling tools, a story editor and a customisable planner in one complete integrated solution. The finished interactive storytelling applications are presented as computer games designed to be a real-time 3D first-person experience, with the player as a main story character in a world where every context-filtered action displayed is executable and the player's choices make a difference to the outcome of the story, whilst still allowing the author's high-level constraints to progress the narrative along their desired path(s).
44

Majeed, Taban Fouad. "Segmentation, super-resolution and fusion for digital mammogram classification". Thesis, University of Buckingham, 2016. http://bear.buckingham.ac.uk/162/.

Full text of the source
Abstract:
Mammography is one of the most common and effective techniques used by radiologists for the early detection of breast cancer. Recently, computer-aided detection/diagnosis (CAD) has become a major research topic in medical imaging and has been widely applied in clinical situations. According to statistics, early detection of cancer can reduce mortality rates by 30% to 70%; therefore, detection and diagnosis at an early stage are very important. CAD systems are designed primarily to assist radiologists in detecting and classifying abnormalities in medical scan images, but the main challenge hindering their wider deployment is the difficulty in achieving accuracy rates that help improve radiologists’ performance. The detection and diagnosis of breast cancer face two main issues: the accuracy of the CAD system, and the radiologists’ performance in reading and diagnosing mammograms. This thesis focused on the accuracy of CAD systems. In particular, we investigated two main steps of CAD systems: pre-processing (enhancement and segmentation), and feature extraction and classification. Through this investigation, we make five main contributions to the field of automatic mammogram analysis. In automated mammogram analysis, image segmentation techniques are employed in breast boundary or region-of-interest (ROI) extraction. In most Medio-Lateral Oblique (MLO) views of mammograms, the pectoral muscle represents a predominant density region, and it is important to detect and segment out this muscle region during pre-processing because it could bias the detection of breast cancer. An important reason for the breast border extraction is that it limits the search zone for abnormalities to the region of the breast without undue influence from the background of the mammogram. Therefore, we propose a new scheme for breast border extraction, artifact removal and removal of annotations, which are found in the background of mammograms. This was achieved using a local adaptive threshold that creates a binary mask for the images, followed by the use of morphological operations. Furthermore, an adaptive algorithm is proposed to detect and remove the pectoral muscle automatically. Feature extraction is another important step of any image-based pattern classification system. The performance of the corresponding classification depends very much on how well the extracted features represent the object of interest. We investigated a range of different texture feature sets such as the Local Binary Pattern Histogram (LBPH), the Histogram of Oriented Gradients (HOG) descriptor, and the Gray Level Co-occurrence Matrix (GLCM). We propose the use of multi-scale features based on wavelets and local binary patterns for mammogram classification. We extract histograms of LBP codes from the original image as well as the wavelet sub-bands. The extracted features are combined into a single feature set. Experimental results show that our proposed method of combining LBPH features obtained from the original image with LBPH features obtained from the wavelet domain increases the classification accuracy (sensitivity and specificity) when compared with LBPH features extracted from the original image alone. The feature vector size could be large for some types of feature extraction schemes, and it may contain redundant features that could have a negative effect on classification accuracy. Therefore, feature vector size reduction is needed to achieve higher accuracy as well as efficiency (processing and storage).
We reduced the size of the features by applying principal component analysis (PCA) to the feature set and chose only a small number of eigen-components to represent the features. Experimental results showed an enhancement in mammogram classification accuracy with a small set of features when compared with using the original feature vector. We then investigated and proposed the use of feature and decision fusion in mammogram classification. In feature-level fusion, two or more extracted feature sets of the same mammogram are concatenated into a single, larger fused feature vector to represent the mammogram, whereas in decision-level fusion, the results of individual classifiers based on distinct features extracted from the same mammogram are combined into a single decision. In this case the final decision is made by majority voting among the results of the individual classifiers. Finally, we investigated the use of super-resolution as a pre-processing step to enhance the mammograms prior to extracting features. From the preliminary experimental results we conclude that using enhanced mammograms has a positive effect on the performance of the system. Overall, our combination of proposals outperforms several existing schemes published in the literature.
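The feature pipeline described above extracts LBP code histograms from the original image and its wavelet sub-bands, concatenates them, and reduces the dimensionality with PCA. Below is a minimal sketch of that pipeline; the basic 8-neighbour LBP, single-level Haar DWT, random stand-in ROIs and the number of retained components are illustrative assumptions, not the thesis's exact configuration.

```python
# Minimal sketch: LBP histograms from the image and its Haar wavelet sub-bands,
# concatenated into one feature vector, then reduced with PCA (via SVD).
import numpy as np
import pywt

def lbp_histogram(img: np.ndarray) -> np.ndarray:
    """Histogram of basic 8-neighbour LBP codes (256 bins, L1-normalised)."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= ((n >= c).astype(np.uint8) << bit)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def features(img: np.ndarray) -> np.ndarray:
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    parts = [lbp_histogram(img)] + [lbp_histogram(band) for band in (cA, cH, cV, cD)]
    return np.concatenate(parts)                  # 5 x 256 = 1280-dimensional vector

def pca_reduce(X: np.ndarray, k: int = 20) -> np.ndarray:
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # project onto top-k components

rois = [np.random.randint(0, 256, (64, 64)) for _ in range(30)]   # stand-in ROIs
X = np.stack([features(r) for r in rois])
print(pca_reduce(X).shape)                        # (30, 20)
```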
45

Koutsouras, Panagiotis. "Crafting content : the discovery of Minecraft's invisible digital economy". Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/51744/.

Full text of the source
Abstract:
This thesis presents an ethnographic study that aims at explicating the work of creating content in Minecraft. The existing literature paves the way in understanding Minecraft’s community by providing fragments of what players do. However, the game is studied mainly from a ludic perspective or is treated as a resource to explore distinct research agendas, instead of a field of study in itself. As such, particular phenomena that are situated inside Minecraft’s community are lost. The conducted fieldwork discovered the invisible digital economy that is part of this community. More specifically, the chapters to follow elaborate on the actors involved in this economy, covering their roles, responsibilities and goals. Furthermore, the lived work of content production is unpacked by presenting the various work practices members attend to in commissioning, creating, and delivering Minecraft content. It also becomes evident that there is a complex division of labour at play, which is based on a fragmented infrastructure as Minecraft itself does not support the wide range of activities that are necessary for carrying out the work. Essentially, actors bootstrap the market’s infrastructure by appropriating or even creating bespoke systems for conducting the various work practices that are entailed in this business. On top of that, these systems are utilised for articulation work, which is necessary for tracking progress between the geographically dispersed actors, accounting for conducted work and addressing contingent scenarios. The main contribution of this PhD project is the discovery of this digital economy, which evidently plays a significant role in Minecraft’s current form and development. Additionally, prevailing understandings of Minecraft’s ecosystem are re-visited, re-examined, and re-specified, based on the empirical evidence presented in this thesis. Finally, a number of design implications are raised with regard to addressing the game’s lack of CSCW support.
46

Moss, William B. "Evaluating inherited attributes using Haskell and lazy evaluation". Diss., Connect to the thesis, 2005. http://hdl.handle.net/10066/1486.

Full text of the source
47

Pettitt, Michael Andrew. "Visual demand evaluation methods for in-vehicle interfaces". Thesis, University of Nottingham, 2008. http://eprints.nottingham.ac.uk/10436/.

Full text of the source
Abstract:
Advancements in computing technology have been keenly felt in the automotive industry. Novel in-car systems have the potential to substantially improve the safety, efficiency and comfort of the driving experience. However, they must be carefully designed, so their use does not dangerously distract drivers from fundamental, safety-critical driving tasks. Distraction is a well-established causal factor in road accidents. A concern is that the introduction of new in-vehicle technology may increase exposure to distraction, and lead to an increase in distraction-related accidents. The range of systems often termed In-Vehicle Information Systems (IVIS), encompassing navigation and entertainment systems, in-car email and Internet, is the focus of this thesis, since they are commonly associated with long tasks that are not considered fundamentally relevant to driving. A variety of Human-Computer Interaction (HCI) and Human Factors methods has been employed to assess the potential distraction of IVIS task engagement. These include on-road evaluations, driving simulator studies, and surrogate methods, such as peripheral detection tasks and static task time assessments. The occlusion technique is one such surrogate, where task performance is assessed under intermittent vision conditions. Participants complete a task with 1.5-second vision periods, followed by a period where their vision is occluded. In this way, the technique evaluates how visually demanding a task is, mimicking the behaviour of glancing to and from the forward road scene when driving and performing IVIS tasks. An evaluation of the technique's validity is presented. Sixteen participants performed two tasks on two systems under three conditions: static (full-vision), static (occlusion), and whilst driving. Results confirmed other research, concluding that the technique is valid. However, the method's assessment through user trials based on measures of human performance is problematic. Such trials require robust, reliable prototype systems, and can therefore only take place in later design stages. Consequently, the economic effectiveness of the technique is questionable. The keystroke-level model (KLM), which predicts task times for error-free performance by expert users in routine tasks, provides an alternative quantitative assessment method to user trials. Tasks are decomposed into their most primitive actions, termed operators, which are associated with empirically assessed time values. These values are then summed to predict performance times. An evaluation of the technique in a vehicle environment is presented; twelve participants performed eleven tasks on two in-car entertainment systems, and task times were compared with KLM predictions. Results demonstrate that the technique remains valid beyond its original desktop-computing context. However, the traditional KLM predicts static task time only, and an extended procedure is required to consider occluded task performance. Two studies are presented, seeking to extend the KLM in order to model task performance under the interrupted vision conditions of occlusion trials. In the first, predictions of occlusion metrics are compared with results from the earlier occlusion assessment. In the second, twelve participants performed three tasks on two IVIS systems under occlusion conditions. Results were subsequently compared with predicted values.
Both studies conclude that the extended KLM approach produces valid predictions of occlusion metrics, with error rates generally within 20% of observed values, in line with expectations for KLM predictions. Subsequently, a case study is presented to demonstrate the technique's reliability. The results of an independent occlusion study of two IVIS tasks are compared with predictions made by an HCI expert trained in the application of the extended KLM. Error rates for this study were equally low, leading to the conclusion that the extended KLM appears reliable, though further studies are required. It is concluded that the extended-KLM technique is a valid, reliable and economical method for assessing the visual demand of IVIS tasks. In contrast to many user-trial methods, the technique can be applied in early design stages. In addition, future work areas are identified which could serve to further enhance the validity, reliability and economy of the technique. These include automating the extended KLM procedure with a software tool, and the development of new cognitive and physical operators, and new assumptions, specific to IVIS and/or occlusion conditions. For example, it will be useful to develop new cognitive operators that consider the time taken to visually reorient to complex displays following occluded periods.
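The thesis builds on the keystroke-level model, in which a task is decomposed into primitive operators whose empirically derived times are summed. Below is a minimal sketch of a classic static KLM prediction; the operator times are the commonly cited desktop estimates (Card, Moran and Newell), the task sequence is hypothetical, and the extended occlusion-based procedure itself is not reproduced here.

```python
# Minimal sketch of a classic KLM prediction: sum the times of the primitive
# operators that make up a task. Operator values are illustrative desktop
# estimates, not the in-vehicle or occlusion-extended values from the thesis.

OPERATOR_TIMES = {        # seconds per operator
    "K": 0.28,            # keystroke / button press
    "P": 1.10,            # point to a target
    "H": 0.40,            # home hands between devices
    "M": 1.35,            # mental preparation
}

def klm_time(operators):
    """operators: sequence of (operator, count) pairs describing the task."""
    return sum(OPERATOR_TIMES[op] * count for op, count in operators)

# Hypothetical IVIS task: prepare, press a menu key, point to an entry, confirm.
task = [("M", 1), ("K", 1), ("P", 1), ("K", 1)]
print(f"Predicted static task time: {klm_time(task):.2f} s")
```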
48

Greenwood, Rob. "Semantic analysis for system level design automation". Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-10062009-020216/.

Full text of the source
49

Maache, Ahmed. "A prototype parallel multi-FPGA accelerator for SPICE CMOS model evaluation". Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/173435/.

Full text of the source
Abstract:
Due to the ever-increasing complexity of circuits, EDA tools and algorithms are demanding more computational power. This has made transistor-level simulation a growing bottleneck in the circuit development process. This thesis serves as a proof of concept to evaluate and quantify the cost of using multi-FPGA systems in SPICE-like simulations in terms of acceleration, throughput, area, and power. To this end, a multi-FPGA architecture is designed to exploit the inherent parallelism in the device model evaluation phase within the SPICE simulator. A code transformation flow which converts the high-level device model code to structural VHDL was also implemented. This flow showed that an automatic compiler system to design, map, and optimise SPICE-like simulations on FPGAs is feasible. This thesis has two main contributions. The first contribution is the multi-FPGA accelerator of the device model evaluation, which demonstrated a speedup of 10 times over a conventional processor while consuming six times less power. Results also showed that it is feasible to describe and optimise FPGA pipelined implementations to exploit other classes of applications similar to the SPICE device model evaluation. The constant throughput of the pipelined architecture is one of the main factors enabling the FPGA accelerator to outperform conventional processors. The second contribution lies in the use of multi-FPGA synthesis to optimise the inter-FPGA connections through altering the process of mapping partitions to FPGA devices. A novel technique is introduced which reduces the inter-FPGA connections by an average of 18%. The speedup and power efficiency results showed that the proposed multi-FPGA system can be used by the SPICE community to accelerate transistor-level simulation. The experimental results also showed that it is worthwhile continuing this research further to explore the use of FPGAs to accelerate other EDA tools.
50

Newman, Kimberly Eileen. "A parallel digital interconnect test methodology for multi-chip module substrate networks". Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/13847.

Full text of the source
