Dissertations / Theses on the topic 'Other information and computing sciences'

To see the other types of publications on this topic, follow the link: Other information and computing sciences.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 43 dissertations / theses for your research on the topic 'Other information and computing sciences.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Das, Gupta Jishu. "Performance issues for VOIP in Access Networks." Thesis, University of Southern Queensland, 2005. https://eprints.qut.edu.au/12724/1/Das_Gupta_MComputing_Dissertation.pdf.

Full text
Abstract:
There is a general consensus that the Quality of Service (QoS) of Voice over Internet Protocol (VOIP) is of growing importance for research and study. In this dissertation we investigate the performance of VOIP and the impact of resource limitations in access networks. The impact of the access network on VOIP performance is particularly important in regions where Internet resources are limited and the cost of improving these resources is prohibitive. It is clear that perceived VOIP performance, as measured by mean opinion score (MOS) in experiments where subjects are asked to rate communication quality, is determined by end-to-end delay on the communication path, delay variation, packet loss, echo, the coding algorithm in use and noise. These performance indicators can be measured and the contribution of the access network can be estimated. The relation between MOS and these technical measurements is less well understood. We investigate the contribution of the access network to the overall performance of VOIP services and the ways in which access networks can be designed to improve VOIP performance. Issues of interest include the choice of coding rate, dynamic variation of coding rate, packet length, methods of controlling echo, and the use of Active Queue Management (AQM) in access network routers. Methods for analyzing the impact of the access network on VOIP performance are surveyed and reviewed. We also consider approaches for improving VOIP performance through experiments with the NS2 simulation software, with a view to gaining a better understanding of access network design.
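As a rough illustration of how the delay and loss indicators mentioned above map to a mean opinion score, the sketch below uses a widely cited simplified E-model approximation (Cole and Rosenbluth); the constants are illustrative and are not taken from this dissertation.

    import math

    def mos_estimate(one_way_delay_ms: float, loss_fraction: float) -> float:
        """Rough MOS estimate from one-way delay and packet loss,
        using a simplified E-model approximation (illustrative constants)."""
        d = one_way_delay_ms
        # Delay impairment: grows slowly below ~177 ms, faster above it.
        id_ = 0.024 * d + 0.11 * (d - 177.3) * (1 if d > 177.3 else 0)
        # Equipment/loss impairment, roughly matching a low-rate codec with random loss.
        ie = 11 + 40 * math.log(1 + 10 * loss_fraction)
        r = max(0.0, min(100.0, 94.2 - id_ - ie))
        # Standard R-factor to MOS conversion.
        return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

    if __name__ == "__main__":
        for delay, loss in [(50, 0.0), (150, 0.01), (300, 0.05)]:
            print(f"delay={delay} ms, loss={loss:.0%}: MOS ~ {mos_estimate(delay, loss):.2f}")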
APA, Harvard, Vancouver, ISO, and other styles
2

Ashwell, Douglas James. "Reflecting diversity or selecting viewpoints : an analysis of the GM debate in New Zealand's media 1998-2002 : a thesis presented in partial fulfilment of the requirements for the degree of PhD in Communication at Massey University, Palmerston North, New Zealand." Massey University, 2009. http://hdl.handle.net/10179/1200.

Full text
Abstract:
The advent of genetically modified (GM) food in New Zealand in 1998 saw the beginning of a highly politicised debate about GM use in the country. The concern over GM and the political situation after the 1999 general election saw the Government establish a Royal Commission of Inquiry on Genetic Modification in May, 2000. The Royal Commission and strong public opposition to GM, evident in large public protests and other actions, made the issue highly newsworthy. The aim of this study was to explore how newspapers reported the GM debate, in particular, examining whether the reportage facilitated greater public debate and awareness about GM through journalists adhering to the ideals of the theory of social responsibility and enacting their watchdog role as encapsulated in the Fourth Estate tradition of the media. To achieve these aims the overall tone of the reportage and also which news source types and themes were most frequently reported were examined. In addition, the relationship and perceptions of scientists and journalists involved in the reporting were explored to examine how these relationships may have influenced the reportage. Content analysis showed the reportage had a pro-GM bias with policy-makers, scientists and industry spokespeople the most frequently cited news sources. The themes of Science, Economics and Politics dominated the reportage. Other source types and themes were less represented, especially themes dealing with ethical and environmental arguments. This lack of representation occurred despite the Royal Commission offering a space for all interested parties to speak. The interviews illustrated that scientists believed the quality of newspaper coverage of GM lacked depth and that important issues were unreported. Journalists found the issue complex to report and said they took care not to oversimplify the science and issues surrounding GM. The relationship between scientists and journalists indicated particular tensions existing between the two groups. The thesis concludes that if robust public debate is to occur within New Zealand regarding GM and other scientific developments, then the media should reflect a greater diversity of opinion by citing other potential news sources offering alternative arguments based on, for example, ethical or environmental grounds.
APA, Harvard, Vancouver, ISO, and other styles
3

Mani, Sindhu. "Empirical Performance Analysis of High Performance Computing Benchmarks Across Variations in Cloud Computing." UNF Digital Commons, 2012. http://digitalcommons.unf.edu/etd/418.

Full text
Abstract:
High Performance Computing (HPC) applications are data-intensive scientific software requiring significant CPU and data storage capabilities. Researchers have examined the performance of the Amazon Elastic Compute Cloud (EC2) environment across several HPC benchmarks; however, an extensive HPC benchmark study and a comparison between Amazon EC2 and Windows Azure (Microsoft’s cloud computing platform), with metrics such as memory bandwidth, Input/Output (I/O) performance, and communication and computational performance, are largely absent. The purpose of this study is to perform an exhaustive HPC benchmark comparison on the EC2 and Windows Azure platforms. We implement existing benchmarks to evaluate and analyze the performance of two public clouds spanning both IaaS and PaaS types. We use Amazon EC2 and Windows Azure as platforms for hosting HPC benchmarks with variations such as instance types, number of nodes, hardware and software. This is accomplished by running the STREAM, IOR and NPB benchmarks on these platforms on a varied number of nodes for small and medium instance types. These benchmarks measure memory bandwidth, I/O performance, and communication and computational performance. Benchmarking cloud platforms provides useful objective measures of their worthiness for HPC applications in addition to assessing their consistency and predictability in supporting them.
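To give a feel for what a memory-bandwidth benchmark such as STREAM measures, here is a minimal NumPy sketch of the triad kernel; it is not the official STREAM benchmark, and the array size and repeat count are illustrative assumptions.

    import time
    import numpy as np

    def triad_bandwidth(n: int = 20_000_000, scalar: float = 3.0, repeats: int = 5) -> float:
        """Approximate sustained memory bandwidth (GB/s) with a STREAM-like triad: a = b + scalar * c."""
        b = np.random.rand(n)
        c = np.random.rand(n)
        a = np.empty_like(b)
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            np.add(b, scalar * c, out=a)   # triad kernel
            best = min(best, time.perf_counter() - t0)
        # At least three 8-byte arrays are streamed (NumPy temporaries add more traffic),
        # so this is a rough estimate, not a tuned measurement.
        return 3 * n * 8 / best / 1e9

    if __name__ == "__main__":
        print(f"approximate triad bandwidth: {triad_bandwidth():.1f} GB/s")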
APA, Harvard, Vancouver, ISO, and other styles
4

Samuelsson, Erika. "Arkivteoretiska krav på informationen i molnet : En studie om vilka kunskaper som leverantörer av molntjänster har angående arkivteoretiska kvalitetskrav på information." Thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-19254.

Full text
Abstract:
Today there is much talk about a phenomenon called Cloud Computing, which can briefly be described as the provision of computing resources such as information storage, processing power and applications over the Internet. The term Cloud Computing has no direct equivalent in Swedish but can be translated as, for example, cloud-based computing services or cloud services. Typically, Cloud Computing means that an IT company rents out part of a server and other computing services to an organisation. Cloud Computing is an attractive alternative for both the private and the public sector and is in increasing demand. From an archival science perspective, Cloud Computing is associated with many advantages, but perhaps with even more disadvantages. The central problem areas relate to trust in a digital environment. Questions about how information can remain authentic, accessible and protected from access by unauthorised persons throughout its entire life cycle are prominent in the archival science debate. The purpose of this thesis is to examine the awareness among Swedish providers of cloud services regarding the particular requirements placed on archival information, such as the quality requirements of reliability, authenticity and accessibility. This is something that has not been discussed to any great extent within archival science. To collect the empirical material, qualitative interviews were conducted with companies that in some way provide cloud services. The results of the interviews were analysed with the help of theoretical material mainly based on the research and concepts of the International Research on Permanent Authentic Records in Electronic Systems (InterPARES). The results of the study show that the surveyed companies providing cloud services work extensively with security in order to ensure, as far as possible, that the information stored in the cloud remains what they define as authentic and that the customer's requirements are met. They have differing views on what authenticity means, which also partly differs from the archival-theoretical interpretation. Accessibility is one of the primary requirements that customers place on their information and is thus something that the IT companies prioritise in their services.
APA, Harvard, Vancouver, ISO, and other styles
5

Bahi, Mouad. "High Performance by Exploiting Information Locality through Reverse Computing." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00768574.

Full text
Abstract:
The main resources for computation are time, space and energy, and reducing them is the main challenge in the field of processor performance. In this thesis, we are interested in a fourth factor, information. Information has an important and direct impact on these three resources, and we show how it contributes to performance optimization. Landauer suggested that, independently of the hardware on which a computation runs, information erasure dissipates energy. This is a fundamental result of thermodynamics in physics. Therefore, under this hypothesis, only reversible computations, where no information is ever lost, are likely to be thermodynamically adiabatic and to dissipate no power. Reversibility means that data can always be retrieved from any point of the program. Information may be carried not only by the data but also by the process and the input data that generate it. When a computation is reversible, information can also be retrieved from other already computed data by reverse computation. Hence reversible computing improves information locality. This thesis develops these ideas in two directions. In the first part, we address the issue of making a computation DAG (directed acyclic graph) reversible in terms of spatial complexity. We define energetic garbage as the additional number of registers needed for the reversible computation with respect to the original computation. We propose a reversible register allocator and we show empirically that the garbage size is never more than 50% of the DAG size. In the second part, we apply this approach to the trade-off between recomputing (direct or reverse) and storage in the context of supercomputers such as recent vector and parallel coprocessors, graphics processing units (GPUs), the IBM Cell processor, etc., where the gap between processor cycle time and memory access time is increasing. We show that recomputing in general, and reverse computing in particular, helps reduce register requirements and memory pressure. This approach of reverse rematerialization also contributes to increasing instruction-level parallelism (Cell) and thread-level parallelism in multicore processors with a shared register/memory file (GPU). On the latter architecture, the number of registers required by the kernel limits the number of running threads and affects performance. Reverse rematerialization generates additional instructions, but their cost can be hidden by the parallelism gain. Experiments on the highly memory-demanding Lattice QCD simulation code on an Nvidia GPU show a performance gain of up to 11%.
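A minimal sketch of the rematerialization idea described above, not taken from the thesis: instead of keeping an intermediate value live (which costs a register or a store), an invertible update lets the value be recovered later by running the operation backwards.

    def forward_step(x: int, a: int) -> int:
        """Invertible update: y = x + a."""
        return x + a

    def reverse_step(y: int, a: int) -> int:
        """Reverse computation: recover x from y without having stored it."""
        return y - a

    # Storage strategy: keep x alive until it is needed again.
    x = 42
    y = forward_step(x, a=7)
    stored_x = x                      # extra register / memory slot

    # Reverse-rematerialization strategy: drop x, recompute it on demand.
    x = None                          # register freed for other work
    recovered_x = reverse_step(y, a=7)

    assert recovered_x == stored_x == 42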
APA, Harvard, Vancouver, ISO, and other styles
6

Kaza, Bhagavathi. "Performance Evaluation of Data Intensive Computing In The Cloud." UNF Digital Commons, 2013. http://digitalcommons.unf.edu/etd/450.

Full text
Abstract:
Big data is a topic of active research in the cloud community. With increasing demand for data storage in the cloud, study of data-intensive applications is becoming a primary focus. Data-intensive applications involve high CPU usage for processing large volumes of data on the scale of terabytes or petabytes. While some research exists for the performance effect of data intensive applications in the cloud, none of the research compares the Amazon Elastic Compute Cloud (Amazon EC2) and Google Compute Engine (GCE) clouds using multiple benchmarks. This study performs extensive research on the Amazon EC2 and GCE clouds using the TeraSort, MalStone and CreditStone benchmarks on Hadoop and Sector data layers. Data collected for the Amazon EC2 and GCE clouds measure performance as the number of nodes is varied. This study shows that GCE is more efficient for data-intensive applications compared to Amazon EC2.
APA, Harvard, Vancouver, ISO, and other styles
7

Maor, Amit. "Using a Data Warehouse as Part of a General Business Process Data Analysis System." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/cmc_theses/1383.

Full text
Abstract:
Data analytics queries often involve aggregating over massive amounts of data in order to detect trends in the data, make predictions about future data, and make business decisions as a result. As such, it is important that a database management system (DBMS) handling data analytics queries perform well when those queries involve massive amounts of data. A data warehouse is a DBMS designed specifically to handle data analytics queries. This thesis describes the data warehouse Amazon Redshift and how it was used to design a data analysis system for Laserfiche. Laserfiche is a software company that provides each of its clients with a system to store and process business process data. Through the 2015-16 Harvey Mudd College Clinic project, the Clinic team built a data analysis system that provides Laserfiche clients with near real-time reports containing analyses of their business process data. This thesis discusses the advantages of Redshift’s data model and physical storage layout, as well as the Redshift features that directly benefit the data analysis system.
APA, Harvard, Vancouver, ISO, and other styles
8

Terner, Olof, and Hedbjörk Villhelm Urpi. "Quantum Computational Speedup For The Minesweeper Problem." Thesis, Uppsala universitet, Teoretisk fysik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-325945.

Full text
Abstract:
Quantum computing is a young but intriguing field of science. It combines quantum mechanics with information theory and computer science to potentially solve certain formerly computationally expensive tasks more efficiently. Classical computers are based on bits that can take on the value zero or one. The values are distinguished by voltage differences in transistors. Quantum computers are instead based on quantum bits, or qubits, that are represented physically by something that exhibits quantum properties, like for example electrons. Qubits also take on the value zero or one, which could correspond to spin up and spin down of an electron. However, qubits can also be in a superposition state between the quantum states corresponding to the value zero and one. This property is what causes quantum computers to be able to outperform classical computers at certain tasks. One of these tasks is searching through an unstructured database. Whereas a classical computer in the worst case has to search through the whole database in order to find the sought element, i.e. the computation time is proportional to the size of the problem, it can be shown that a quantum computer can find the solution in a time proportional to the square root of the size of the problem. This report aims to illustrate the advantages of quantum computing by explicitly solving the classical Windows game Minesweeper, which can be reduced to a problem resembling the unstructured database search problem. It is shown that solving Minesweeper with a quantum algorithm gives a quadratic speedup compared to solving it with a classical algorithm. The report also covers introductory material to quantum mechanics, quantum gates, the particular quantum algorithm Grover's algorithm and complexity classes, which is necessary to grasp in order to understand how Minesweeper can be solved on a quantum computer.
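To make the quadratic speedup described above concrete, here is a small sketch (not taken from the report) comparing the expected number of classical probes of an unstructured database with the approximate number of Grover iterations, about (pi/4)*sqrt(N) for a single marked element.

    import math

    def classical_expected_probes(n: int) -> float:
        """Expected probes to find one marked item by unstructured classical search."""
        return (n + 1) / 2

    def grover_iterations(n: int, marked: int = 1) -> int:
        """Approximate optimal number of Grover iterations for `marked` solutions."""
        return math.floor((math.pi / 4) * math.sqrt(n / marked))

    if __name__ == "__main__":
        for n in (10**3, 10**6, 10**9):
            print(f"N={n:>10}: classical ~ {classical_expected_probes(n):>12.0f} probes, "
                  f"Grover ~ {grover_iterations(n):>7} iterations")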
APA, Harvard, Vancouver, ISO, and other styles
9

Kaddour, Inan. "Mobile Cloud Computing: A Comparison Study of Cuckoo and AIOLOS Offloading Frameworks." UNF Digital Commons, 2018. https://digitalcommons.unf.edu/etd/785.

Full text
Abstract:
Currently, smart mobile devices are used for more than just calling and texting. They can run complex applications such as GPS, antivirus, and photo editor applications. Smart devices today offer mobility, flexibility, and portability, but they have limited resources and a relatively weak battery. As companies began creating resource-intensive and power-intensive mobile applications, they realized that cloud computing was one of the solutions they could utilize to overcome smart device constraints. Cloud computing helps decrease memory usage and improve battery life. Mobile cloud computing is a current and expanding research area focusing on methods that allow smart mobile devices to take full advantage of cloud computing. Code offloading is one of the techniques employed in cloud computing with mobile devices. This research compares two dynamic offloading frameworks to determine which one is better in terms of execution time and battery life improvement.
APA, Harvard, Vancouver, ISO, and other styles
10

Spikol, Daniel. "Playing and Learning Across Locations: Identifying Factors for the Design of Collaborative Mobile Learning." Licentiate thesis, Växjö University, School of Mathematics and Systems Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-3698.

Full text
Abstract:

The research presented in this thesis investigates the design challenges associated with the development and use of mobile applications and tools for supporting collaboration in educational activities. These technologies provide new opportunities to promote and enhance collaboration by engaging learners in a variety of activities across different places and contexts. A basic challenge is to identify how to design and deploy mobile tools and services that could be used to support collaboration in different kinds of settings. There is a need to investigate how to design collaborative learning processes and to support flexible educational activities that take advantage of mobility. The main research question that I focus on is the identification of factors that influence the design of mobile collaborative learning.

The theoretical foundations that guide my work rely on the concepts behind computer supported collaborative learning and design-based research. These ideas are presented at the beginning of this thesis and provide the basis for developing an initial framework for understanding mobile collaboration. The empirical results from three different projects conducted as part of my efforts at the Center for Learning and Knowledge Technologies at Växjö University are presented and analyzed. These results are based on a collection of papers that have been published in two refereed international conference proceedings, a journal paper, and a book chapter. The educational activities and technological support have been developed in accordance with a grounded theoretical framework. The thesis ends by discussing those factors, which have been identified as having a significant influence when it comes to the design and support of mobile collaborative learning.

The findings presented in this thesis indicate that mobility changes the contexts of learning and modes of collaboration, requiring different design approaches than those used in traditional system development to support teaching and learning. The major conclusion of these efforts is that the learners’ creations, actions, sharing of experiences and reflections are key factors to consider when designing mobile collaborative activities in learning. The results additionally point to the benefit of directly involving the learners in the design process by connecting them to the iterative cycles of interaction design and research.

APA, Harvard, Vancouver, ISO, and other styles
11

Harper, Kevin M. "Challenging the Efficient Market Hypothesis with Dynamically Trained Artificial Neural Networks." UNF Digital Commons, 2016. http://digitalcommons.unf.edu/etd/718.

Full text
Abstract:
A review of the literature applying Multilayer Perceptron (MLP) based Artificial Neural Networks (ANNs) to market forecasting leads to three observations: 1) it is clear that simple ANNs, like other nonlinear machine learning techniques, are capable of approximating general market trends; 2) it is not clear to what extent such forecasted trends are reliably exploitable in terms of profits obtained via trading activity; and 3) most research with ANNs reporting profitable trading activity relies on ANN models trained over one fixed interval which is then tested on a separate out-of-sample fixed interval, and it is not clear to what extent these results may generalize to other out-of-sample periods. Very little research has tested the profitability of ANN models over multiple out-of-sample periods, and the author knows of no pure ANN (non-hybrid) systems that do so while being dynamically retrained on new data. This thesis tests the capacity of MLP type ANNs to reliably generate profitable trading signals over rolling training and testing periods. Traditional error statistics serve as descriptive rather than performance measures in this research, as they are of limited use for assessing a system’s ability to consistently produce above-market returns. Performance is measured for the ANN system by the average returns accumulated over multiple runs over multiple periods, and these averages are compared with the traditional buy-and-hold returns for the same periods. In some cases, our models were able to produce above-market returns over many years. These returns, however, proved to be highly sensitive to variability in the training, validation and testing datasets as well as to the market dynamics at play during initial deployment. We argue that credible challenges to the Efficient Market Hypothesis (EMH) by machine learning techniques must demonstrate that returns produced by their models are not similarly susceptible to such variability.
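A minimal sketch of the rolling train/test scheme described above, not the author's system: an MLP is retrained on each window of past returns and evaluated on the period immediately following it. The library, window sizes and synthetic data are illustrative assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    returns = rng.normal(0, 0.01, size=2000)          # synthetic daily returns
    lags, train_len, test_len = 5, 500, 50

    # Lagged feature matrix: predict the next return from the previous `lags` returns.
    X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
    y = returns[lags:]

    scores = []
    for start in range(0, len(y) - train_len - test_len, test_len):
        tr = slice(start, start + train_len)
        te = slice(start + train_len, start + train_len + test_len)
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
        model.fit(X[tr], y[tr])                        # retrain on the rolling window
        signal = np.sign(model.predict(X[te]))         # long/short trading signal
        scores.append(float(np.sum(signal * y[te])))   # strategy return over the test window

    print(f"mean out-of-sample window return: {np.mean(scores):.5f}")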
APA, Harvard, Vancouver, ISO, and other styles
12

Walker, Geoffrey. "Communities of practice, networks & technologies : the dynamics of knowledge flows within third sector organisations in the North East of England." Thesis, Northumbria University, 2008. http://nrl.northumbria.ac.uk/3385/.

Full text
Abstract:
The purpose of this research is to assess the function, form and content of knowledge sharing in communities of practice, social networks and the use of collaborative technologies in Third Sector community networks in the North East of England. This is a significant area worthy of detailed examination due to the acknowledged relationship between communities of practice, social networks and the use of collaborative technologies. These three domains have been examined separately by others and suggestions have been made as to relationships between them but few, if any, studies appear to have used case-based evidence to explore how these relationships add value to knowledge sharing. The research addresses the following research question: To what extent does the use of collaborative technologies in communities of practice and social networks, in the Third Sector of the North East region, add value to face-to-face knowledge sharing and how may this be measured? In order to answer the research question a qualitative holistic case study approach based upon three case studies in Newcastle upon Tyne, South Tyneside and Sunderland has been utilised and grounded theory is used to formulate theory from the observed and analysed practice of the case studies under investigation. The conclusion is drawn that when value is added to knowledge sharing it is relative to the strength of several key variables, including, reciprocity, trust, the strength of network ties and the ability to integrate the use of collaborative technologies into ongoing activities. To aid analysis of the presence and strength of these variables a working paradigm has been designed and developed. Case studies are analysed through this paradigm leading to the development of a theory of knowledge sharing in the Third Sector.
APA, Harvard, Vancouver, ISO, and other styles
13

Kabir, Sanzida. "Software Licensing in Cloud Computing : A CASE STUDY ABOUT RELATIONSHIPS FROM A CLOUD SERVICE PROVIDER’S PERSPECTIVE." Thesis, KTH, Industriell Management, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-200132.

Full text
Abstract:
One of the most important attributes a cloud service provider (CSP) offers its customers through its cloud services is scalability. Scalability gives customers the ability to vary the amount of capacity as required. A cloud service can be divided into three service layers: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). The scalability of a given service depends on the software licenses on these layers. When a customer wants to increase capacity, this is determined by the licenses the CSP has bought from its suppliers in advance. If a CSP scales up more than what was agreed on, there is a risk that the CSP needs to pay a penalty fee to the supplier. If the CSP invests in too many licenses that do not get utilized, it is an investment loss. A second challenge with software licensing arises when a customer outsources their applications to the CSP’s platform. As each application comes with a set of licenses, there is a certain level of scalability that cannot be exceeded. If a customer wants the CSP to scale up more than usual for an application, the customer needs to inform the vendors. However, a common misunderstanding is that the customer expects the CSP to notify the vendor. There is then a risk that the vendor never gets notified and the customer is in danger of paying a penalty fee. This in turn hurts the CSP’s relationship with the customer. The recommendation to the CSP under study is to create successful customer relationship management (CRM) and supplier relationship management (SRM). Creating a CRM with the customer will minimize misunderstandings and highlight the responsibilities that apply when a customer outsources an application to the CSP. Creating an SRM with the suppliers will help the CSP maintain the flexible payment method it has with a certain supplier, and will set an example for the remaining suppliers to change their inflexible payment methods. Achieving a flexible payment method with the suppliers will make it easier for the CSP to find an equilibrium between scalability and licenses.
APA, Harvard, Vancouver, ISO, and other styles
14

LaValley, Christopher Travis. "Holistic Model of Website Design Elements that Influence Trustworthiness." UNF Digital Commons, 2018. https://digitalcommons.unf.edu/etd/812.

Full text
Abstract:
Trustworthiness of a website relies foremost on a good first impression, which includes the visitor’s perception of the user interface. The focus of this research is to investigate the effects of website design elements on user perception of the trustworthiness of a site and to provide a set of guidelines for website designers. The research design is based on Yosef Jabardeen’s (2009) “conceptual framework analysis”. In this research, a holistic model is developed to depict the relationships among website design elements and trustworthiness. The model was tested, validated and updated using the results of the repertory grid technique, a process that elicits perceptions about a topic from an individual. For this research, the topic was website trust, the objects were the website design elements, and the constructs were elicited perceptions regarding those website design elements. The repertory grid technique was applied in two stages to a set of participants made up of website users and website designers. Analysis yielded useful information regarding website design associations and correlations of perceptions. The research findings confirmed original suggestions regarding associations and produced an updated, validated model of website design elements. The research indicated that while all design elements had their importance regarding trust, those elements that provided for the function and security of the website rated the highest in importance and expectation. The validated model will aid website designers in understanding which elements appeal to the visual senses and convey credibility and trust. Most importantly, this new understanding may help designers create websites that attract and retain new users and establish a successful presence on the Internet.
APA, Harvard, Vancouver, ISO, and other styles
15

Dhoopa, Harish Priyanka. "Towards Designing Energy-Efficient Secure Hashes." UNF Digital Commons, 2015. http://digitalcommons.unf.edu/etd/598.

Full text
Abstract:
In computer security, cryptographic algorithms and protocols are required to ensure security of data and applications. This research investigates techniques to reduce the energy consumed by cryptographic hash functions. The specific hash functions considered are Message Digest-2 (MD2), Message Digest-5 (MD5), Secure Hash Algorithm-1 (SHA-1) and Secure Hash Algorithm-2 (SHA-2). The discussion around energy conservation in handheld devices like laptops and mobile devices is gaining momentum. Research has been done at the hardware and operating system levels to reduce the energy consumed by these devices. However, research on conserving energy at the application level is a new approach. This research is motivated by the energy consumed by anti-virus applications which use computationally intensive hash functions to ensure security. To reduce energy consumption by existing hash algorithms, the generic energy complexity model, designed by Roy et al. [Roy13], has been applied and tested. This model works by logically mapping the input across the eight available memory banks in the DDR3 architecture and accessing the data in parallel. In order to reduce the energy consumed, the data access pattern of the hash functions has been studied and the energy complexity model has been applied to hash functions to redesign the existing algorithms. These experiments have shown a reduction in the total energy consumed by hash functions with different degrees of parallelism of the input message, as the energy model predicted, thereby supporting the applicability of the energy model on the different hash functions chosen for the study. The study also compared the energy consumption by the hash functions to identify the hash function suitable for use based on required security level. Finally, statistical analysis was performed to verify the difference in energy consumption between MD5 and SHA2.
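As a loose illustration of the banked, parallel data-access idea described above (not the Roy et al. model itself, and not a drop-in replacement for a standard hash), the sketch below splits a message across eight logical banks and hashes the chunks concurrently before combining the digests.

    import hashlib
    from concurrent.futures import ThreadPoolExecutor

    BANKS = 8  # logical memory banks, mirroring the DDR3 layout assumed in the text

    def banked_digest(message: bytes) -> str:
        """Hash 8 interleaved chunks in parallel, then combine the partial digests.
        Note: this changes the resulting digest; it only illustrates parallel data access."""
        chunks = [message[i::BANKS] for i in range(BANKS)]      # interleave across banks
        with ThreadPoolExecutor(max_workers=BANKS) as pool:
            partials = list(pool.map(lambda c: hashlib.sha256(c).digest(), chunks))
        return hashlib.sha256(b"".join(partials)).hexdigest()   # combine partial results

    if __name__ == "__main__":
        print(banked_digest(b"example message" * 10_000))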
APA, Harvard, Vancouver, ISO, and other styles
16

Feary, Mark S. "Statistical frameworks and contemporary Māori development." Lincoln University, 2008. http://hdl.handle.net/10182/664.

Full text
Abstract:
Māori have entered a period of development that, more than ever before, requires them to explore complex options and make careful decisions about the way forward. This complexity stems from three particular areas. First, from having essentially two sets of rights, as New Zealanders and as Māori, and being active in the struggle to retain those rights. Second, from trying to define and determine development pathways that are consistent with their traditional Māori values, and which align with their desire to participate in and enjoy a modern New Zealand and a global society. Third, from attempting development within a political and societal environment that is governed by a different and dominant culture. Māori, historically and contemporarily, have a culture that leads them to very different views of the world and development pathways than pakeha New Zealanders (D. Marsden, 1994, p. 697). Despite concerted effort and misplaced belief, the Māori world view has survived and is being adopted by Māori youth. The Māori worldview sometimes collides with the view of the governing pakeha culture of New Zealand, which values rights, assets and behaviours differently. Despite these differences and the complexities, it remains important to measure progress and inform debate about best practice and future options. In this regard, statistical information is crucial, and is generally recognised as one of the currencies of development (World Summit of the Information Society, 2003). Māori increasingly desire to measure and be informed about the feasibility and progress of their development choices in a way that is relevant to their values and culture. Where a Māori view of reality is not present there is a high risk that decisions and actions will reflect a different worldview, will fail to deal with cultural complexities, and ultimately will not deliver the intended development outcomes.
APA, Harvard, Vancouver, ISO, and other styles
17

Schmidt, David. "Knot Flow Classification and its Applications in Vehicular Ad-Hoc Networks (VANET)." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/etd/3723.

Full text
Abstract:
Intrusion detection systems (IDSs) play a crucial role in the identification and mitigation of attacks on host systems. Of these systems, vehicular ad hoc networks (VANETs) are difficult to protect due to the dynamic nature of their clients and their necessity for constant interaction with their respective cyber-physical systems. Currently, there is a need for a VANET-specific IDS that meets this criterion. To this end, a spline-based intrusion detection system has been pioneered as a solution. By combining clustering with spline-based general linear model classification, this knot flow classification method (KFC) allows for robust intrusion detection to occur. Due to its design and the manner in which it is constructed, KFC holds great potential for implementation across a distributed system. The purpose of this thesis was to explain and extrapolate the aforementioned IDS, highlight its effectiveness, and discuss the conceptual design of the distributed system for use in future research.
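A hedged sketch of the general idea behind knot-based classification (not the thesis's KFC implementation): build truncated-power spline features from a flow statistic at fixed knots and fit a generalized linear model on top. Feature names, knot positions and the synthetic labels are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def spline_features(x: np.ndarray, knots: np.ndarray) -> np.ndarray:
        """Truncated-power linear spline basis: [x, (x-k1)+, (x-k2)+, ...]."""
        return np.column_stack([x] + [np.maximum(x - k, 0.0) for k in knots])

    rng = np.random.default_rng(1)
    rate = rng.uniform(0, 100, size=1000)                          # e.g. packets/s of a flow
    attack = (rate > 70).astype(int) ^ (rng.random(1000) < 0.05)   # noisy ground truth

    knots = np.array([25.0, 50.0, 75.0])
    X = spline_features(rate, knots)
    clf = LogisticRegression().fit(X, attack)                      # GLM on spline features

    print("training accuracy:", clf.score(X, attack))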
APA, Harvard, Vancouver, ISO, and other styles
18

Mupparaju, Naveen. "Performance Evaluation and Comparison of Distributed Messaging Using Message Oriented Middleware." UNF Digital Commons, 2013. http://digitalcommons.unf.edu/etd/456.

Full text
Abstract:
Message Oriented Middleware (MOM) is an enabling technology for modern event-driven applications that are typically based on publish/subscribe communication [Eugster03]. Enterprises typically contain hundreds of applications operating in environments with diverse databases and operating systems. Integration of these applications is required to coordinate the business process. Unfortunately, this is no easy task. Enterprise Integration, according to Brosey et al. (2001), "aims to connect and combines people, processes, systems, and technologies to ensure that the right people and the right processes have the right information and the right resources at the right time" [Brosey01]. Communication between different applications can be achieved by using synchronous and asynchronous communication tools. In synchronous communication, both parties involved must be online (for example, a telephone call), whereas in asynchronous communication, only one member needs to be online (email). Middleware is software that helps two applications communicate with one another. Remote Procedure Calls (RPC) and Object Request Brokers (ORB) are two types of synchronous middleware: when they send a request, they must wait for an immediate reply. This can decrease an application’s performance when there is no need for synchronous communication. Even though asynchronous distributed messaging using message oriented middleware is widely used in industry, not enough work has been done in evaluating the performance of various open source message oriented middleware. The objective of this work was to benchmark and evaluate the performance of three different open source MOMs in the publish/subscribe and point-to-point domains, together with a functional comparison and a qualitative study from the developer's perspective.
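The two messaging domains compared in the thesis can be illustrated with a minimal in-memory sketch (no real MOM involved, all names illustrative): in point-to-point, each message is consumed by exactly one receiver from a queue; in publish/subscribe, every subscriber of a topic gets its own copy.

    from collections import defaultdict
    from queue import Queue

    # Point-to-point: one queue, each message consumed by exactly one receiver.
    orders = Queue()
    orders.put("order-1")
    orders.put("order-2")
    print("worker A got", orders.get())   # order-1
    print("worker B got", orders.get())   # order-2

    # Publish/subscribe: every subscriber of the topic receives its own copy.
    subscribers = defaultdict(list)

    def subscribe(topic: str, name: str) -> Queue:
        q = Queue()
        subscribers[topic].append((name, q))
        return q

    def publish(topic: str, message: str) -> None:
        for _, q in subscribers[topic]:
            q.put(message)

    audit = subscribe("orders", "audit")
    billing = subscribe("orders", "billing")
    publish("orders", "order-3")
    print("audit got", audit.get(), "| billing got", billing.get())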
APA, Harvard, Vancouver, ISO, and other styles
19

Mahanga, Mwaka. "Unknown Exception Handling Tool Using Humans as Agents." UNF Digital Commons, 2015. http://digitalcommons.unf.edu/etd/563.

Full text
Abstract:
In a typical workflow process, exceptions are the norm. Exceptions are defined as deviations from the normal sequence of activities and events. Exceptions can be divided into two broad categories: known exceptions (i.e., expected and predefined deviations) and unknown exceptions (i.e., unexpected and undefined deviations). Business Process Execution Language (BPEL) has become the de facto standard for executing business workflows with the use of web services. BPEL includes exception handling methods that are sufficient for known exception scenarios. Depending on the exception and the specifics of the exception handling tools, processes may either halt or move to completion. Instances of processes that are halted or left incomplete due to unhandled exceptions affect the performance of the workflow process, as they increase resource utilization and process completion time. However, designing efficient process handlers to avoid the issue of unhandled exceptions is not a simple task. This thesis provides a tool that handles unknown exceptions by involving human activities in exception handling, using the BPEL4People specification. BPEL4People, an extension of BPEL, offers the ability to specify human activities within BPEL processes. The approach considered in this thesis involves humans in exception handling by providing an alternate sub-process within a given business process. A prototype application has been developed implementing the tool that handles unknown exceptions. The prototype application monitors the progress of an automated workflow process and permits human involvement to reroute the course of a workflow process when an unknown exception occurs. The utility of the prototype and the tool is demonstrated using the Scenario Walkthrough and Inspection Methods (SWIMs): we demonstrate the utility of the tool through loan application process scenarios, offer a walkthrough of the system using example instances with known and unknown exceptions, and provide a claims analysis of process instance results.
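A minimal sketch of the fallback idea described above, not the BPEL4People prototype itself: a workflow step that raises an exception the engine does not recognize is rerouted to a human task instead of halting the process. All names here are illustrative.

    KNOWN_EXCEPTIONS = (ValueError,)   # deviations with predefined handlers

    def human_task(activity: str, error: Exception) -> str:
        """Stand-in for a people activity that resolves the deviation manually."""
        print(f"[human agent] resolving '{activity}' after: {error!r}")
        return "resolved-by-human"

    def run_activity(activity, name: str) -> str:
        try:
            return activity()
        except KNOWN_EXCEPTIONS as e:
            return f"known exception handled automatically: {e}"
        except Exception as e:          # unknown exception: reroute instead of halting
            return human_task(name, e)

    def check_credit():
        raise KeyError("applicant record missing")   # an undefined deviation

    print(run_activity(check_credit, "check_credit"))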
APA, Harvard, Vancouver, ISO, and other styles
20

Al-Sammarraie, Mareh Fakhir. "An Empirical Investigation of Collaborative Web Search Tool on Novice's Query Behavior." UNF Digital Commons, 2017. http://digitalcommons.unf.edu/etd/764.

Full text
Abstract:
In the past decade, research efforts dedicated to studying the process of collaborative web search have been on the rise. Yet, a limited number of studies have examined the impact of collaborative information search processes on novices’ query behaviors. Studying and analyzing factors that influence web search behaviors, specifically users’ patterns of queries when using collaborative search systems, can help with making query suggestions for group users. Improvements in user query behaviors and system query suggestions help in reducing search time and increasing query success rates for novices. This thesis investigates the influence of collaboration between experts and novices, as well as the use of a collaborative web search tool, on novices’ query behavior. We used SearchTeam as our collaborative search tool. This empirical study involves four collaborative team conditions: SearchTeam and expert-novice team, SearchTeam and novice-novice team, traditional and expert-novice team, and traditional and novice-novice team. We analyzed participants’ query behavior in two dimensions: quantitatively (e.g. the query success rate), and qualitatively (e.g. the query reformulation patterns). The findings of this study reveal that the successful query rate is higher in expert-novice collaborative teams who used the collaborative search tools. Participants in expert-novice collaborative teams who used the collaborative search tools required less time to finalize all tasks compared to expert-novice collaborative teams who used the traditional search tools. Self-issued queries and chat logs were the major sources of terms used by novice participants in expert-novice collaborative teams who used the collaborative search tools. Novices in expert-novice pairs who used the collaborative search tools employed New and Specialization more often as query reformulation patterns. The results of this study contribute to the literature by providing a detailed investigation regarding the influence of utilizing a collaborative search tool (SearchTeam) in the context of software troubleshooting and development. This study highlights the possible collaborative information seeking (CIS) activities that may occur among software development interns and their mentors. Furthermore, our study reveals that there are specific features, such as awareness and built-in instant messaging (IM), offered by SearchTeam that can promote CIS activities among participants and help increase novices’ query success rates. Finally, we believe the use of CIS tools, designed to support collaborative search actions in big software development companies, has the potential to improve novices’ overall query behavior and search strategies.
APA, Harvard, Vancouver, ISO, and other styles
21

Vijayakumar, Sruthi. "Hadoop Based Data Intensive Computation on IAAS Cloud Platforms." UNF Digital Commons, 2015. http://digitalcommons.unf.edu/etd/567.

Full text
Abstract:
Cloud computing is a relatively new form of computing which uses virtualized resources. It is dynamically scalable and is often provided as a pay-per-use service over the Internet or an Intranet or both. With increasing demand for data storage in the cloud, the study of data-intensive applications is becoming a primary focus. Data-intensive applications are those which involve high CPU usage and process large volumes of data, typically hundreds of gigabytes, terabytes or petabytes in size. The research in this thesis is focused on Amazon's Elastic Compute Cloud (EC2) and Amazon Elastic MapReduce (EMR) using the HiBench Hadoop benchmark suite, which is used for performing and evaluating Hadoop-based data-intensive computation on both these cloud platforms. Both quantitative and qualitative comparisons of Amazon EC2 and Amazon EMR are presented. Also presented are their pricing models and suggestions for future research.
APA, Harvard, Vancouver, ISO, and other styles
22

Bhogal, Varun. "Analysis of BFSA Based Anti-Collision Protocol in LF, HF, and UHF RFID Environments." UNF Digital Commons, 2014. http://digitalcommons.unf.edu/etd/511.

Full text
Abstract:
Over the years, RFID (radio frequency identification) technology has gained popularity in a number of applications. The decreased cost of hardware components along with the recognition and implementation of international RFID standards have led to the rise of this technology. One of the major factors associated with the implementation of RFID infrastructure is the cost of tags. Low frequency (LF) RFID tags are widely used because they are the least expensive. The drawbacks of LF RFID tags include low data rate and low range. Most studies that have been carried out focus on one frequency band only. This thesis presents an analysis of RFID tags across low frequency (LF), high frequency (HF), and ultra-high frequency (UHF) environments. The analysis was carried out using a simulation model created with OPNET Modeler 17. The simulation model is based on the Basic Frame Slotted ALOHA (BFSA) protocol for non-unique tags. As this is a theoretical study, environmental disturbances have been assumed to be null. The total census delay and the network throughput have been measured for tag populations ranging from 0 to 1500 in each environment. A statistical analysis has been conducted in order to compare the results obtained for the three different environments.
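A compact sketch of the Basic Frame Slotted ALOHA round structure the model is based on (an illustrative simulation, not the OPNET model; frame size and seed are assumptions): in each frame, every unidentified tag picks a slot at random, slots holding exactly one tag yield a successful read, and the process repeats until all tags are identified.

    import random
    from collections import Counter

    def bfsa_census(num_tags: int, frame_size: int = 256, seed: int = 0) -> tuple[int, int]:
        """Return (frames_used, total_slots_used) to identify all tags with BFSA."""
        rng = random.Random(seed)
        unidentified = num_tags
        frames = 0
        while unidentified > 0:
            frames += 1
            # Each remaining tag replies in a uniformly random slot of the frame.
            slots = Counter(rng.randrange(frame_size) for _ in range(unidentified))
            singles = sum(1 for count in slots.values() if count == 1)  # successful reads
            unidentified -= singles
        return frames, frames * frame_size

    if __name__ == "__main__":
        # A fixed frame that is small relative to the tag population converges slowly.
        for n in (100, 500, 1000, 1500):
            frames, slots = bfsa_census(n)
            print(f"{n:>5} tags: {frames:>4} frames, {slots:>7} slots")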
APA, Harvard, Vancouver, ISO, and other styles
23

Franke, Jörn. "Coordination des activités réparties dans des situations dynamiques : le cas de la gestion de crise inter-organisationnel." Phd thesis, Université Henri Poincaré - Nancy I, 2011. http://tel.archives-ouvertes.fr/tel-00642820.

Full text
Abstract:
Numerous disasters of various scales regularly strike populations around the world. Striking examples include Hurricane Katrina in 2005, the earthquake in Haiti in 2010 and, more recently, the tsunami in Japan and the Fukushima disaster that followed it. During these disasters, several hundred organisations, such as the police, fire services and humanitarian aid organisations, intervene to save people and to help life return to normal. These organisations need to coordinate with one another in order to cope with a dynamic situation, limited resources and a partial view of the situation. The evolution of the situation often leads to changes of objective and plan. One typical problem is obtaining an overview of the relations between what has been done, what is currently happening and what the next steps are. This problem is particularly difficult at the inter-organisational level: each organisation coordinates the response from its own perspective and relies on the information provided by other organisations. Our objective in this thesis is to study how an information system can support the coordination of activities carried out by people from different organisations in a dynamic situation. The basic idea is to take advantage of a process-based approach, in which activities and their relations are made explicit. We present a framework for coordinating activities in dynamic situations. It allows ad hoc modelling of the relations between what has been done, what is currently happening and what the next steps are. Deviations from the model, and how the activities were actually carried out, are displayed to the user to highlight the impact of changing objectives. We extend this framework to the inter-organisational level. Some activities can be shared between different organisations. Not everything is shared with everyone, in order to account for privacy, regulation, strategic or other reasons. Shared activities are replicated in the workspaces of these organisations. We describe how divergent views of the activities and their relations can be detected and handled in order to eventually return to a convergent view. The concepts are implemented as an extension of an open distributed collaboration service. They were evaluated by experienced disaster managers. Furthermore, we designed an experiment aimed at evaluating the use of tools for addressing these questions, and we carried out several runs to validate this experiment. Further experiments could provide a more complete validation of the model proposed in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
24

Birchell, Shannon Lloyd. "Trapping ACO applied to MRI of the Heart." UNF Digital Commons, 2019. https://digitalcommons.unf.edu/etd/862.

Full text
Abstract:
The research presented here supports the ongoing need for automatic heart volume calculation through the identification of the left and right ventricles in MRI images. The need for automated heart volume calculation stems from the amount of time it takes to manually process MRI images and the esoteric skill set required. There are several methods for region detection, such as Deep Neural Networks, Support Vector Machines and Ant Colony Optimization. In this research, Ant Colony Optimization (ACO) is the method of choice due to its efficiency and flexibility. There are many types of ACO algorithms, using a variety of heuristics that provide advantages in different environments and knowledge domains. All ACO algorithms share a foundational attribute: a heuristic that acts in conjunction with pheromones. These heuristics can work in various ways, such as dictating dispersion or the interpretation of pheromones. In this research a novel heuristic to disperse and act on pheromone is presented. Further, ants are applied to a more general problem than the usual objective of finding edges, namely highly qualified region detection. The reliable application of heuristic-oriented algorithms is difficult in a diverse environment. Although the problem space here is limited to MRI images of the heart, there are significant differences among them: the topology of the heart differs by patient, the angle of the scans changes, and the location of the heart is not known. A thorough experiment is conducted to support algorithm efficacy using randomized sampling with human subjects. The analysis shows that the algorithm has both predictive power and robustness.
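For readers unfamiliar with the pheromone mechanics that all ACO variants share, here is a generic single-iteration sketch (illustrative only, not the trapping heuristic introduced in the thesis): ants choose pixels with probability proportional to pheromone times a heuristic attractiveness, deposit pheromone on what they visited, and the trail then evaporates.

    import numpy as np

    rng = np.random.default_rng(0)
    intensity = rng.random((32, 32))          # stand-in for MRI pixel intensities
    pheromone = np.ones_like(intensity)       # uniform initial trail
    alpha, beta, rho, n_ants = 1.0, 2.0, 0.1, 50

    def aco_iteration(pheromone: np.ndarray) -> np.ndarray:
        # Attractiveness combines pheromone and the heuristic (here: raw intensity).
        weights = (pheromone ** alpha) * (intensity ** beta)
        probs = weights.ravel() / weights.sum()
        visited = rng.choice(probs.size, size=n_ants, p=probs)   # each ant picks a pixel
        deposit = np.zeros_like(pheromone)
        np.add.at(deposit.ravel(), visited, 1.0)                 # reinforce visited pixels
        return (1 - rho) * pheromone + deposit                   # evaporation + deposit

    for _ in range(20):
        pheromone = aco_iteration(pheromone)

    print("most reinforced pixel:", np.unravel_index(pheromone.argmax(), pheromone.shape))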
APA, Harvard, Vancouver, ISO, and other styles
25

Sivasubramaniam, Ravishankar. "Performance Evaluation of LINQ to HPC and Hadoop for Big Data." UNF Digital Commons, 2013. http://digitalcommons.unf.edu/etd/463.

Full text
Abstract:
There is currently considerable enthusiasm around the MapReduce paradigm and the distributed computing paradigm for analysis of large volumes of data. Apache Hadoop is the most popular open source implementation of the MapReduce model, and LINQ to HPC is Microsoft's alternative to open source Hadoop. In this thesis, the performance of LINQ to HPC and Hadoop is compared using different benchmarks. To this end, we identified four benchmarks (Grep, Word Count, Read and Write) that we ran on LINQ to HPC as well as on Hadoop. For each benchmark, we measured each system’s performance metrics (execution time, average CPU utilization and average memory utilization) for various degrees of parallelism on clusters of different sizes. Results revealed some interesting trade-offs. For example, LINQ to HPC performed better on three out of the four benchmarks (Grep, Read and Write), whereas Hadoop performed better on the Word Count benchmark. While more extensive research has focused on Hadoop, there are not many references to similar research on the LINQ to HPC platform, which was still evolving during the writing of this thesis.
APA, Harvard, Vancouver, ISO, and other styles
26

Beckhusen, Benedict. "Mobile Apps and the ultimate addiction to the Smartphone : A comprehensive study on the consequences of society’s mobile needs." Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Informatik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-31159.

Full text
Abstract:
The smartphone is omnipresent and is cherished and held close by people. It allows for constant connection within a digitally connected society, as well as for many other purposes such as leisure activity or information. Within Information Systems research, deeper investigation is required into what impact this taken-for-granted mobile access to information and mobile apps has for individuals and society, and whether a "technological addiction" can develop when the smartphone is used for everything throughout the day on such a constant basis. The aim of this study was to understand the role of the smartphone in society and to shed light on the unclear relationship between constant smartphone use and its development towards an addictive quality. To reach a conclusion, in-depth interviews were conducted with participants about their relationship to the smartphone and their smartphone use, based on questions derived from the literature on mobile communication technologies and the types of digital addiction that exist. The results show that the smartphone is a device that seamlessly integrates into our daily lives in that we unconsciously use it as a tool to make our daily tasks more manageable and enjoyable. It also supports us in getting better organized, in staying in constant touch with family and friends remotely, and in being more mobile, which is a useful ability in today’s mobility-driven society. Smartphones were found to have a relatively low potential for addiction. Traits of voluntary, habitual and mandatory smartphone use were found; none of these behaviours is considered a true addiction. In the end, it seems that the increase in smartphone use is mainly due to the way we communicate digitally nowadays, and the shift in how we relate to our social peers using digital means.
APA, Harvard, Vancouver, ISO, and other styles
27

Chawla, Lovelesh. "Use of IBM Collaborative Lifecycle Management Solution to Demonstrate Traceability for Small, Real-World Software Development Project." UNF Digital Commons, 2015. http://digitalcommons.unf.edu/etd/606.

Full text
Abstract:
The Standish Group Study of 1994 showed that 53 percent of software projects failed outright and another 31 percent were challenged by extreme budget and/or time overrun. Since then different responses to the high rate of software project failures have been proposed. SEI’s CMMI, the ISO’s 9001:2000 for software development, and the IEEE’s JSTD-016 are some examples of such responses. Traceability is the one common feature that these software development standards impose. Over the last decade, software and system engineering communities have been researching subjects such as developing more sophisticated tooling, applying information retrieval techniques capable of semi-automating the trace creation and maintenance process, developing new trace query languages and visualization techniques that use trace links, applying traceability in specific domains such as Model Driven Development, product line systems and agile project environment. These efforts have not been in vain. The 2012 CHAOS results show an increase in project success rate of 39% (delivered on time, on budget, with required features and functions), and a decrease of 18% in the number of failures (cancelled prior to completion or delivered and never used). Since research has shown traceability can improve a project’s success rate, the main purpose of this thesis is to demonstrate traceability for a small, real-world software development project using IBM Collaborative Lifecycle Management. The objective of this research was fulfilled since the case study of traceability was described in detail as applied to the design and development of the Value Adjustment Board Project (VAB) of City of Jacksonville using the scrum development approach within the IBM Rational Collaborative Lifecycle Management Solution. The results may benefit researchers and practitioners who are looking for evidence to use the IBM CLM solution to trace artifacts in a small project.
APA, Harvard, Vancouver, ISO, and other styles
28

Soni, Neha. "An Empirical Performance Analysis Of IaaS Clouds With CloudStone Web 2.0 Benchmarking Tool." UNF Digital Commons, 2015. http://digitalcommons.unf.edu/etd/583.

Full text
Abstract:
Web 2.0 applications have become ubiquitous over the past few years because they provide useful features such as a rich, responsive graphical user interface that supports interactive and dynamic content. Social networking websites, blogs, auctions, online banking, online shopping and video sharing websites are noteworthy examples of Web 2.0 applications. The market for public cloud service providers is growing rapidly, and cloud providers offer an ever-growing list of services. As a result, developers and researchers find it challenging to decide which public cloud service to use for deploying, experimenting with or testing Web 2.0 applications. This study compares the scalability and performance of a social-events calendar application on two Infrastructure as a Service (IaaS) cloud services, Amazon EC2 and HP Cloud. The study captures and compares metrics on three different instance configurations for each cloud service: the number of concurrent users (load), as well as response time and throughput (performance). Additionally, the total price of the three different instance configurations for each cloud service is calculated and compared. This comparison of the scalability, performance and price metrics gives developers and researchers insight into the scalability and performance characteristics of the three instance configurations for each cloud service, which simplifies the process of determining which cloud service and instance configuration to use for deploying their Web 2.0 applications. The study uses CloudStone, an open-source, three-tier web application benchmarking tool that simulates Web 2.0 application activities, as a realistic workload generator and to capture the intended metrics. The comparison of the collected metrics indicates that all of the tested Amazon EC2 instance configurations provide better scalability and lower latency at a lower cost than the respective HP Cloud instance configurations; however, the tested HP Cloud instance configurations provide a greater storage capacity than the Amazon EC2 instance configurations, which is an important consideration for data-intensive Web 2.0 applications.
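The load and performance metrics named above (concurrent users, response time, throughput) come down to simple aggregations over request logs. The sketch below, with made-up latency samples, shows one plausible way to compute them; it is not CloudStone's own reporting code.

```python
import statistics

def summarize(samples, window_seconds):
    """samples: per-request latencies in ms collected over one measurement window."""
    samples = sorted(samples)
    p95_index = max(0, int(round(0.95 * len(samples))) - 1)
    return {
        "requests": len(samples),
        "throughput_rps": len(samples) / window_seconds,
        "mean_latency_ms": statistics.mean(samples),
        "p95_latency_ms": samples[p95_index],
    }

# Hypothetical 60-second window of response times from one instance configuration.
latencies_ms = [120, 95, 210, 180, 99, 300, 150, 170, 130, 110]
print(summarize(latencies_ms, window_seconds=60))
```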
APA, Harvard, Vancouver, ISO, and other styles
29

Wouters, Erik Henricus. "Secure Intermittent Computing." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254588.

Full text
Abstract:
Low-power embedded systems exist in many forms. Using batteries for these devices generally imposes high maintenance costs and there are many applications where a grid connection is not feasible [1]. A solution for powering this class of embedded systems is energy harvesting. This can mean the use of energy from ambient RF-signals to power the device, or another form of ambient energy [2]. These batteryless devices are generally unable to harvest enough power for continuous operation and therefore employ some sort of checkpointing mechanism to copy (parts of) the main memory to non-volatile storage. State-of-the-art checkpointing mechanisms employ no security [3–19], or employ encryption to protect the checkpoints [20, 21]. In this thesis, the use of TrustZone to secure the checkpoints is compared to the use of the Advanced Encryption Standard (AES). A model was developed to analyze the energy overhead of different security mechanisms based on a large number of experiments. The results show that securing checkpoints with software-based AES-128 encryption has a 2.5 times higher energy overhead than securing these using TrustZone. The level of security for these mechanisms was also evaluated. It is shown that TrustZone security is indeed able to protect the checkpoints while they are stored in non-volatile storage, while the software-based AES implementation was not secure against known attacks from previous research [22–24].
Low-power embedded systems exist in many forms. Using batteries for these devices generally entails high maintenance costs, and there are many applications where a grid connection is not feasible [1]. Energy harvesting is one solution for powering this class of embedded systems; it can mean using energy from ambient RF signals to power the device, or using another form of ambient energy [2]. These batteryless devices are unable to harvest enough energy for continuous operation, and therefore use some form of checkpointing mechanism to copy (parts of) the main memory to non-volatile storage. State-of-the-art checkpointing mechanisms either use no security to protect the checkpoints [3–19] or use encryption to protect them [20, 21]. This thesis compares the use of TrustZone with the use of the Advanced Encryption Standard (AES) for securing the checkpoints. A model was developed to analyse the energy overhead of different security mechanisms, based on a large number of experiments. The results show that protecting checkpoints with software-based AES-128 incurs an energy overhead 2.5 times higher than protecting them with TrustZone. The security level of these mechanisms was also evaluated: TrustZone protection can indeed protect the checkpoints while they are stored in non-volatile storage, whereas the software-based implementation was not secure against known attacks from previous research.
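As a rough illustration of the software-encryption side of that comparison (a host-side Python sketch, not the embedded AES-128 implementation measured in the thesis), a checkpoint buffer can be encrypted and authenticated before it is written to non-volatile storage. The key handling and the rollback binding via a boot counter shown here are simplifying assumptions.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect_checkpoint(key: bytes, checkpoint: bytes, boot_counter: int) -> bytes:
    """Encrypt-and-authenticate a RAM snapshot before storing it in NVM.
    The boot counter is bound as associated data to help detect rollback."""
    nonce = os.urandom(12)                        # 96-bit nonce, unique per checkpoint
    aad = boot_counter.to_bytes(8, "little")
    return nonce + AESGCM(key).encrypt(nonce, checkpoint, aad)

def restore_checkpoint(key: bytes, blob: bytes, boot_counter: int) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    aad = boot_counter.to_bytes(8, "little")
    return AESGCM(key).decrypt(nonce, ciphertext, aad)   # raises if tampered with

key = AESGCM.generate_key(bit_length=128)
blob = protect_checkpoint(key, b"\x00" * 1024, boot_counter=7)
assert restore_checkpoint(key, blob, boot_counter=7) == b"\x00" * 1024
```

On a real energy-harvesting MCU, the key would have to live in protected storage and the nonce would need an entropy source that survives power loss, which is part of why the thesis evaluates hardware-assisted alternatives such as TrustZone.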
APA, Harvard, Vancouver, ISO, and other styles
30

Craveiro, António Manuel Balazeiro Cascão. "O hipercorpo-tecnologias da carne : do culturista ao cyborg." Master's thesis, Instituições portuguesas -- UP-Universidade do Porto -- Faculdade de Ciências do Desporto e de Educação Física, 2000. http://dited.bn.pt:80/29212.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Lu, Fangjie, and Israr Khan. "Sharing Elderly Healthcare information on Cloud Computing." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4404.

Full text
Abstract:
Context: Due to the rapid increase in the population of elderly people, sharing healthcare information has become an essential requirement for the development of e-health systems. By conducting research in e-health and cloud computing, we have explored the advantages and disadvantages of sharing healthcare information for elderly people through cloud computing. Objectives: The main purpose of this research is to investigate the suitability of cloud computing for sharing healthcare information. The study is conducted by exploring the advantages and disadvantages of cloud computing for e-health systems. Investigating a suitable cloud computing platform is also one of the objectives of this research. Methods: In order to identify and gain a deeper understanding of these issues, we performed a literature review of e-health, EHI and cloud computing technologies, and we developed a prototype application as an experiment. Results: Based on the findings of the literature review, we learned that e-health is a large field that requires substantial infrastructure to establish, and that the healthcare information in e-health must be quick and easy to share. In the EHI research, we defined EHI and identified reasons for sharing elderly healthcare information. In the cloud computing research, we described the concept of cloud computing and identified the advantages and disadvantages of implementing e-health in cloud computing. Building on the literature review, we developed a sharing application used to share EHI in cloud computing. In the experiment, we proved our supposition and discussed the advantages and disadvantages of sharing EHI in cloud computing using Google App Engine (GAE). Conclusions: We conclude that cloud computing meets the requirements of sharing EHI, but it also has some limitations due to its architecture and network conditions. In this research we have also identified further research areas that can help in enhancing security and privacy in cloud environments.
APA, Harvard, Vancouver, ISO, and other styles
32

Vallejo, Benítez Cano Jaime. "D-Wave Systems Quantum Computing : State-of-the-Art and Performance Comparison with Classical Computing." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302029.

Full text
Abstract:
The aim of this project is to study the state of the art in quantum computing and to compare it with classical computing methods. The research focuses on D-Wave Systems' approach to quantum computing, exploring its architectures, Chimera and Pegasus, its tools, and its quantum annealing process. In order to study the weaknesses and strengths of each method, the problem analysed is the well-known Travelling Salesman Problem (TSP). The performance and capability comparison involved four different set-ups to solve the problem: D-Wave's Exact solver (using the CPU exclusively); a pure QPU implementation using D-Wave's Dimod solver; a GPU-parallelised classical brute-force algorithm using the NVIDIA CUDA API; and D-Wave's Hybrid solver (which combines the QPU with classical techniques). The results of this project reveal that the pure QPU implementation is faster than all the others, but it currently only works for small examples because its scalability is limited by the QPU architectures. The QPU Hybrid solver appears to be the fastest and most scalable solution. Both the CPU and GPU approaches are fast for small problems, but they hit a hard limit when scaling because of the O(N!) complexity of the brute-force algorithm. The comparison between the Pegasus and Chimera architectures reveals that Pegasus performs much better, owing to the more complex topology and connectivity between its qubits.
The goal of this project is to investigate the new field of quantum computing and compare it with classical computing methods. The work focused primarily on exploring the Chimera and Pegasus architectures developed by D-Wave Systems, along with their computing tools and their quantum annealing process. To identify the strengths and weaknesses of the methods examined, the well-studied Travelling Salesman Problem (TSP) was chosen as the test problem. Performance and capabilities were compared across four cases: D-Wave's Exact solver (uses only the CPU); a pure QPU implementation using D-Wave's Dimod solver; classical brute-force search (parallelised for the GPU with the NVIDIA CUDA API); and D-Wave's Hybrid solver (combines the QPU with classical solution techniques). The results of the project show that the pure QPU implementation is faster than the others, but it is currently only runnable on small problem instances, as its scalability is limited by the available QPU architectures. The Hybrid solver appears to be the fastest and most scalable solution overall. Both the CPU and the GPU were fast for small problem instances, but because the time complexity of a brute-force search for TSP is O(N!), there is a problem size beyond which neither can go. The comparison between Pegasus and Chimera showed that Pegasus performs the better of the two, thanks to the greater complexity of its topology and of the connections between its qubits.
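The O(N!) wall that the classical brute-force set-ups run into is easy to see in code. The minimal exhaustive TSP solver below (plain Python, not the CUDA kernel used in the project) enumerates every tour from a fixed start city; the distance matrix is an arbitrary example.

```python
from itertools import permutations

def tsp_bruteforce(dist):
    """dist: square matrix of pairwise distances. Returns (best_cost, best_tour).
    Enumerates (N-1)! tours, so it is only feasible for roughly N <= 12."""
    n = len(dist)
    cities = range(1, n)                       # fix city 0 as the start
    best_cost, best_tour = float("inf"), None
    for perm in permutations(cities):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(tsp_bruteforce(dist))   # (21, (0, 2, 3, 1, 0)) for this asymmetric example
```

Doubling N from 10 to 20 multiplies the number of tours by more than 10^11, which is why neither a faster CPU nor GPU parallelisation moves the practical limit very far.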
APA, Harvard, Vancouver, ISO, and other styles
33

Paladi, Nicolae. "Trusted Computing and Secure Virtualization in Cloud Computing." Thesis, Security Lab, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:ri:diva-24035.

Full text
Abstract:
Large-scale deployment and use of cloud computing in industry is accompanied, and at the same time hampered, by concerns regarding the protection of data handled by cloud computing providers. One of the consequences of moving data processing and storage off company premises is that organizations have less control over their infrastructure. As a result, cloud service (CS) clients must trust that the CS provider is able to protect their data and infrastructure from both external and internal attacks. Currently, however, such trust can only rely on organizational processes declared by the CS provider and cannot be remotely verified and validated by an external party. Enabling the CS client to verify the integrity of the host where the virtual machine instance will run, as well as to ensure that the virtual machine image has not been tampered with, are some steps towards building trust in the CS provider. Having the tools to perform such verifications prior to the launch of the VM instance allows CS clients to decide at runtime whether certain data should be stored, or calculations should be made, on the VM instance offered by the CS provider. This thesis combines three components -- trusted computing, virtualization technology and cloud computing platforms -- to address issues of trust and security in public cloud computing environments. Of the three components, virtualization technology has had the longest evolution and is a cornerstone for the realization of cloud computing. Trusted computing is a recent industry initiative that aims to implement the root of trust in a hardware component, the trusted platform module. The initiative has been formalized in a set of specifications and is currently at version 1.2. Cloud computing platforms pool virtualized computing, storage and network resources in order to serve a large number of customers, using a multi-tenant multiplexing model to offer on-demand self-service over a broad network. Open-source cloud computing platforms are, similar to trusted computing, a fairly recent technology in active development. The issue of trust in public cloud environments is addressed by examining the state of the art within cloud computing security and subsequently addressing the issues of establishing trust in the launch of a generic virtual machine in a public cloud environment. As a result, the thesis proposes a trusted launch protocol that allows CS clients to verify and ensure the integrity of the VM instance at launch time, as well as the integrity of the host where the VM instance is launched. The protocol relies on the use of the Trusted Platform Module (TPM) for key generation and data protection. The TPM also plays an essential part in the integrity attestation of the VM instance host. Along with a theoretical, platform-agnostic protocol, the thesis also describes a detailed implementation design of the protocol using the OpenStack cloud computing platform. In order to verify the implementability of the proposed protocol, a prototype implementation has been built using a distributed deployment of OpenStack. While the protocol covers only the trusted launch procedure using generic virtual machine images, it presents a step aimed at contributing towards the creation of a secure and trusted public cloud computing environment.
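The essence of the trusted launch decision is that the client proceeds only if both the host's attested measurements and the VM image digest match known-good values. The sketch below is a deliberately simplified, TPM-free illustration of that policy check, not the protocol specified in the thesis (which relies on TPM-backed attestation, key generation and an attestation service); the whitelisted values are invented.

```python
import hashlib
import hmac

TRUSTED_HOST_MEASUREMENTS = {"9f2c..."}   # hypothetical whitelist of platform digests
TRUSTED_IMAGE_DIGESTS = {hashlib.sha256(b"approved-vm-image").hexdigest()}

def approve_launch(host_measurement: str, vm_image: bytes) -> bool:
    """Client-side policy check before trusting the VM instance with data."""
    image_digest = hashlib.sha256(vm_image).hexdigest()
    host_ok = any(hmac.compare_digest(host_measurement, m)
                  for m in TRUSTED_HOST_MEASUREMENTS)
    image_ok = image_digest in TRUSTED_IMAGE_DIGESTS
    return host_ok and image_ok

print(approve_launch("9f2c...", b"approved-vm-image"))   # True
print(approve_launch("deadbeef", b"approved-vm-image"))  # False -> untrusted host
```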
TESPEV
CNS
APA, Harvard, Vancouver, ISO, and other styles
34

Renbi, Abdelghani. "Power and Energy Efficiency Evaluation for HW and SW Implementation of nxn Matrix Multiplication on Altera FPGAs." Thesis, Jönköping University, JTH, Computer and Electrical Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-10545.

Full text
Abstract:

In addition to performance, low-power design has become an important issue in the design process of mobile embedded systems. Mobile electronics with rich features most often involve complex computation and intensive processing, which result in short battery lifetime, particularly when low-power design is not taken into consideration. In addition to mobile computers, thermal design also calls for low-power techniques to avoid component overheating, especially with VLSI technology. Low-power design has thus opened a new era. In this thesis we examined several techniques for achieving low-power design for FPGAs, ASICs and processors, where ASICs offered the most flexibility for exploiting hardware-oriented low-power techniques. We surveyed several power estimation methodologies, each of which suffers from at least one disadvantage. We also compared and analysed the power and energy consumption of three different designs that perform matrix multiplication on an Altera platform using a state-of-the-art FPGA device. We concluded that NIOS II/e is not an energy-efficient alternative for multiplying nxn matrices compared to hardware matrix multipliers on FPGAs, and that configware has enormous potential to reduce energy consumption costs.
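The workload behind the comparison is the plain O(n^3) nxn matrix multiplication, and an energy figure is simply average power integrated over execution time. The sketch below shows both; the power and runtime numbers are invented for illustration and are not the thesis measurements.

```python
def matmul(a, b):
    """Naive n x n matrix multiplication, the kernel evaluated in the thesis."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def energy_joules(avg_power_watts, runtime_seconds):
    # Energy = average power * execution time
    return avg_power_watts * runtime_seconds

# Hypothetical numbers for one 32x32 multiplication, just to show the comparison:
hw_multiplier = energy_joules(avg_power_watts=0.20, runtime_seconds=0.0005)
soft_core_sw = energy_joules(avg_power_watts=0.15, runtime_seconds=0.0400)
print(f"HW multiplier: {hw_multiplier:.6f} J, soft-core SW: {soft_core_sw:.6f} J")
```

The point the comparison makes is that a lower instantaneous power draw does not imply lower energy if the run takes orders of magnitude longer, which is how a soft-core software implementation can lose to a dedicated hardware multiplier.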

APA, Harvard, Vancouver, ISO, and other styles
35

Bui, Thai Le Quy. "Using Spammers' Computing Resources for Volunteer Computing." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1629.

Full text
Abstract:
Spammers are continually looking to circumvent counter-measures that seek to slow them down. An immense amount of time and money is currently devoted to hiding spam, but not enough is devoted to effectively preventing it. One approach for preventing spam is to force the spammer's machine to solve a computational problem of varying difficulty before granting access. The idea is that suspicious or problematic requests are given difficult problems to solve while legitimate requests are allowed through with minimal computation. Unfortunately, most systems that employ this model waste the computing resources being used, as they are directed towards solving cryptographic problems that provide no societal benefit. While systems such as reCAPTCHA and FoldIt have allowed users to contribute solutions to useful problems interactively, an analogous solution for non-interactive proof-of-work does not exist. Towards this end, this paper describes MetaCAPTCHA and reBOINC, an infrastructure for supporting useful proof-of-work that is integrated into a web spam throttling service. The infrastructure dynamically issues CAPTCHAs and proof-of-work puzzles while ensuring that malicious users solve challenging puzzles. Additionally, it provides a framework that enables the computational resources of spammers to be redirected towards meaningful research. To validate the efficacy of our approach, prototype implementations based on OpenCV and BOINC are described that demonstrate the ability to harvest spammers' resources for beneficial purposes.
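The system's own puzzles are useful computations (OpenCV and BOINC work), but the difficulty-scaling idea can be shown with a generic hash-based proof-of-work: the request's spam score determines how many leading zero bits the client must find, so suspicious senders pay more computation. The scoring policy below is an invented placeholder, not MetaCAPTCHA's.

```python
import hashlib
import itertools

def difficulty_bits(spam_score: float) -> int:
    """Map a spam score in [0, 1] to a puzzle difficulty (hypothetical policy)."""
    return 8 + int(spam_score * 16)           # 8..24 leading zero bits

def solve(challenge: bytes, bits: int) -> int:
    """Find a nonce whose SHA-256 hash has the required number of leading zero bits."""
    target = 1 << (256 - bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: bytes, bits: int, nonce: int) -> bool:
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

bits = difficulty_bits(spam_score=0.5)        # a mildly suspicious sender gets 16 bits
nonce = solve(b"session-42", bits)
assert verify(b"session-42", bits, nonce)     # verification is cheap, solving is not
```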
APA, Harvard, Vancouver, ISO, and other styles
36

Sudhindaran, Daniel Sushil. "Generating a Normalized Database Using Class Normalization." UNF Digital Commons, 2017. http://digitalcommons.unf.edu/etd/727.

Full text
Abstract:
Relational databases are the most popular databases used by enterprise applications to store persistent data to this day. They offer a great deal of flexibility and efficiency. A process called database normalization helps make sure that the database is free from redundancies and update anomalies. In a Database-First approach to software development, the database is designed first, and then an Object-Relational Mapping (ORM) tool is used to generate the programming classes (data layer) to interact with the database. Finally, the business logic code is written to interact with the data layer to persist the business data to the database. In modern application development, however, a Code-First approach has evolved, where the domain classes and the business logic that interacts with them are written first, and an Object-Relational Mapping (ORM) tool is then used to generate the database from the domain classes. In this approach, since database design is not a concern, software programmers may ignore the process of database normalization altogether. To help software programmers in this process, this thesis takes the theory behind the five database normal forms (1NF - 5NF) and proposes Five Class Normal Forms (1CNF - 5CNF) that software programmers may use to normalize their domain classes. This thesis demonstrates that when the Five Class Normal Forms are applied manually to a class by a programmer, the resulting database that is generated from the Code-First approach is also normalized according to the rules of relational theory.
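As a rough illustration of what normalizing domain classes looks like in a Code-First setting (a paraphrase of the general idea, not the thesis's formal 1CNF-5CNF definitions), a class that packs a repeating group into one attribute can be split so that each fact lives in one place, and the ORM-generated schema then comes out normalized:

```python
from dataclasses import dataclass
from typing import List

# Denormalized: a repeating group packed into one attribute; an ORM would map
# this to a table with a multi-valued column and the usual update anomalies.
@dataclass
class OrderDenormalized:
    order_id: int
    customer_name: str
    item_skus_csv: str          # "SKU1,SKU2,SKU3"

# Normalized domain classes: each entity gets its own class / table, and the
# repeating group becomes a one-to-many association.
@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class OrderItem:
    sku: str
    quantity: int

@dataclass
class Order:
    order_id: int
    customer_id: int            # reference instead of embedded customer data
    items: List[OrderItem]
```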
APA, Harvard, Vancouver, ISO, and other styles
37

Hu, Yan. "Cloud Computing for Interoperability in Home-Based Healthcare." Licentiate thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00605.

Full text
Abstract:
The care of chronic disease has become a main challenge for healthcare institutions around the world. As the incidence and prevalence of chronic diseases continue to increase, traditional hospital-based healthcare is less able to meet the needs of every patient. Treating chronic disease heavily depends on the patient’s daily behaviors, so patient-centered healthcare is encouraged. To improve patients’ quality of life, moving the base of healthcare from hospital to home is imperative. Home-based chronic disease care involves many different healthcare organizations and healthcare providers. Therefore, interoperability is a key approach to provide efficient and convenient home-based healthcare services. This thesis aims to reveal the interoperability issues in the current healthcare system and to point out an appropriate technical solution to overcome them. We start with collecting perspectives from both healthcare providers and healthcare recipients through interviews and online surveys to understand the situations and problems they face. In our research study, we mainly use two current techniques―peer-to-peer (P2P) networks and cloud computing―to design prototypes for sharing healthcare data, developing both a P2P-based solution and a cloud-based solution. Comparing these two techniques, we found the cloud-based solution covered most of the deficiencies in healthcare interoperability. Although there are different types of interoperability, such as pragmatic, semantic and syntactic, we explored alternative solutions specifically for syntactic interoperability. To identify the state of the art and pinpoint the challenges and possible future directions for applying a cloud-based solution, we reviewed the literature on cloud-based eHealth solutions. We suggest that a hybrid cloud model, which contains access controls and techniques for securing data, would be a feasible solution for developing a citizen-centered, home-based healthcare system. Patients’ healthcare records in hospitals and other healthcare centers could be kept in private clouds, while patients’ daily self-management data could be published in a trusted public cloud. Patients, as the owners of their health data, should then decide who can access their data and the conditions for sharing. Finally, we propose an online virtual community for home-based chronic disease healthcare―a cloud-based, home healthcare platform. The requirements of the platform were mainly determined from the responses to an online questionnaire delivered to a target group of people. This platform integrates healthcare providers and recipients within the same platform. Through this shared platform, interoperability among different healthcare providers, as well as with healthcare recipients’ self-management regimens, could be achieved.
APA, Harvard, Vancouver, ISO, and other styles
38

"Answer Set Programming and Other Computing Paradigms." Doctoral diss., 2013. http://hdl.handle.net/2286/R.I.17828.

Full text
Abstract:
Answer Set Programming (ASP) is one of the most prominent and successful knowledge representation paradigms. The success of ASP is due to its expressive non-monotonic modeling language and its efficient computational methods originating from building propositional satisfiability solvers. The wide adoption of ASP has motivated several extensions to its modeling language in order to enhance expressivity, such as incorporating aggregates and interfaces with ontologies. Also, in order to overcome the grounding bottleneck of computation in ASP, there is increasing interest in integrating ASP with other computing paradigms, such as Constraint Programming (CP) and Satisfiability Modulo Theories (SMT). Due to the non-monotonic nature of the ASP semantics, such enhancements turned out to be non-trivial and the existing extensions are not fully satisfactory. We observe that one main reason for the difficulties lies in the propositional semantics of ASP, which is limited in handling first-order constructs (such as aggregates and ontologies) and functions (such as constraint variables in CP and SMT) in natural ways. This dissertation presents a unifying view on these extensions by viewing them as instances of formulas with generalized quantifiers and intensional functions. We extend the first-order stable model semantics by Ferraris, Lee, and Lifschitz to allow generalized quantifiers, which cover aggregates, DL-atoms, constraints and SMT theory atoms as special cases. Using this unifying framework, we study and relate different extensions of ASP. We also present a tight integration of ASP with SMT, based on which we enhance action language C+ to handle reasoning about continuous changes. Our framework yields a systematic approach to studying and extending non-monotonic languages.
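For readers unfamiliar with the propositional stable model semantics that the dissertation generalizes, a brute-force checker makes the definition concrete: guess a set of atoms, build the Gelfond-Lifschitz reduct, and keep the guess if it equals the reduct's least model. This sketch covers only the standard propositional case, not the generalized quantifiers or intensional functions developed in the dissertation.

```python
from itertools import combinations

# A normal rule is a triple (head, positive_body, negative_body) over atom names.

def least_model(positive_rules):
    """Least model of a negation-free program via naive fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def reduct(program, candidate):
    """Gelfond-Lifschitz reduct: drop rules whose negative body intersects the
    candidate set, and strip the negative literals from the remaining rules."""
    return [(h, pos) for h, pos, neg in program if not (neg & candidate)]

def stable_models(program, atoms):
    atoms = sorted(atoms)
    for r in range(len(atoms) + 1):
        for subset in combinations(atoms, r):
            candidate = set(subset)
            if least_model(reduct(program, candidate)) == candidate:
                yield candidate

# p :- not q.    q :- not p.
program = [("p", set(), {"q"}), ("q", set(), {"p"})]
print(list(stable_models(program, {"p", "q"})))   # [{'p'}, {'q'}]
```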
Dissertation/Thesis
Ph.D. Computer Science 2013
APA, Harvard, Vancouver, ISO, and other styles
39

Ng-Kruelle, Seok Hian. "The price of convenience : implications of socially pervasive computing for personal privacy." 2006. http://arrow.unisa.edu.au:8081/1959.8/46369.

Full text
Abstract:
Literature has identified the need to study socially pervasive ICT in context in order to understand how user acceptability of innovation varies according to different inputs. This thesis contributes to the existing body of knowledge on innovation studies (Chapter 2) and proposes a methodology for a conceptual model, for representing dynamic contextual changes in longitudinal studies. The foundation for this methodology is the 'Price of Convenience' (PoC) Model (Chapter 4). As a theory development Thesis, it deals with two related studies of socially pervasive ICT implementation: (1) voluntary adoption of innovations and (2) acceptance of new socially pervasive and ubiquitous ICT innovations (Chapters 6 and 7).
APA, Harvard, Vancouver, ISO, and other styles
40

Beckett, Jason. "Forensic computing : a deterministic model for validation and verification through an ontological examination of forensic functions and processes." 2010. http://arrow.unisa.edu.au:8081/1959.8/93190.

Full text
Abstract:
This dissertation contextualises the forensic computing domain in terms of the validation of tools and processes. It explores the current state of forensic computing, comparing it to the traditional forensic sciences. The research then develops a classification system for the discipline's functions, establishing the extensible base on which a validation system is developed.
Thesis (PhD)--University of South Australia, 2010
APA, Harvard, Vancouver, ISO, and other styles
41

Sankaranarayanan, Suresh. "Studies in agent based IP traffic congestion management in diffserv networks." 2006. http://arrow.unisa.edu.au:8081/1959.8/46358.

Full text
Abstract:
The motivation for the research carried out was to develop a rule-based traffic management scheme for DiffServ networks with a view to introducing QoS (Quality of Service). This required the definition of rules for congestion management/control based on the type and nature of the IP traffic encountered, and then constructing and storing these rules to enable future access for application and enforcement. We first developed the required rule base and then developed software-based mobile agents, using Java Remote Method Invocation (RMI), for accessing these rules for application and enforcement. Consequently, these mobile agents act as smart traffic managers at nodes/routers in the computer-based communication network and manage congestion.
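A rule base of this kind boils down to a mapping from traffic class (DSCP marking) and observed queue state to the congestion action an agent applies at a router. The classes, thresholds and actions below are invented for illustration; the thesis defines its own rules and delivers them through Java RMI mobile agents.

```python
# Hypothetical rule base: (DSCP class, congestion level) -> action for the agent.
RULES = {
    ("EF",  "high"): "protect: never drop, cap queuing delay",
    ("AF1", "high"): "apply WRED with a higher drop probability",
    ("AF1", "low"):  "forward normally",
    ("BE",  "high"): "tail-drop / defer to best-effort queue",
    ("BE",  "low"):  "forward normally",
}

def congestion_level(queue_len: int, capacity: int, threshold: float = 0.8) -> str:
    """Classify the queue as congested once it exceeds a fill-level threshold."""
    return "high" if queue_len >= threshold * capacity else "low"

def decide(dscp_class: str, queue_len: int, capacity: int) -> str:
    level = congestion_level(queue_len, capacity)
    return RULES.get((dscp_class, level), "forward normally")

print(decide("AF1", queue_len=90, capacity=100))  # rule for congested AF1 traffic
print(decide("EF", queue_len=95, capacity=100))   # expedited forwarding stays protected
```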
APA, Harvard, Vancouver, ISO, and other styles
42

Hao, Yanan. "Efficient web services discovery and composition." Thesis, 2009. https://vuir.vu.edu.au/15232/.

Full text
Abstract:
As an emerging cross-disciplinary area of distributed computing, the Service-Oriented Computing (SOC) paradigm promises to bridge the gap between Business Services and IT Services, enable technology to help people perform business processes more efficiently and effectively, and enable businesses and organizations to collaborate globally through standard services. With the rapid development and popularity of the Internet and e-commerce, business organizations are investigating ways to expose their current software components as web services so as to make use of distributed service computing resources. They are also investigating how to incorporate services running on different platforms, hosted by service providers outside their boundaries, into more complex, orchestrated services. In this thesis, we investigate the problem of efficient web services discovery and composition in service-oriented environments. Firstly, we present an efficient IR-style mechanism for discovering and ranking web services automatically, given a textual description of desired services. We introduce the notion of preference degree for a web service, and suggest relevance and importance as two desired properties for measuring its preference degree. Various algorithms are given to obtain service relevance and importance. The key part of computing service importance is a new schema tree matching algorithm, which captures not only the structure but also the semantic information of the schemas defined in web services. Moreover, we develop an approach to identify associations between web-service operations based on service operation matching. This approach uses the concept of attribute closure to obtain sets of operations, each composed of associated web-service operations. Experimental results show the proposed IR-style search strategy is efficient and practical.
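The attribute-closure step mentioned above can be illustrated with the textbook closure algorithm: starting from a seed set of available attributes, repeatedly add everything derivable through known input-to-output associations, so that operations whose inputs fall inside the same closure are grouped as composable. The example operations and attributes are invented; the thesis derives such associations from its schema matching over service operations.

```python
def closure(seed, associations):
    """associations: list of (inputs, outputs) pairs over attribute names.
    Returns every attribute reachable from `seed` (classic attribute closure)."""
    result = set(seed)
    changed = True
    while changed:
        changed = False
        for inputs, outputs in associations:
            if inputs <= result and not outputs <= result:
                result |= outputs
                changed = True
    return result

# Hypothetical web-service operations expressed as input -> output attributes.
associations = [
    ({"city"}, {"hotel_id"}),             # findHotels
    ({"hotel_id", "date"}, {"room_id"}),  # checkAvailability
    ({"room_id"}, {"price"}),             # getPrice
]
print(closure({"city", "date"}, associations))
# {'city', 'date', 'hotel_id', 'room_id', 'price'} -> the three operations compose
```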
APA, Harvard, Vancouver, ISO, and other styles
43

Barretto, Sistine. "Designing guideline-based workflow-integrated electronic health records." 2005. http://arrow.unisa.edu.au/vital/access/manager/Repository/unisa:28366.

Full text
Abstract:
The recent trend in health care has been the development and implementation of clinical guidelines to support and comply with evidence-based care. Evidence-based care is established with a view to improving the overall quality of care for patients, reducing costs, and addressing medico-legal issues. One of the main questions addressed by this thesis is how to support guideline-based care. It is recognised that this is better achieved by taking the provider workflow into consideration. However, workflow support remains a challenging (and hence rarely seen) accomplishment in practice, particularly in the context of chronic disease management (CDM). Our view is that guidelines can be knowledge-engineered into four main artefacts: electronic health record (EHR) content, a computer-interpretable guideline (CiG), workflow and hypermedia. The next question is then how to coordinate and make use of these artefacts in a health information system (HIS). We leverage the EHR, since we view it as the core component of any HIS.
PhD Doctorate
APA, Harvard, Vancouver, ISO, and other styles