To view other types of publications on this topic, follow this link: Web measurement.

Dissertations on the topic "Web measurement"

Consult the top 50 dissertations for research on the topic "Web measurement".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and a bibliographic reference for the selected work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online abstract, where the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Kaplan, Murad. „Predicting Performance for Reading News Online from within a Web Browser Sandbox“. Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-theses/17.

Abstract:
Measuring Internet performance for home users can provide useful information for improving network performance. Such measurements typically require users to install special software on their machines, a major impediment to use. To overcome this impediment, we designed and implemented several scripting techniques to predict Internet performance within the tightly constrained sandbox environment of a Web browser. Our techniques are integrated into a Web site project called "How's My Network" that provides performance predictions for common Internet activities, with this thesis concentrating on the performance of online news, social networks, and online shopping. We started our approach by characterizing news sites to understand their structures. After that, we designed models to predict the user's performance for reading news online. We then implemented these models using JavaScript and evaluated their results. We found that news sites share common structural characteristics, with some outliers. Predicting the page load time from the number of objects coming from the dominant domain, the one providing the largest number of objects, gives more accurate predictions than using the total number of objects across all domains. The contributions of this work include the design of new approaches for predicting Web browser performance, and the implementation and evaluation of the effectiveness of our approach to predict Web browser performance.
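A minimal sketch of the dominant-domain heuristic described above, assuming a browser context with the Resource Timing API; the linear-model coefficients are illustrative placeholders, not the thesis's fitted values:

```typescript
// Count page objects per domain via the Resource Timing API, pick the
// domain serving the most objects, and feed that count into a simple
// linear predictor for page load time.

function countObjectsPerDomain(): Map<string, number> {
  const counts = new Map<string, number>();
  for (const entry of performance.getEntriesByType("resource")) {
    const host = new URL(entry.name).hostname; // entry.name is the object URL
    counts.set(host, (counts.get(host) ?? 0) + 1);
  }
  return counts;
}

function predictLoadTimeMs(): number {
  // Dominant domain = the one providing the largest number of objects.
  const dominantCount = Math.max(0, ...countObjectsPerDomain().values());
  const interceptMs = 500; // hypothetical baseline cost
  const perObjectMs = 35;  // hypothetical per-object cost
  return interceptMs + perObjectMs * dominantCount;
}
```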
2

Maharshi, Shivam. „Performance Measurement and Analysis of Transactional Web Archiving“. Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78371.

Abstract:
Web archiving is necessary to retain the history of the World Wide Web and to study its evolution. It is important for the cultural heritage community. Some organizations are legally obligated to capture and archive Web content. The advent of transactional Web archiving makes the archiving process more efficient, thereby aiding organizations to archive their Web content. This study measures and analyzes the performance of transactional Web archiving systems. To conduct a detailed analysis, we construct a meaningful design space defined by the system specifications that determine the performance of these systems. SiteStory, a state-of-the-art transactional Web archiving system, and local archiving, an alternative archiving technique, are used in this research. We experimentally evaluate the performance of these systems using the Greek version of Wikipedia deployed on dedicated hardware on a private network. Our benchmarking results show that the local archiving technique uses a Web server’s resources more efficiently than SiteStory for one data point in our design space. Better performance than SiteStory in such scenarios makes our archiving solution favorable to use for transactional archiving. We also show that SiteStory does not impose any significant performance overhead on the Web server for the rest of the data points in our design space.
Master of Science
3

Choi, Hyoung-Kee. „Measurement, characterization, and modeling of world wide web traffic“. Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/14917.

4

Ma, Jie. „Measurement and performance analysis of World Wide Web applications“. Carleton University, Ottawa, 1996.

5

Barford, Paul R. „Modeling, measurement and performance of World Wide Web transactions“. Thesis, Boston University, 2001. https://hdl.handle.net/2144/36753.

Abstract:
Thesis (Ph.D.)--Boston University
The size, diversity and continued growth of the World Wide Web combine to make its understanding difficult even at the most basic levels. The focus of our work is in developing novel methods for measuring and analyzing the Web which lead to a deeper understanding of its performance. We describe a methodology and a distributed infrastructure for taking measurements in both the network and end-hosts. The first unique characteristic of the infrastructure is our ability to generate requests at our Web server which closely imitate actual users. This ability is based on detailed analysis of Web client behavior and the creation of the Scalable URL Request Generator (SURGE) tool. SURGE provides us with the flexibility to test different aspects of Web performance. We demonstrate this flexibility in an evaluation of the 1.0 and 1.1 versions of the Hyper Text Transfer Protocol. The second unique aspect of our approach is that we analyze the details of Web transactions by applying critical path analysis (CPA). CPA enables us to precisely decompose latency in Web transactions into propagation delay, network variation, server delay, client delay and packet loss delays. We present analysis of performance data collected in our infrastructure. Our results show that our methods can expose surprising behavior in Web servers, and can yield considerable insight into the causes of delay variability in Web transactions.
6

Lee, Hsin-Tsang. „IRLbot: design and performance analysis of a large-scale web crawler“. Texas A&M University, 2008. http://hdl.handle.net/1969.1/85914.

Abstract:
This thesis shares our experience in designing web crawlers that scale to billions of pages and models their performance. We show that with the quadratically increasing complexity of verifying URL uniqueness, breadth-first search (BFS) crawl order, and fixed per-host rate-limiting, current crawling algorithms cannot effectively cope with the sheer volume of URLs generated in large crawls, highly-branching spam, legitimate multi-million-page blog sites, and infinite loops created by server-side scripts. We offer a set of techniques for dealing with these issues and test their performance in an implementation we call IRLbot. In our recent experiment that lasted 41 days, IRLbot running on a single server successfully crawled 6.3 billion valid HTML pages (7.6 billion connection requests) and sustained an average download rate of 319 Mb/s (1,789 pages/s). Unlike our prior experiments with algorithms proposed in related work, this version of IRLbot did not experience any bottlenecks and successfully handled content from over 117 million hosts, parsed out 394 billion links, and discovered a subset of the web graph with 41 billion unique nodes.
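The two bottlenecks named above, URL uniqueness checking and per-host rate limiting, can be sketched as follows. This in-memory version shows the baseline logic whose scaling limits the thesis analyzes; IRLbot itself replaces such structures with disk-based deduplication (DRUM) and domain budgeting:

```typescript
// Baseline crawl frontier: a seen-set for URL uniqueness and a per-host
// "earliest next fetch" map for politeness. Both grow without bound here,
// which is exactly why billion-page crawls need smarter structures.

class Frontier {
  private seen = new Set<string>();
  private nextAllowed = new Map<string, number>();
  private queue: string[] = [];

  constructor(private perHostDelayMs = 1000) {}

  enqueue(url: string): void {
    if (this.seen.has(url)) return; // duplicate URL, skip
    this.seen.add(url);
    this.queue.push(url);
  }

  // Returns the next crawlable URL, or null if every queued host is throttled.
  dequeue(now: number = Date.now()): string | null {
    for (let i = 0; i < this.queue.length; i++) {
      const host = new URL(this.queue[i]).hostname;
      if ((this.nextAllowed.get(host) ?? 0) <= now) {
        this.nextAllowed.set(host, now + this.perHostDelayMs);
        return this.queue.splice(i, 1)[0];
      }
    }
    return null;
  }
}
```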
7

Mostafavi, Seyed Hooman. „A Longitudinal Assessment of Website Complexity“. Thesis, University of Oregon, 2018. http://hdl.handle.net/1794/23831.

Abstract:
Nowadays, most people use several websites on a daily basis for various purposes such as social networking, shopping, and reading news, which shows the significance of these websites in our lives. Because of this, businesses can profit considerably by designing high-quality websites that attract more visitors. An important aspect of a good website is its page load time. Many studies have analyzed this aspect of websites from different perspectives. In this thesis, we characterize and examine the complexity of a wide range of popular websites in order to discover the trends in their complexity metrics, such as the number, size and type of objects and the number and type of servers contacted to deliver those objects, over the past six years. Moreover, we analyze the correlation between these metrics and the page load times.
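The correlation step at the end of the abstract amounts to computing, for example, Pearson's r between one complexity metric and the measured load times; a small sketch with hypothetical data:

```typescript
// Pearson correlation between one complexity metric (objects per page)
// and page load times. The arrays are hypothetical sample data.

function pearson(x: number[], y: number[]): number {
  const mean = (a: number[]) => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(x), my = mean(y);
  let num = 0, dx2 = 0, dy2 = 0;
  for (let i = 0; i < x.length; i++) {
    const dx = x[i] - mx, dy = y[i] - my;
    num += dx * dy;
    dx2 += dx * dx;
    dy2 += dy * dy;
  }
  return num / Math.sqrt(dx2 * dy2);
}

const objectCounts = [42, 97, 15, 60];       // per-site object counts
const loadTimesMs = [1800, 4100, 900, 2600]; // matching page load times
console.log(pearson(objectCounts, loadTimesMs).toFixed(3));
```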
8

Shasha, Ziphozakhe Theophilus. „Measurement of the usability of web-based hotel reservation systems“. Thesis, Cape Peninsula University of Technology, 2016. http://hdl.handle.net/20.500.11838/2353.

Abstract:
Thesis (MTech (Business Information Systems))--Cape Peninsula University of Technology, 2016.
The aim of this research project was to determine what the degree of usability is of a sample of online reservation systems of Cape Town hotels. The literature has indicated that the main aim of website usability is to make the engagement process with a website a more efficient and enjoyable experience. Researchers noted that well designed, high-quality websites, with grammatically accurate content, create a trustworthy online presence. User-friendly sites also attract far more traffic. Previous research has also shown that a loss of potential sales is possible due to users being unable to find what they want, if poor website design has been implemented. Loss of potential income through repeat visits is also a possibility, due to a negative user experience. The research instrument that was employed in this research is usability testing. It is a technique used to evaluate product development that incorporates user feedback in an attempt to create instruments and products that meet user needs, and to decrease costs. The research focused on Internet-based hotel reservation systems. Only the usability was measured. Both standard approaches were used in this research project, in a combined quantitative and qualitative research design. In conclusion, the purpose of this research was to determine the degree of usability of specified Cape Town hotel online reservation systems. The outcomes of this study indicated interesting patterns in that reservation systems met user requirements more often than expected. However, the figures of acceptability obtained were still below the generally accepted norms for usability. The amount of time spent to complete a booking also decreased, as users worked on more than one reservation system.
9

Freire, André Pimenta. „Disabled people and the Web : user-based measurement of accessibility“. Thesis, University of York, 2012. http://etheses.whiterose.ac.uk/3873/.

Abstract:
Being able to use websites is an important aspect of every-day life to most people, including disabled people. However, despite the existence of technical guidelines for accessibility for more than a decade, disabled users still find problems using websites, and our knowledge of what problems people with disabilities are encountering is quite low. The aim of the work presented in this thesis was to conduct a study that characterises the problems that print-disabled users (blind, partially sighted and dyslexic users) encounter on the web. This characterisation includes the categorisation of user problems based on how they impact the user. Further, the frequency and severity of the main types of problems were analysed to determine the most critical problems affecting users with print disabilities. A secondary goal was to investigate the relationship between user-based measures of accessibility and measures related to technical guidelines, especially the Web Content Accessibility Guidelines (WCAG) 1.0 and 2.0 from the World Wide Web Consortium (W3C). This was done both to identify gaps in the current guidelines and to understand where technical guidelines are currently not sufficient for addressing user problems. The study involved task-based user evaluations of 16 websites by a panel of 64 users, of whom 32 were blind, 19 partially sighted and 13 dyslexic, and manual audits of the conformance of the websites to WCAG 1.0 and 2.0. The evaluations with print-disabled users yielded 3,012 instances of user problems. The analysis of these problems yielded the following key results. Navigation problems caused by poor information architecture were critical to all user groups. All print-disabled users struggled with the navigation bars and overall site structure. Blind users often mentioned problems with keyboard accessibility, lack of audio description of videos and problems with form labelling. However, beyond these seemingly low-level perception and execution problems, there were more complex interaction problems, such as users not being informed when error feedback was added dynamically to a page in a location distant from the screen reader. For partially sighted users, problems with the presentation of text, images and controls were very critical, especially those related to colour contrast and size. For dyslexic users, problems with language and the lack of search features and spelling aids were among the most critical problems. Comparisons between user problems and WCAG 1.0 and WCAG 2.0 did not show any significant relationship between user-based measures of accessibility and most measures based on technical guidelines. The comparisons of user problems to technical guidelines showed that many user problems were not covered by the guidelines, and that some guidelines were not effective in avoiding user problems. The conclusions reinforced the importance of involving disabled users in the design and evaluation of websites as a key activity to improve web accessibility, and of moving away from the technical conformance approach to web accessibility. Many of the problems are too complex to address from the point of view of a simple checklist. Moreover, when proposals are made for new techniques to address known user problems on websites, they must be tested in advance with a set of users to ensure that the problem is actually being addressed. The current status quo of proposing implementations based on expert opinion, or limited user studies, has not yielded solutions to many of the current problems print-disabled users encounter on the web.
10

Miller, Matthew J., and Lawrence C. Freudinger. „A WEB COMPATIBLE FILE SERVER FOR MEASUREMENT AND TELEMETRY NETWORKS“. International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/605589.

Abstract:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
There is a gulf that separates measurement and telemetry applications from the full benefits of Internet style communication. Whereas the Web provides ubiquitous infrastructure for the distribution of file-based “static” data, there is no general Web solution for real-time streaming data. At best, there are proprietary products that target consumer multimedia and resort to custom point-to-point data connections. This paper considers an extension of the static file paradigm to a dynamic file and introduces a streaming data solution integrated with the existing file-based infrastructure of the Web. The solution approach appears to maximize platform and application independence leading to improved application interoperability potential for large or complex measurement and telemetry networks.
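The paper predates today's streaming APIs, but its dynamic-file idea maps naturally onto a modern browser-side sketch: the client simply keeps reading an HTTP response whose body never ends, so streaming data rides on ordinary file-oriented Web infrastructure (the URL below is hypothetical):

```typescript
// Read an ever-growing "dynamic file" over plain HTTP, delivering each
// appended chunk to a callback as it arrives.

async function readDynamicFile(
  url: string,
  onChunk: (bytes: Uint8Array) => void
): Promise<void> {
  const response = await fetch(url);
  if (!response.body) throw new Error("streaming not supported");
  const reader = response.body.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;           // server closed the "file"
    if (value) onChunk(value); // new data appended to the dynamic file
  }
}

readDynamicFile("https://example.org/telemetry/stream.bin", (bytes) =>
  console.log(`received ${bytes.length} bytes`)
);
```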
11

Genc, Ahmet Sakir. „Web Site Evaluation“. Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607782/index.pdf.

Abstract:
This thesis focuses on web site evaluation using structural evaluation and scope-of-business-based content comparison. Firstly, web site measurement techniques and evaluation methods are reviewed. Then a structural evaluation and content comparison method is introduced. The thesis also includes a web-based implementation of these methods for evaluating web sites, which is partially automated for the structural evaluation method.
12

Janc, Artur Adam. „Network Performance Evaluation within the Web Browser Sandbox“. Digital WPI, 2009. https://digitalcommons.wpi.edu/etd-theses/112.

Abstract:
With the rising popularity of Web-based applications, the Web browser platform is becoming the dominant environment in which users interact with Internet content. We investigate methods of discovering information about network performance characteristics through the use of the Web browser, requiring only minimal user participation (navigating to a Web page). We focus on the analysis of explicit and implicit network operations performed by the browser (JavaScript XMLHttpRequest and HTML DOM object loading) as well as by the Flash plug-in to evaluate network performance characteristics of a connecting client. We analyze the results of a performance study, focusing on the relative differences and similarities between download, upload and round-trip time results obtained in different browsers. We evaluate the accuracy of browser events indicating incoming data, comparing their timing to information obtained from the network layer. We also discuss alternative applications of the developed techniques, including measuring packet reception variability in a simulated streaming protocol. Our results confirm that browser-based measurements closely correspond to those obtained using standard tools in most scenarios. Our analysis of implicit communication mechanisms suggests that it is possible to make enhancements to existing “speedtest” services by allowing them to reliably determine download throughput and round-trip time to arbitrary Internet hosts. We conclude that browser-based measurement using techniques developed in this work can be an important component of network performance studies.
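A browser-side throughput probe of the kind evaluated here can be sketched in a few lines; the test URL and object size are assumptions:

```typescript
// Time an XMLHttpRequest for a fixed-size object and derive download
// throughput in megabits per second.

function measureDownloadMbps(url: string, sizeBytes: number): Promise<number> {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    const start = performance.now();
    xhr.onload = () => {
      const seconds = (performance.now() - start) / 1000;
      resolve((sizeBytes * 8) / 1e6 / seconds);
    };
    xhr.onerror = () => reject(new Error("request failed"));
    // Cache-busting parameter so the network, not the browser cache, is measured.
    xhr.open("GET", `${url}?nocache=${Date.now()}`);
    xhr.send();
  });
}

measureDownloadMbps("https://example.org/1MB.bin", 1_000_000).then((mbps) =>
  console.log(`~${mbps.toFixed(1)} Mb/s`)
);
```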
13

Amherd, Damian. „Performance Measurement in Blogs: Entwicklung eines Blog Analytics-Systems“. St. Gallen, 2008. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/05607619001/$FILE/05607619001.pdf.

14

Wei, Peng. „Web and knowledge-based decision support system for measurement uncertainty evaluation“. Thesis, Brunel University, 2009. http://bura.brunel.ac.uk/handle/2438/10114.

Abstract:
In metrology, measurement uncertainty is understood as a range in which the true value of the measurement is likely to fall. Recent years have seen a rapid development in the evaluation of measurement uncertainty. The ISO Guide to the Expression of Uncertainty in Measurement (GUM 1995) is the primary guiding document for measurement uncertainty. More recently, Supplement 1 to the "Guide to the expression of uncertainty in measurement" – Propagation of distributions using a Monte Carlo method (GUM SP1) was published in November 2008. A number of software tools for measurement uncertainty have been developed and made available based on these two documents. The current software tools are mainly desktop applications utilising numeric computation with limited mathematical model handling capacity. A novel and generic web-based application, a web-based Knowledge-Based Decision Support System (KB-DSS), has been proposed and developed in this research for measurement uncertainty evaluation. A Model-View-Controller architecture pattern is used for the proposed system. Under this general architecture, a web-based KB-DSS is developed based on an integration of the Expert System and Decision Support System approaches. In the proposed uncertainty evaluation system, three knowledge bases as sub-systems are developed to implement the evaluation of measurement uncertainty. The first sub-system, the Measurement Modelling Knowledge Base (MMKB), assists the user in establishing the appropriate mathematical model for the measurand, a critical process for uncertainty evaluation. The second sub-system, the GUM Framework Knowledge Base, carries out the uncertainty evaluation process based on the GUM Uncertainty Framework using symbolic computation, whilst the third sub-system, the GUM SP1 MCM Framework Knowledge Base, conducts the uncertainty calculation according to the GUM SP1 Framework numerically, based on the Monte Carlo Method. The design and implementation of the proposed system and sub-systems are discussed in the thesis, supported by elaboration of the implementation steps and examples. Discussions and justifications of the technologies and approaches used for the sub-systems and their components are also presented. These include Drools, Oracle database, Java, JSP, Java Transfer Object, AJAX and Matlab. The proposed web-based KB-DSS has been evaluated through case studies and the performance of the system has been validated by the example results. As an established methodology and practical tool, the research will make valuable contributions to the field of measurement uncertainty evaluation.
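The GUM SP1 approach the third sub-system implements, propagation of distributions by Monte Carlo, reduces to a short loop; the measurement model Y = X1 · X2 and the input distributions below are illustrative, not from the thesis:

```typescript
// Draw inputs from their assumed distributions, push each draw through the
// measurement model, and summarize the resulting output distribution.

function gaussian(mean: number, sd: number): number {
  // Box-Muller transform; 1 - random() avoids log(0)
  const u = 1 - Math.random(), v = Math.random();
  return mean + sd * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

function monteCarloUncertainty(trials: number) {
  const samples: number[] = [];
  for (let i = 0; i < trials; i++) {
    const x1 = gaussian(10.0, 0.2); // input quantity X1 ~ N(10.0, 0.2^2)
    const x2 = gaussian(2.0, 0.05); // input quantity X2 ~ N(2.0, 0.05^2)
    samples.push(x1 * x2);          // measurement model Y = X1 * X2
  }
  const mean = samples.reduce((s, v) => s + v, 0) / trials;
  const sd = Math.sqrt(
    samples.reduce((s, v) => s + (v - mean) ** 2, 0) / (trials - 1)
  );
  return { estimate: mean, standardUncertainty: sd };
}

console.log(monteCarloUncertainty(100_000));
```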
15

Schauer, Marek. „Oblíbenost JavaScriptových API internetového prohlížeče“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445496.

Abstract:
In this work we present the design and implementation of a platform for automated measurement of the use of JavaScript APIs in a web browser. The platform is based on OpenWPM, which is used to instrument the web browser. In our architecture, the browser is extended with a modified Web API Manager extension, which captures calls to JavaScript methods and logs information about these calls. The platform was used to perform measurements on 10,000 websites. From the analysis of the data obtained by the measurement, we found that the most used APIs across the measured websites are the APIs specified in the HTML and DOM standards, the High Resolution Time API and the Web Cryptography API. Among the APIs that were implemented in Mozilla Firefox after 2016, we identified the Intersection Observer API, Background Tasks API and Resize Observer API as the most frequently used.
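The instrumentation idea, wrapping a JavaScript API entry point so every call is recorded before the original implementation runs, can be sketched like this (console output stands in for the extension's real logging channel):

```typescript
// Replace a method with a logging wrapper that forwards to the original.

function instrument<T extends object>(obj: T, method: keyof T & string): void {
  const original = (obj as any)[method] as (...args: unknown[]) => unknown;
  (obj as any)[method] = function (...args: unknown[]) {
    console.log(`call: ${method}`, { argCount: args.length, time: Date.now() });
    return original.apply(this, args); // behave exactly like the original
  };
}

// Example: record every high-resolution timestamp request on the page.
instrument(performance, "now");
performance.now(); // logged, then answered as usual
```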
16

Zaveri, Amrapali. „Linked Data Quality Assessment and its Application to Societal Progress Measurement“. Doctoral thesis, Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-167021.

Abstract:
In recent years, the Linked Data (LD) paradigm has emerged as a simple mechanism for employing the Web as a medium for data and knowledge integration where both documents and data are linked. Moreover, the semantics and structure of the underlying data are kept intact, making this the Semantic Web. LD essentially entails a set of best practices for publishing and connecting structured data on the Web, which allows publishing and exchanging information in an interoperable and reusable fashion. Many different communities on the Internet such as geographic, media, life sciences and government have already adopted these LD principles. This is confirmed by the dramatically growing Linked Data Web, where currently more than 50 billion facts are represented. With the emergence of the Web of Linked Data, there are several use cases, which are possible due to the rich and disparate data integrated into one global information space. Linked Data, in these cases, not only assists in building mashups by interlinking heterogeneous and dispersed data from multiple sources but also empowers the uncovering of meaningful and impactful relationships. These discoveries have paved the way for scientists to explore the existing data and uncover meaningful outcomes that they might not have been aware of previously. In all these use cases utilizing LD, one crippling problem is the underlying data quality. Incomplete, inconsistent or inaccurate data affects the end results gravely, thus making them unreliable. Data quality is commonly conceived as fitness for use, be it for a certain application or use case. There are cases when datasets that contain quality problems are useful for certain applications, thus depending on the use case at hand. Thus, LD consumption has to deal with the problem of getting the data into a state in which it can be exploited for real use cases. The insufficient data quality can be caused either by the LD publication process or is intrinsic to the data source itself. A key challenge is to assess the quality of datasets published on the Web and make this quality information explicit. Assessing data quality is particularly a challenge in LD as the underlying data stems from a set of multiple, autonomous and evolving data sources. Moreover, the dynamic nature of LD makes assessing the quality crucial to measure the accuracy of representing the real-world data. On the document Web, data quality can only be indirectly or vaguely defined, but there is a requirement for more concrete and measurable data quality metrics for LD. Such data quality metrics include correctness of facts with respect to the real world, adequacy of semantic representation, quality of interlinks, interoperability, timeliness or consistency with regard to implicit information. Even though data quality is an important concept in LD, there are few methodologies proposed to assess the quality of these datasets. Thus, in this thesis, we first unify 18 data quality dimensions and provide a total of 69 metrics for the assessment of LD. The first methodology includes the employment of LD experts for the assessment. This assessment is performed with the help of the TripleCheckMate tool, which was developed specifically to assist LD experts in assessing the quality of a dataset, in this case DBpedia. The second methodology is a semi-automatic process, in which the first phase involves the detection of common quality problems by the automatic creation of an extended schema for DBpedia. The second phase involves the manual verification of the generated schema axioms. Thereafter, we employ the wisdom of the crowds, i.e. workers on online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), to assess the quality of DBpedia. We then compare the two approaches (the previous assessment by LD experts and the assessment by MTurk workers in this study) in order to measure the feasibility of each type of user-driven data quality assessment methodology. Additionally, we evaluate another semi-automated methodology for LD quality assessment, which also involves human judgement. In this semi-automated methodology, selected metrics are formally defined and implemented as part of a tool, namely R2RLint. The user is not only provided with the results of the assessment but also with the specific entities that cause the errors, which helps users understand the quality issues and thus fix them. Finally, we take into account a domain-specific use case that consumes LD and leverages data quality. In particular, we identify four LD sources, assess their quality using the R2RLint tool and then utilize them in building the Health Economic Research (HER) Observatory. We show the advantages of this semi-automated assessment over the other types of quality assessment methodologies discussed earlier. The Observatory aims at evaluating the impact of research development on the economic and healthcare performance of each country per year. We illustrate the usefulness of LD in this use case and the importance of quality assessment for any data analysis.
17

Zhang, Aixiu (Monica). „Transactional Distance in Web-based College Learning Environments: Toward Measurement and Theory Construction“. VCU Scholars Compass, 2003. http://scholarscompass.vcu.edu/etd_retro/94.

Abstract:
Michael Moore's theory of transactional distance, developed in the age of correspondence schools, contributed greatly to theory building in distance education. The theory needs revision, however, when applied to web-based learning environments, specifically by defining transactional distance to include students' relationships with other elements in the learning environment that prohibit their active engagement with learning. The new theoretical model of transactional distance has four dimensions: transactional distance between student and student (TDSS), transactional distance between student and teacher (TDST), transactional distance between student and content (TDSC), and transactional distance between student and interface: online course management system (TDSI). A preliminary item pool of more than 200 items to measure the constructs of TD, TDST, TDSS, TDSC, and TDSI was generated and sent to a panel of experts for review. Items that the reviewers considered weak or very weak in terms of relevance to the constructs and/or clarity and conciseness were eliminated. After a pilot test and further revisions, the proposed scale of transactional distance was administered to a sample of 100 college students. Confirmatory factor analyses and exploratory analyses indicated that the measurement models, especially after modifications, possessed good fit for the data, and the modified scales possessed factorial validity. Reliability analyses indicated that the scales possessed strong internal consistency, with Cronbach alpha coefficients ranging from 0.8169 to 0.9530. Structural equation modeling procedures tested for the causal relationship between the four dimensions and students' general sense of transactional distance in web-based courses. Results indicate that the proposed model of transactional distance is acceptable. The strongest factor that affected students' sense of transactional distance and engagement with learning was found to be transactional distance between student and students (TDSS), followed by transactional distance between student and teacher (TDST), and then by transactional distance between student and content (TDSC). The findings have implications for the development of a revised theory of transactional distance in online education, and provide strong support for constructivist learning theories and social learning theories, reinforcing the importance of establishing learning communities in online learning environments.
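The internal-consistency figures quoted above come from Cronbach's alpha, which is straightforward to compute from a respondents-by-items matrix; the 5-point responses below are hypothetical:

```typescript
// Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

function cronbachAlpha(rows: number[][]): number {
  const k = rows[0].length; // number of scale items
  const variance = (xs: number[]) => {
    const m = xs.reduce((s, v) => s + v, 0) / xs.length;
    return xs.reduce((s, v) => s + (v - m) ** 2, 0) / (xs.length - 1);
  };
  let itemVarSum = 0;
  for (let j = 0; j < k; j++) {
    itemVarSum += variance(rows.map((r) => r[j])); // variance of item j
  }
  const totalVar = variance(rows.map((r) => r.reduce((s, v) => s + v, 0)));
  return (k / (k - 1)) * (1 - itemVarSum / totalVar);
}

const responses = [
  [4, 5, 4, 4], // one respondent's answers to a 4-item scale
  [2, 2, 3, 2],
  [5, 4, 5, 5],
  [3, 3, 2, 3],
];
console.log(cronbachAlpha(responses).toFixed(3));
```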
18

Saroiu, Stefan. „Measurement and analysis of internet content delivery systems“. Thesis, University of Washington, 2004. http://hdl.handle.net/1773/6962.

19

Fan, Haiyan. „Web personalization - a typology, instrument, and a test of a predictive model“. Thesis, College Station, Tex.: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1581.

20

Chen, Xiaowei. „Measurement, analysis and improvement of BitTorrent Darknets“. HKBU Institutional Repository, 2013. http://repository.hkbu.edu.hk/etd_ra/1545.

21

Arslan, Muhammad, and Muhammad Assad Riaz. „A Roadmap for Usability and User Experience Measurement during early phases of Web Applications Development“. Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3350.

Abstract:
Web usability and user experience (UX) play a vital role in the success or failure of web applications. However, usability and UX measurement during the software development life cycle present many challenges. Based on a systematic literature review, this thesis discusses the current usability and user experience evaluation and measurement methods and the defined measures, as well as their applicability during the software development life cycle. The challenges of using those methods were also identified. In order to elaborate further on the challenges, we conducted informal interviews within a software company. Based on the findings, we defined a usability and user experience measurement and evaluation roadmap for web application development companies. The roadmap contains a set of usability evaluation and measurement methods, as well as measures that we found suitable for use during the early stages (requirements, design, and development) of the web application development lifecycle. To validate the applicability of the defined roadmap, a case study was performed on a real-time, market-oriented real estate web application. The results, a discussion of the findings, and future research directions are presented.
22

Erdal, Feride. „Web Market Analysis: Static, Dynamic And Content Evaluation“. Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614694/index.pdf.

Abstract:
The importance of web services increases as technology improves and the need for challenging e-commerce strategies grows. This thesis focuses on web market analysis of web sites, evaluating them from static, dynamic and content perspectives. Firstly, web site evaluation methods and web analytics tools are introduced. Then the evaluation methodology is described from the three perspectives. Finally, results obtained from the evaluation of 113 web sites are presented, as well as their correlations.
23

Cassidy, T. „The measurement of woollen card web weight per unit area variation by the obscuration of light“. Thesis, Open University, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.234906.

24

Tian, Ran. „Examining the Complexity of Popular Websites“. Thesis, University of Oregon, 2015. http://hdl.handle.net/1794/19347.

Abstract:
A significant fraction of today's Internet traffic is associated with popular web sites such as YouTube, Netflix or Facebook. In recent years, major Internet websites have become more complex as they incorporate a larger number and more diverse types of objects (e.g. video, audio, code) delivered in more elaborate ways from multiple servers. These factors not only affect the loading time of pages but also determine the pattern of the resulting traffic on the Internet. In this thesis, we characterize the complexity of major Internet websites through large-scale measurement and analysis. We identify thousands of the most popular Internet websites from multiple locations and characterize their complexities. We examine the effect of relative popularity ranking and business type on the complexity of websites. Finally, we compare and contrast our results with a similar study conducted four years earlier and report on the observed changes in different aspects.
25

Bhuiya, Md Omar F. „DESIGN AND OPTIMIZATION OF A STRIPLINE RESONATOR SENSOR FOR MEASUREMENT OF RUBBER THICKNESS IN A MOVING WEB“. University of Akron / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=akron1164650416.

26

Toledo, Sonia M. „A measurement of self-esteem and social comparison among Facebook users“. Scholarly Commons, 2015. https://scholarlycommons.pacific.edu/uop_etds/203.

Abstract:
This study examines the social networking website Facebook and uses an experimental design to determine the relationship between two variables: self-esteem and social comparison. The study also examines the relationship between the variables of identification and similarity as a process of social comparison. Sixty-five students from a small, private university located in the northwestern United States participated in a classical pretest-posttest experiment consisting of two groups. The treatment in this study was a Facebook account that was designed to induce feelings of upward social comparison among the participants through the use of status updates, photos and comments from a female college student. The self-esteem and social comparison levels of the participants were measured after viewing the Facebook treatment to determine whether or not the participants experienced a change in their self-esteem as a result of engaging in upward social comparison with the Facebook treatment. The degree to which the participants identified with the Facebook treatment, and the degree to which they viewed the Facebook treatment as someone similar to themselves, was also measured to determine whether identification and similarity play a role in social comparison. The results revealed that the participants did not experience a significant decrease in self-esteem after viewing the Facebook treatment. Furthermore, a correlation analysis indicated no significant relationship between the identification and similarity variables and the self-esteem and social comparison variables. However, additional findings revealed a significant correlation between high self-esteem and downward social comparison. Implications and suggestions for future research are also discussed regarding the relationship between self-esteem and social comparison on Facebook.
27

Callahan, Thomas Richard. „A Longitudinal Evaluation of HTTP Traffic“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1333637889.

28

Beyan, Oya Deniz. „A New Ontology And Knowledge Base System For Performance Measurement In Health Care“. PhD thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612664/index.pdf.

Abstract:
Performance measurement makes up the core of all health care systems in transition. Many countries and institutions monitor different aspects of health care delivery systems for differing purposes. Health care deliverers are compared, rated, and given incentives with respect to their measured performance. However, the global health care domain is still striving to attain commonly accepted performance measurement models and base standards that can be used in information systems. The objective of this thesis is to develop an ontological framework to represent performance measurement and to apply this framework to interpret performance measurement studies semantically. More specifically, this study made use of a formal ontology development methodology, utilizing web ontology and semantic web rule languages with description logic, in order to develop a commonly accepted health care performance measurement ontology and knowledge base system. In the developed ontology, the dimensions, classes, attributes, rules and relationships used in the health care delivery and performance measurement domain are defined, forming an initial knowledge base for performance measurement studies and indicators. Furthermore, we applied the developed performance measurement ontology to the knowledge base, deriving the related performance indicators for predefined categories. The ontology is evaluated against the features of the Turkish health care system. Health care deliverer categories are identified and, by executing inference rules on the knowledge base, related indicators are retrieved. Results are evaluated by domain experts from regulatory and care provider institutions. The major benefit of the developed ontology is that it presents a sharable and extensible knowledge base that can be used in the newly emerging performance measurement domain. Moreover, this conceptualization and knowledge base system serve as a semantic indicator search tool that can be used in different health care settings.
29

Zelik, Daniel Justin. „On the Measurement and Visualization of Analysis Activity: A Study of Successful Strategies for Web-based Information Analysis“. The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1345488802.

30

Catherine, Catherine. „Simulation and Measurement of Non-Functional Properties of Web Services in a Service Market to Improve User Value“. Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-31351.

31

Pate, Karen Denise. „Developing a Framework for Evaluation of Corporate Non-Transactional Business-to-Consumer Web Sites: A Descriptive Study“. NSUWorks, 2010. http://nsuworks.nova.edu/gscis_etd/356.

Abstract:
During the soaring information economy of the last decade, organizations spent large sums of money on the development of Web sites without much knowledge of their performance value. In time, organizations realized that measuring Web site performance to determine value was fundamental. For transactional, e-commerce Business-to-Consumer (B2C) Web sites, this effort is straightforward because value is attached to sales. Measuring performance to determine the value of non-transactional B2C Web sites (i.e., sites that provide information, not sales) is more complex. This study examined the underexplored subject of evaluating non-transactional Web sites. Performance was defined as outcomes ranging from site visitor attributes to business impacts. Value was defined as the degree to which the site contributed to achieving business objectives. The resulting qualitative, exploratory study involved 45-60 minute semi-structured interviews conducted with 15 employees from four corporations across diverse industries regarding evaluation of non-transactional sites. Each interview was recorded with participant consent and transcribed. Interview results were aggregated, analyzed, and grouped based on themes and patterns. Logical groupings of participant opinions on topics such as associating Web initiatives to company business strategy, how Web success is defined, comfort with subjective measurement, and value placed on subjective measurement were identified and placed on several continuums. The study's result is a three phase process to evaluate non-transactional Web sites. Phase one is comprised of four components: 1) identify the company's Web belief system, 2) clarify the company's level of expectation for non-transactional Web sites, 3) determine which viewpoint (business, customer, or both) the company will use to evaluate Web site effectiveness and success, and 4) identify the purpose of evaluating the performance of Web sites. Phase two includes two components: 1) select applicable metrics and 2) collect appropriate data. To supplement Phase two, three tools/guides were developed: 1) expectations/evaluation considerations matrix, 2) sample business viewpoint metrics and 3) sample customer viewpoint metrics. Phase three consists of two components: 1) analyze the data and identify insights and 2) act upon the results. Together, this three phase process and accompanying tools constitute a practical framework for evaluating non-transactional Web sites.
32

Benáček, Martin. „Klimatizační komora pro teplotní zkoušky“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-229062.

Abstract:
This diploma thesis deals with the measurement of temperature. The main goal of the thesis is to design and build a conditioning chamber for the thermal testing of small devices and sensing elements. The thesis is divided into a theoretical and a practical part. The theoretical part covers possible sources of measurement error, devices and sensing elements for temperature measurement, and their classification. Furthermore, the measurement software and hardware for controlling and operating the system are described. The practical part details the concept and realization of the conditioning chamber, which is controlled by the graphical measurement software Control Web 6. The thesis concludes with an example measurement that serves as a practical demonstration of the measurement test.
33

Marang, Ah Zau. „Analysis of web performance optimization and its impact on user experience“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231445.

Abstract:
User experience (UX) is one of the most popular subjects in the industry nowadays and plays a significant role in business success. As the growth of a business depends on customers, it is essential to emphasize UX, which can help to enhance customer satisfaction. It has been stated that the overall end-user experience is to a great extent influenced by page load time, and that UX is primarily associated with the performance of applications. This paper analyzes the effectiveness of performance optimization techniques and their impact on user experience. Principally, the web performance optimization techniques used in this study were caching data, fewer HTTP requests, Web Workers and prioritizing content. A profiling method, manual logging, was utilized to measure performance improvements. A UX survey consisting of the User Experience Questionnaire (UEQ) and three qualitative questions was conducted for UX testing before and after the performance improvements. Quantitative and qualitative methods were used to analyze the collected data. Implementations and experiments in this study are based on an existing tool, a web-based application. Evaluation results show an improvement of 45% in app load time, but no significant impact on the user experience after the performance optimizations, which suggests that web performance did not matter much for the user experience in this setting. Limitations of the performance techniques and other factors that influence performance were found during the study.
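The manual-logging profiling used here is essentially bracketing an operation with timestamps; in a browser, the User Timing API does this cleanly (the render step being timed is a placeholder):

```typescript
// Bracket an operation with marks and read back the elapsed time.

function renderDashboard(): void {
  /* placeholder for the application work whose load time is under study */
}

performance.mark("render-start");
renderDashboard();
performance.mark("render-end");
performance.measure("render", "render-start", "render-end");

const [m] = performance.getEntriesByName("render");
console.log(`render took ${m.duration.toFixed(1)} ms`);
```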
34

Attoff, Tove. „Encouraging knowledge sharing in a web- based platform : A study concerning how to encourage engineers to share knowledge in a web-based platform for knowledge sharing and to use the platform as a tool for measuring the performance of work procedures“. Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-170451.

Abstract:
A design department at the company Sandvik AB in Kista, Stockholm, has built a web-based platform for knowledge sharing that allows the employees to share knowledge regardless of their role and position in the hierarchical structure of the organization. The web-based platform gathers disseminated information and provides an easy way of finding needed information, and thus enables increasing the productivity and efficiency of the employees. The purpose of the research was to find out what potential users find encouraging and motivating about using a web-based platform for knowledge sharing and a functionality for performance measurement. Compared to what is technically possible today, there is a lack of transparency in the company. The knowledge that exists within the company is dispersed and difficult for the employees in the organization to access. By gathering the knowledge and information and enabling the employees to share knowledge, they could potentially increase their performance of work assignments. The problem that this report addresses is that there are currently no good enough ways of measuring and keeping track of the performance of work assignments or routines in some departments of the company. The qualitative method of semi-structured interviews was used for gathering data in this research. The data have been analyzed with the method of content analysis. The result of the research is that there are several aspects that need to be considered when encouraging and motivating users to share knowledge in a web-based platform for knowledge sharing and to use it as a tool for performance measurement. The main aspects identified in this research are corporate culture, choice of performance measures, managers' responsibility, visibility and usage of the performance data, and availability of the web-based platform. These aspects concern the attitude of the company and how to encourage and motivate the users to want to use the web-based platform.
35

Christian, Leah Melani. „How mixed-mode surveys are transforming social research : the influence of survey mode on measurement in web and telephone surveys“. Online access for everyone, 2007. http://www.dissertations.wsu.edu/Dissertations/Summer2007/l_christian_070807.pdf.

36

Rehn, Michael. „Garbage Collected CRDTs on the Web : Studying the Memory Efficiency of CRDTs in a Web Context“. Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413299.

Abstract:
In today's connected society, where it is common to have several connected devices per capita, it is more important than ever that the data you need is omnipresent, i.e. it is available when you need it, no matter where you are. We identify one key technology and platform that could be the future—peer-to-peer communication and the Web. Unfortunately, guaranteeing consistency and availability between users in a peer-to-peer network, where network partitions are bound to happen, can be a challenging problem to solve. To solve these problems, we turned to a promising category of data types called CRDTs—Conflict-free Replicated Data Types. By following the scientific tradition of reproduction, we build upon previous research on a CRDT framework, and adjust it to work in a peer-to-peer Web environment, i.e. it runs in a Web browser. CRDTs make use of meta-data to ensure consistency, and it is imperative to remove this meta-data once it no longer has any use—if not, memory usage grows without bound, making the CRDT impractical for real-world use. There are different garbage collection techniques that can be applied to remove this meta-data. To investigate whether the CRDT framework and the different garbage collection techniques are suitable for the Web, we try to reproduce previous findings by running our implementation through a series of benchmarks. We test whether our implementation works correctly on the Web, and compare the memory efficiency of the different garbage collection techniques. In doing this, we also proved the correctness of one of these techniques. The results from our experiments showed that the CRDT framework was well-adjusted to the Web environment and worked correctly. However, while we could observe behaviour across the different garbage collection techniques similar to previous research, we achieved lower relative memory savings than expected. An additional insight was that for long-running systems that often reset their shared state, it might be more efficient not to apply any garbage collection technique at all. There is still much work to be done to allow for omnipresent data on the Web, but we believe that this research contains two main takeaways. The first is that the general CRDT framework is well-suited for the Web and that in practice it might be more efficient to choose different garbage collection techniques depending on your use case. The second takeaway is that by reproducing previous research, we can still advance the current state of the field and generate novel knowledge—indeed, by combining previous ideas in a novel environment, we are now one step closer to a future with omnipresent data.
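For readers new to CRDTs, the convergence guarantee and the metadata problem both show up already in the simplest state-based CRDT, a grow-only counter; this sketch is generic background, not the thesis's framework:

```typescript
// G-Counter: each replica increments its own slot; merge takes the
// per-replica maximum, so concurrent updates always converge. The
// per-replica map is exactly the kind of metadata that grows over time
// and motivates garbage collection.

type GCounter = Map<string, number>; // replicaId -> local increment count

function increment(c: GCounter, replicaId: string): void {
  c.set(replicaId, (c.get(replicaId) ?? 0) + 1);
}

function merge(a: GCounter, b: GCounter): GCounter {
  const out = new Map(a);
  for (const [id, n] of b) out.set(id, Math.max(out.get(id) ?? 0, n));
  return out;
}

function value(c: GCounter): number {
  let sum = 0;
  for (const n of c.values()) sum += n;
  return sum;
}

// Two replicas diverge, then reconcile deterministically.
const a: GCounter = new Map(), b: GCounter = new Map();
increment(a, "A"); increment(a, "A");
increment(b, "B");
console.log(value(merge(a, b))); // 3, on both sides after state exchange
```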
37

Battisti, Gerson. „Modelo de gerenciamento para infra-estrutura de medições de desempenho em redes de computadores“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2007. http://hdl.handle.net/10183/12671.

Der volle Inhalt der Quelle
Annotation:
The evaluation of computer network behavior through performance metrics is useful for protocol evaluation, application improvement, and content selection, among other tasks. Obtaining those metrics remains a complex task because, most of the time, no adequate tools are available for the purpose. Network performance is easy to obtain when the connections involved are limited to an Intranet, since administrators have unrestricted access to all the equipment involved. On the other hand, when the equipment belongs to different administrative domains, performance evaluation becomes very difficult. Measurement infrastructures were proposed as a way of endowing computer networks with instruments that make performance measurement easier. A measurement infrastructure is a set of devices, spread across several networks, designated to interact with one another and quantify the performance of the links between them. The advantages of using measurement infrastructures are clear, but their large-scale use is still problematic, mainly because of scalability problems and the inability to manage equipment belonging to different administrative domains. This work evaluated a set of measurement infrastructures and concluded that the treatment given to their management varies; in short, there is no standardized set of management functions. Based on this evaluation, the work presents a management model for measurement infrastructures. The proposed model maintains the administrative independence of the points in the infrastructure while allowing them to interact through specific management functions. A prototype was implemented and deployed in a real environment to validate the proposed management model. During the prototype evaluation period it was possible to confirm the importance of managing the components of measurement infrastructures: problems that occurred during that period were quickly detected and solved. The experience of using the prototype made it possible to observe the benefits that can be obtained from managing measurement infrastructures.
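As a rough illustration of what one point in such a measurement infrastructure does, the sketch below measures TCP connect latency to peer points and reports reachability, the kind of signal a management function must surface. The peer host names and the report format are invented for the example; the dissertation specifies its model at an architectural level, not this code.

```python
import socket
import time

def connect_latency(host: str, port: int = 80, timeout: float = 3.0) -> float | None:
    """Latency of one TCP handshake to a peer measurement point, in ms."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None  # peer unreachable: exactly what management must detect

# Peers would come from the infrastructure's per-domain registry
# (hypothetical hosts here).
peers = ["example.org", "example.net"]
for peer in peers:
    ms = connect_latency(peer)
    status = f"{ms:.1f} ms" if ms is not None else "unreachable"
    print(f"{peer}: {status}")
```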
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Palacín, Mateo Manuel. „The Internet ecosystem: a traffic measurement analysis“. Doctoral thesis, Universitat Pompeu Fabra, 2015. http://hdl.handle.net/10803/326736.

Der volle Inhalt der Quelle
Annotation:
The analysis of the interconnection status-quo between content providers and Internet Service Providers (ISPs) is essential to understand the evolution of the Internet ecosystem. In the last years we have witnessed a spectacular increase of Internet traffic, especially multimedia content, which has driven both content providers and operators to rethink their interconnection models. This thesis performs an extensive traffic analysis from two perspectives to understand the rationale behind the Internet players. First, we analyse the traffic from the perspective of the evolution of the Internet protocols. Analyzing the protocols we pretend to observe whether the traffic pattern has changed while new applications have emerged and the demand have exploded. Second, we collect a dataset of Internet traces to evaluate the connectivity between access ISPs and the most popular content providers. Analyzing the Internet traces we want to identify the correlations in the interconnection models that different Internet players use.
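A toy version of the first analysis, traffic share per protocol, is sketched below: it aggregates byte counts from flow records. The record fields and values are assumed for illustration; the thesis's actual datasets and tooling are not described here.

```python
from collections import defaultdict

# Hypothetical flow records: (protocol, bytes), as a trace export might yield.
flows = [
    ("HTTP", 120_000), ("HTTPS", 480_000), ("QUIC", 90_000),
    ("HTTPS", 310_000), ("BitTorrent", 45_000), ("HTTP", 60_000),
]

totals = defaultdict(int)
for proto, nbytes in flows:
    totals[proto] += nbytes

grand = sum(totals.values())
for proto, nbytes in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{proto:10s} {nbytes / grand:6.1%} of traffic")
```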
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

CARVALHO, Sidartha Azevedo Lobo de. „MDEM: um ambiente para avaliação do consumo de energia em multidispositivos baseado na web“. Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/16319.

Der volle Inhalt der Quelle
Annotation:
Smartphone sales have grown recently, particularly for devices running the Android operating system; across its many versions and the diversity of devices, a problem known as fragmentation has emerged. With the rise of multi-core and multi-processor devices, power constraints and overheating have also grown. Some of the studies surveyed focus on solving the fragmentation problem but do not consider energy data, while others only report techniques for reducing energy consumption; none enables integrated multi-device testing aimed at addressing fragmentation with an energy focus. This work provides the modeling and implementation of a measurement environment that helps evaluate the energy consumption of devices running the Android operating system. The proposed environment makes it possible to analyse the energy of devices at different processor frequencies and data networks, on multiple devices simultaneously, with control over the Web. A generic, low-cost measurement infrastructure that collects the voltage and current of battery-powered devices was built, together with a channel for communication with a computer for data analysis. On the software side, a Web platform was constructed for manipulating device components and replicating tests automatically. To exercise the proposed environment, eight tests covering Web browsing, video streaming, and CPU load were performed on the available smartphones, and the results are presented and discussed in detail.
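Energy figures in such an environment come from sampled voltage and current; a short sketch of the arithmetic (trapezoidal integration of instantaneous power) follows. The sample values and the sampling interval are made up, not taken from the dissertation's measurements.

```python
# Voltage (V) and current (A) sampled at a fixed interval during a test run.
dt = 0.01  # seconds between samples (assumed)
volts = [3.80, 3.79, 3.78, 3.78, 3.77]
amps  = [0.42, 0.55, 0.61, 0.58, 0.44]

power = [v * i for v, i in zip(volts, amps)]  # instantaneous watts
# Trapezoidal rule: E = sum over samples of dt * (p[k] + p[k+1]) / 2
energy_joules = sum(dt * (a + b) / 2 for a, b in zip(power, power[1:]))
print(f"energy over the window: {energy_joules:.4f} J")
```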
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Colombo, Regina Maria Thienne. „Proposta de uma metodologia de medição e priorização de segurança de acesso para aplicações WEB“. Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/3/3136/tde-23122014-142055/.

Der volle Inhalt der Quelle
Annotation:
In a technological and globally interconnected world, in which individuals and organizations frequently perform transactions on the web, software security is essential; it is needed in several niches: the security of computer networks, of computers, and of software. Implementing a security system that covers all aspects is extensive and complex, while the exploitation of vulnerabilities and attacks grows exponentially. Because of the nature of software and its availability on the web, security assurance will never be complete, but it is possible to plan, implement, measure, and evaluate a security system and ultimately improve it. Currently, specific security knowledge is detailed but fragmented into its various niches, and the view among security experts is usually tied to the internal computing environment. Measuring security attributes is a way to know and monitor the state of software security. This research presents a top-down approach for measuring the access security of web applications. Starting from a set of globally recognized, yet intangible, security properties, a methodology for measuring and prioritizing security attributes is proposed to establish the security level of web applications and guide the actions needed to improve it. A reference model for access security is defined, and the analytic hierarchy process supports the derivation of measurable attributes and the visualization of the access-security status of a web application.
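The analytic hierarchy process step can be made concrete: given a pairwise-comparison matrix over security attributes, the priorities are the normalized principal eigenvector, with a consistency check on the judgments. The attribute names and matrix values below are illustrative, not the thesis's data.

```python
import numpy as np

# Pairwise comparisons (Saaty 1-9 scale) among three assumed access-security
# attributes: authentication, authorization, session management.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)       # principal eigenvalue lambda_max
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                      # priority vector over the attributes

# Consistency ratio: CR = ((lambda_max - n) / (n - 1)) / RI, RI = 0.58 for n=3.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58
print("priorities:", np.round(w, 3), " consistency ratio:", round(cr, 3))
```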
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Sherrow, Breanna Lynn. „The Effects of MindPlay Virtual Reading Coach (MVRC) on the Spelling Growth of Students in Second Grade“. Diss., The University of Arizona, 2015. http://hdl.handle.net/10150/560804.

Der volle Inhalt der Quelle
Annotation:
First, this study was conducted to determine the effects of MVRC on the spelling development of second-graders. Second, it sought to determine whether spelling trajectories vary by gender, English Language Learner (ELL) enrollment, and/or Special Education (SPED) enrollment. Lastly, students' spelling tests were evaluated with two different scoring methods: traditional standardized scoring (correct/incorrect) and Curriculum-Based Measurement spelling (CBM; correct letter sequences), to determine which method was more sensitive to growth from pre-test to post-test. Students were pre-tested and post-tested with two measures from the Woodcock-Johnson IV Achievement battery, Test 3: Spelling and Test 16: Spelling of Sounds. Participants included 159 students: 83 in the experimental condition and 76 in the comparison condition. Using a multilevel model for repeated measures, the researcher estimated between-group analyses for Test 3: Spelling and Test 16: Spelling of Sounds. Students in the experimental condition, receiving MVRC, had significantly different spelling scores from their peers in the comparison group: for Test 3: Spelling, the experimental group increased on average by 1.786 words relative to the comparison group; for Test 16: Spelling of Sounds, by 1.741 words. Spelling trajectories did vary by gender, ELL enrollment, and SPED enrollment, but these differences were not significant. Neither traditional scoring nor CBM-spelling scoring was the more sensitive method across both tests: CBM-spelling was more sensitive for Test 3: Spelling, while traditional scoring was more sensitive for Test 16: Spelling of Sounds.
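A correct letter sequence (CLS) credits adjacent letter pairs, including the word boundaries, that appear in the right order. The scorer below is a deliberately simplified, position-aligned variant (hand scoring also credits shifted sequences, which this sketch does not), included only to make the metric concrete.

```python
def cls_score(target: str, response: str) -> int:
    """Correct letter sequences, simplified: compare boundary-padded
    adjacent pairs position by position."""
    t = f"^{target.lower()}$"
    r = f"^{response.lower()}$"
    pairs_t = list(zip(t, t[1:]))
    pairs_r = list(zip(r, r[1:]))
    return sum(p == q for p, q in zip(pairs_t, pairs_r))

# "because" has 8 scorable sequences; the misspelling below keeps 5 of them.
print(cls_score("because", "becuase"))  # -> 5
```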
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Pavliš, Michal. „Model měření výšky hladiny“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2009. http://www.nusl.cz/ntk/nusl-228750.

Der volle Inhalt der Quelle
Annotation:
The theoretical part of this master's thesis describes and explains the principles of, and options for, level measurement, covering the individual kinds and types of level-measurement sensors. It further describes the software and hardware employed to measure and control various systems and circuitry. The practical part consists of the design and realization of a level-measurement model, including its control, as well as the creation of operating instructions for the laboratory workspace and of several measurement and control programs in the Control Web 6 system. Above all, the work focuses on a model-based approach to these problems for the purposes of laboratory teaching.
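One of the sensor principles such theses cover, hydrostatic level measurement, reduces to h = p / (rho * g); a short worked example follows. The pressure reading and fluid constants are assumed values for illustration, not from the thesis's model.

```python
RHO_WATER = 998.0   # kg/m^3, water density near 20 degC (assumed fluid)
G = 9.81            # m/s^2, gravitational acceleration

def level_from_pressure(p_pa: float) -> float:
    """Liquid level (m) above the sensor from gauge pressure (Pa): h = p/(rho*g)."""
    return p_pa / (RHO_WATER * G)

# A gauge-pressure reading of 12.3 kPa corresponds to roughly 1.26 m of water.
print(f"{level_from_pressure(12_300):.2f} m")
```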
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Metzler, Anke [Verfasser], Marek [Akademischer Betreuer] Fuchs und Mick P. [Akademischer Betreuer] Couper. „The effect of assigning sample members to their preferred device on nonresponse and measurement in Web surveys / Anke Metzler ; Marek Fuchs, Mick P. Couper“. Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2020. http://d-nb.info/1203801610/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Olebjörk, Karin. „Måttning : Problematiken kring måttning inför val av storlek vid e-handel“. Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-14798.

Der volle Inhalt der Quelle
Annotation:
The purpose of this report is to highlight customers' problems in finding the right size when shopping for clothes in an online store. The focus is to investigate whether it is possible for customers to take accurate body or garment measurements with a measuring tape in order to select an appropriate clothing size in e-commerce. First, a market investigation was conducted to find out how clothing companies design their size guides. Then a study was carried out with 20 participating soldiers. They were asked to follow instructions for taking body measurements of themselves, to take the same measurements of each other, and to measure two selected garments; finally, they were measured by a garment technologist. The results showed very large spread, and several of the measurements spanned from -6 cm to +6 cm compared with the professional's measurement. One finding was that some measurement results improved when someone other than the person themselves performed the body measuring. Afterwards, the participants answered a few questions; 60 % of them believed that a video instruction would have facilitated the measuring. The conclusion is that it is important to inform customers, in the size guide, how to get help taking their measurements, and that an instructional film can facilitate the measuring. Garment measurements in the size guide can complement body measurements.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Rahaman, Sazzadur. „From Theory to Practice: Deployment-grade Tools and Methodologies for Software Security“. Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/99849.

Der volle Inhalt der Quelle
Annotation:
Following proper guidelines and recommendations is crucial in software security, and doing so is mostly obstructed by accidental human errors. Automatic screening tools have great potential to reduce the gap between theory and practice. However, the goal of scalable automated code screening is largely hindered by the practical difficulty of reducing false positives without compromising analysis quality. To enable compile-time security checking of cryptographic vulnerabilities, I developed highly precise static analysis tools (CryptoGuard and TaintCrypt) that developers can use routinely. The main technical enabler for CryptoGuard is a set of detection algorithms that refine program slices by leveraging language-specific insights, whereas TaintCrypt relies on symbolic-execution-based path-sensitive analysis to reduce false positives. Both CryptoGuard and TaintCrypt uncovered numerous vulnerabilities in real-world software, which demonstrates their effectiveness. Oracle has implemented our cryptographic code-screening algorithms for Java in its internal code analysis platform, Parfait, and detected numerous previously unknown vulnerabilities. I also designed a specification language named SpanL to easily express rules for automated code screening; SpanL enables domain experts to create domain-specific security checking. Unfortunately, tools and guidelines are not sufficient to ensure baseline security in internet-wide ecosystems. I found that the lack of proper compliance checking induced a huge gap in the payment card industry (PCI) ecosystem: none of the six PCI scanners we tested is fully compliant with the guidelines, and they issue certificates to merchants that still have major vulnerabilities. Consequently, 86% of the 1,203 e-commerce websites we tested are non-compliant. To improve the testbeds in light of our work, the PCI Security Council shared a copy of our PCI measurement paper with the companies that host, manage, and maintain the PCI certification testbeds.
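CryptoGuard's refined program slicing is far beyond a snippet, but a toy screener conveys the category of finding, for example hardcoded secret material flowing into a Java SecretKeySpec. The pattern below is a deliberately naive textual stand-in, not the tool's algorithm: real tools track data flow, not text.

```python
import re

JAVA_SNIPPET = '''
byte[] key = "s3cr3t-key-1234".getBytes();
SecretKeySpec spec = new SecretKeySpec(key, "AES");
'''

# Naive heuristic: a string literal passed (textually) to getBytes() in a
# file that also constructs a SecretKeySpec.
HARDCODED = re.compile(r'"([^"]+)"\s*\.getBytes\(\)')

if "SecretKeySpec" in JAVA_SNIPPET:
    for m in HARDCODED.finditer(JAVA_SNIPPET):
        print(f'possible hardcoded key material: "{m.group(1)}"')
```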
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Bissi, Wilson. „WS-TDD: uma abordagem ágil para o desenvolvimento de serviços WEB“. Universidade Tecnológica Federal do Paraná, 2016. http://repositorio.utfpr.edu.br/jspui/handle/1/1829.

Der volle Inhalt der Quelle
Annotation:
Test Driven Development (TDD) is an agile practice that gained popularity when it was defined as a fundamental part of eXtreme Programming (XP). The practice dictates that tests should be written before the code is implemented. TDD and its effects have been widely studied and compared with Test Last Development (TLD); however, few studies address TDD in the development of Web Services (WS), owing to the complexity of testing the dependencies among distributed components and the specific characteristics of Service Oriented Architecture (SOA). This study aims to define and validate an approach, called WS-TDD, for developing WS based on TDD. The approach guides developers in using TDD for WS, suggesting tools and techniques to deal with SOA's particularities and dependencies, and focusing on the creation of automated unit and integration tests in Java. To define and validate the proposed approach, four research methods were carried out: (i) a questionnaire; (ii) a practical experiment; (iii) a personal interview with each participant in the experiment; and (iv) triangulation of the results with the people who participated in the three previous methods. According to the results, WS-TDD was more efficient than TLD, increasing internal software quality and developer productivity. However, external software quality decreased, with a greater number of defects than under TLD. In short, the proposed approach is a simple and practical alternative for adopting TDD in WS development, benefiting internal quality and helping to increase developer productivity, at the cost of lower external software quality.
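The test-first rhythm the approach prescribes looks like this in miniature: a test is written against the service contract before the operation exists, seen to fail, and only then satisfied. The thesis works in Java; the sketch below uses Python's unittest for brevity, and the service operation and payload are hypothetical.

```python
import unittest

def convert_temperature(payload: dict) -> dict:
    """The unit under test: a small web-service operation, implemented
    only after the tests below were written and seen to fail."""
    celsius = float(payload["celsius"])
    return {"fahrenheit": celsius * 9 / 5 + 32}

class ConvertTemperatureContract(unittest.TestCase):
    # Written first (red), then the handler above is filled in (green).
    def test_freezing_point(self):
        self.assertEqual(convert_temperature({"celsius": 0})["fahrenheit"], 32.0)

    def test_boiling_point(self):
        self.assertAlmostEqual(
            convert_temperature({"celsius": 100})["fahrenheit"], 212.0)

if __name__ == "__main__":
    unittest.main()
```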
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Lisboa, Alana Regina Biagi Silva. „Aplicação de clustering e métricas à análise de LOG para avaliação automática de usabilidade de aplicações internet ricas“. Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/3222.

Der volle Inhalt der Quelle
Annotation:
The development of systems with effectiveness, efficiency, and user satisfaction has made usability an important feature in the quality assessment of software products. Analysing user interaction data is one way to measure this feature. New technologies allow the creation of systems focused on the quality of user interaction in which a large part of the processing occurs on the client side, as in Rich Internet Applications (RIAs). However, the user interaction data stored on the web server are insufficient to extract useful knowledge about the quality of interaction. This dissertation presents an approach for automatic usability evaluation that applies clustering and metrics to the log analysis of RIAs. The proposed approach uses the Web Application Usage Tracking Tool (WAUTT) to capture user interaction and the X-Means algorithm to perform clustering. The automatic evaluation yields metrics that provide quantitative information about the usability of web systems and may be used to assist evaluators in making decisions. The information obtained from the metrics was compared with a traditional evaluation method, heuristic evaluation, to corroborate the results of the proposed approach. Clustering proved useful for reducing the volume of data and allowed the evaluator to focus on particular groups of users with similar behaviours. Applying metrics to the clusters made it possible to perform a quantitative usability evaluation supporting decisions about some usability subcharacteristics. The automatic usability evaluation conducted with the proposed approach has the potential to assist in the development of web systems.
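The clustering step can be sketched as follows. scikit-learn ships no X-Means implementation, so the sketch selects the cluster count k by silhouette score over plain k-means, which is only a stand-in for the thesis's X-Means; the per-session feature values are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical per-session interaction features extracted from a RIA log:
# [events per minute, distinct widgets used, error events]
X = np.array([
    [12, 3, 0], [14, 4, 0], [11, 3, 1],   # fluent users
    [3, 1, 0], [2, 2, 0],                 # hesitant users
    [9, 5, 6], [10, 6, 7],                # error-prone sessions
], dtype=float)

best_k, best_s = 2, -1.0
for k in range(2, 5):  # X-Means would instead grow/split clusters itself
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    s = silhouette_score(X, labels)
    if s > best_s:
        best_k, best_s = k, s

print(f"chosen k={best_k} (silhouette={best_s:.2f})")
```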
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Solorio, Rigoberto. „A WEB-BASED TEMPERATURE MONITORING SYSTEM FOR THE COLLEGE OF ARTS AND LETTERS“. CSUSB ScholarWorks, 2015. https://scholarworks.lib.csusb.edu/etd/129.

Der volle Inhalt der Quelle
Annotation:
In general, server rooms have restricted access, requiring staff to possess access codes, keys, etc. Normally, only administrators are granted access, to protect the physical hardware and the data stored in the servers; servers also have firewalls to restrict outsiders from accessing them via the Internet. Because servers are costly, server rooms also need to be protected against overheating: this prolongs the life cycle of the units and can prevent data loss from hardware failure. At California State University, San Bernardino (CSUSB), the College of Arts and Letters server room has faced power failures that affected the air-conditioning (AC) unit; as a result, the room remained overheated for a long time, causing hardware failures in server units. This is why the project is important for the College and needs to be implemented as soon as possible. The administrators' old method of controlling the server room temperature was to manually adjust the temperature box inside the room; now it can be controlled and monitored remotely. The purpose of the Web-Based Temperature Monitoring System for the College of Arts and Letters proposed in this project is to allow users to monitor the server room temperature through a website, using any computer or mobile device with Internet access. The system also notifies users when the room reaches a critical temperature by sending an email or text message to the server room administrator. The system is for the exclusive use of the College of Arts & Letters (CAL) server room, and the administrator is the only person who can grant access to others by creating an account. For this project, three prototypes will be implemented: the first to measure the current server room temperature, the second to show the temperature history of the room, and the third to provide a built-in search for the times at which given temperatures were reached.
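The notification path reduces to a threshold check plus an alert. A hedged sketch is below; the SMTP host, addresses, threshold, and the read_temperature stub are placeholders, not the project's actual configuration or hardware interface.

```python
import smtplib
from email.message import EmailMessage

CRITICAL_F = 85.0  # assumed critical threshold, in Fahrenheit

def read_temperature() -> float:
    # Placeholder for the sensor read; the project polls real hardware.
    return 88.2

def alert(temp: float) -> None:
    """Email the administrator that the room is above the critical threshold."""
    msg = EmailMessage()
    msg["Subject"] = f"Server room critical: {temp:.1f} F"
    msg["From"] = "monitor@example.edu"
    msg["To"] = "admin@example.edu"
    msg.set_content("Check the AC unit in the CAL server room.")
    with smtplib.SMTP("smtp.example.edu") as smtp:  # placeholder relay
        smtp.send_message(msg)

temp = read_temperature()
if temp >= CRITICAL_F:
    alert(temp)
```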
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Gyrard, Amélie. „Concevoir des applications internet des objets sémantiques“. Thesis, Paris, ENST, 2015. http://www.theses.fr/2015ENST0018/document.

Der volle Inhalt der Quelle
Annotation:
According to Cisco's predictions, there will be more than 50 billion devices connected to the Internet by 2020. The devices and the data they produce are mainly exploited to build domain-specific Internet of Things (IoT) applications; from a data-centric perspective, these applications are not interoperable with each other. To assist users, or even machines, in building promising inter-domain IoT applications, the main challenges are exploiting, reusing, interpreting, and combining sensor data. To overcome these interoperability issues, we designed the Machine-to-Machine Measurement (M3) framework, which consists of: (1) generating templates to easily build Semantic Web of Things applications; (2) semantically annotating IoT data to infer high-level knowledge, reusing domain expertise as much as possible; and (3) a semantic-based security application to assist users in designing secure IoT applications. Regarding the reasoning part, stemming from Linked Open Data, we propose an innovative idea called 'Linked Open Rules' to easily share and reuse the rules used to infer high-level abstractions from sensor data. The M3 framework has been suggested to standardization bodies and working groups such as ETSI M2M, oneM2M, the W3C SSN ontology, and the W3C Web of Things. Proofs of concept of the flexible M3 framework have been developed on the cloud (http://www.sensormeasurement.appspot.com/) and embedded on Android-based constrained devices.
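The semantic annotation step can be pictured with a few triples: a raw reading becomes an observation described with the W3C SSN/SOSA vocabulary. The snippet below uses rdflib and an invented example namespace; the exact terms M3 emits may differ, so treat it as a sketch of the idea rather than the framework's output.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")   # W3C SSN/SOSA vocabulary
EX = Namespace("http://example.org/iot/")        # placeholder namespace

g = Graph()
g.bind("sosa", SOSA)

obs = EX["obs/42"]
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.madeBySensor, EX["sensor/thermometer-1"]))
g.add((obs, SOSA.observedProperty, EX["property/airTemperature"]))
g.add((obs, SOSA.hasSimpleResult, Literal(21.4, datatype=XSD.double)))

print(g.serialize(format="turtle"))
```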
APA, Harvard, Vancouver, ISO und andere Zitierweisen