Dissertations / Theses on the topic 'Web metrics'

Consult the top 50 dissertations / theses for your research on the topic 'Web metrics.'




1

Asif, Sajjad. "Investigating Web Size Metrics for Early Web Cost Estimation." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16036.

Abstract:
Context. Web engineering is a young research field that applies engineering principles to the production of high-quality web applications. Web applications have grown more complex over time, and the wide range of application types makes it difficult to analyze the web metrics used for estimation. Accurate estimates of web development effort play an important role in the success of large-scale web development projects. Objectives. In this study I investigated the size metrics and cost drivers used by web companies for early web cost estimation. I also aimed to validate the findings through industrial interviews and a web quote form designed around the metrics that occurred most frequently across the companies analyzed. Secondly, this research revisits previous work by Mendes (a senior researcher and contributor in this area) to check whether early web cost estimation trends have remained the same or changed. The ultimate goal is to help companies with web cost estimation. Methods. The first research question was answered by conducting an online survey of 212 web companies and examining their web predictor (quote) forms. All companies included in the survey used web forms to quote on web development projects based on the size and cost measures they gather. The second research question was answered by extracting the most frequently occurring size metrics from the results of Survey 1. The list of size metrics was validated in two ways: (i) industrial interviews were conducted with 15 web companies to validate the results of the first survey, and (ii) a quote form was designed from the validated interview results and sent to web companies around the world to collect data on real web projects. The gathered project data were analyzed using a CBR tool, and the results were validated against the industrial interview results and Survey 1. The final results were compared with the earlier research to answer the third research question: whether the size metrics have changed.
All research findings were contributed to the Tukutuku research benchmark project. Results. "Number of pages/features" and "responsive implementation" are the top web size metrics for early web cost estimation. Conclusions. This research investigated metrics that can be used for cost estimation at the early stage of web application development, when the application has not yet been built, requirements are still being collected, and an expected cost is being evaluated. A list of new metric variables that can be added to the Tukutuku project is presented.
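The CBR analysis mentioned above refers to case-based reasoning, which estimates a new project's effort from its most similar past projects. A minimal sketch of that idea in Python, with an invented case base and feature names (not the Tukutuku data or the specific tool used in the thesis):

```python
# Minimal case-based reasoning (CBR) effort estimator: a new project's
# effort is predicted as the mean effort of the k most similar past projects.
import math

# Hypothetical case base: (size metrics, known effort in person-hours).
cases = [
    ({"pages": 5,  "features": 2, "responsive": 1}, 80.0),
    ({"pages": 12, "features": 5, "responsive": 1}, 220.0),
    ({"pages": 3,  "features": 1, "responsive": 0}, 40.0),
    ({"pages": 20, "features": 8, "responsive": 1}, 400.0),
]

def _ranges(cases):
    """Min/max of each metric, used to normalize distances."""
    keys = cases[0][0].keys()
    return {k: (min(c[0][k] for c in cases), max(c[0][k] for c in cases))
            for k in keys}

def estimate_effort(project, cases, k=2):
    """Mean effort of the k nearest cases (range-normalized Euclidean distance)."""
    rng = _ranges(cases)

    def dist(a, b):
        total = 0.0
        for key, (lo, hi) in rng.items():
            span = (hi - lo) or 1.0  # avoid division by zero
            total += ((a[key] - b[key]) / span) ** 2
        return math.sqrt(total)

    nearest = sorted(cases, key=lambda c: dist(project, c[0]))[:k]
    return sum(effort for _, effort in nearest) / k

print(estimate_effort({"pages": 10, "features": 4, "responsive": 1}, cases))  # → 150.0
```

Real CBR tools for effort estimation add feature weighting and adaptation rules, but the retrieve-similar-then-average step above is the core of the technique.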
2

Arenberg, Tom. "Impact of Web Metrics on News Decisions." Thesis, The University of Alabama, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10255179.

Abstract:

Many news organizations are trying to maximize their online audience in an attempt to bring greater exposure to their work and attract advertising. Grounded in Resource Dependency Theory and System of Professions theory, this comparative case study of two divergent news organizations sought to identify how degree of pursuit of audience metrics affects the nature of an organization’s journalism. The study showed that differences in degree of pursuit led to differences in the nature of news content and in the nature of determinations of newsworthiness. A greater emphasis on metrics led one organization toward a lower percentage of civic issue stories, less story depth, a better understanding of online traffic creation, greater use of text and ideas from public relations professionals, and less use of traditional journalistic abstract knowledge to determine newsworthiness. Crucially, however, in the newsroom of greater metric use, a commitment to the traditional journalistic norm of civic duty served to reduce the differences between the organizations. The implications for journalism are discussed.

3

Zahran, Dalal Ibrahem. "Web engineering for the evaluation of large complex web systems : methodologies in web metrics." Thesis, Edinburgh Napier University, 2016. http://researchrepository.napier.ac.uk/Output/978652.

Abstract:
Roaming the Internet, users sometimes encounter severe problems or feel dissatisfied with a particular site. E-government websites are the public gateways to information and services, but there is still no agreement on how to assess a government's online presence. Failure of e-government projects to achieve their goals is common, and there is uncertainty about how best to evaluate an e-government website. It has been argued that existing evaluation frameworks have methodological limitations and have mostly neglected citizens. There is a lack of an engineering approach to building web systems, and the literature on measuring website quality is limited. There is uncertainty in the selection of evaluation methods, and some risk of standardizing inadequate evaluation practices. To manage the complexity of web applications, Web Engineering is emerging as a new discipline for the development and evaluation of web systems that promotes high-quality websites. But web quality is still a debatable issue, and web metrics remain a valuable area of ongoing research. This research therefore focuses on the methodological issues underlying web metrics and on how to develop an applicable set of measurements for designing websites. The main aim is to create new metrics for web engineering and to develop a generalizable measurement framework for local e-government, since research in this field is limited. The study adopted a positivist, quantitative approach and used triangulated web evaluation methods (heuristic evaluation, user testing, automatic link checkers, and Alexa) in a multiple-case study of Saudi city websites. The proposed E-City Usability Framework is unique in integrating three dimensions of measurement (website usability, e-services, and the number and type of e-services) and in using multiple orientations to cover several aspects of e-government: output (information and services), outcomes (citizen-centricity indicators), model, and model-based assessments.
Existing e-government models were criticized, and the findings were employed in developing the proposed framework. The best web evaluation methods were heuristic evaluation and user testing, while link checkers and Alexa proved to be unreliable tools; nevertheless, they can serve as a useful complementary approach. Saudi city websites were ranked by website quality, e-services, and overall evaluation. Common usability problems in these websites included sites that were not citizen-centered, limited e-services and information, no e-transactions, no emergency alerts, no municipal budget, and no city council reports. They also suffered from broken links, an inactive city map, a poor eComplaint section, and a nonfunctioning search facility.
4

Miehling, Mathew J. "Correlation of affiliate performance against web evaluation metrics." Thesis, Edinburgh Napier University, 2014. http://researchrepository.napier.ac.uk/Output/7250.

Abstract:
Affiliate advertising is changing the way people do business online. Retailers now offer incentives to third-party publishers for advertising goods and services on their behalf in order to capture more of the market. Online advertising spending has already overtaken that of traditional advertising in all other channels in the UK and is slated to do so worldwide as well [1]. In this highly competitive industry, the livelihood of a publisher is intrinsically linked to their web site's performance. Understanding the strengths and weaknesses of a web site is fundamental to improving its quality and performance. However, the definition of performance may vary between business sectors or even between sites in the same sector. In the affiliate advertising industry, performance is generally measured by the fulfilment of advertising campaign goals, which often equates to the ability to generate revenue or brand awareness for the retailer. This thesis explores the correlation of web site evaluation metrics with the business performance of a company within an affiliate advertising programme. To explore this correlation, an automated evaluation framework was built to examine a set of web sites from an active online advertising campaign. A purpose-built web crawler examined over 4,000 sites from the advertising campaign in approximately 260 hours, gathering data used to examine URL similarity, URL relevance, search engine visibility, broken links, broken images, and presence on a blacklist. The gathered data were used to calculate a score for each of these features, which were then combined into an overall HealthScore for each publisher. The evaluated metrics focus on the categories of domain and content analysis. From the performance data available, it was possible to calculate the business performance of the 234 active publishers using the number of sales and click-throughs they achieved.
When the HealthScores and performance data were compared, the HealthScore was able to predict a publisher's performance with 59% accuracy.
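The HealthScore described above combines per-feature scores into a single site-level score. The abstract does not give the combination formula, so the following Python sketch is only illustrative: the feature names, score values, and equal weighting are all assumptions, not the thesis's actual method.

```python
# Illustrative combination of per-feature scores (each in [0, 1]) into a
# single site "health" score; feature names and weights are hypothetical.
def health_score(features, weights=None):
    """Weighted average of feature scores, scaled to 0-100."""
    if weights is None:
        weights = {name: 1.0 for name in features}  # equal weighting by default
    total_weight = sum(weights[name] for name in features)
    weighted = sum(features[name] * weights[name] for name in features)
    return 100.0 * weighted / total_weight

site = {
    "broken_links": 0.9,       # 1.0 = no broken links found
    "broken_images": 1.0,
    "search_visibility": 0.6,
    "not_blacklisted": 1.0,
}
print(round(health_score(site), 1))  # → 87.5
```

A weighted average like this makes it easy to down-weight noisy features later; the thesis's 59% prediction accuracy suggests how predictive (or not) such a combined score can be of sales and click-throughs.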
5

Dash, Suvendu Kumar. "Context-based metrics for evaluating changes to web pages." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/79.

Abstract:
The web provides a lot of fluid information but this information changes, moves, and even disappears over time. Bookmark lists, portals, and paths are collections where the building blocks are web pages, which are susceptible to these changes. A lot of research, both in industry and in academia, focuses on organizing this vast amount of data. In this thesis, I present context-based algorithms for measuring changes to a document. The methods proposed use other documents in a collection as the context for evaluating changes in the web pages. These metrics will be used in maintaining paths as the individual pages in paths change. This approach will enhance the evaluations of change made by the currently existing Path Manager, in the Walden's Paths project that is being developed in the Center for the Study of Digital Libraries at Texas A&M University.
6

Zhang, Yanlong. "Quality Modelling and Metrics of Web-based Information Systems." Thesis, Oxford Brookes University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492158.

Abstract:
In recent years, the World Wide Web has become a major platform for software applications. Web-based information systems are involved in many areas of everyday life, such as education, entertainment, business, manufacturing, and communication. Because web-based systems are usually distributed, multimedia, interactive, and cooperative, and their production processes usually follow ad-hoc approaches, the quality of web-based systems has become a major concern. Existing quality models and metrics do not fully satisfy the needs of quality management of web-based systems. This study applied and adapted software quality engineering methods and principles to address the following issues: a quality modelling method for deriving quality models of web-based information systems, and the development, implementation and validation of quality metrics for key quality attributes of web-based information systems, including navigability and timeliness. The quality modelling method proposed in this study has the following strengths: it is more objective and rigorous than existing approaches; the quality analysis can be conducted on the design in the early stages of the system life cycle; and it is easy to use and can provide insight into improving system designs. Results of case studies demonstrated that the quality modelling method is applicable and practical, and practitioners can use it to develop their own quality models. This study is among the first comprehensive attempts to develop quality measurement for web-based information systems. First, it identified the relationship between website structural complexity and navigability; quality metrics for navigability were defined, investigated and implemented, and empirical studies were conducted to evaluate them. Second, the study investigated website timeliness and attempted to find direct and indirect measures for this quality attribute.
Empirical studies validating these metrics were also conducted. The study also suggests four areas of future research that may be fruitful.
7

Weischedel, Birgit. "The use of web metrics for online strategic decision-making." University of Otago. Department of Marketing, 2005. http://adt.otago.ac.nz./public/adt-NZDU20060809.132936.

Abstract:
"I know but one freedom, and that is the freedom of the mind" Antoine de Saint-Exupery. Web metrics offer significant potential for online businesses to incorporate high-quality, real-time information into their strategic marketing decision-making (SDM) process. This SDM process is affected by the firm's strategic direction, which is critical for web businesses. A review of the widely researched strategy and SDM literature identified that managers use extensive information to support and improve strategic decisions. Offline SDM processes might be appropriate for the online environment, but the limited literature on web metrics has not researched information needs for online SDM. Even though web metrics can be a valuable tool for web businesses to inform strategic marketing decisions, and their collection might be less expensive and easier than that of offline measures, virtually no published research has combined web metrics and SDM concepts in one research project. To address this gap in the literature, the thesis investigated the differences and commonalities of online and offline SDM process approaches, the use of web metrics categories for online SDM stages, and the issues encountered during that process, through four research questions. A preliminary conceptual model based on the literature review was refined through preliminary research, which addressed the research questions and investigated the current state of web metrics. After investigating various methodologies, a multi-stage qualitative methodology was selected. The use of qualitative methods represents a contribution to knowledge regarding methodological approaches to online research. Four stages within the online SDM process were shown to benefit from the use of web metrics: the setting of priorities, the setting of objectives, the pretest stage and the review stage.
The results identified the similarity of online and offline SDM processes; demonstrated that the Traffic, Transactions, Customer Feedback and Consumer Behaviour categories provide basic metrics used by most companies; identified the Environment, Technology, Business Results and Campaigns categories as supplementary categories applied according to marketing objectives; and examined the results by type of company (website classification, channel focus, size and cluster association). Three clusters were identified that relate to the strategic importance of the website and web metrics. Modifying the initial conceptual model, six issues were distinguished that affect the use of web metrics: the adoption and use of web metrics by managers; the integration of multiple sources of metrics; the establishment of industry benchmarks; data quality; the differences from offline measures; and resource constraints that interfere with appropriate web metrics analysis. Links to offline marketing strategy literature and established business concepts were explored, and explanations provided where the results confirmed or modified these concepts. Using qualitative methods, the research assisted in building theory of web metrics and online SDM processes. The results show that offline theories apply to the online environment and that conventional concepts provide guidance for online processes. Dynamic aspects of strategy relate to the online environment, and qualitative research methods appear suitable for online research. Publications during this research project:
Weischedel, B., Matear, S. and Deans, K. R. (2003) "The Use of E-metrics in Strategic Marketing Decisions - A Preliminary Investigation". Business Excellence '03 - 1st International Conference on Performance Measures, Benchmarking and Best Practices in the New Economy, Guimaraes, Portugal; June 10-13, 2003.
Weischedel, B., Deans, K. R. and Matear, S. (2004) "Emetrics - An Empirical Study of Marketing Performance Measures for Web Businesses". Performance Measurement Association Conference 2004, Edinburgh, UK; July 28-30, 2004.
Weischedel, B., Matear, S. and Deans, K. R. (2005) "A Qualitative Approach to Investigating Online Strategic Decision-Making", Qualitative Market Research, Vol. 8 No. 1, pp. 61-76.
Weischedel, B., Matear, S. and Deans, K. R. (2005) "The Use of Emetrics in Strategic Marketing Decisions - A Preliminary Investigation", International Journal of Internet Marketing and Advertising, Vol. 2 Nos 1/2, pp. 109-125.
8

Noruzi, Alireza. "Web Presence and Impact Factors for Middle-Eastern Countries." Information Today, Inc, 2006. http://hdl.handle.net/10150/106515.

Abstract:
This study investigates the Web presence and Web Impact Factor (WIF) for the country code top-level domains (ccTLDs) of Middle-Eastern countries, and for sub-level domains (SLDs) related to education and academic institutions in those countries. Counts of links to the web sites of Middle-Eastern countries were calculated from the output of the Yahoo search engine. The WIF was computed at two levels: top-level domains and sub-level domains. The results show that the Middle-Eastern countries, apart from Turkey, Israel and Iran, have a low web presence, and that their web sites have a low inlink WIF. Specific features of sites may affect a country's Web Impact Factor. For linguistic reasons, Middle-Eastern web sites (in the Persian, Kurdish, Turkish, Arabic, and Hebrew languages) may not receive the attention they deserve from the World Wide Web community.
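The inlink WIF used above follows the standard definition from webometrics: the number of links pointing to a site divided by the number of pages in that site. A small Python sketch of the calculation, with invented counts and placeholder domains standing in for the Yahoo query results actually used in the study:

```python
# Web Impact Factor (WIF): links pointing to a site divided by the number
# of pages in that site. All figures below are invented for illustration;
# the study derived its counts from Yahoo search-engine queries.
def web_impact_factor(inlinks, pages):
    """Inlink WIF = inlink count / indexed page count."""
    if pages == 0:
        raise ValueError("page count must be positive")
    return inlinks / pages

# Hypothetical ccTLD figures: (inlink count, indexed page count)
domains = {
    "example.ir": (120_000, 2_400_000),
    "example.tr": (900_000, 6_000_000),
}
for domain, (inlinks, pages) in domains.items():
    print(domain, round(web_impact_factor(inlinks, pages), 3))
```

Because both counts come from search-engine indexes, the ratio is only as reliable as the engine's coverage, which is part of why WIF results are usually compared across domains queried the same way rather than read as absolute values.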
9

Dibrova, Alisa. "Web analytics. Website analysis with Google Analytics and Yandex Metrics." Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-22200.

Abstract:
The scope of my research is web analytics. This paper describes the process of usability analysis of the website belonging to Sharden Hus, a company situated in Stockholm. From the many existing web analysis tools I chose the two most popular ones, Google Analytics and Yandex Metrics. In similar projects that I have read, website redesign was based on both quantitative (statistical) and qualitative (user interviews, user tests) data. In contrast to previous website improvement projects using similar tools, I decided to base the changes to the website only on quantitative data obtained with the Google and Yandex counters. This was done in order to determine whether, and how, the Google and Yandex tools can improve website performance, and to see whether web analytics counters alone provide statistical data sufficient for correct interpretation by a web analytics designer, leading to improved website performance. The results of my study showed that the Google and Yandex counters, isolated from qualitative methods, can improve website performance. In particular, the number of visits from Sweden almost doubled, the overall bounce rate was reduced, and the number of visits to the page containing order forms increased significantly.
10

Atterlönn, Anton, and Benjamin Hedberg. "GUI Performance Metrics Framework : Monitoring performance of web clients to improve user experience." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247940.

Abstract:
When using graphical user interfaces (GUIs), the main problems that frustrate users are long response times and delays. These problems create a bad impression of the GUI, as well as of the company that created it. A GUI provided to users should be intuitive, easy to use and simple while still delivering good performance. However, some factors that play a major role in performance are outside the developers' hands, namely the client's internet connection and hardware. Since every client has a different combination of internet connection and hardware, it can be difficult to satisfy everyone while still providing an intuitive and responsive GUI. The aim of this study is to find a way to monitor the performance of a web GUI, where performance comprises response times and render times, and in doing so enable their improvement by collecting data that can be analyzed. A framework that monitors the performance of a web GUI was developed as a proof of concept. The framework collects relevant performance data about the web GUI and stores it in a database. The stored data can then be manually analyzed by developers to find the system's performance weak points. This is achieved without interfering with the GUI or impacting the user experience negatively.
11

Cutshall, Robert C. "An investigation of success metrics for the design of e-commerce Web sites." Thesis, University of North Texas, 2004. https://digital.library.unt.edu/ark:/67531/metadc4465/.

Abstract:
The majority of Web site design literature concentrates on the technical and functional aspects of Web site design; there is a definite lack of literature in the IS field concentrating on the visual and aesthetic aspects of Web design. Preliminary research into the relationship between visual design and successful electronic commerce Web sites was conducted, with the emphasis on answering three questions: What role do visual design elements play in the success of electronic commerce Web sites? What role do visual design principles play? What role do the typographic variables of visual design play? Forty-three undergraduate students enrolled in an introductory-level MIS course used a Likert-style survey instrument to evaluate aesthetic aspects of 501 electronic commerce Web pages. The instrument employed a taxonomy of visual design focused on three dimensions: design elements, design principles, and typography. The data collected were correlated against Internet-usage success metrics provided by Nielsen/NetRatings. Results indicate that 22 of the 135 tested relationships were statistically significant. Positive relationships existed between four different aesthetic dimensions and a single success measure; the other 18 significant relationships were negatively correlated. The visual design elements of space, color as hue, and value were negatively correlated with three of the success measures. The visual design principles of contrast, emphasis radiated through contrast, and contrast shape were negatively correlated with three of the success measures. Finally, the typographic variables of placement and type size were both negatively correlated with two of the success measures. This research provides support for the importance of visual design theory in Web site design.
This preliminary research should be viewed as a realization of the need for Web sites to be designed with both visual design theory and usability in mind.
12

Magapu, Akshay Kumar, and Nikhil Yarlagadda. "Performance, Scalability, and Reliability (PSR) challenges, metrics and tools for web testing : A Case Study." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12801.

Abstract:
Context. Testing of web applications is an important task, as it ensures their functionality and quality. The quality of a web application falls under non-functional testing. There are many quality attributes, such as performance, scalability, reliability, usability, accessibility and security. Among these, performance, scalability and reliability (PSR) are the most important and most commonly considered in practice. However, very few empirical studies have been conducted on these three attributes. Objectives. The purpose of this study is to identify the metrics and tools available for testing these three attributes, and to identify the challenges faced while testing them, both in the literature and in practice. Methods. A systematic mapping study was conducted to collect information regarding the metrics, tools, challenges and mitigations related to the PSR attributes, with information gathered by searching five scientific databases. We also conducted a case study to identify the metrics, tools and challenges of the PSR attributes in practice. The case study was conducted at Ericsson, India, where eight subjects were interviewed; four subjects working at other companies in India were also interviewed to validate the results obtained from the case company. In addition, a few documents from the case company's previous projects were collected for data triangulation. Results. A total of 69 metrics, 54 tools and 18 challenges were identified from the systematic mapping study; 30 metrics, 18 tools and 13 challenges were identified from the interviews; and 16 metrics, 4 tools and 3 challenges were identified from the collected documents. We compiled a list based on the analysis of the data related to tools, metrics and challenges. Conclusions. We found that the metrics available in the literature overlap with the metrics used in practice.
However, the tools found in the literature overlap with practice only to some extent. The main reason for this deviation is the limitations identified in the tools, which led the case company to develop its own in-house tool. We also found that the challenges partially overlap between the state of the art and practice. We were unable to find mitigations for all of these challenges in the literature, so further research is needed. Among the PSR attributes, most of the literature concerns the performance attribute, and most interviewees were most comfortable answering questions about it. We therefore conclude that there is a lack of empirical research on the scalability and reliability attributes. Our research deals with the PSR attributes in particular, and there is scope for further work in this area: it could be extended to other quality attributes, and the research could be conducted at a larger scale, considering more companies.
13

Freire, André Pimenta. "Acessibilidade no desenvolvimento de sistemas web: um estudo sobre o cenário brasileiro." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-06052008-101644/.

Abstract:
Universal access to content in Web-based systems is essential to enable everyone to have access to it, regardless of disabilities or any other restrictions. Several studies indicate that, although federal legislation regarding Web accessibility has been promulgated in many countries, accessibility is still an issue for many Web sites. The limited awareness of accessibility among people involved in Web development, and the lack of appropriate development techniques in the construction of applications, have a deep impact on accessibility. A few surveys have been carried out to identify the main characteristics of Web developers regarding accessibility concepts and techniques for accessibility. However, the studies reported to date have only investigated the use of a restricted set of techniques by developers. Moreover, they have not addressed the correlation between the answers provided by the subjects and the accessibility level of their Web pages. The proposal of the work presented in this master's thesis is the development of a survey on accessibility awareness and on the use of accessibility techniques by people involved in the development of Web-based systems. Automatic, metric-based accessibility evaluations of the Web sites developed by the subjects were carried out to support the investigation of the impact of the issues investigated on Web page accessibility and on accessibility awareness. The survey was answered by 613 subjects from all Brazilian states. The results show that in Brazil accessibility awareness is still very limited. Training people involved in Web projects on technical issues alone is not enough. It is necessary to promote a wider awareness of accessibility and of the problems people with different restrictions and abilities deal with when using the Web.
APA, Harvard, Vancouver, ISO, and other styles
14

Cheatham, Michelle Andreen. "The Properties of Property Alignment on the Semantic Web." Wright State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=wright1407775249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Birkestedt, Sara, and Andreas Hansson. "Can web-based statistic services be trusted?" Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5282.

Full text
Abstract:
A large number of statistic services exist today, which shows that there is great interest in knowing more about the visitors to a web site. But how reliable are the results these services give? The hypothesis examined in the thesis is: Web-based statistic services do not show an accurate result. The purpose of the thesis is to find out how accurate the web-based statistic services are regarding unique visitors and the number of pages viewed. Our hope is that this thesis will bring more knowledge about the different statistic services that exist today and the problems surrounding them. We also draw attention to the importance of knowing how your statistics software works in order to interpret the results correctly. To investigate this, we chose to do practical tests on a selection of web-based statistic services. The services registered the traffic from the same web site during a test period. During the same period a control program registered the same things and stored the result in a database. In addition to the test, we interviewed a person working with web statistics. The investigation showed that there are big differences between the results of the web-based statistic services in the test and that none of them showed an accurate result, for either the total number of page views or the number of unique visitors. This led us to the conclusion that web-based statistic services do not show an accurate result, which verifies our hypothesis. The interview also confirmed that there is a problem with measuring web statistics.
A large number of statistics systems exist today for finding out information about the visitors to a website. But how reliable are these services, really? The purpose of the thesis is to find out how reliable they are when it comes to showing the number of unique visitors and the total number of page views. The hypothesis we formulated is: Web-based statistic services do not show an accurate result. To test this, we carried out practical tests of five different web-based statistic services, used on the same website during the same period. We stored the information these services registered in a database, while using our own control program to measure the same figures. We also interviewed a person who works with web statistics at a web company. The investigation shows that the results of the different services differ considerably, both compared with each other and with the control program. This applied both to the number of page views and to unique visitors. This leads to the conclusion that the systems do not show accurate figures, which allows us to verify our hypothesis. The interview we conducted also pointed to the problems involved in measuring visitor statistics.
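The thesis compares commercial statistic services against a purpose-built control program that records the same traffic. A minimal sketch of what such a control program counts, assuming each page view is logged as a (visitor_id, page) pair (the names here are illustrative, not the thesis's actual implementation):

```python
from collections import Counter

def summarize_log(entries):
    """Compute total page views and unique visitors from (visitor_id, page) entries.

    Each entry records one page view by one visitor; unique visitors are
    distinct visitor ids, page views are simply the number of entries.
    """
    page_views = len(entries)
    unique_visitors = len({visitor for visitor, _ in entries})
    views_per_page = Counter(page for _, page in entries)
    return {"page_views": page_views,
            "unique_visitors": unique_visitors,
            "views_per_page": dict(views_per_page)}

log = [("a", "/home"), ("a", "/news"), ("b", "/home"), ("a", "/home")]
stats = summarize_log(log)
```

Even this trivial definition hides the ambiguities the thesis points to: how a "visitor" is identified (cookie, IP address, session) differs between services, which alone explains much of the divergence between their figures.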
APA, Harvard, Vancouver, ISO, and other styles
16

Imran, Muhammad. "The impact of consolidating web based social networks on trust metrics and expert recommendation systems." Thesis, University of Southampton, 2015. https://eprints.soton.ac.uk/385200/.

Full text
Abstract:
Individuals are typically members of a variety of web-based social networks (both explicit and implied), but existing trust inference mechanisms typically draw on only a single network to calculate trust between any two individuals. This reduces both the likelihood that a trust value can be calculated (as both people have to be members of the same network) and the quality of any trust inference that can be drawn (as it will be based on only a single network, typically representing a single type of relationship). To make trust calculations on Multiple Distributed (MuDi) social networks, those networks must first be consolidated into a single network. Two challenges that arise when consolidating MuDi networks are their heterogeneity, due to the different name representation techniques used for participants, and the variability of trust information, due to the different trust evaluation criteria across the candidate networks. Semantic technologies are vital for dealing with the heterogeneity issues, as they permit data to be linked from multiple resources and help it to be modelled in a uniform representation using ontologies. The inconsistency of multiple trust values from different networks is handled using data fusion techniques, as simpler aggregation techniques such as summation and weighted averages tend to distort trust data. To test the proposed semantic framework, two sets of experiments were run. Simulation experiments generated pairs of networks with varying percentages of Participant Overlap (PO) and Tie Overlap (TO), with trust values added to the links between participants in the networks. They analysed different data fusion techniques, aiming to identify which best preserved the integrity of trust from each individual network for varying values of PO and TO.
A real-world experiment used the findings of the simulation experiment on the best trust aggregation techniques and applied the framework to real trust data between participants, extracted from a pair of professional social networks. The trust values generated from the consolidated MuDi networks were then compared with the real-life trust between users, collected using a survey, with the aim of analysing whether aggregated trust is closer to real-life trust than that obtained from each of the individual networks. Analysis of the simulation experiment showed that the Weighted Ordered Weighted Averaging (WOWA) data fusion technique aggregated trust data best and, unlike the other techniques, preserved the integrity of trust from each individual network for varying PO and TO (p ≤ 0.05). The real-world experiment partially proved the hypothesis of generating better trust values from consolidated MuDi networks, showing improved results for participants who are part of both networks (p ≤ 0.05), while disproving the claim for those in the cross-region (with one user present in both networks and the other in a single network) and for single-network users (p > 0.05).
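The WOWA operator the thesis settles on combines per-source importance weights with ordering weights over the sorted values. A sketch of Torra's WOWA with piecewise-linear interpolation of the ordering-weight function (a standard formulation; the thesis's exact parameterisation is not reproduced here):

```python
def wowa(values, p, w):
    """Weighted Ordered Weighted Averaging of trust values.

    values: trust scores from the candidate networks;
    p: importance weight of each source (sums to 1);
    w: ordering weights applied by rank of value (sums to 1).
    """
    n = len(values)
    # Cumulative ordering weights define W at the points i/n.
    cum_w = [0.0]
    for wi in w:
        cum_w.append(cum_w[-1] + wi)

    def W(x):
        # Piecewise-linear interpolation through (i/n, cum_w[i]).
        if x <= 0.0:
            return 0.0
        if x >= 1.0:
            return cum_w[-1]
        i = int(x * n)
        frac = x * n - i
        return cum_w[i] + frac * (cum_w[i + 1] - cum_w[i])

    # Visit values in decreasing order, accumulating importance weights.
    order = sorted(range(n), key=lambda i: values[i], reverse=True)
    total, acc = 0.0, 0.0
    for i in order:
        prev = acc
        acc += p[i]
        total += (W(acc) - W(prev)) * values[i]
    return total
```

With uniform ordering weights WOWA reduces to a weighted mean, and with uniform importance weights it reduces to plain OWA, which is why it can interpolate between the simpler aggregators the thesis found distorting.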
APA, Harvard, Vancouver, ISO, and other styles
17

Ponomarenko, Maksym. "Maintenance of the Quality Monitor Web-Application." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-28816.

Full text
Abstract:
Applied Research in System Analysis (ARiSA) is a company specialized in the development of customer-specific quality models and in applied research work. In order to improve the quality of its projects and to reduce maintenance costs, ARiSA developed Quality Monitor (QM) – a web application for quality analysis. The QM application was originally developed as a basic program to enable customers to evaluate the quality of their sources. The business logic of the application was therefore simplified and certain limitations were imposed on it, which in turn led to a number of issues related to user experience, performance and architecture design. These aspects are important both for the application as a product and for its future promotion. Moreover, they matter to customers as end users. The main application issues added to the maintenance list were: manual data upload, insufficient server resources to handle long-running and resource-consuming operations, no background processing or status reporting, simplistic presentation of analysis results and known usability issues, and weak integration between the analysis back-ends and the front-end. In order to address the known issues and improve on the existing limitations, a maintenance phase of the QM application was initiated. First of all, it was intended to stabilize the current version and improve the user experience. It was also needed for refactoring and for implementing more efficient background processing of data uploads. In addition, extended functionality would fulfill customer needs and transform QM from a project into a product. The extended functionality includes: automated data upload from different build processes, new data visualizations, and improvement of the current functionality according to customer comments. The maintenance phase of the QM application has been successfully completed and the master's thesis goals have been met.
The current version is more stable and more responsive from a user experience perspective. Data processing is more efficient, and it is now implemented as background analysis with automatic data import. The user interface has been updated with visualizations for client-side interaction and progress reporting. The solution has been evaluated and tested in close cooperation with the QM application's customers. This thesis describes the requirements analysis, the technology stack with the rationale for its choice, and the implementation, in order to show the maintenance results.
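The move from blocking uploads to background analysis with status reporting is a standard worker-queue pattern. A minimal sketch of that pattern, assuming jobs arrive as (id, payload) pairs and a handler performs the analysis (the names are illustrative; QM's actual architecture is not documented here):

```python
import queue
import threading

def process_uploads(jobs, handler):
    """Process uploads in a background worker and report per-job status.

    jobs: iterable of (job_id, payload); handler: the analysis function.
    Returns a status map so a front-end could poll job progress.
    """
    status = {job_id: "pending" for job_id, _ in jobs}
    q = queue.Queue()
    for job in jobs:
        q.put(job)

    def worker():
        while True:
            try:
                job_id, payload = q.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            status[job_id] = "running"
            try:
                handler(payload)
                status[job_id] = "done"
            except Exception:
                status[job_id] = "failed"  # report failure instead of crashing
            q.task_done()

    t = threading.Thread(target=worker)
    t.start()
    t.join()
    return status
```

In a real web application the worker would run continuously and the status map would back the progress-reporting UI mentioned in the abstract.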
APA, Harvard, Vancouver, ISO, and other styles
18

Tibbo, Helen R., and Lokman I. Meho. "Finding Finding Aids on the World Wide Web." Society of American Archivists, 2001. http://hdl.handle.net/10150/105720.

Full text
Abstract:
Reports results of a study to explore how well six popular Web search engines performed in retrieving specific electronic finding aids mounted on the World Wide Web. A random sample of online finding aids was selected and then searched using AltaVista, Excite, Fast Search, Google, Hotbot and Northern Light, employing both word and phrase searching. As of February 2000, approximately 8 percent of repositories listed at the 'Repositories of Primary Resources' Web site had mounted at least four full finding aids on the Web. The most striking finding of this study was the importance of using phrase searches whenever possible, rather than word searches. Also of significance was the fact that if a finding aid were to be found using any search engine, it was generally found in the first ten or twenty items at most. The study identifies the best performers among the six chosen search engines. Combinations of search engines often produced much better results than did the search engines individually, evidence that there may be little overlap among the top hits provided by individual engines.
APA, Harvard, Vancouver, ISO, and other styles
19

Fan, Michael. "Web application in radiotherapy: the standardization of treatment planning and development of quantitative plan quality metrics." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=119750.

Full text
Abstract:
Treatment planning standardization efforts were made for the Stereotactic Radiosurgery (SRS) program at the Jewish General Hospital, Montreal. A standardized dose objective template for SRS was created in collaboration with physicians in our clinic. A web-based platform was built for radiotherapy research and data analysis, designed with ease of distribution and customizability in mind. A plan report module was added to the web platform to automatically analyze dose statistics and generate SRS plan reports. The plan report module was well received by dosimetrists in our clinic and reduced the labour required for plan evaluation. A total of 35 approved treatment plans were imported into the web application for analysis. A quantitative metric, the Quality Index (QI), was developed to measure SRS plan compliance with the standardized plan evaluation template. The results show an increased average QI and decreased QI variation between the pre-release and post-release periods of the web application. Validation of QI as a quality indicator of treatment plans warrants further study.
Efforts were made toward standardizing treatment planning for the Stereotactic Radiosurgery (SRS) program at the Jewish General Hospital in Montreal. A standardized dose objective template for SRS was built in collaboration with the clinic's physicians. A web platform was programmed to perform the analysis of research data; the web was chosen to host the platform because web applications are easy to distribute and adapt. An automatic report-writing module is included in the platform to carry out a statistical analysis of the dose for each plan and then produce a report. The module was successfully deployed in the clinic, with a marked reduction in the workload required to evaluate SRS treatment plans. A total of 35 plans were approved for import into the web platform for data analysis. A quantitative metric named the Quality Index (QI) was developed to evaluate plan adherence to the standardized template. The results show an increase in QI and a decrease in its variance between plans following the deployment of the web platform. Further studies must be performed to validate QI as an indicator of plan quality.
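The thesis does not publish its QI formula, but the idea of scoring a plan's compliance against a standardized objective template can be sketched as the fraction of dose objectives met. All names and limits below are hypothetical illustrations, not clinical values from the thesis:

```python
def quality_index(plan_stats, objectives):
    """Fraction of dose objectives a plan satisfies (illustrative QI only).

    objectives: {metric_name: (comparator, limit)} with comparator
    "<=" (dose limit) or ">=" (coverage goal).
    """
    met = 0
    for name, (comparator, limit) in objectives.items():
        value = plan_stats[name]
        if comparator == "<=" and value <= limit:
            met += 1
        elif comparator == ">=" and value >= limit:
            met += 1
    return met / len(objectives)

# Hypothetical template: coverage goal plus an organ-at-risk dose limit.
template = {"target_coverage": (">=", 0.95),
            "brainstem_max_gy": ("<=", 12.0)}
qi = quality_index({"target_coverage": 0.97, "brainstem_max_gy": 10.5}, template)
```

Tracking the mean and spread of such a score before and after introducing the standardized template is exactly the kind of comparison the abstract reports.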
APA, Harvard, Vancouver, ISO, and other styles
20

Hejl, Radomír. "Analytika obsahových webů." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-124785.

Full text
Abstract:
The thesis deals with web analytics for content-based websites. Its primary aim is to design web-analytics metrics and their ranges, allowing the owner of a content-based website to evaluate the state of the website and its changes. This is followed by a practical example of working with website metrics and of how to evaluate a web redesign with their help. The first and second chapters survey the web-analytics literature and specify the purpose of the thesis and its target group. In the paragraphs that follow, I explain the theoretical starting points and major concepts in further detail. In the third chapter I describe the main targets of content-based websites, because the subsequently defined metrics should reflect and aim at these targets. I then highlight some problems specific to the analysis of content-based websites. The fifth chapter forms the crux of this work: first I define what the right metrics are, and then present the design itself of the metrics for analysing content-based websites. For each proposed metric I describe the interpretation of its values, the possibilities for segmentation, and its relation to other metrics. The fifth chapter also gives an example of some metrics applied to real data from two content-based websites, with a description of how to work with these metrics.
APA, Harvard, Vancouver, ISO, and other styles
21

Tonkin, Emma. "Searching the long tail: Hidden structure in social tagging." dLIST, 2006. http://hdl.handle.net/10150/105565.

Full text
Abstract:
In this paper we explore a method of decomposition of compound tags found in social tagging systems and outline several results, including improvement of search indexes, extraction of semantic information, and benefits to usability. Analysis of tagging habits demonstrates that social tagging systems such as del.icio.us and flickr include both formal metadata, such as geotags, and informally created metadata, such as annotations and descriptions. The majority of tags represent informal metadata; that is, they are not structured according to a formal model, nor do they correspond to a formal ontology. Statistical exploration of the main tag corpus demonstrates that such searches use only a subset of the available tags; for example, many tags are composed as ad hoc compounds of terms. In order to improve accuracy of searching across the data contained within these tags, a method must be employed to decompose compounds in such a way that there is a high degree of confidence in the result. An approach to decomposition of English-language compounds, designed for use within a small initial sample tagset, is described. Possible decompositions are identified from a generous wordlist, subject to selective lexicon snipping. In order to identify the most likely, a Bayesian classifier is used across term elements. To compensate for the limited sample set, a word classifier is employed and the results classified using a similar method, resulting in a successful classification rate of 88%, and a false negative rate of only 1%.
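The decomposition step the paper describes, splitting ad hoc compound tags against a wordlist, can be sketched with dynamic programming that prefers fewer segments. This is a simplified stand-in: the paper itself ranks candidate decompositions with a Bayesian classifier rather than by segment count, and the lexicon here is a toy set:

```python
def decompose(tag, lexicon):
    """Split a compound tag into known words, e.g. 'webdesign' -> ['web', 'design'].

    best[i] holds the shortest segmentation of tag[:i], or None if
    tag[:i] cannot be covered by lexicon words.
    """
    n = len(tag)
    best = [None] * (n + 1)
    best[0] = []
    for i in range(1, n + 1):
        for j in range(i):
            if best[j] is not None and tag[j:i] in lexicon:
                cand = best[j] + [tag[j:i]]
                if best[i] is None or len(cand) < len(best[i]):
                    best[i] = cand
    return best[n]

words = {"web", "design", "open", "source", "sourceforge"}
```

Note that `decompose("sourceforge", words)` keeps the whole word rather than splitting it, which mirrors the paper's concern with "selective lexicon snipping": a naive wordlist produces spurious splits unless whole-word matches are preferred.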
APA, Harvard, Vancouver, ISO, and other styles
22

Chignell, Mark, Jacek Gwizdka, and Richard Bodner. "Discriminating Meta-Search: A Framework for Evaluation." Elsevier, 1999. http://hdl.handle.net/10150/105146.

Full text
Abstract:
DOI: 10.1016/S0306-4573(98)00065-X
There was a proliferation of electronic information sources and search engines in the 1990s. Many of these information sources became available through the ubiquitous interface of the Web browser. Diverse information sources became accessible to information professionals and casual end users alike. Much of the information was also hyperlinked, so that information could be explored by browsing as well as searching. While vast amounts of information were now just a few keystrokes and mouseclicks away, as the choices multiplied, so did the complexity of choosing where and how to look for the electronic information. Much of the complexity in information exploration at the turn of the twenty-first century arose because there was no common cataloguing and control system across the various electronic information sources. In addition, the many search engines available differed widely in terms of their domain coverage, query methods, and efficiency. Meta-search engines were developed to improve search performance by querying multiple search engines at once. In principle, meta-search engines could greatly simplify the search for electronic information by selecting a subset of first-level search engines and digital libraries to submit a query to based on the characteristics of the user, the query/topic, and the search strategy. This selection would be guided by diagnostic knowledge about which of the first-level search engines works best under what circumstances. Programmatic research is required to develop this diagnostic knowledge about first-level search engine performance. This paper introduces an evaluative framework for this type of research and illustrates its use in two experiments. The experimental results obtained are used to characterize some properties of leading search engines (as of 1998). Significant interactions were observed between search engine and two other factors (time of day, and Web domain). 
These findings supplement those of earlier studies, providing preliminary information about the complex relationship between search engine functionality and performance in different contexts. While the specific results obtained represent a time-dependent snapshot of search engine performance in 1998, the evaluative framework proposed should be generally applicable in the future.
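The meta-search architecture described above, selecting a subset of first-level engines and merging their ranked results, can be sketched as follows. The engines here are plain callables returning ranked URL lists, and `select` stands in for the diagnostic knowledge the paper argues must be developed; none of this reflects a real search API:

```python
def meta_search(query, engines, select=None):
    """Dispatch a query to a chosen subset of engines and merge results.

    engines: {name: callable(query) -> ranked list of results};
    select: optional function choosing which engines to query, modelling
    the diagnostic knowledge about engine performance.
    """
    chosen = select(query, engines) if select else list(engines)
    results = [engines[name](query) for name in chosen]
    merged, seen = [], set()
    # Round-robin interleaving of the ranked lists, deduplicating hits.
    for rank in range(max(map(len, results), default=0)):
        for res in results:
            if rank < len(res) and res[rank] not in seen:
                seen.add(res[rank])
                merged.append(res[rank])
    return merged

engines = {"a": lambda q: ["u1", "u2"], "b": lambda q: ["u2", "u3"]}
hits = meta_search("archives", engines)
```

The paper's observation that engine combinations often beat individual engines corresponds to the deduplicated union here containing hits no single list provided.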
APA, Harvard, Vancouver, ISO, and other styles
23

Manuel, Sue. "Strategic management and development of UK university library websites." Thesis, Loughborough University, 2012. https://dspace.lboro.ac.uk/2134/10958.

Full text
Abstract:
This research assessed website management and development practices across the United Kingdom university library sector. As a starting point, the design and features of this group of websites was recorded against criteria drawn from the extant literature. This activity established core content and features of UK library websites as: a search box or link for searching the library catalogue, electronic resources or website; a navigation column on the left and breadcrumb trail to aid information location and website orientation; homepage design was repeated on library website sub-pages; university brand elements appeared in the banner; and a contact us link was provided for communication with library personnel. Library websites conformed to 14 of the 20 homepage usability guidelines examined indicating that web managers were taking steps to ensure that users were well served by their websites. Areas for improvement included better navigation support (sitemap/index), greater adoption of new technologies and more interactive features. Website management and development practices were established through national survey and in-depth case studies. These illustrated the adoption of a team approach to website management and development; formal website policy and strategy were not routinely created; library web personnel and their ability to build effective links with colleagues at the institution made a valuable contribution to the success of a library website; corporate services and institutional practices played an important part in library website development; library staff were actively engaged in consultations with their website audience; and a user focused approach to website development prevailed. User studies and metric data were considered in the website evaluation and development process. However, there were some issues with both data streams and interpreting metric data to inform website development. 
Evaluation and development activities were not always possible due to staff/time shortages, technical constraints, corporate website templates, and, to a lesser extent, lack of finance.
APA, Harvard, Vancouver, ISO, and other styles
24

Thorve, Swapna. "EpiViewer: An Epidemiological Application For Exploring Time Series Data." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/86829.

Full text
Abstract:
Visualization plays an important role in epidemic time series analysis and forecasting. Viewing time series data plotted on a graph can help researchers identify anomalies and unexpected trends that could be overlooked if the data were reviewed in tabular form. However, there are challenges in reviewing data sets from multiple data sources (data can be aggregated in different ways and may measure different criteria), which can make a direct comparison between time series difficult. In the face of an emerging epidemic, the ability to visualize time series from various sources and organizations, and to reconcile these datasets based on different criteria, could be key in developing accurate forecasts and identifying effective interventions. Many tools have been developed for visualizing temporal data; however, none yet supports all the functionality needed for easy collaborative visualization and analysis of epidemic data. In this thesis, we develop EpiViewer, a time series exploration dashboard where users can upload epidemiological time series data from a variety of sources and compare, organize, and track how data evolves as an epidemic progresses. EpiViewer provides an easy-to-use web interface for visualizing temporal datasets as either line charts or bar charts. The application provides enhanced features for visual analysis, such as hierarchical categorization, zooming, and filtering, to enable detailed inspection and comparison of multiple time series on a single canvas. Finally, EpiViewer provides a built-in statistical Epi-features module to help users interpret the epidemiological curves.
Master of Science
We present EpiViewer, a time series exploration dashboard where users can upload epidemiological time series data from a variety of sources and compare, organize, and track how data evolves as an epidemic progresses. EpiViewer is a single page web application that provides a framework for exploring, comparing, and organizing temporal datasets. It offers a variety of features for convenient filtering and analysis of epicurves based on meta-attribute tagging. EpiViewer also provides a platform for sharing data between groups for better comparison and analysis.
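The statistical Epi-features mentioned above are summary quantities of an epidemic curve. A sketch of the kind of features such a module might report, assuming a weekly case-count list (the actual EpiViewer feature set is not specified here):

```python
def epi_features(curve):
    """Basic epidemiological features of a case-count time series.

    Returns peak size, time of peak, total case count, and the span of
    time steps with nonzero cases (a crude epidemic duration).
    """
    peak_value = max(curve)
    peak_time = curve.index(peak_value)
    total_cases = sum(curve)
    active = [t for t, c in enumerate(curve) if c > 0]
    duration = active[-1] - active[0] + 1 if active else 0
    return {"peak_value": peak_value, "peak_time": peak_time,
            "total_cases": total_cases, "duration": duration}

features = epi_features([0, 2, 5, 9, 4, 1, 0])
```

Features like these give forecasters a common vocabulary for comparing curves that were aggregated differently, which is precisely the reconciliation problem the thesis motivates.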
APA, Harvard, Vancouver, ISO, and other styles
25

Xu, Jinmeng. "Comparing Web Accessibility between Major Retailers and Novelties for E-Commerce." Thesis, Tekniska Högskolan, Jönköping University, JTH, Datateknik och informatik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-49001.

Full text
Abstract:
Purpose – To compare the accessibility of General Merchandise e-commerce sites between major retailers and novelties, in terms of conformance with web accessibility guidelines and the implementation of rich internet application specifications, in order to determine whether differences exist between them and, if so, what they are, contrasting upcoming e-commerce with established e-commerce.  Method – Descriptive and quantitative case studies were used to evaluate 45 websites from major retailers and novelties respectively in the General Merchandise branch of e-commerce, with sample websites selected from Alexa's rank lists. The WAVE tool, the Web Accessibility Quantitative Metric (WAQM), and the Fona assessment were then applied to analyse the cases, represent their accessibility, and answer the research questions.  Findings – ARIA specifications, as a technological solution, did have a positive effect on web accessibility when focusing only on the accessibility guidelines: the novelty websites, which used fewer ARIA attributes, generally showed lower accessibility levels, even though many other elements can also affect accessibility.  Implications – In the main branch of General Merchandise, the degree of web accessibility on major retailer websites was better than on novelty websites; as far as e-commerce is concerned, the accessibility of mature websites that have been established for a long time is currently stronger than that of new websites with creative products.  Limitations – The sample had limitations in its sources, size, and website languages, while on the technical side the evaluation of dynamic content covered only keyboard navigation, and the Fona assessment tool also had restrictions.
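One signal the study leans on is how much ARIA markup a site uses. A toy counter for `role` and `aria-*` attributes using Python's standard HTML parser; real audits (e.g. WAVE or WAQM) check far more than this:

```python
from html.parser import HTMLParser

class AriaCounter(HTMLParser):
    """Count WAI-ARIA attributes and role attributes in an HTML page."""

    def __init__(self):
        super().__init__()
        self.aria_attrs = 0
        self.roles = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs with lowercase names.
        for name, _ in attrs:
            if name.startswith("aria-"):
                self.aria_attrs += 1
            elif name == "role":
                self.roles += 1

counter = AriaCounter()
counter.feed('<nav role="navigation"><button aria-label="Menu" '
             'aria-expanded="false">Open</button></nav>')
```

Counting attributes says nothing about whether they are used correctly, which is why the study pairs such automated metrics with guideline conformance checks.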
APA, Harvard, Vancouver, ISO, and other styles
26

Píč, Karel. "Webová aplikace pro hodnocení kvality obrazu a videa." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-399186.

Full text
Abstract:
This work presents a web application for image and video quality assessment, especially for content with a high dynamic range (HDR). Quality metrics are used for the evaluation, and their results will serve scientific purposes; the application also aims to acquaint the broader public with this topic. A built-in assistant helps users gain a simple understanding of the whole application. The application is also designed to be autonomous and dynamic: for example, new metrics can be added from the administration environment, and no editing on the server side is required. The administrator can upload a metric, a local library, and a startup script, and define the parameters that supply the metric with the necessary data from the processed image or video. Only the installation of system applications and libraries is required on the server side.
APA, Harvard, Vancouver, ISO, and other styles
27

Lima, Alysson Alves de. "Seleção automatizada de serviços web baseada em métricas funcionais e estruturais." Universidade Federal da Paraíba, 2016. http://tede.biblioteca.ufpb.br:8080/handle/tede/9037.

Full text
Abstract:
Software Engineering is a discipline that encompasses all aspects of the production of a software system, from the early stages of system specification to maintenance, when the system is already being used. A very interesting area within Software Engineering is software reuse, which has a positive impact in reducing the time, costs and risks of software development processes. It can therefore be stated that software reuse improves not only the software development process but also the product itself. One of the main approaches to software reuse is service-oriented development, which adopts the Service-Oriented Architecture (SOA) paradigm. In SOA, services represent a natural evolution of component-based development, and can therefore be defined as loosely coupled, reusable software components that encapsulate discrete functionality and can be distributed and accessed remotely in a programmatic way. It is important to highlight that while SOA is an architectural paradigm for developing software systems, Web Services are the most widely adopted existing technology for implementing SOA, exploiting protocols based on internet standards and on XML. With the growth of the market and the use of web services, the number of services available for assembling applications in different contexts tends to keep increasing, making the task of manually selecting the services required to compose a software system impractical. Consequently, the effort needed to select the required services tends to grow ever larger, producing a problem with a large and complex search space and making it necessary to automate the selection process based on metaheuristic search techniques.
In this context, the proposed work aims to automate the web service selection process using techniques of Search-Based Software Engineering, in which the selection strategy is guided by structural and functional metrics whose purpose is to evaluate the similarity between the specifications and the corresponding implementations of candidate services, as well as their dependencies, thus reducing the effort of adapting and integrating web services developed by different suppliers.
Engenharia de Software é uma disciplina que engloba todos os aspectos da produção de um sistema de software, incluindo desde os estágios iniciais da especificação do sistema até sua manutenção, quando o sistema já está sendo utilizado. Uma área de estudo bastante interessante da Engenharia de Software é o reuso de software, que impacta positivamente na redução do tempo, dos custos e dos riscos provenientes de um processo de desenvolvimento de software. Portanto, é possível afirmar que o reuso de software melhora, não apenas o processo de desenvolvimento de software, mas também o próprio produto. Uma das principais abordagens de reuso de software é o desenvolvimento orientado a serviços, que adota o paradigma da Arquitetura Orientada a Serviços (SOA – Service-Oriented Architecture). No paradigma SOA, serviços representam uma evolução natural do desenvolvimento baseado em componentes, e, portanto, podem ser definidos como componentes de software de baixo acoplamento, reusáveis, que encapsulam funcionalidades discretas, que podem ser distribuídos e acessados remotamente de forma programática. É importante destacar que, enquanto SOA é um paradigma arquitetural para desenvolvimento de sistemas de software, serviços web (web services) representam a tecnologia existente mais amplamente adotada para implementar SOA explorando protocolos baseados em padrões da internet e em XML (eXtensible Markup Language). Com o crescimento do mercado e utilização dos serviços web, a tendência é sempre aumentar o número de serviços disponíveis para montagem de aplicações em diferentes contextos, tornando impraticável a tarefa de selecionar de forma manual os serviços requeridos para compor um sistema de software. 
Consequentemente, é possível afirmar que o esforço necessário para selecionar os serviços requeridos tende a aumentar cada vez mais, gerando um problema com um grande e complexo espaço de busca, tornando necessária a automatização do processo de seleção baseada em técnicas de busca metaheurística. Neste contexto, o trabalho proposto visa automatizar o processo de seleção de serviços web utilizando técnicas da Engenharia de Software Baseada em Buscas, cuja estratégia de seleção é orientada por métricas funcionais e estruturais, que têm o propósito de avaliar a similaridade entre as especificações e as respectivas implementações dos serviços candidatos, bem como as suas dependências, reduzindo assim o esforço de adaptação e integração de serviços web desenvolvidos por fornecedores distintos.
APA, Harvard, Vancouver, ISO, and other styles
28

Філон, А. А. "Комплексне забезпечення якості системи «Оцінка культури інформаційної безпеки організацій»." Thesis, Чернігів, 2021. http://ir.stu.cn.ua/123456789/25136.

Full text
Abstract:
Філон, А. А. Комплексне забезпечення якості системи «Оцінка культури інформаційної безпеки організації» : випускна кваліфікаційна робота : 121 "Інженерія програмного забезпечення" / А. А. Філон ; керівник роботи О. В. Трунова ; НУ "Чернігівська політехніка", кафедра технологій та програмної інженерії. – Чернігів, 2021. – 40 с.
Метою роботи є комплексне забезпечення якості веб-систем анкетування на різних етапах життєвого циклу, на прикладі програмного засобу «Оцінка культури інформаційної безпеки організацій». Об’єктом роботи є методи і моделі комплексної оцінки якості веб-систем анкетування. Предметом роботи є критерії і метрики оцінки якості веб-систем анкетування. Розроблювана система виконує такі завдання:  аналіз стандартів якості ПЗ;  виокремлення поняття та завдань тестування;  формування процесу тестування відповідно до ітераційних методологій;  аналіз сучасних підходів процесу тестування веб-систем;  розробка концепції (стратегії) тестування;  виявлення ступеню покриття вимог тестами та інших показників тестування;  розробка комплексної метрики оцінювання якості веб-систем анкетування;  залучення засобів автоматизації для прискорення та розширення процесів тестування;  проведення тестування системи відповідно до розробленої стратегії тестування веб-систем анкетування;  визначення якості реалізованого продукту. Практичною цінністю є можливість використання розробниками і тестувальниками запропонованої метрики та алгоритму для підвищення швидкості процесів тестування подібних систем на різних етапах життєвого циклу. Наукова новизна даної роботи полягає в тому, що запропоновано метрику для комплексної оцінки якості системи «Оцінка культури інформаційної безпеки організацій», яка на відміну від існуючих може використовуватися з різними методами тестування. При подальшій розробці можлива адаптація комплексної метрики оцінювання якості тестованої системи відповідно до залучених нових модулів та видів тестування. Результати дипломної роботи були висвітлені в тезах всеукраїнської конференції.
The purpose of the work is the integrated quality assurance of web questionnaire systems at different life cycle stages, using the software “Assessment of the organization's information security culture” as an example. The object of the work is methods and models for the integrated assessment of web questionnaire system quality. The subject of the work is criteria and metrics for assessing web questionnaire system quality. The developed system performs the following tasks: analysis of software quality standards; definition of the concept and objectives of testing; formation of the testing process according to iterative methodologies; analysis of modern approaches to testing web questionnaire systems; development of a testing concept (strategy); identification of test coverage of requirements and other testing indicators; development of an integrated metric for assessing the quality of web questionnaire systems; involvement of automation tools to speed up and extend the testing processes; testing of the system in accordance with the developed test strategy for web questionnaire systems; determination of the quality of the developed product. The practical value lies in the ability of developers and testers to use the proposed metric and algorithm to speed up the testing of similar systems at different life cycle stages. The scientific novelty of this work is the proposed metric for the integrated quality assessment of the system “Assessment of the organization's information security culture”, which, in contrast to existing ones, can be used with different testing methods. During further development, the integrated metric for assessing the quality of the system under test can be adapted to newly added modules and testing types. The results of the qualification work were presented at an All-Ukrainian conference.
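One of the testing indicators listed above, coverage of requirements by tests, reduces to a simple ratio. The data model below (requirement ids, tests tagged with the requirements they cover) is an illustrative assumption, not the thesis's actual metric:

```python
# Hypothetical sketch of a requirements-coverage indicator: the share of
# requirements exercised by at least one test case.

def requirements_coverage(requirements, tests):
    """Fraction of requirement ids referenced by at least one test."""
    covered = {req for test in tests for req in test["covers"]}
    return len(covered & set(requirements)) / len(requirements)

if __name__ == "__main__":
    requirements = ["R1", "R2", "R3", "R4"]
    tests = [{"name": "t_login", "covers": ["R1", "R2"]},
             {"name": "t_export", "covers": ["R2"]}]
    print(requirements_coverage(requirements, tests))  # prints 0.5
```

An integrated quality metric of the kind the abstract describes would presumably combine several such indicators (coverage, defect density, automation ratio, and so on) into one weighted score.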
APA, Harvard, Vancouver, ISO, and other styles
29

Oliveira, Ricardo Ramos de. "Avaliação de manutenibilidade entre as abordagens de web services RESTful e SOAP-WSDL." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-24072012-164751/.

Full text
Abstract:
A Engenharia de Software tem desenvolvido técnicas e métodos para apoiar o desenvolvimento de software confiável, flexível, com baixo custo de desenvolvimento e fácil manutenção. A avaliação da manutenibilidade contribui para fornecer meios para produzir software com alta qualidade. Este trabalho apresenta um experimento controlado para avaliar a manutenibilidade entre as abordagens de web services: RESTful e SOAP-WSDL. Esta avaliação foi conduzida usando 3 programas implementados na linguagem de programação Java e com a mesma arquitetura. Com base na arquitetura projetada, os web services desenvolvidos foram utilizados como objetos em estudos de caso, possibilitando avaliar e comparar a sua manutenibilidade. Os resultados obtidos demonstraram relações entre as informações sobre o custo e a qualidade dos serviços web, que contribuíram para esclarecer os critérios para a obtenção de uma boa relação entre o custo da manutenção e a evolução dos serviços web. Para concluir, os resultados indica que os web services RESTful são mais manuteníveis do lado do servidor, e os web services SOAP-WSDL são mais manuteníveis do lado do cliente. Os estudos realizados no experimento controlado são promissores e podem auxiliar na redução de custo na manutenção dos serviços web, melhorando dessa forma a qualidade do software no geral
Software Engineering has developed techniques and methods to support the development of reliable, flexible software with low development cost and easy maintenance. The evaluation of maintainability contributes in this direction, providing the means to produce software with high quality. This work presents a controlled experiment to evaluate maintainability across the two web service approaches: RESTful and SOAP-WSDL. The evaluation was conducted using three programs implemented in the Java programming language with the same architecture. Based on the designed architecture, the web services developed were used as objects in case studies, making it possible to evaluate and compare their maintainability. The results showed relationships between cost information and the quality of web services, which helped clarify the criteria for obtaining a good trade-off between maintenance cost and the evolution of web services. In conclusion, the results indicate that RESTful web services are more maintainable on the server side, whereas SOAP-WSDL web services are more maintainable on the client side. The studies performed in the controlled experiment are promising and may help reduce the maintenance cost of web services, thereby improving overall software quality
APA, Harvard, Vancouver, ISO, and other styles
30

Tang, Ling-Xiang. "Link discovery for Chinese/English cross-language web information retrieval." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/58416/1/Ling-Xiang_Tang_Thesis.pdf.

Full text
Abstract:
Nowadays people rely heavily on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia that contains a very large number of detailed articles covering most written languages. It is often considered to be a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, the pages in different languages are rarely cross-linked except for direct equivalent pages on the same subject in different languages. This can pose serious difficulties to users seeking information or knowledge from sources in different languages, or where there is no equivalent page in one language or another. In this thesis, a new information retrieval task, cross-lingual link discovery (CLLD), is proposed to tackle the problem of the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross-lingual link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in another language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery across language domains. This study focuses specifically on Chinese / English link discovery (C/ELD), a special case of the cross-lingual link discovery task that involves natural language processing (NLP), cross-lingual information retrieval (CLIR) and cross-lingual link discovery. To assess the effectiveness of CLLD, a standard evaluation framework is also proposed. The evaluation framework includes topics, document collections, a gold standard dataset, evaluation metrics, and toolkits for run pooling, link assessment and system evaluation. 
With the evaluation framework, the performance of CLLD approaches and systems can be quantified. This thesis contributes to research on natural language processing and cross-lingual information retrieval in CLLD as follows: 1) a new, simple but effective Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated that achieves high precision in English / Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese / English links in Wikipedia. This approach was examined in the experiments on better automatic generation of cross-lingual links that were carried out as part of the study. The major overall contribution of this thesis is the provision of a standard evaluation framework for cross-lingual link discovery research. Such a framework is important in CLLD evaluation because it helps in benchmarking the performance of various CLLD systems and in identifying good CLLD realisation approaches. The evaluation methods and the evaluation framework described in this thesis were used to quantify system performance in the NTCIR-9 Crosslink task, the first information retrieval track of its kind.
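As a rough illustration of the boundary-detection idea behind mutual-information segmentation, the sketch below places a word boundary wherever the pointwise mutual information (PMI) of two adjacent characters is low. The corpus, the threshold, and the exact PMI formulation are simplifying assumptions, not the n-gram mutual information method as defined in the thesis (Latin letters stand in for Chinese characters):

```python
# Toy adjacent-character PMI segmenter: strongly associated character pairs
# stay in the same word; weakly associated (or unseen) pairs get a boundary.
import math
from collections import Counter

def train(corpus):
    """Count unigram and adjacent-bigram character frequencies."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        unigrams.update(sentence)
        bigrams.update(sentence[i:i + 2] for i in range(len(sentence) - 1))
    return unigrams, bigrams

def pmi(pair, unigrams, bigrams):
    """PMI of a two-character string; -inf if the pair was never seen."""
    if bigrams[pair] == 0:
        return float("-inf")
    n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())
    p_xy = bigrams[pair] / n_bi
    p_x = unigrams[pair[0]] / n_uni
    p_y = unigrams[pair[1]] / n_uni
    return math.log(p_xy / (p_x * p_y))

def segment(text, unigrams, bigrams, threshold=0.0):
    """Insert a word boundary wherever adjacent-character PMI is low."""
    words, current = [], text[:1]
    for i in range(1, len(text)):
        if pmi(text[i - 1:i + 1], unigrams, bigrams) >= threshold:
            current += text[i]        # strongly associated: same word
        else:
            words.append(current)     # weak association: boundary
            current = text[i]
    words.append(current)
    return words
```

For example, training on `["ab", "ab", "ab", "cd", "cd", "ac"]` and segmenting `"abcd"` yields `["ab", "cd"]`, because the pair "bc" never co-occurs in the corpus.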
APA, Harvard, Vancouver, ISO, and other styles
31

Souza, Iara Vidal Pereira de. "Altmetria: métricas alternativas do impacto da comunicação científica." Niterói, 2017. https://app.uff.br/riuff/handle/1/2811.

Full text
Abstract:
Submitted by Josimara Dias Brumatti (bcgdigital@ndc.uff.br) on 2017-01-26T13:03:43Z; made available in DSpace on 2017-01-27T15:05:23Z (GMT). No. of bitstreams: 1. 2014_Mest_PPGCI-UFF_IaraVidal.pdf: 1139977 bytes, checksum: c060d1dc8e124903f244f016f7b0fa45 (MD5)
A proposta deste trabalho é traçar o estado da arte da altmetria, definida como o estudo, a criação e a utilização de indicadores relacionados à interação de usuários com produtos de pesquisa diversos no âmbito da Web Social – visualizações, downloads, citações, reutilizações, compartilhamentos, etiquetagens, comentários, entre outros. Para entender o contexto de surgimento da altmetria, exploramos as especificidades e potencialidades da Web Social e as características da comunicação científica. Além de constituir mais um passo na evolução dos estudos métricos da informação, a altmetria se apresenta também como reação à crise do atual modelo de publicação e avaliação na Ciência. A partir de pesquisa bibliográfica exploratória em fontes nacionais e internacionais, levantamos a produção científica sobre altmetria, identificando atores envolvidos na produção de conhecimento na área, seus conceitos, as propostas e tendências dos estudos sobre o tema; fundamentando a reflexão sobre o desenvolvimento da área e a aceitação das métricas alternativas como ferramentas da avaliação científica. Constatamos que a altmetria é uma área de estudos em expansão, cujos métodos complementam os estudos métricos tradicionais contribuindo para o entendimento mais completo da comunicação científica, seus atores, seus processos, seus produtos e seus impactos.
This work describes the state of the art of altmetrics, defined as the study, creation and use of measures related to user interaction with diverse research products through the Social Web: views, downloads, citations, reuse, sharing, tagging and comments, among others. To understand the context in which altmetrics arose, we explore the specificities and potential of the Social Web and the characteristics of scientific communication. Besides being another step in the evolution of metric studies of information, altmetrics also presents itself as a reaction to the crisis in the current model of scientific publication and evaluation. Through an exploratory search of national and international sources we identified the scientific literature on altmetrics, analysing the actors involved in knowledge production in the area, its concepts, and the proposals and trends around the theme, grounding our reflection on the development of altmetrics and the acceptance of alternative metrics as tools for scientific evaluation. We find that altmetrics is an expanding area of study whose methods complement traditional metric studies, contributing to a more complete understanding of scientific communication, its actors, processes, products and impacts.
APA, Harvard, Vancouver, ISO, and other styles
32

Thio, Niko. "Provider recommendation based on client-perceived performance." Connect to thesis, 2009. http://repository.unimelb.edu.au/10187/5773.

Full text
Abstract:
In recent years the service-oriented design paradigm has enabled applications to be built by incorporating third-party services. With the increasing popularity of this new paradigm, many companies and organizations have started to adopt the technology, which has resulted in an increase in the number and variety of third-party providers. With the vast improvement of the global networking infrastructure, a large number of providers offer their services to clients worldwide. As a result, clients are often presented with a number of providers that offer services with the same or similar functionality but differ in terms of non-functional attributes (or Quality of Service, QoS), such as performance. In this environment, the role of provider recommendation has become more important in assisting clients to choose the provider that meets their QoS requirements.
In this thesis we focus on provider recommendation based on one of the most important QoS attributes: performance. Specifically, we investigate client-perceived performance, which is the application-level performance measured at the client side every time the client invokes the service. This performance metric has the advantage of accurately representing client experience, compared to the server-side metrics widely used in current frameworks (e.g. the Service Level Agreement, or SLA, in the Web Services context). As a result, provider recommendation based on this metric will be favourable from the client's point of view.
In this thesis we address two key research challenges related to provider recommendation based on client-perceived performance - performance assessment and performance prediction. We begin by identifying heterogeneity factors that affect client-perceived performance among clients in a global Internet environment. We then perform extensive real-world experiments to evaluate the significance of each factor to the client-perceived performance.
From our findings on heterogeneity factors, we then develop a performance estimation technique to address performance assessment in cases where direct measurements are unavailable. This technique is based on the concept of generalization, i.e. estimating performance from the measurements gathered by similar clients. A two-stage grouping scheme based on the heterogeneity factors identified earlier is proposed to address the problem of determining client similarity. We then develop an estimation algorithm and validate it using synthetic data as well as real-world datasets.
With regard to performance prediction, we focus on medium-term prediction to address an emerging requirement: distinguishing providers based on medium-term (e.g. one to seven days) performance. Such applications are found where providers require their clients to subscribe in order to access the service. Another situation where medium-term prediction is important is temporal-aware selection: providers need to be differentiated based on the expected performance over a particular time interval (e.g. during business hours). We investigate the applicability of classical time series prediction methods, ARIMA and exponential smoothing, as well as their seasonal counterparts, seasonal ARIMA and Holt-Winters. Our results show that these existing models lack the ability to capture the important characteristics of client-perceived performance, thus producing poor medium-term predictions. We then develop a medium-term prediction method that is specifically designed to account for the key characteristics of a client-perceived performance series, and show that it produces higher accuracy for medium-term prediction than the existing methods.
In order to demonstrate the applicability of our solution in practice, we developed a provider recommendation framework based on client-perceived performance (named PROPPER), which draws on our findings on performance assessment and prediction. We formulated the recommendation algorithm and evaluated it through a mirror selection case study. Our framework is shown to produce better outcomes in most cases than country-based or geographic distance-based selection schemes, which are the current approaches to mirror selection.
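One of the classical baselines evaluated above, additive Holt-Winters (triple exponential smoothing), fits in a few lines. The smoothing parameters, the initialisation scheme, and the idea of applying it to a latency series are illustrative assumptions for this sketch:

```python
# Minimal additive Holt-Winters forecaster: level + trend + seasonal
# components updated by exponential smoothing; needs >= 2 full seasons.

def holt_winters_additive(series, season_len, alpha=0.5, beta=0.1,
                          gamma=0.1, horizon=1):
    """Forecast `horizon` points ahead of `series`."""
    # initial level: mean of the first season
    level = sum(series[:season_len]) / season_len
    # initial trend: average change between the first two seasons
    trend = sum(series[season_len + i] - series[i]
                for i in range(season_len)) / season_len ** 2
    # initial seasonal components relative to the first-season mean
    seasonal = [series[i] - level for i in range(season_len)]

    for t in range(season_len, len(series)):
        s = seasonal[t % season_len]
        last_level = level
        level = alpha * (series[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonal[t % season_len] = gamma * (series[t] - level) + (1 - gamma) * s

    return [level + (h + 1) * trend + seasonal[(len(series) + h) % season_len]
            for h in range(horizon)]
```

For a perfectly seasonal series with no trend, e.g. `[10, 20, 30, 40]` repeated five times with `season_len=4`, the forecast reproduces the seasonal pattern; the thesis's point is that real client-perceived performance series violate such clean structure, which is why these baselines underperform at medium-term horizons.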
APA, Harvard, Vancouver, ISO, and other styles
33

Walters, Jeromie L. "Online Evaluation System." University of Akron / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=akron1113514372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Lehmann, Janette. "From site to inter-site user engagement : fundamentals and applications." Doctoral thesis, Universitat Pompeu Fabra, 2015. http://hdl.handle.net/10803/287327.

Full text
Abstract:
L’objetiu d’aquesta tesi és aprofundir en el concepte de participació i compromís dels usuaris en pàgines web i analitzar com mesurar aquesta interacció. Comencem estudiant les mètriques del comportament online, les quals s’utilitzen habitualment com a representació de la participació de l’usuari, i proposem noves mètriques que consideren aspectes inexplorats de la participació i compromís de l’usuari. A continuació, analitzem una sèrie de casos d’estudi que demostren com aquestes mètriques proporcionen una millor comprensió de la participació i el compromís dels usuaris. En cadascun d’aquests casos d’estudi també analitzem la manera amb la qual les característiques de la pàgina web poden influenciar aquesta interacció. Les nostres troballes principals són: (1) la participació i el compromís de l’usuari varia entre pàgines web i aquesta depèn de les característiques de les mateixes; (2) els usuaris realitzen vàries tasques simultàniament dins les sessions online i això influeix en la interpretació de les mètriques considerades; (3) analitzar la participació i el compromís de l’usuari en diferents pàgines web permet obtenir una comprensió global de les relacions entre elles; i (4) la participació i compromís de l’usuari depèn de la qualitat del contigut i de l’estructura d’enllaços de la pàgina, però també es fonamenta en els propis interessos de l’usuari.
The aim of this thesis is to provide a deeper understanding of how users engage with websites and how to measure this engagement. We start with studying online behaviour metrics, which are commonly employed as a proxy for user engagement, and we propose new metrics that expose so far unconsidered aspects of user engagement. We then conduct several case studies that demonstrate how these metrics provide a deeper understanding of user engagement. Within each case study we also examine how the characteristics of a website influence user engagement. Some of our key findings include: (1) engagement differs between sites and these differences depend on the site itself; (2) users multitask within online sessions and this affects the interpretation of engagement metrics; (3) analysing engagement across sites enables a comprehensive look at user engagement, because this considers the relationships between sites; and (4) engagement depends on the quality of the content and the hyperlink structure of sites, but the interests of users can also drive it.
APA, Harvard, Vancouver, ISO, and other styles
35

Anjos, Cláudia Regina dos. "Mídias sociais nas bibliotecas da UFRJ: adoção e monitoramento." Universidade Federal do Estado do Rio de Janeiro, 2016. http://hdl.handle.net/11422/1246.

Full text
Abstract:
Submitted by CLÁUDIA ANJOS (claudiaregina@ippur.ufrj.br) on 2017-01-11T13:31:18Z; made available in DSpace on 2017-01-11T13:31:18Z (GMT). No. of bitstreams: 1. DISSERTAÇAO CLAUDIA ANJOS.pdf: 2969178 bytes, checksum: e5ec926d368f6f164c8e595cc1a65225 (MD5). Previous issue date: 2016-09-28
Este trabalho investigou o uso das mídias sociais em trinta e uma bibliotecas da Universidade Federal do Rio de Janeiro para compreender como utilizam estas ferramentas e tecer orientações para o bom uso das ferramentas de monitoramento no cenário biblioteconômico. O estudo baseou-se em técnicas de observação e de entrevistas para averiguar a percepção de bibliotecários e usuários na utilização das mídias no contexto das bibliotecas acadêmicas. Embora a importância da comunicação nesse canal seja reconhecida por ambos entrevistados, como meio de promover serviços e produtos, os resultados indicaram que as mídias sociais não são elementos dominantes do universo das bibliotecas acadêmicas. O estudo é acrescido de um Manual Básico de Uso de Mídias Sociais, direcionado aos profissionais da informação, onde são apresentadas sugestões para uso, monitoramento e medição do desempenho organizacional na comunicação em mídias sociais.
This study investigated the use of social media in thirty-one libraries of the Federal University of Rio de Janeiro in order to understand how they use these tools and to draw up guidelines for the sound use of monitoring tools in the library setting. The study was based on observation techniques and interviews to ascertain the perceptions of librarians and users regarding the use of social media in the context of academic libraries. While the importance of this communication channel as a means of promoting products and services is recognized by both groups of respondents, the results indicated that social media are not dominant elements in the universe of academic libraries. The study also includes a Basic Guide to Using Social Media, aimed at information professionals, which presents suggestions for the use, monitoring and measurement of organizational performance in social media communication.
APA, Harvard, Vancouver, ISO, and other styles
36

Dantas, André Medeiros. "Avaliação de reusabilidade de aplicações web baseadas em frameworks orientados a ações e a componentes: estudo de Caso sobre os Frameworks Apache Struts e JavaServer Faces." Universidade Federal do Rio Grande do Norte, 2008. http://repositorio.ufrn.br:8080/jspui/handle/123456789/17975.

Full text
Abstract:
Made available in DSpace on 2014-12-17T15:47:46Z (GMT). No. of bitstreams: 1 AndreMD.pdf: 5208404 bytes, checksum: 35b3883a3ba487ddd5f5627c46d41e2c (MD5) Previous issue date: 2008-01-08
Over the years, the use of application frameworks designed for the View and Controller layers of the MVC architectural pattern adapted to web applications has become very popular. These frameworks are classified as action-oriented or component-oriented, according to the solution strategy adopted by the tool. The choice of strategy leads the system architecture to acquire non-functional characteristics caused by the way the framework influences the developer to implement the system. Component reusability is one of those characteristics and plays a very important role in development activities such as system evolution and maintenance. The work in this dissertation consists of analysing how reusability can be influenced by the use of a given type of Web framework. To accomplish this, small academic management applications were developed using the latest versions of the Apache Struts and JavaServer Faces frameworks, the main representatives of Web frameworks on the Java platform. For this assessment, a software quality model was used that associates internal attributes, which can be measured objectively, with the characteristic in question. The attributes and metrics defined for the model were based on related work discussed in the document
O uso de frameworks para as camadas do Controlador e Visão do padrão arquitetural MVC adaptado para aplicações Web se tornou bastante popular ao longo dos anos. Eles são classificados em Orientados a Ações ou Orientados a Componentes, de acordo com a estratégia de solução adotada pelas ferramentas. A escolha por uma dessas estratégias faz com que o design da arquitetura do sistema adquira características não-funcionais ocasionadas pela forma com que o framework leva o desenvolvedor a implementar o sistema. A reusabilidade dos componentes é uma dessas características. Ela possui um papel muito importante para atividades como evolução e manutenção do sistema. O trabalho desta dissertação consiste em analisar o quanto a reusabilidade pode ser impactada de acordo com a utilização de um tipo de framework Web. Com esse intuito, foram realizados estudos de caso através da implementação de pequenas aplicações de controle acadêmico se utilizando das mais recentes versões dos frameworks Apache Struts e JavaServer Faces, os principais representantes de frameworks Web da plataforma Java. Para essa avaliação, foi utilizado um modelo de qualidade de software responsável por associar atributos internos, que podem ser medidos objetivamente, à característica em questão. Esses atributos e métricas definidos para o modelo foram baseados em alguns trabalhos relacionados discutidos no documento
APA, Harvard, Vancouver, ISO, and other styles
37

Attaallah, Abdulaziz Ahmad. "A Structural Metric Model to Predict the Complexity of Web Interfaces." Diss., North Dakota State University, 2017. http://hdl.handle.net/10365/25918.

Full text
Abstract:
The complexity of web pages has been widely investigated, and many experimental studies have used various metrics to measure particular aspects of users, tasks or GUIs. In this research, we focus on the visual structure of web pages and on how different users perceive their complexity. Several important measures and design elements have rarely been addressed together in studying the complex nature of the visual structure. We therefore propose a metric model to clarify this issue, conducting several experiments on groups of participants and using several websites from different genres. The goal is to form a metric model that can assist developers in measuring more precisely the complexity of web interfaces under development. From the first experiment, we could draw the outlines of the major entities in the metric model, and the focus was on the two most important aspects of web interfaces: structural factors and elements. Four main factors and three main elements proved most representative of the concept of complexity. The four factors are size, density, grouping and alignment, and the three elements are text, graphics and links. Based on them, we developed a structural metric model that relates these factors and elements, and the results of the metric model are compared to web interface users' ratings using statistical analysis to predict the overall complexity of web interfaces. The results of the study are very promising, showing that our metric model is capable of predicting the complexity of web interfaces with high confidence.
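A model of this shape (four factors scored per element type and combined into one value) might be sketched as a weighted mean. All weights and scoring rules below are hypothetical placeholders, not the model fitted in the dissertation:

```python
# Hypothetical structural-complexity score: four factors (size, density,
# grouping, alignment) scored in [0, 1] for each element type (text,
# graphics, links), combined with assumed factor weights.

FACTORS = ("size", "density", "grouping", "alignment")
ELEMENTS = ("text", "graphics", "links")

# hypothetical relative importance of each factor, summing to 1.0
WEIGHTS = {"size": 0.3, "density": 0.3, "grouping": 0.2, "alignment": 0.2}

def complexity(scores):
    """Weighted mean of factor scores over all elements, in [0, 1].

    `scores` maps (element, factor) -> normalised score in [0, 1].
    """
    total = 0.0
    for element in ELEMENTS:
        for factor in FACTORS:
            total += WEIGHTS[factor] * scores[(element, factor)]
    return total / len(ELEMENTS)

if __name__ == "__main__":
    # a sparse, well-aligned page vs. a dense, poorly grouped one
    simple_page = {(e, f): 0.2 for e in ELEMENTS for f in FACTORS}
    busy_page = {(e, f): 0.8 for e in ELEMENTS for f in FACTORS}
    print(complexity(simple_page), complexity(busy_page))
```

In the dissertation the weights would be calibrated against participants' complexity ratings; here they merely illustrate how factor and element scores could be folded into a single predicted value.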
APA, Harvard, Vancouver, ISO, and other styles
38

Drach, Marcos David. "Aplicabilidade de metricas por pontos de função em sistemas baseados em Web." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276360.

Full text
Abstract:
Orientadores: Ariadne Maria Brito Rizzoni Carvalho, Thelma Cecilia dos Santos Chiossi
Dissertação (mestrado profissional) - Universidade Estadual de Campinas, Instituto de Computação
Made available in DSpace on 2018-08-04T13:53:20Z (GMT). No. of bitstreams: 1 Drach_MarcosDavid_M.pdf: 1097883 bytes, checksum: 0be02ee41451affd5b6b6ef00b77ddf1 (MD5) Previous issue date: 2005
Abstract: Software metrics are quantitative standards of measurement for many aspects of a software project or product. They constitute a powerful management tool that contributes to more accurate delivery time and cost estimates and to the establishment of feasible goals, facilitating both the decision-making process itself and the subsequent collection of productivity and quality measurements. The metric Function Point Analysis (FPA), created at the end of the 1970s to measure software size in terms of its functional specification, was considered an advance over the Source Lines of Code (SLOC) counting method, the only size metric available at that time. Although many authors have since published extensions and alternatives to the original method in order to adapt it to specific systems, its applicability to Web-based systems still requires a deeper and more critical examination. This work presents an analysis of the specific computational characteristics of the Web platform that allows developers and project managers to evaluate how well the FPA method fits this environment and how it contributes to requirements extraction and effort estimation.
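For context, the classic IFPUG function point count discussed in this dissertation can be sketched as follows; the component weights and the value adjustment formula are the standard published ones, not figures taken from this work, and the example counts are invented.

```python
# Standard IFPUG component weights: type -> (simple, average, complex).
WEIGHTS = {
    "EI":  (3, 4, 6),    # external inputs
    "EO":  (4, 5, 7),    # external outputs
    "EQ":  (3, 4, 6),    # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}

def unadjusted_fp(counts):
    """counts: {component type: (n_simple, n_average, n_complex)}."""
    return sum(n * w for t, ns in counts.items()
               for n, w in zip(ns, WEIGHTS[t]))

def adjusted_fp(ufp, gsc_degrees):
    """gsc_degrees: the 14 general system characteristics, rated 0..5."""
    vaf = 0.65 + 0.01 * sum(gsc_degrees)  # value adjustment factor
    return ufp * vaf

ufp = unadjusted_fp({"EI": (2, 1, 0), "EO": (1, 0, 1),
                     "EQ": (1, 1, 0), "ILF": (1, 0, 0), "EIF": (0, 1, 0)})
print(ufp)  # 42
print(round(adjusted_fp(ufp, [3] * 14), 2))
```

The thesis' question is precisely whether these functional components, defined for traditional systems, map cleanly onto Web application artifacts such as pages, links, and server-side scripts.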
Mestrado
Engenharia de Computação
Mestre Profissional em Computação
APA, Harvard, Vancouver, ISO, and other styles
39

Coraggio, Alessio <1987&gt. "L'impresa nel web: strategie e metriche di search engine optimization." Master's Degree Thesis, Università Ca' Foscari Venezia, 2013. http://hdl.handle.net/10579/2422.

Full text
Abstract:
Starting from an analysis of the advertising market, the importance of online marketing activities is highlighted. The thesis then describes the search engine optimization techniques and strategies that companies can apply to achieve good visibility on the web. Two business case studies are described in support of the analysis.
APA, Harvard, Vancouver, ISO, and other styles
40

Law, Marc Teva. "Distance metric learning for image and webpage comparison." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066019/document.

Full text
Abstract:
This thesis focuses on distance metric learning for image and webpage comparison. Distance metrics are used in many machine learning and computer vision contexts such as k-nearest neighbor classification, clustering, support vector machines, information/image retrieval, visualization, etc. In this thesis, we focus on Mahalanobis-like distance metric learning, where the learned model is parameterized by a symmetric positive semidefinite matrix. It learns a linear transformation such that the Euclidean distance in the induced projected space satisfies the learning constraints. First, we propose a method based on comparisons between relative distances that takes rich relations between data into account and exploits similarities between quadruplets of examples. We apply this method to relative attributes and hierarchical image classification. Second, we propose a new regularization method that controls the rank of the learned matrix, limiting the number of independent parameters and overfitting. We show the interest of our method on synthetic and real-world face recognition datasets. Finally, we propose a novel webpage change detection framework in an archiving context. For this purpose, we use temporal distance relations between different versions of the same webpage. The metric, learned in a totally unsupervised way, detects important regions and ignores unimportant content such as menus and advertisements. We show the interest of our method on different websites.
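The key identity behind this family of methods is that a metric parameterized by a PSD matrix M = LᵀL reduces to the Euclidean distance after projecting by L. The sketch below shows that identity with a toy hand-picked L, not a matrix learned by the thesis' algorithm.

```python
# Mahalanobis-like distance: d_M(x, y) = ||Lx - Ly||_2 with M = L^T L.
# L below is a toy projection chosen for illustration.

def mat_vec(L, x):
    return [sum(l * xi for l, xi in zip(row, x)) for row in L]

def mahalanobis(L, x, y):
    diff = [a - b for a, b in zip(mat_vec(L, x), mat_vec(L, y))]
    return sum(d * d for d in diff) ** 0.5

L_id = [[1.0, 0.0],  # identity -> plain Euclidean distance
        [0.0, 1.0]]
print(mahalanobis(L_id, [0.0, 0.0], [3.0, 4.0]))  # 5.0

L_scaled = [[2.0, 0.0],  # stretching one axis changes the metric
            [0.0, 1.0]]
print(mahalanobis(L_scaled, [0.0, 0.0], [3.0, 4.0]))
```

Metric learning then amounts to optimizing the entries of L (equivalently M) so that distance constraints, such as the quadruplet comparisons mentioned above, are satisfied.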
APA, Harvard, Vancouver, ISO, and other styles
41

Côrte, Leandro. "Método para a avaliação de servidores WWW no ambiente corporativo." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2002. http://hdl.handle.net/10183/3375.

Full text
Abstract:
The main objective of this work is to present a method and metrics for evaluating the most widely used Internet service: the World Wide Web. The basic characteristics and operation of the service, as well as some performance evaluation tools, are described. These chapters serve as a basis for the following ones, which present the method for evaluating the web service and the metrics used to analyze performance, availability, reliability, ease of administration, and resources. Finally, the method and metrics are applied at Procempa – Companhia de Processamento de Dados do Município de Porto Alegre – where it is possible to verify them in practice. In addition, important data about Procempa's web infrastructure are provided, allowing an analysis of the company's current and future web environment.
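The dissertation's own metric definitions are not reproduced here; as one example of the availability dimension it lists, the sketch below uses the standard formula availability = MTBF / (MTBF + MTTR), an assumption about how such a metric is commonly computed.

```python
# Standard availability metric from failure statistics (assumed form,
# not quoted from the dissertation): MTBF = mean time between failures,
# MTTR = mean time to repair.

def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

def downtime_per_year(avail):
    return (1 - avail) * 365 * 24  # expected hours of downtime per year

a = availability(720.0, 0.8)  # fails every 30 days, 48 minutes to repair
print(round(a * 100, 3), "% available")
print(round(downtime_per_year(a), 1), "hours down per year")
```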
APA, Harvard, Vancouver, ISO, and other styles
42

Novosedlik, Andrea. "La traduzione della canzone d'autore tra l'italiano e il francese: le scelte, la metrica e il suono." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/11378/.

Full text
Abstract:
Since childhood I have felt a strong bond with music, and my love of singing, together with my love of foreign languages, led me to enroll in the intercultural linguistic mediation course at the University of Bologna, which united and fused these two passions of mine at their root. Throughout my university career I tried, day by day, to attempt translations of songs, but as my studies progressed, the more I translated the more I realized how many difficulties this kind of translation can present. The rhetorical figures and idiomatic phrases in a song are not only very difficult to carry over into the target language, but often must be completely overturned in order to preserve the sense and purpose of the source text. In short, "if we want things to stay as they are, things will have to change" (Giuseppe Tomasi di Lampedusa, 1969:40-41). Too many times I have come across already-translated songs whose primary meaning was lost to oblivion. As an aspiring translator I know well what it means to have to cut or upset the source text when necessary, but where there is no need, I do not understand why it should be done. In this thesis I present four singer-songwriter songs: one in Italian and one in French not yet translated, and one in Italian and one in French that have already been translated. In the latter case, that of the already-translated texts, I present my own translation, in which I try to respect the meter, the balance of sounds, and the rhetorical figures.
APA, Harvard, Vancouver, ISO, and other styles
43

Barlow, McKenzie Lee. "Comparative Analysis of Physiological Measurements and Environmental Metrics on Predicting Heat Stress Related Events." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1906.

Full text
Abstract:
Exposure to high heat and humidity can lead to serious health risks, including heat exhaustion and heat stroke. Wet Bulb Globe Temperature (WBGT) and heat index have historically been used to predict heat stress events, but individualized factors are not included in the measurement. It has been shown that there is a relationship between cardiovascular measurements and heat stress, which could be used to measure heat stress risk on an individual level. Research has been done to find relationships between cardiovascular metrics in a workplace environment; however, that study did not include the use of a controlled environment as a baseline. This study provides measurements of transepidermal water loss (TEWL), heart rate, body core temperature, and blood pressure in a controlled environment when human subjects are exposed to high heat and humidity. Thirty subjects (n=17 females, 13 males) were asked to self-report their activity level (active vs. sedentary), gender, and age. The subjects performed a 30-minute moderate exercise routine on a stationary stepper machine in a heated environmental chamber (average WBGT of 26ºC). TEWL, heart rate, tympanic temperature, and blood pressure were recorded at every 10-minute increment of the exercise protocol per subject. The data were analyzed using JMP® software to find significant (P
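The WBGT index referred to above is computed from wet-bulb, globe, and dry-bulb temperatures; the formulas below are the standard ISO 7243 weightings, shown for context rather than taken from this study, and the example temperatures are invented.

```python
# Standard WBGT weightings (ISO 7243); example inputs are invented.

def wbgt_outdoor(t_nwb, t_g, t_db):
    """Natural wet-bulb, globe, and dry-bulb temperatures (deg C),
    with solar load."""
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_db

def wbgt_indoor(t_nwb, t_g):
    """No solar load: the dry-bulb term folds into the globe term."""
    return 0.7 * t_nwb + 0.3 * t_g

# The chamber in this study averaged a WBGT of 26 degC; for example:
print(round(wbgt_indoor(25.0, 28.0), 1))
print(round(wbgt_outdoor(25.0, 30.0, 32.0), 1))
```

The humidity dependence enters through the natural wet-bulb temperature, which is why WBGT captures heat stress better than air temperature alone.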
APA, Harvard, Vancouver, ISO, and other styles
44

Londero, Eduardo Bauer. "Comportamento de Metricas de Inteligibilidade Textual em Documentos Recuperados na Web." Universidade Catolica de Pelotas, 2011. http://tede.ucpel.edu.br:8080/jspui/handle/tede/220.

Full text
Abstract:
Made available in DSpace on 2016-03-22T17:26:45Z (GMT). No. of bitstreams: 1 Dissertacao_Eduardo_Revisado.pdf: 3489154 bytes, checksum: 3c327ee0bc47d79cd4af46e065105650 (MD5) Previous issue date: 2011-03-29
Texts retrieved from the Internet through Google and Yahoo queries are evaluated using the Flesch-Kincaid Grade Level, a simple measure of text readability. Metrics of this kind were created to help writers evaluate their texts and, more recently, have been used in automatic text simplification for less capable readers. In this work we apply the metric to documents freely retrieved from the Internet, seeking correlations between readability and the relevance assigned to them by search engines. The initial premise guiding the comparison between readability and relevance is the statement known as Occam's principle, or principle of economy. A centralist tendency was found in the retrieved texts, meaning that the average distance of the top-ranked groups of files from the mean of the category they belong to is meaningful. With this measure it is possible to establish a correlation between relevance and readability, and also to detect differences in the way the two search engines compute relevance. A follow-up experiment investigates whether the readability measure, combined with the search engine's original ranking, can help the user choose a document, and whether it is useful as advance information for choice and navigation. In a final experiment, based on the knowledge previously obtained, the Wikipedia and Britannica encyclopedias are compared using the Flesch-Kincaid Grade Level readability metric.
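The Flesch-Kincaid Grade Level used in this study has a well-known closed form; the sketch below implements it with a rough vowel-group syllable heuristic, an assumption standing in for the dictionary-based counting a production readability tool would use.

```python
# FKGL = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
import re

def count_syllables(word):
    # Crude heuristic: count contiguous vowel groups, minimum one.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade_level(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / sentences
            + 11.8 * syllables / len(words) - 15.59)

sample = "The cat sat on the mat. It was happy there."
print(round(fk_grade_level(sample), 2))
```

Lower scores indicate text readable at a lower school grade, which is what lets the study compare, say, Wikipedia articles against Britannica entries on the same topic.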
APA, Harvard, Vancouver, ISO, and other styles
45

D’Souza, Jayesh. "Measuring Effectiveness in International Financial Crime Prevention: Can We Agree on a Performance Metric?" FIU Digital Commons, 2008. http://digitalcommons.fiu.edu/etd/286.

Full text
Abstract:
The attempts at carrying out terrorist attacks have become more prevalent. As a result, an increasing number of countries have become particularly vigilant against the means by which terrorists raise funds to finance their draconian acts against human life and property. Among the many counter-terrorism agencies in operation, governments have set up financial intelligence units (FIUs) within their borders for the purpose of tracking down terrorists' funds. By investigating reported suspicious transactions, FIUs attempt to weed out financial criminals who use these illegal funds to finance terrorist activity. The prominent role played by FIUs means that their performance is always under the spotlight. By interviewing experts and conducting surveys of those associated with the fight against financial crime, this study investigated perceptions of FIU performance on a comparative basis between American and non-American FIUs. The target group of experts included financial institution personnel, civilian agents, law enforcement personnel, academicians, and consultants. Questions for the interviews and surveys were based on Kaplan and Norton's Balanced Scorecard (BSC) methodology. One of the objectives of this study was to help determine the suitability of the BSC to this arena. While the FIUs in this study have concentrated on performance by measuring outputs such as the number of suspicious transaction reports investigated, this study calls for a focus on outcomes involving all the parties responsible for financial criminal investigations. It is only through such an integrated approach that these various entities will be able to improve performance in solving financial crime. Experts in financial intelligence strongly believed that the quality and timeliness of intelligence was more important than keeping track of the number of suspicious transaction reports.
Finally, this study concluded that the BSC could be appropriately applied to the arena of financial crime prevention even though the emphasis is markedly different from that in the private sector. While priority in the private sector is given to financial outcomes, in this arena employee growth and internal processes were perceived as most important in achieving a satisfactory outcome.
APA, Harvard, Vancouver, ISO, and other styles
46

Silva, Otto Julio Ahlert Pinno da. "Detecção de anomalias em aplicações Web utilizando filtros baseados em coeficiente de correlação parcial." Universidade Federal de Goiás, 2014. http://repositorio.bc.ufg.br/tede/handle/tede/4268.

Full text
Abstract:
Made available in DSpace on 2015-03-09T12:11:12Z (GMT). No. of bitstreams: 2 Dissertação - Otto Julio Ahlert Pinno da Silva - 2014.pdf: 1770799 bytes, checksum: 02efab9704ef08dc041959d737152b0a (MD5) license_rdf: 23148 bytes, checksum: 9da0b6dfac957114c6a7714714b86306 (MD5) Previous issue date: 2014-10-31
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Finding faults or causes of performance problems in modern Web computer systems is an arduous task that involves many hours of system metrics monitoring and log analysis. In order to aid administrators in this task, many anomaly detection mechanisms have been proposed to analyze the behavior of the system by collecting a large volume of statistical information showing the condition and performance of the computer system. One of the approaches adopted by these mechanisms is monitoring through strong correlations found in the system. In this approach, the collection of large amounts of data generates drawbacks associated with the communication, storage and, especially, the processing of the information collected. Nevertheless, few anomaly detection mechanisms have a strategy for selecting the statistical information to be collected, i.e., for selecting the monitored metrics. This work presents three metric selection filters for anomaly detection mechanisms based on the monitoring of correlations. These filters are based on partial correlation, a technique capable of providing information not observable by common correlation methods. The filters were validated in a Web application scenario, simulated with TPC-W, an e-commerce Web transaction benchmark. The results of our evaluation show that one of our filters allowed the construction of a monitoring network with 8% fewer metrics than state-of-the-art filters, while achieving fault coverage up to 10% more effective.
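The quantity underlying such filters is the first-order partial correlation: the correlation between two metrics once the influence of a third is removed. A minimal sketch, with toy values rather than TPC-W measurements:

```python
# Partial correlation of x and y given z:
#   r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2) * (1 - r_yz^2))

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def partial_corr(x, y, z):
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / (((1 - rxz**2) * (1 - ryz**2)) ** 0.5)

cpu   = [12, 18, 31, 39, 52]  # toy CPU utilization samples
resp  = [11, 19, 33, 42, 48]  # response time tracks CPU...
users = [1, 2, 3, 4, 5]       # ...but load drives both
print(round(partial_corr(cpu, resp, users), 3))
```

A strong raw correlation between CPU and response time that collapses once load is controlled for suggests the pair need not both be monitored, which is how such a filter can shrink the monitoring network.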
APA, Harvard, Vancouver, ISO, and other styles
47

Novák, Michal. "Komplexní marketingová strategie v online prostředí." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2011. http://www.nusl.cz/ntk/nusl-223301.

Full text
Abstract:
This thesis provides a basic overview of the marketing concepts and tools available for the Internet environment, along with new trends and opportunities in the online space. The output of the thesis is an efficient strategy for a men's lifestyle magazine that uses minimal financial resources. Efficiency is achieved by combining and applying the marketing tools available in the Internet environment. The main goal is to obtain a super-synergy effect of the marketing mix components for maximum efficiency and minimum cost.
APA, Harvard, Vancouver, ISO, and other styles
48

Cadiou, Corentin. "L’impact des grandes structures de l’Univers sur la formation des halos de matière noire et des galaxies How does the cosmic web impact assembly bias? Accurate tracer particles of baryon dynamics in the adaptive mesh refinement code Ramses Galaxy evolution in the metric of the cosmic web Galaxies flowing in the oriented saddle frame of the cosmic web." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS508.

Full text
Abstract:
The anisotropic large-scale distribution of matter forms an extended network of voids delimited by sheets, with filaments at their intersections, which together form the cosmic web. Matter that will later form dark matter halos and their galaxies flows towards compact nodes at the filaments' intersections and, in the process, retains an imprint of the cosmic web. In this thesis, I develop a conditional version of excursion set theory which, using a model of a large-scale filament, enables me to show that anisotropic environments have an impact on the formation history of dark matter halos. The cosmic web therefore plays a role in the formation of halos and their galaxies. I then build a model that captures the evolution of the cosmic web (halo mergers, but also filament and wall mergers) and can be used to better constrain galaxy formation models. The model predicts an excess of anisotropic accretion in filaments compared to nodes, biasing the formation history of galaxies. The effect of anisotropic accretion on galaxy formation is then studied using hydrodynamical simulations and a novel numerical method tailored to accurately follow the accretion history of the gas. I show that angular momentum is transported efficiently from the cosmic web down to the inner halo, where gravitational torques redistribute it to the disk and the inner halo.
APA, Harvard, Vancouver, ISO, and other styles
49

Wallace, Hailey. "Are We Providing Preferred Floral Resources for Bees in Our Neighborhoods?: Assessing the Relationship Between Small Scale Vegetation Metrics and Bee Presence in SE Portland." PDXScholar, 2019. https://pdxscholar.library.pdx.edu/open_access_etds/5126.

Full text
Abstract:
Bee pollinators can thrive in highly urbanized environments if their preferred floral resources and habitat types are available. Enhanced pollinator habitats are being created globally, with a large local effort in Portland, Oregon. This project determined whether we are providing the most preferred floral resources at enhanced pollinator sites for bees, whether floral resources were available throughout the season, and whether differences in dietary preferences between native and honey bees would allow for the identification of "native bee floral resources" in Southeast Portland. Bee pollinators were monitored from June to August at three enhanced pollinator sites in Southeast Portland, Oregon. A total of 566 individual bees were observed; tiny dark bees and bumblebees made up the large majority of the urban bee community. Vegetation metrics and bee presence were correlated using a generalized linear mixed model, and significant variables that predicted bee presence included Solidago canadensis (p-value 0.0024) and density of floral resources (p-value
APA, Harvard, Vancouver, ISO, and other styles
50

Sebastianutti, Marco. "Geodesic motion and Raychaudhuri equations." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18755/.

Full text
Abstract:
The work presented in this thesis is devoted to the study of geodesic motion in the context of General Relativity. The motion of a single test particle is governed by the geodesic equations of the given space-time; nevertheless, one can be interested in the collective behavior of a family (congruence) of test particles, whose dynamics is controlled by the Raychaudhuri equations. In this thesis both aspects are considered, with particular interest in the latter. Geometric quantities appear in these evolution equations, so the features of a given space-time necessarily arise in them; by studying these quantities, one can analyze the given space-time. In the first part of this dissertation, we study the relation between geodesic motion and gravity: the geodesic equations are a useful tool for detecting a gravitational field. In the second part, after deriving the Raychaudhuri equations, we focus on their applications to cosmology. Using these equations, one can show how geometric quantities linked to the given space-time, such as the expansion, shear, and twist parameters, govern the focusing or de-focusing of geodesic congruences. Physical requirements on the matter stress-energy (i.e., positivity of the energy density in any frame of reference) lead to the various energy conditions, which must hold at least in a classical context. Under these conditions, the focusing of a geodesic "bundle" in the FLRW metric brings us to the idea of an initial (big bang) singularity in the model of a homogeneous isotropic universe. The geodesic focusing theorem, derived from both the Raychaudhuri equations and the energy conditions, acts as an important tool in understanding the Hawking-Penrose singularity theorems.
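For reference, the Raychaudhuri equation mentioned above can be written, in its standard form for a timelike geodesic congruence with tangent field u^a (quoted from the general literature, not from the thesis):

```latex
\frac{d\theta}{d\tau} = -\frac{1}{3}\theta^{2}
  - \sigma_{ab}\sigma^{ab} + \omega_{ab}\omega^{ab}
  - R_{ab}u^{a}u^{b}
```

where θ is the expansion, σ_ab the shear, and ω_ab the twist of the congruence. The strong energy condition makes the last term non-negative, so for an irrotational congruence every term on the right is non-positive, which is what drives the focusing result behind the big bang singularity argument.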
APA, Harvard, Vancouver, ISO, and other styles
