Dissertations / Theses on the topic 'TWITTER ANALYTICS'

To see the other types of publications on this topic, follow the link: TWITTER ANALYTICS.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 25 dissertations / theses for your research on the topic 'TWITTER ANALYTICS.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Leis Machín, Angela. "Studying depression through big data analytics on Twitter." Doctoral thesis, TDX (Tesis Doctorals en Xarxa), 2021. http://hdl.handle.net/10803/671365.

Full text
Abstract:
Mental disorders have become a major concern in public health, since they are one of the main causes of the overall disease burden worldwide. Depressive disorders are the most common mental illnesses, and they constitute the leading cause of disability worldwide. Language is one of the main tools on which mental health professionals base their understanding of human beings and their feelings, as it provides essential information for diagnosing and monitoring patients suffering from mental disorders. In parallel, social media platforms such as Twitter allow us to observe the activity, thoughts and feelings of people’s daily lives, including those of patients suffering from mental disorders such as depression. Based on the characteristics and linguistic features of the tweets, it is possible to identify signs of depression among Twitter users. Moreover, the effect of antidepressant treatments can be linked to changes in the features of the tweets posted by depressive users. The analysis of this huge volume and diversity of data, the so-called “Big Data”, can provide relevant information about the course of mental disorders and the treatments these patients are receiving, which allows us to detect, monitor and predict depressive disorders. This thesis presents different studies carried out on Twitter data in the Spanish language, with the aim of detecting behavioral and linguistic patterns associated with depression, which can constitute the basis of new and complementary tools for the diagnosis and follow-up of patients suffering from this disease.
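As a minimal illustration of the kind of linguistic features such studies extract (the thesis works with validated Spanish-language resources; the tiny word lists below are made-up placeholders):

```python
import re

# Hypothetical mini-lexicons; real studies use validated resources
# such as LIWC categories adapted to Spanish.
FIRST_PERSON = {"yo", "me", "mi", "mis", "conmigo"}
NEGATIVE_EMOTION = {"triste", "solo", "cansado", "llorar", "vacio"}

def linguistic_features(tweet: str) -> dict:
    """Compute simple per-tweet ratios of depression-related markers."""
    tokens = re.findall(r"\w+", tweet.lower())
    n = max(len(tokens), 1)
    return {
        "first_person_ratio": sum(t in FIRST_PERSON for t in tokens) / n,
        "neg_emotion_ratio": sum(t in NEGATIVE_EMOTION for t in tokens) / n,
        "token_count": len(tokens),
    }

print(linguistic_features("Me siento triste y cansado, estoy muy solo"))
```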
APA, Harvard, Vancouver, ISO, and other styles
2

Carvalho, Eder José de. "Visual analytics of topics in twitter in connection with political debates." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-11092017-140904/.

Full text
Abstract:
Social media channels such as Twitter and Facebook often contribute to disseminating initiatives that seek to inform and empower citizens concerned with government actions. On the other hand, certain actions and statements by governmental institutions, or by parliament members and political journalists appearing in the conventional media, tend to reverberate on social media. This scenario produces a large amount of textual data that can reveal relevant information on governmental actions and policies. Nonetheless, the target audience still lacks appropriate tools capable of supporting the acquisition, correlation and interpretation of potentially useful information embedded in such text sources. In this scenario, this work presents two systems for the analysis of government and social media data. One of the systems introduces a new visualization, based on the river metaphor, for the analysis of the temporal evolution of topics in Twitter in connection with political debates. For this purpose, the problem was initially modeled as a clustering problem and a domain-independent text segmentation method was adapted to associate (by clustering) Twitter content with parliamentary speeches. Moreover, a version of the MONIC framework for cluster transition detection was employed to track the temporal evolution of debates (or clusters) and to produce a set of time-stamped clusters; a simplified sketch of this idea is given below. The other system, named ATR-Vis, combines visualization techniques with active retrieval strategies to involve the user in retrieving Twitter posts related to political debates and associating them with the specific debate they refer to. The proposed framework introduces four active retrieval strategies that make use of Twitter's structural information, increasing retrieval accuracy while minimizing user involvement by keeping the number of labeling requests to a minimum. Evaluations through use cases and quantitative experiments, as well as a qualitative analysis conducted with three domain experts, illustrate the effectiveness of ATR-Vis in the retrieval of relevant tweets. For the evaluation, two Twitter datasets were collected, related to parliamentary debates held in Brazil and Canada, together with a dataset comprising a set of top news stories that received great media attention at the time.
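The MONIC framework defines several transition types; a loose, illustrative adaptation of its overlap-based matching between two consecutive time windows (the thresholds and the simplified survive/split/disappear logic are assumptions, not the thesis's exact formulation) might look like:

```python
def overlap(c_old: set, c_new: set) -> float:
    """Fraction of the old cluster's members found in the new cluster."""
    return len(c_old & c_new) / len(c_old) if c_old else 0.0

def cluster_transitions(old, new, survive_tau=0.5, split_tau=0.25):
    """Label each old cluster as survived, split, or disappeared.

    `old` and `new` map cluster ids to sets of member documents at two
    consecutive time windows; thresholds are illustrative.
    """
    result = {}
    for cid, members in old.items():
        scores = {nid: overlap(members, nmem) for nid, nmem in new.items()}
        best = max(scores, key=scores.get, default=None)
        parts = [nid for nid, s in scores.items() if s >= split_tau]
        if best is not None and scores[best] >= survive_tau:
            result[cid] = ("survived", best)
        elif len(parts) > 1:
            result[cid] = ("split", parts)
        else:
            result[cid] = ("disappeared", None)
    return result
```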
APA, Harvard, Vancouver, ISO, and other styles
3

Haraldsson, Daniel. "Marknadsföring på Twitter : Vilken dag och tidpunkt är optimal?" Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-11517.

Full text
Abstract:
Twitter and social media have a big impact on modern society. Companies use Twitter and other social media as marketing channels to reach many customers. The purpose of this research was to gain insight into which day and time businesses get the most responses on Twitter. The focus area is the social medium Twitter and the way the available 140 characters are used. Efficiency and profitability are key factors for business firms, so knowing when prospective and existing customers are most receptive to a message can provide an advantage that may prove important in the future. The deciding factors come down to timing, specifically time and day. To reach a conclusion on this matter, this report uses business analytics to analyze data from Twitter. The data used for this analysis come from five companies operating in the Swedish market and cover two months, April 2015 and May 2015.
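A minimal sketch of this kind of analysis with pandas, assuming a hypothetical CSV of company tweets with created_at, retweets and likes columns:

```python
import pandas as pd

# Hypothetical input file and column names.
tweets = pd.read_csv("company_tweets.csv", parse_dates=["created_at"])
tweets["engagement"] = tweets["retweets"] + tweets["likes"]
tweets["weekday"] = tweets["created_at"].dt.day_name()
tweets["hour"] = tweets["created_at"].dt.hour

# Mean engagement per weekday/hour cell; the largest cells suggest
# when followers are most responsive.
heatmap = tweets.pivot_table(index="weekday", columns="hour",
                             values="engagement", aggfunc="mean")
print(heatmap.stack().nlargest(5))
```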
APA, Harvard, Vancouver, ISO, and other styles
4

Mahendiran, Aravindan. "Automated Vocabulary Building for Characterizing and Forecasting Elections using Social Media Analytics." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/25430.

Full text
Abstract:
Twitter has become a popular data source over the past decade and has garnered a significant amount of attention as a surrogate data source for many important forecasting problems. Strong correlations have been observed between Twitter indicators and real-world trends spanning elections, stock markets, book sales, and flu outbreaks. A key ingredient of all methods that use Twitter for forecasting is agreeing on a domain-specific vocabulary to track the pertinent tweets, which is typically provided by subject matter experts (SMEs). The language used on Twitter differs drastically from other forms of online discourse, such as news articles and blogs, and it constantly evolves over time as users adopt popular hashtags to express their opinions. Thus, the vocabulary used by forecasting algorithms needs to be dynamic in nature and should capture emerging trends over time. This thesis proposes a novel unsupervised learning algorithm that builds a dynamic vocabulary using Probabilistic Soft Logic (PSL), a framework for probabilistic reasoning over relational domains. Using eight presidential elections from Latin America, we show how our query expansion methodology improves the performance of traditional election forecasting algorithms. Through this approach we demonstrate how we can achieve close to a two-fold increase in the number of tweets retrieved for predictions and a 36.90% reduction in prediction error.
Master of Science
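The thesis drives its expansion with PSL inference; as a much simpler stand-in for the same idea, a hashtag co-occurrence heuristic (all seed tags and tweets below are invented) could iteratively grow a tracking vocabulary:

```python
from collections import Counter

def expand_vocabulary(tweet_tag_sets, seeds, top_k=10):
    """Score candidate hashtags by co-occurrence with the current seeds."""
    cooc = Counter()
    for tags in tweet_tag_sets:        # each element: set of hashtags in one tweet
        hits = seeds & tags
        if not hits:
            continue
        for tag in tags - seeds:
            cooc[tag] += len(hits)
    return {t for t, _ in cooc.most_common(top_k)}

seeds = {"#eleccion", "#voto"}         # hypothetical SME-provided seeds
tweets = [{"#eleccion", "#hayuncamino"}, {"#hayuncamino", "#venezuela"}]
for _ in range(3):                     # newly adopted hashtags pull in others
    seeds |= expand_vocabulary(tweets, seeds)
print(sorted(seeds))
```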
APA, Harvard, Vancouver, ISO, and other styles
5

Carlos, Marcelo Aparecido. "Análise de surtos de doenças transmitidas pelo mosquito Aedes aegypti utilizando Big-Data Analytics e mensagens do Twitter." Repositório Institucional da UFABC, 2017.

Find full text
Abstract:
Advisor: Prof. Dr. Filipe Ieda Fazanaro
Master's dissertation - Universidade Federal do ABC, Graduate Program in Information Engineering, 2017.
The use of big data combined with text-mining techniques has been growing every year in several areas of science, especially in health, precision medicine, and electronic medical records, among others. The motivation for this work is the hypothesis that big-data concepts can be used to analyze large volumes of data about dengue, chikungunya and Zika virus in order to monitor and anticipate information about possible outbreaks of these diseases. However, the analysis of large volumes of data, inherent in big-data studies, poses challenges, particularly due to the lack of scalability of the algorithms and the complexity of managing the many different types and structures of the data involved. The main objective of this work is to present an implementation of text-mining techniques, especially for texts from social networks such as Twitter, combined with big-data analytics and machine learning, to monitor the incidence of dengue, chikungunya and Zika virus, all transmitted by the Aedes aegypti mosquito. The results indicate that the implementation, based on combining the K-Means and SVM machine-learning algorithms, performed satisfactorily on the sample used when compared with the records of the Ministry of Health, thus indicating its potential for this purpose. The main advantage of big-data analyses is the possibility of employing unstructured data obtained from social networks, e-commerce sites, and other sources. In this sense, data that once seemed of little importance turn out to have great potential and relevance.
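The abstract does not spell out how K-Means and SVM are joined; one plausible reading, sketched below with scikit-learn and invented Portuguese example tweets, is to append each tweet's cluster id to its TF-IDF features before training the SVM:

```python
import scipy.sparse as sp
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

tweets = ["muita febre e dor no corpo, sera dengue?",
          "campanha contra o mosquito aedes aegypti comeca hoje",
          "terceiro dia de febre alta e manchas na pele",
          "prefeitura faz mutirao de limpeza contra a dengue"]
labels = [1, 0, 1, 0]   # 1 = possible case report, 0 = news/other (hypothetical)

X = TfidfVectorizer().fit_transform(tweets)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
X_aug = sp.hstack([X, sp.csr_matrix(clusters.reshape(-1, 1))]).tocsr()
clf = SVC(kernel="linear").fit(X_aug, labels)
```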
APA, Harvard, Vancouver, ISO, and other styles
6

Bigsby, Kristina Gavin. "From hashtags to Heismans: social media and networks in college football recruiting." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6371.

Full text
Abstract:
Social media has changed the way that we create, use, and disseminate information and presents an unparalleled opportunity to gather large-scale data on the networks, behaviors, and opinions of individuals. This dissertation focuses on the role of social media and social networks in recruitment, examining the complex interactions between offline recruiting activities, online social media, and recruiting outcomes. Specifically, it explores how the information college football recruits reveal about themselves online is related to their decisions as well as how this information can diffuse and influence the decisions of others. Recruitment occurs in many contexts, and this research draws comparisons between college football and personnel recruiting. This work is one of the first large-scale studies of social media in college football recruiting, and uses a unique dataset that is both broad and deep, capturing information about 2,644 recruits, 682 schools, 764 coaches, and 2,397 current college football players and tracking offline and online behavior over six months. This dissertation comprises three case studies corresponding to the major decisions in the football recruiting cycle—the coach’s decision to make a scholarship offer, the athlete’s decision to commit, and the athlete’s decision to decommit. The first study investigates the relationship between a recruit’s social media use and his recruiting success. Informed by previous work on impression management in personnel recruitment, I construct logistic classifiers to identify self-promotion and ingratiation in 5.5 million tweets and use regression analysis to model the relationship between tweets and scholarship offers over time. The results indicate that tweet content predicts whether an athlete will receive a new offer in the next month. Furthermore, the level of Twitter activity is strongly related to recruiting success, suggesting that simply possessing a social media account may offer a significant advantage in terms of attracting coaches’ attention and earning scholarship offers. These findings underscore the critical role of social media in athletic recruitment and may benefit recruits by informing their branding and communication strategies. The second study examines whether a recruit’s social media activity presages his college preferences. I combine data on recruits’ college options, recruiting activities, Twitter connections, and Twitter content to construct a logistic classifier predicting which school a recruit will select out of those that have offered him a scholarship. My results highlight the value of social media data—especially the hashtags posted by the athlete and his online social network connections—for predicting his commitment decision. These findings may prove useful for college coaches seeking innovative methods to compete for elite talent, as well as assisting them in allocating recruiting resources. The third study focuses on athletic turnover, i.e., decommitments. I construct a logistic classifier to predict the occurrence of decommitments over time based on recruits’ college choices, recruiting activities, online social networks, and the decommitment behavior of their peers. The results further underscore the power of online social networks for predicting offline recruiting outcomes, giving coaches the tools to better identify vulnerable commitments.
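The dissertation's classifiers for self-promotion and ingratiation were trained on labelled tweets; a minimal sketch of such a text classifier, with invented example tweets and labels, might look like:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = self-promotion, 0 = other).
texts = ["Blessed to receive an offer from State U!",
         "Great practice with the squad today",
         "Ranked top 10 WR in the state, film in bio",
         "Happy birthday coach!"]
y = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, y)
print(clf.predict_proba(["Proud to announce my top 5 schools"])[0, 1])
```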
APA, Harvard, Vancouver, ISO, and other styles
7

Gröbe, Mathias. "Konzeption und Entwicklung eines automatisierten Workflows zur geovisuellen Analyse von georeferenzierten Textdaten(strömen) / Microblogging Content." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-210672.

Full text
Abstract:
This Master's thesis deals with the conception and exemplary implementation of a workflow for georeferenced microblogging content. Data from Twitter is used as an example and as a starting point for thinking about how to build such a workflow. In the fields of Data Mining and Text Mining, a whole range of useful software modules already exists; mostly, they only need to be lined up into a processing pipeline with appropriate settings. Although a logical order can be defined, further adjustments according to the research question and the data may be required. The process is supported by different forms of visualization such as histograms, tag clouds and maps. In this way, new knowledge can be discovered and the preparation steps gradually refined, following the principles of Geovisual Analytics. After a review of several existing software tools, the programming language R was chosen to implement the workflow, as it is optimized for solving statistical problems. Finally, the workflow was tested using data from Twitter and Flickr.
APA, Harvard, Vancouver, ISO, and other styles
8

Dehghan, Ehsan. "Networked discursive alliances: Antagonism, agonism, and the dynamics of discursive struggles in the Australian Twittersphere." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/174604/1/Ehsan_Dehghan_Thesis.pdf.

Full text
Abstract:
This project examines the complex interrelationship between social media and democracy by investigating the dynamics of economic, social, and political disagreements and struggles among Twitter users in Australia. The thesis looks for ways to transform polarisation and disagreement into conflictual togetherness.
APA, Harvard, Vancouver, ISO, and other styles
9

Vondrášek, Petr. "Komerční využití sociálních sítí." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-197442.

Full text
Abstract:
This thesis analyses the possible uses of social networks by companies to obtain benefits in their field of business. The study focuses on the most popular social networks globally, i.e. Facebook, Twitter, LinkedIn, Google+ and YouTube. The theoretical section summarizes and analyses their suitability and possible applications in commercial companies. The possibilities discussed are, in particular, marketing, client communication, brand development and public relations. The practical part of the thesis then presents case studies that depict the use of social networks in practice and highlight the benefits the individual companies gained and the costs involved. The case studies further discuss the use of social networks by traditional and less traditional means in small and larger companies. The thesis is complemented by applied research into companies' social network activities and an analysis of how their users perceive them, carried out through a questionnaire survey.
APA, Harvard, Vancouver, ISO, and other styles
10

Björnham, Alexandra. "Agile communication for a greener world." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-122117.

Full text
Abstract:
For a research-focused organization, making information easy to read and interesting enough that readers want to share it with their friends is a crucial problem. Creating perfect communication that reaches and affects the majority of society is an impossible task. If the focus instead lies on building a serious and dependable reputation, using the ease of social media, and trying to create a ripple effect through networked communication, there is a real possibility to influence. The art of persuasion starts by building trust in a person, or in this case in an organization. But if the communication takes place through social media, how can one tell whether it has built trust or created any positive response among readers? Using Python, a search algorithm was set up for mining Twitter and analyzing all data covering the area of biofuel and its participants. This data is then used to start an information feedback loop, where the analytical conclusions drawn from the retrieved information and activities can shape the communication the sender puts out. In an agile manner, the user chooses a "sprint" length as well as a time for retrospectives, all to refine the analytical method and improve the process.
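The thesis predates Twitter's v2 API, so as a modern stand-in for its Python mining step, a recent-search query with the tweepy client (assuming valid API credentials) might look like:

```python
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")  # assumed credential

query = "(biofuel OR biodiesel) -is:retweet lang:en"
resp = client.search_recent_tweets(query=query, max_results=100,
                                   tweet_fields=["created_at", "public_metrics"])
for tweet in resp.data or []:
    # Engagement counts feed the feedback loop described above.
    print(tweet.created_at, tweet.public_metrics["retweet_count"], tweet.text[:80])
```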
APA, Harvard, Vancouver, ISO, and other styles
11

Skočík, Miroslav. "Internetový marketing." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2014. http://www.nusl.cz/ntk/nusl-224533.

Full text
Abstract:
This diploma thesis focuses on the selection and application of appropriate internet marketing tools. It identifies possible fields of B2C promotion on the internet. It analyses a chosen company and its existing internet marketing activities against recommended practices and the practices of its competitors. The thesis then uses this theoretical knowledge and analysis to apply the proposed internet marketing strategy, and finally compares the performance of the applied areas of internet marketing.
APA, Harvard, Vancouver, ISO, and other styles
12

Vondruška, Pavel. "Možnosti a podmínky využití sociálních sítí pro zvýšení konkurenceschopnosti." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-124671.

Full text
Abstract:
The diploma thesis focuses on social networking sites in terms of their use for increasing the competitiveness of companies. The first part provides a theoretical introduction to social networks, in which the basic concepts are defined and the major global networks are characterized. Each social network is analyzed both in terms of its user structure and in terms of the promotion options it offers within its system, covering both paid and unpaid opportunities. The end of this section describes the general possibilities of using social networking for business: ways to measure the impact of these activities, and the risks and requirements of operating them within a company. The second part is a practical solution using social networks to support a selected e-commerce business. The introduction gives the basic parameters of the e-shop and defines its competitive environment. The analysis then works through solutions on the various social networks, and the company's own social network for its target audience is designed and implemented. The practical part ends with a final report including recommendations for further progress in the use of social networks. The main contribution of the thesis is a detailed analysis of the selected enterprise's communication on social networks, including all the relevant data from which conclusions can be drawn, in the form of specific recommendations to further enhance competitiveness.
APA, Harvard, Vancouver, ISO, and other styles
13

Willis, Margaret Mary. "Interpreting "Big Data": Rock Star Expertise, Analytical Distance, and Self-Quantification." Thesis, Boston College, 2015. http://hdl.handle.net/2345/bc-ir:104932.

Full text
Abstract:
Thesis advisor: Natalia Sarkisian
The recent proliferation of technologies to collect and analyze “Big Data” has changed the research landscape, making it easier for some to use unprecedented amounts of real-time data to guide decisions and build ‘knowledge.’ In the three articles of this dissertation, I examine what these changes reveal about the nature of expertise and the position of the researcher. In the first article, “Monopoly or Generosity? ‘Rock Stars’ of Big Data, Data Democrats, and the Role of Technologies in Systems of Expertise,” I challenge the claims of recent scholarship, which frames the monopoly of experts and the spread of systems of expertise as opposing forces. I analyze video recordings (N = 30) of the proceedings of two professional conferences about Big Data Analytics (BDA), and I identify distinct orientations towards BDA practice among presenters: (1) those who argue that BDA should be conducted by highly specialized “Rock Star” data experts, and (2) those who argue that access to BDA should be “democratized” to non-experts through the use of automated technology. While the “data democrats” argue that automating technology enhances the spread of the system of BDA expertise, they ignore the ways that it also enhances, and hides, the monopoly of the experts who designed the technology. In addition to its implications for practitioners of BDA, this work contributes to the sociology of expertise by demonstrating the importance of focusing on both monopoly and generosity in order to study power in systems of expertise, particularly those relying extensively on technology. Scholars have discussed several ways that the position of the researcher affects the production of knowledge. In “Distance Makes the Scholar Grow Fonder? The Relationship Between Analytical Distance and Critical Reflection on Methods in Big Data Analytics,” I pinpoint two types of researcher “distance” that have already been explored in the literature (experiential and interactional), and I identify a third type of distance—analytical distance—that has not been examined so far. Based on an empirical analysis of 113 articles that utilize Twitter data, I find that the analytical distance that authors maintain from the coding process is related to whether the authors include explicit critical reflections about their research in the article. Namely, articles in which the authors automate the coding process are significantly less likely to reflect on the reliability or validity of the study, even after controlling for factors such as article length and author’s discipline. These findings have implications for numerous research settings, from studies conducted by a team of scholars who delegate analytic tasks, to “big data” or “e-science” research that automates parts of the analytic process. Individuals who engage in self-tracking—collecting data about themselves or aspects of their lives for their own purposes—occupy a unique position as both researcher and subject. In the sociology of knowledge, previous research suggests that low experiential distance between researcher and subject can lead to more nuanced interpretations but also blind the researcher to his or her underlying assumptions. However, these prior studies of distance fail to explore what happens when the boundary between researcher and subject collapses in “N of one” studies.
In “The Collapse of Experiential Distance and the Inescapable Ambiguity of Quantifying Selves,” I borrow from art and literary theories of grotesquerie—another instance of the collapse of boundaries—to examine the collapse of boundaries in self-tracking. Based on empirical analyses of video testimonies (N = 102) and interviews (N = 7) with members of the Quantified Self community of self-trackers, I find that ambiguity and multiplicity are integral facets of these data practices. I discuss the implications of these findings for the sociological study of researcher distance, and also the practical implications for the neoliberal turn that assigns responsibility to individuals to collect, analyze, and make the best use of personal data.
Thesis (PhD) — Boston College, 2015
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: Sociology
APA, Harvard, Vancouver, ISO, and other styles
14

Purra, Joel. "Swedes Online: You Are More Tracked Than You Think." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-117075.

Full text
Abstract:
When you are browsing websites, third-party resources record your online habits; such tracking can be considered an invasion of privacy. It was previously unknown how many third-party resources, trackers and tracker companies are present in the different classes of websites chosen: globally popular websites, random samples of .se/.dk/.com/.net domains and curated lists of websites of public interest in Sweden. The in-browser HTTP/HTTPS traffic was recorded while downloading over 150,000 websites, allowing comparison of HTTPS adoption and third-party tracking within and across the different classes of websites. The data show that known third-party resources, including known trackers, are present on over 90% of websites in most classes, that third-party hosted content such as video, scripts and fonts makes up a large portion of the known trackers seen on a typical website, and that tracking is just as prevalent on secure as on insecure sites. Observations include that Google is by far the most widespread tracker organization; that the serving of content by known trackers may suggest that trackers are moving towards providing services to the end user to avoid being blocked by privacy tools and ad blockers; and that the small difference in tracking between HTTP and HTTPS connections may suggest that users are given a false sense of privacy when using HTTPS.

Source code, datasets, and a video recording of the presentation are available on the master's thesis website.
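The thesis's measurements come from recorded browser traffic; as a toy version of the third-party counting step, one could classify the requests in a standard HAR capture by host (the file name and the simple suffix rule are assumptions):

```python
import json
from collections import Counter
from urllib.parse import urlparse

def third_party_hosts(har_path: str, first_party: str) -> Counter:
    """Count requests whose host falls outside the first-party domain."""
    with open(har_path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]
    counts = Counter()
    for entry in entries:
        host = urlparse(entry["request"]["url"]).hostname or ""
        if host != first_party and not host.endswith("." + first_party):
            counts[host] += 1
    return counts

print(third_party_hosts("example.se.har", "example.se").most_common(10))
```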

APA, Harvard, Vancouver, ISO, and other styles
15

Peng, Xuan-Ning (彭萱寧). "Insights from hashtag #fintech and Twitter Analytics." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/vg428n.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Department of Information and Finance Management
2018 (ROC academic year 107)
Social media has become a popular tool for interacting with others, and business organizations derive insights and apply them to decisions by analyzing the big data the community generates. In recent years, Fintech has created a huge buzz; however, the value of social media in the Fintech field is unknown. This study analyzes the characteristics and trends of the community in this field through the #fintech hashtag and draws conclusions and recommendations. The analysis found that tweets in the Fintech field attract more engagement than random tweets. Fintech experts and organizations use Twitter to post new information. The topics of the tweets with higher retweet counts are more concentrated and professional. Most tweets are emotionally neutral, while tweets with more intense emotions contain different keywords.
APA, Harvard, Vancouver, ISO, and other styles
16

Singh, Vikesh Kumar. "An Impact of Social Media via Twitter Analytics." Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/16337.

Full text
Abstract:
In today's world, microblogging has become an emerging communication medium for Internet users [1]. Many users share their views on different topics on popular websites such as Facebook, Qzone, LinkedIn, Twitter and Tumblr. With the increase in users on social networking sites, many large companies and media organizations are seeking ways to tap this social media information so that they can learn what people think about their quality, products and companies. Firms, large organizations and political parties are always keen to know whether people will support their event or program. Social NGOs and other organizations can ask for people's views on current topics, call for open debate, etc. All such information can be collected from the many microblogging websites. Here we present a function that performs classification based on tweets/retweets and calculates the impact of a specific keyword/#hashtag on Twitter. The Twitter network is currently flooded with a huge number of tweets posted by its users. For productive categorization and searching of tweets, users need to use suitable, meaningful sentences and hashtags in their tweets. Twitter's user base ranges from politicians, celebrities and actors to company representatives; even country presidents use Twitter to express their views on a social platform. In this way we can collect the text posts of users from different organizations, companies, interest groups and social groups [2]. In this project I propose a generic recommendation method over the tweets of individuals and popular personalities, identifying tweets that create an impact on users' minds. Given different groups of users and their tweets, we can devise methods to find the most popular users and the most popular tweets in the collected data. Hashtags/keywords are then used to select the most popular tweets and users, to which ranking values/scores are assigned. In the future I will explore more categories based on before-peak, during-peak and after-peak popularity, and propose further recommendation methods.
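As a minimal sketch of the impact-scoring idea (the field names and weights below are invented for illustration):

```python
from collections import defaultdict

def hashtag_impact(tweets, w_retweet=2.0, w_like=1.0):
    """Rank hashtags by an engagement-weighted impact score.

    `tweets` is a list of dicts with hypothetical keys
    {'hashtags': list[str], 'retweets': int, 'likes': int}.
    """
    scores = defaultdict(float)
    for t in tweets:
        value = w_retweet * t["retweets"] + w_like * t["likes"]
        for tag in t["hashtags"]:
            scores[tag.lower()] += value
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

sample = [{"hashtags": ["#Demonetisation"], "retweets": 120, "likes": 300},
          {"hashtags": ["#demonetisation", "#India"], "retweets": 40, "likes": 90}]
print(hashtag_impact(sample))
```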
APA, Harvard, Vancouver, ISO, and other styles
17

Sharma, Harshita. "Soft Computing for Rumour Analytics on Benchmark Twitter Data." Thesis, 2019. http://dspace.dtu.ac.in:8080/jspui/handle/repository/16701.

Full text
Abstract:
As social media is a fertile ground for the origin and spread of rumours, it is imperative to detect and deter them. Various computational models that encompass elements of learning have been studied on benchmark datasets for rumour resolution, with four individual tasks: rumour detection, tracking, stance classification and veracity classification. Quick rumour detection during the initial propagation phase is desirable for subsequent veracity and stance assessment. This research presents the use of adaptive and heuristic optimization to select a near-optimal set of input variables that minimizes variance and maximizes the generalizability of the learning model, which is highly desirable for achieving high rumour prediction accuracy. An empirical evaluation of hybrid filter-wrapper approaches on the PHEME rumour dataset is performed. Features are initially extracted using the conventional term frequency-inverse document frequency (TF-IDF) statistical measure; to select an optimal feature subset, two filter methods, information gain and chi-square, are separately combined with three swarm intelligence-based wrapper methods: cuckoo search, the bat algorithm and ant colony optimization (ACO). The performance of the combinations has been evaluated by training three classifiers (Naïve Bayes, Random Forest and the J48 decision tree), and an average accuracy gain of approximately 7% is observed using hybrid filter-wrapper feature selection. The chi-square filter with cuckoo search and with ACO gives the same maximum accuracy of 61.19%, whereas chi-square with the bat algorithm gives the maximum feature reduction, selecting only 17.6% of the features. The model clearly maximizes relevance and minimizes redundancy in the feature set to build an efficient rumour detection model for social data. Due to the ever-increasing use of and dependence on social media by netizens, it has become a fertile ground for breeding rumours. This work also proposes a model for Potential Rumour Origin Detection (PROD) to detect users who are likely rumour originators. It can not only help to find the original culprit who started a rumour but also aid the veracity classification task of the rumour pipeline. The work uses features of the user's account and tweets to extract metadata, which is encoded in an 8-tuple feature vector. A credibility quotient for each user is calculated by assigning weights to each parameter: the higher the credibility of a user, the less likely the user is to be a rumour originator. Based on the credibility, each user is labelled as a potential rumour source or not. Three supervised machine learning algorithms have been used for training and evaluation and compared to a baseline ZeroR classifier. The results, evaluated on the benchmark PHEME dataset, show that the multi-layer perceptron classifier achieves the highest accuracy, an average of 97.26% across all five PHEME events, in detecting potential rumour sources.
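The filter half of such a hybrid pipeline is straightforward to sketch with scikit-learn; the swarm-based wrapper search that follows it is omitted here, and the toy tweets and labels are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2

texts = ["breaking: shooter still at large", "this is fake, ignore it",
         "confirmed by police just now", "unverified reports spreading fast"]
y = [1, 1, 0, 1]  # 1 = rumour, 0 = non-rumour (hypothetical labels)

X = TfidfVectorizer().fit_transform(texts)          # TF-IDF feature extraction
selector = SelectKBest(chi2, k=5).fit(X, y)         # chi-square filter stage
X_filtered = selector.transform(X)
# A wrapper (e.g. cuckoo search) would then search subsets of these
# surviving features, scoring each subset with a trained classifier.
```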
APA, Harvard, Vancouver, ISO, and other styles
18

Wang, Yu-Ping (王妤平). "Prediction of Real-World Exploits : the Use of Social Media (Twitter) Analytics." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/pe4vtc.

Full text
Abstract:
Master's thesis
Tamkang University
Master's Program, Department of Information Management
2016 (ROC academic year 105)
With the growth and maturation of networking infrastructure and the popularity of information systems, enterprises and organizations are greatly exposed to information security risk. Frequently disclosed software and hardware vulnerabilities provide a convenient way for cyber criminals to exploit and attack enterprises or organizations. Publications and discussions of vulnerabilities are frequently found on internet forums, and social media have become major platforms for such information exchange since their rise in popularity. The goal of this study is to use messages on Twitter regarding vulnerabilities to assess the probability that a vulnerability will be exploited in the real world. Besides messages on Twitter, information security resources are also used to extract the features of a vulnerability; these resources include the National Vulnerability Database, CVE Details, VulDB, ExploitDB and Microsoft TechNet. The study proposes a three-stage classification model to predict the probability that a vulnerability will be exploited, and employs k-means clustering to adjust the ratio between positive and negative instances in the sample to alleviate the class imbalance problem during training. The steps of the three-stage classifier are: (1) a support vector machine (SVM) is trained at the first stage; (2) at the second stage, the instances that the SVM classifies as exploited in the testing sample are used as training data for a decision tree classifier; (3) the third stage computes the Bayes probabilities of the instances that the decision tree classifies as exploited. The resulting Bayes probabilities serve as a reference for enterprises or vendors in deciding on an appropriate response to a vulnerability.
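A much-simplified cascade in that spirit is sketched below with scikit-learn on synthetic data; unlike the thesis, every stage here is trained on the full training set, and only prediction is staged:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for vulnerability feature vectors (1 = exploited).
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 8)), rng.integers(0, 2, 200)

stage1 = SVC().fit(X_train, y_train)
stage2 = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)
stage3 = GaussianNB().fit(X_train, y_train)

def exploit_probability(x: np.ndarray) -> float:
    """Only instances flagged by both early stages get a Bayes probability."""
    x = x.reshape(1, -1)
    if stage1.predict(x)[0] == 1 and stage2.predict(x)[0] == 1:
        return float(stage3.predict_proba(x)[0, 1])
    return 0.0

print(exploit_probability(rng.normal(size=8)))
```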
APA, Harvard, Vancouver, ISO, and other styles
19

Mullassery, Mohanan Shalini. "Social media data analytics for the NSW construction industry : a study on Twitter." Thesis, 2022. http://hdl.handle.net/1959.7/uws:67977.

Full text
Abstract:
The primary aim of this dissertation is to explore the social interaction and relationships of people within the NSW construction industry through social media data analytics. The research objective is to perform social media data analytics on Twitter and explore the social interactions between different stakeholders in the construction industry in order to better understand real-world situations. The analytics was performed on tweets, retweets, and hashtags collected from four clusters of construction stakeholders in NSW, namely construction workers, companies, media, and unions. Investigating these interactions helps reveal a multitude of related social aspects of the stakeholders, e.g., their genuine attitudes towards the construction industry and how they feel about being involved in this field of work. To facilitate this research, five types of analyses were performed: sentiment analysis, link analysis, topic modelling, geo-location analysis, and timeline analysis. The results indicated that there are minimal social interactions between the construction workers and the other three clusters (i.e., companies, unions, and the media). The main reason attributed to this observation is that workers operate in a rather informal and casual manner, whereas the construction companies, unions, and media conduct themselves in a much more formal and corporate fashion; hence they tend to relate to one another more than they do to workers. A number of counteractive approaches may be adopted in an effort to restore healthy social relations between workers and the other three clusters. For example, company management teams should endeavour to develop stronger interactions with the workers and improve working conditions overall.
APA, Harvard, Vancouver, ISO, and other styles
20

"Event Analytics on Social Media: Challenges and Solutions." Doctoral diss., 2014. http://hdl.handle.net/2286/R.I.27510.

Full text
Abstract:
Social media platforms such as Twitter, Facebook, and blogs have emerged as valuable - in fact, the de facto - virtual town halls for people to discover, report, share and communicate with others about various types of events. These events range from widely known events such as the U.S. Presidential debates to smaller-scale, local events such as a neighborhood Halloween block party. During these events, we often witness a large amount of commentary contributed by crowds on social media. This burst of social media responses surges with "second-screen" behavior and greatly enriches the user experience when interacting with the event, as well as people's awareness of an event. Monitoring and analyzing this rich and continuous flow of user-generated content can yield unprecedentedly valuable information about the event, since these responses usually offer far richer and more powerful views of the event than mainstream news could achieve. Despite these benefits, social media also tends to be noisy, chaotic, and overwhelming, posing challenges to users in seeking and distilling high-quality content from that noise. In this dissertation, I explore ways to leverage social media as a source of information and analyze events based on their social media responses collectively. I develop, implement and evaluate EventRadar, an event analysis toolbox which is able to identify, enrich, and characterize events using massive amounts of social media responses. EventRadar contains three automated, scalable tools to handle three core event analysis tasks: Event Characterization, Event Recognition, and Event Enrichment. More specifically, I develop ET-LDA, a Bayesian model, and SocSent, a matrix factorization framework, for the Event Characterization task, i.e., characterizing an event in terms of its topics and its audience's response behavior (via ET-LDA), and the sentiments regarding its topics (via SocSent). I also develop DeMa, an unsupervised event detection algorithm, for the Event Recognition task, i.e., detecting trending events from a stream of noisy social media posts. Last, I develop CrowdX, a spatial crowdsourcing system, for the Event Enrichment task, i.e., gathering additional first-hand information (e.g., photos) from the field to enrich the given event's context. Enabled by EventRadar, it is more feasible to uncover patterns that have not been explored previously and to re-validate existing social theories with new evidence. As a result, I am able to gain deep insights into how people respond to the events they are engaged in. The results reveal several key insights into people's responding behavior over an event's timeline, such as that the topical context of people's tweets does not always correlate with the timeline of the event. In addition, I also explore the factors that affect a person's engagement with real-world events on Twitter and find that people engage in an event because they are interested in the topics pertaining to that event; and while engaging, their engagement is largely affected by their friends' behavior.
Dissertation/Thesis
Doctoral Dissertation Computer Science 2014
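ET-LDA jointly models the event transcript and its tweets; as a far simpler starting point for the Event Characterization task, plain LDA over event tweets (toy examples below) already surfaces topic structure:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = ["obama talks jobs and economy #debate",
          "romney sounds strong on the economy tonight",
          "the moderator totally lost control of this debate",
          "that jobs plan answer felt vague to me"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Print the top terms per topic as a rough event characterization.
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```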
APA, Harvard, Vancouver, ISO, and other styles
21

Banerjee, S., J. P. Singh, Y. K. Dwivedi, and Nripendra P. Rana. "Social media analytics for end-users’ expectation management in information systems development projects." 2021. http://hdl.handle.net/10454/18498.

Full text
Abstract:
This exploratory research aims to investigate social media users' expectations of information systems (IS) products that have been conceived but not yet launched. It specifically analyses social media data from Twitter about forthcoming smartphones and smartwatches from Apple and Samsung, two firms known for their innovative gadgets. Tweets related to the following four forthcoming IS products were retrieved from 1st January 2020 to 30th September 2020: (1) Apple iPhone 12 (6,125 tweets), (2) Apple Watch 6 (553 tweets), (3) Samsung Galaxy Z Flip 2 (923 tweets), and (4) Samsung Galaxy Watch Active 3 (207 tweets). These 7,808 tweets were analysed using a combination of the Natural Language Toolkit (NLTK) and sentiment analysis (SentiWordNet). The online community was quite vocal about topics such as design, camera and hardware specifications. For all the forthcoming gadgets, the proportion of positive tweets exceeded that of negative tweets. The most prevalent sentiment in Apple-related tweets was neutral, while in Samsung-related tweets it was positive. Additionally, the proportion of tweets echoing negative sentiment was lower for Apple than for Samsung. This paper is the earliest empirical work to examine the degree to which social media chatter can be used by project managers in IS development projects, specifically for end-users' expectation management.
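A minimal sketch of SentiWordNet scoring with NLTK (resource names may vary slightly across NLTK versions, and the scoring rule of taking the first synset per content word is a simplification):

```python
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import sentiwordnet as swn

for res in ("punkt", "wordnet", "sentiwordnet", "averaged_perceptron_tagger"):
    nltk.download(res, quiet=True)

def tweet_polarity(text: str) -> float:
    """Sum positive-minus-negative SentiWordNet scores over content words."""
    tag_map = {"J": "a", "N": "n", "R": "r", "V": "v"}  # Penn tag -> WordNet POS
    score = 0.0
    for word, tag in pos_tag(word_tokenize(text.lower())):
        pos = tag_map.get(tag[0])
        synsets = list(swn.senti_synsets(word, pos)) if pos else []
        if synsets:
            score += synsets[0].pos_score() - synsets[0].neg_score()
    return score

print(tweet_polarity("The new iPhone camera looks amazing"))
```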
APA, Harvard, Vancouver, ISO, and other styles
22

Singh, Asheen. "Social media analytics and the role of twitter in the 2014 South Africa general election: a case study." Thesis, 2018. https://hdl.handle.net/10539/25757.

Full text
Abstract:
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science, 2018.
Social network sites such as Twitter have created vibrant and diverse communities in which users express their opinions and views on a variety of topics, such as politics. Extensive research has been conducted in countries such as Ireland, Germany and the United States, in which text mining techniques have been used to obtain information from politically oriented tweets. The purpose of this research was to determine whether text mining techniques can be used to uncover meaningful information from a corpus of political tweets collected during the 2014 South African General Election. The Twitter Application Programming Interface was used to collect tweets related to the three major political parties in South Africa, namely the African National Congress (ANC), the Democratic Alliance (DA) and the Economic Freedom Fighters (EFF). The text mining techniques used in this research are sentiment analysis, clustering, association rule mining and word cloud analysis. In addition, a correlation analysis was performed to determine whether there is a relationship between the total number of tweets mentioning a political party and the total number of votes obtained by that party. The VADER (Valence Aware Dictionary for sEntiment Reasoning) sentiment classifier was used to determine the public's sentiment towards the three main political parties. This revealed an overwhelmingly neutral public sentiment towards the ANC, DA and EFF. The result produced by the VADER sentiment classifier was significantly better than any of the baselines in this research. The K-Means clustering algorithm was used to successfully cluster the corpus of political tweets into political-party clusters. Clusters containing tweets relating to the ANC and EFF were formed; however, tweets relating to the DA were scattered across multiple clusters. A fairly strong relationship was discovered between the number of positive tweets mentioning the ANC and the number of votes the ANC received in the election. Due to the lack of data, no conclusions could be drawn for the DA or the EFF. The apriori algorithm uncovered numerous association rules, some of which were found to be interesting. The results also demonstrate the usefulness of word cloud analysis in providing easy-to-understand information from the tweet corpus used in this study. This research has highlighted the many ways in which text mining techniques can be used to obtain meaningful information from a corpus of political tweets. This case study can be seen as a contribution to a research effort that seeks to unlock the information contained in textual data from social network sites.
MT 2018
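VADER ships with NLTK; a minimal sketch of the three-way labelling, using the commonly cited ±0.05 compound-score thresholds (the thesis's exact cut-offs are not stated in the abstract):

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

def label(tweet: str, pos_tau: float = 0.05, neg_tau: float = -0.05) -> str:
    """Map VADER's compound score to positive/negative/neutral."""
    c = sia.polarity_scores(tweet)["compound"]
    if c >= pos_tau:
        return "positive"
    if c <= neg_tau:
        return "negative"
    return "neutral"

print(label("The rally was incredible, what an amazing turnout!"))
```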
APA, Harvard, Vancouver, ISO, and other styles
23

Zaza, Imad. "Ontological knowledge-base for railway control system and analytical data platform for Twitter." Doctoral thesis, 2018. http://hdl.handle.net/2158/1126141.

Full text
Abstract:
The scope of this thesis is railway signalling and Social Media Analysis (SMA). With regard to the first theme, an investigation into the domain of railway signalling was conducted, the research objectives were defined, namely the development and verification of an ontological model for the management of railway signalling, and finally the results and shortcomings were discussed. As for SMA, the state of the art of SMA tools was studied and discussed, including Twitter Vigilance, developed within the DISIT laboratory at the University of Florence. A porting analysis towards a distributed architecture was then proposed, highlighting the problems associated with migrating single-host applications to distributed architectures and possible mitigations.
APA, Harvard, Vancouver, ISO, and other styles
24

Singh, P., Y. K. Dwivedi, K. S. Kahlon, R. S. Sawhney, A. A. Alalwan, and Nripendra P. Rana. "Smart monitoring and controlling of government policies using social media and cloud computing." 2019. http://hdl.handle.net/10454/17468.

Full text
Abstract:
Governments throughout the world are nowadays increasingly dependent on public opinion regarding the framing and implementation of certain policies for the welfare of the general public, and the role of social media is vital to this emerging trend. Traditionally, a lack of public participation in policy-making decisions used to be a major cause of concern, particularly when formulating and evaluating such policies. However, the exponential rise in the usage of social media platforms by the general public has given governments wider insight to overcome this long-pending dilemma. Cloud-based e-governance is currently being realized due to the availability of IT infrastructure, along with mindset changes among government advisors towards realizing various policies in the best possible manner. This paper presents a pragmatic approach that combines the capabilities of both cloud computing and social media analytics for the efficient monitoring and controlling of governmental policies through public involvement. The proposed system provided encouraging results when tested on the Goods and Services Tax (GST) implementation by the Indian government, establishing that it can be successfully applied to efficient policy making and implementation.
APA, Harvard, Vancouver, ISO, and other styles
25

Gröbe, Mathias. "Konzeption und Entwicklung eines automatisierten Workflows zur geovisuellen Analyse von georeferenzierten Textdaten(strömen) / Microblogging Content." Master's thesis, 2015. https://tud.qucosa.de/id/qucosa%3A29848.

Full text
Abstract:
This Master's thesis deals with the conception and exemplary implementation of a workflow for georeferenced microblogging content. Data from Twitter is used as an example and as a starting point for thinking about how to build such a workflow. In the fields of Data Mining and Text Mining, a whole range of useful software modules already exists; mostly, they only need to be lined up into a processing pipeline with appropriate settings. Although a logical order can be defined, further adjustments according to the research question and the data may be required. The process is supported by different forms of visualization such as histograms, tag clouds and maps. In this way, new knowledge can be discovered and the preparation steps gradually refined, following the principles of Geovisual Analytics. After a review of several existing software tools, the programming language R was chosen to implement the workflow, as it is optimized for solving statistical problems. Finally, the workflow was tested using data from Twitter and Flickr.
APA, Harvard, Vancouver, ISO, and other styles