Dissertations on the topic "TWITTER ANALYTICS"
Cite a source in APA, MLA, Chicago, Harvard, and other styles
Consult the top 25 dissertations for your research on the topic "TWITTER ANALYTICS".
Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read the online abstract of the work, if these are available in the metadata.
Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.
Leis, Machín Angela 1974. "Studying depression through big data analytics on Twitter." Doctoral thesis, TDX (Tesis Doctorals en Xarxa), 2021. http://hdl.handle.net/10803/671365.
Full text of source
Carvalho, Eder José de. "Visual analytics of topics in twitter in connection with political debates." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-11092017-140904/.
Full text of source
Social media such as Twitter and Facebook often act as channels for initiatives that seek to broaden citizenship actions. Conversely, actions and statements in the conventional media by government institutions, or by journalists and politicians such as deputies and senators, tend to reverberate on social media. As a result, an enormous amount of data is generated in textual form that can be highly informative about government actions and policies. However, the target audience still lacks good tools to help gather, correlate and interpret the potentially useful information in these texts. In this context, this work presents two systems oriented towards the analysis of government and social media data. One system introduces a new visualization, based on the river metaphor, for temporal analysis of the evolution of topics on Twitter in connection with political debates. To this end, the problem was initially modelled as a clustering problem, and a domain-independent text segmentation method was adapted to associate (by clustering) tweets with parliamentary speeches. A version of the MONIC algorithm for detecting transitions between clusters was employed to track the temporal evolution of debates (or clusters) and to produce a set of clusters with temporal information. The other system, called ATR-Vis, combines visualization techniques with active retrieval strategies to involve the user in retrieving tweets related to political debates and associating them with the corresponding debate. The proposed framework introduces four active retrieval strategies that use Twitter's structural information, improving the accuracy of the retrieval process while minimizing the number of labelling requests presented to the user.
Evaluations through use cases and quantitative experiments, as well as a qualitative analysis conducted with three domain experts, illustrate the effectiveness of ATR-Vis in retrieving relevant tweets. For the evaluation, two collections of tweets related to parliamentary debates in Brazil and Canada were gathered, along with a third comprising news items that received substantial media attention during the collection period.
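The MONIC-style tracking of debate clusters described in the abstract can be illustrated with a minimal Python sketch. The threshold, cluster names, and tweet IDs below are hypothetical, not taken from the thesis: clusterings from two consecutive time windows are compared by membership overlap to label transitions such as survival, split, or disappearance.

```python
def overlap(a, b):
    """Fraction of cluster a's members also found in cluster b."""
    return len(a & b) / len(a) if a else 0.0

def transitions(old_clusters, new_clusters, survival_threshold=0.5):
    """MONIC-style external transitions: for each old cluster, find the
    new cluster absorbing most of its members and classify the change."""
    result = {}
    for name, old in old_clusters.items():
        scores = {n: overlap(old, new) for n, new in new_clusters.items()}
        best = max(scores, key=scores.get) if scores else None
        if best is None or scores[best] == 0.0:
            result[name] = ("disappeared", None)
        elif scores[best] >= survival_threshold:
            result[name] = ("survived", best)
        else:
            result[name] = ("split", best)
    return result

# Tweet IDs grouped into debate clusters in two consecutive windows
week1 = {"debate_A": {1, 2, 3, 4}, "debate_B": {5, 6}}
week2 = {"debate_C": {1, 2, 3, 9}, "debate_D": {4, 7}, "debate_E": {8}}
print(transitions(week1, week2))
# → {'debate_A': ('survived', 'debate_C'), 'debate_B': ('disappeared', None)}
```

Debate A keeps three of its four tweets in debate C and therefore survives; debate B's tweets vanish from the new window, so it is marked as disappeared.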
Haraldsson, Daniel. "Marknadsföring på Twitter : Vilken dag och tidpunkt är optimal?" Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-11517.
Full text of source
Mahendiran, Aravindan. "Automated Vocabulary Building for Characterizing and Forecasting Elections using Social Media Analytics." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/25430.
Full text of source
Master of Science
Carlos, Marcelo Aparecido. "Análise de surtos de doenças transmitidas pelo mosquito Aedes aegypti utilizando Big-Data Analytics e mensagens do Twitter." Repositório Institucional da UFABC, 2017.
Find full text of source
Master's dissertation, Universidade Federal do ABC, Programa de Pós-Graduação em Engenharia da Informação, 2017.
The use of big data combined with text mining techniques has been growing every year in several areas of science, especially in health, precision medicine, and electronic medical records, among others. The motivation for this work is the hypothesis that big-data concepts can be used to analyse large volumes of data about dengue, chikungunya and Zika virus in order to monitor and anticipate information about possible outbreaks of these diseases. However, the analysis of large volumes of data, inherent to big-data studies, poses challenges, particularly the lack of scalability of the algorithms and the complexity of managing the many different types and structures of the data involved. The main objective of this work is to present an implementation of text mining techniques, especially for texts from social networks such as Twitter, combined with big-data analysis and machine learning, to monitor the incidence of dengue, chikungunya and Zika virus, all transmitted by the Aedes aegypti mosquito. The results indicate that the implementation, based on the combination of the K-Means and SVM machine learning algorithms, performed satisfactorily on the sample used when compared with the records of the Ministry of Health, indicating its potential for this purpose. The main advantage of big-data analysis lies in the possibility of employing unstructured data obtained from social networks, e-commerce sites, and other sources. In this sense, data that once seemed of little importance gain great potential and relevance.
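The K-Means plus SVM combination mentioned in the abstract can be sketched with scikit-learn, one common way of pairing the two algorithms for class imbalance: K-Means condenses the oversized negative class to its centroids before an SVM is trained. The synthetic two-dimensional features below stand in for the actual tweet data, and the exact procedure in the dissertation may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic 2-D "tweet feature" data: many irrelevant posts, few outbreak reports
majority = rng.normal(loc=0.0, scale=1.0, size=(300, 2))
minority = rng.normal(loc=4.0, scale=0.5, size=(30, 2))

# Rebalance: cluster the majority class and keep only its centroids,
# so the SVM sees classes of comparable size
km = KMeans(n_clusters=30, n_init=10, random_state=0).fit(majority)
X = np.vstack([km.cluster_centers_, minority])
y = np.array([0] * 30 + [1] * 30)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.0, 0.0], [4.0, 4.0]]))  # → [0 1]
```

With the classes well separated, a point near the majority centre is labelled 0 and a point near the minority centre is labelled 1.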
Bigsby, Kristina Gavin. "From hashtags to Heismans: social media and networks in college football recruiting." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6371.
Full text of source
Gröbe, Mathias. "Konzeption und Entwicklung eines automatisierten Workflows zur geovisuellen Analyse von georeferenzierten Textdaten(strömen) / Microblogging Content." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-210672.
Full text of source
This Master's thesis deals with the conception and exemplary implementation of a workflow for georeferenced microblogging content. Data from Twitter is used as an example and as a starting point for designing the workflow. In the fields of data mining and text mining, a whole range of useful software modules already exists; mostly they only need to be lined up into a processing pipeline with appropriate settings. Although a logical order can be defined, further adjustments according to the research question and the data are required. The process is supported by different forms of visualization such as histograms, tag clouds and maps, so that new knowledge can be discovered and the preparation options improved. This way of knowledge discovery is already known as Geovisual Analytics. After a review of multiple existing software tools, the programming language R was chosen to implement the workflow, as this language is optimized for solving statistical problems. Finally, the workflow was tested using data from Twitter and Flickr.
Dehghan, Ehsan. "Networked discursive alliances: Antagonism, agonism, and the dynamics of discursive struggles in the Australian Twittersphere." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/174604/1/Ehsan_Dehghan_Thesis.pdf.
Full text of source
Vondrášek, Petr. "Komerční využití sociálních sítí." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-197442.
Full text of source
Björnham, Alexandra. "Agile communication for a greener world." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-122117.
Full text of source
Skočík, Miroslav. "Internetový marketing." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2014. http://www.nusl.cz/ntk/nusl-224533.
Full text of source
Vondruška, Pavel. "Možnosti a podmínky využití sociálních sítí pro zvýšení konkurenceschopnosti." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-124671.
Full text of source
Willis, Margaret Mary. "Interpreting "Big Data": Rock Star Expertise, Analytical Distance, and Self-Quantification." Thesis, Boston College, 2015. http://hdl.handle.net/2345/bc-ir:104932.
Full text of source
The recent proliferation of technologies to collect and analyze “Big Data” has changed the research landscape, making it easier for some to use unprecedented amounts of real-time data to guide decisions and build ‘knowledge.’ In the three articles of this dissertation, I examine what these changes reveal about the nature of expertise and the position of the researcher. In the first article, “Monopoly or Generosity? ‘Rock Stars’ of Big Data, Data Democrats, and the Role of Technologies in Systems of Expertise,” I challenge the claims of recent scholarship, which frames the monopoly of experts and the spread of systems of expertise as opposing forces. I analyze video recordings (N = 30) of the proceedings of two professional conferences about Big Data Analytics (BDA), and I identify distinct orientations towards BDA practice among presenters: (1) those who argue that BDA should be conducted by highly specialized “Rock Star” data experts, and (2) those who argue that access to BDA should be “democratized” to non-experts through the use of automated technology. While the “data democrats” argue that automating technology enhances the spread of the system of BDA expertise, they ignore the ways that it also enhances, and hides, the monopoly of the experts who designed the technology. In addition to its implications for practitioners of BDA, this work contributes to the sociology of expertise by demonstrating the importance of focusing on both monopoly and generosity in order to study power in systems of expertise, particularly those relying extensively on technology. Scholars have discussed several ways that the position of the researcher affects the production of knowledge. In “Distance Makes the Scholar Grow Fonder?
The Relationship Between Analytical Distance and Critical Reflection on Methods in Big Data Analytics,” I pinpoint two types of researcher “distance” that have already been explored in the literature (experiential and interactional), and I identify a third type of distance—analytical distance—that has not been examined so far. Based on an empirical analysis of 113 articles that utilize Twitter data, I find that the analytical distance that authors maintain from the coding process is related to whether the authors include explicit critical reflections about their research in the article. Namely, articles in which the authors automate the coding process are significantly less likely to reflect on the reliability or validity of the study, even after controlling for factors such as article length and author’s discipline. These findings have implications for numerous research settings, from studies conducted by a team of scholars who delegate analytic tasks, to “big data” or “e-science” research that automates parts of the analytic process. Individuals who engage in self-tracking—collecting data about themselves or aspects of their lives for their own purposes—occupy a unique position as both researcher and subject. In the sociology of knowledge, previous research suggests that low experiential distance between researcher and subject can lead to more nuanced interpretations but also blind the researcher to his or her underlying assumptions. However, these prior studies of distance fail to explore what happens when the boundary between researcher and subject collapses in “N of one” studies. In “The Collapse of Experiential Distance and the Inescapable Ambiguity of Quantifying Selves,” I borrow from art and literary theories of grotesquerie—another instance of the collapse of boundaries—to examine the collapse of boundaries in self-tracking. 
Based on empirical analyses of video testimonies (N=102) and interviews (N=7) with members of the Quantified Self community of self-trackers, I find that ambiguity and multiplicity are integral facets of these data practices. I discuss the implications of these findings for the sociological study of researcher distance, and also the practical implications for the neoliberal turn that assigns responsibility to individuals to collect, analyze, and make the best use of personal data.
Thesis (PhD) — Boston College, 2015
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: Sociology
Purra, Joel. "Swedes Online: You Are More Tracked Than You Think." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-117075.
Full text of source
Source code, datasets, and a video recording of the presentation are available on the master's thesis website.
PENG, XUAN-NING, and 彭萱寧. "Insights from hashtag #fintech and Twitter Analytics." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/vg428n.
Full text of source
National Taipei University of Technology (國立臺北科技大學)
Department of Information and Financial Management (資訊與財金管理系)
107
Social media has become a popular tool for interacting with others. Business organizations derive insights from the big data generated by these communities and apply them to decisions. In recent years, Fintech has created a huge buzz; however, the value of social media in the Fintech field remains unclear. This study analyses the characteristics and trends of the community in this field through the #fintech hashtag, and draws conclusions and recommendations. The analysis found that tweets in the Fintech field attract more engagement than random tweets. Fintech experts and organizations use Twitter to post new information. The topics of tweets with higher retweet counts are more concentrated and professional. Most tweets are emotionally neutral, while tweets with more intense emotions contain different keywords.
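Hashtag-based analyses like the one described typically begin by extracting and counting tags across a tweet corpus. A minimal sketch, with invented tweets in place of the study's data:

```python
import re
from collections import Counter

def hashtags(text):
    """Extract lowercase hashtags from a tweet's text."""
    return [tag.lower() for tag in re.findall(r"#(\w+)", text)]

tweets = [
    "Big news in #FinTech: #blockchain pilots expand",
    "#fintech funding hits a record quarter",
    "AI in banking: the #Fintech angle",
]
counts = Counter(tag for t in tweets for tag in hashtags(t))
print(counts.most_common(2))  # → [('fintech', 3), ('blockchain', 1)]
```

Lowercasing during extraction merges the spelling variants #FinTech, #fintech, and #Fintech into a single tag, which matters for frequency and trend counts.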
SINGH, VIKESH KUMAR. "AN IMPACT OF SOCIAL MEDIA VIA TWITTER ANALYTICS." Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/16337.
Full text of source
SHARMA, HARSHITA. "SOFT COMPUTING FOR RUMOUR ANALYTICS ON BENCHMARK TWITTER DATA." Thesis, 2019. http://dspace.dtu.ac.in:8080/jspui/handle/repository/16701.
Full text of source
Wang, Yu-Ping, and 王妤平. "Prediction of Real-World Exploits : the Use of Social Media (Twitter) Analytics." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/pe4vtc.
Full text of source
Tamkang University (淡江大學)
Master's Program, Department of Information Management (資訊管理學系碩士班)
105
With the growth and maturity of networking infrastructure and the popularity of information systems, enterprises and organizations are increasingly exposed to information security risk. Software and hardware vulnerabilities, which are revealed frequently, provide a convenient way for cyber criminals to exploit and attack enterprises and organizations. Vulnerabilities are frequently published and discussed on internet forums, and social media have become major platforms for such information exchange. The goal of this study is to use messages on Twitter about vulnerabilities to assess the probability that a vulnerability will be exploited in the real world. Besides messages on Twitter, information security resources are also used to extract the features of a vulnerability; these resources include the National Vulnerability Database, CVE Details, VulDB, ExploitDB and Microsoft TechNet. The study proposes a three-stage classification model to predict the probability that a vulnerability will be exploited, and employs k-means clustering to adjust the ratio between positive and negative instances in the sample, alleviating the class imbalance problem during training. The steps of the three-stage classifier are: (1) a support vector machine (SVM) is trained at the first stage; (2) at the second stage, the instances that the SVM classifies as exploited in the testing sample are used as the training sample for a decision tree classifier; (3) the third stage computes Bayes' probabilities for the instances that the decision tree classifies as exploited. The resulting Bayes' probabilities serve as a reference for enterprises and vendors in deciding how to respond to a vulnerability.
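A staged pipeline in the spirit of the abstract can be sketched with scikit-learn. Everything below is illustrative: the features are synthetic, and GaussianNB stands in for the Bayes'-probability stage; the thesis's actual features and stage details are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
# Synthetic vulnerability features: class 1 = exploited, class 0 = not exploited
X = np.vstack([rng.normal(0, 1, (200, 3)), rng.normal(3, 1, (60, 3))])
y = np.array([0] * 200 + [1] * 60)

# Stage 1: an SVM flags candidate exploited vulnerabilities
svm = SVC().fit(X, y)
flagged = svm.predict(X) == 1

# Stage 2: a decision tree re-examines only the flagged instances
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[flagged], y[flagged])
confirmed = flagged.copy()
confirmed[flagged] = tree.predict(X[flagged]) == 1

# Stage 3: Bayes probabilities for the confirmed instances, as a risk score
nb = GaussianNB().fit(X, y)
risk = nb.predict_proba(X[confirmed])[:, 1]
print(round(float(risk.mean()), 2))
```

Each stage narrows the candidate set, so the final probabilities are computed only for instances that survived both earlier filters, mirroring the funnel structure the abstract describes.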
Mullassery, Mohanan Shalini. "Social media data analytics for the NSW construction industry : a study on Twitter." Thesis, 2022. http://hdl.handle.net/1959.7/uws:67977.
Повний текст джерела"Event Analytics on Social Media: Challenges and Solutions." Doctoral diss., 2014. http://hdl.handle.net/2286/R.I.27510.
Full text of source
Dissertation/Thesis
Doctoral Dissertation Computer Science 2014
Banerjee, S., J. P. Singh, Y. K. Dwivedi, and Nripendra P. Rana. "Social media analytics for end-users’ expectation management in information systems development projects." 2021. http://hdl.handle.net/10454/18498.
Full text of source
This exploratory research aims to investigate social media users’ expectations of information systems (IS) products that are conceived but not yet launched. It specifically analyses social media data from Twitter about forthcoming smartphones and smartwatches from Apple and Samsung, two firms known for their innovative gadgets. Tweets related to the following four forthcoming IS products were retrieved from 1st January 2020 to 30th September 2020: (1) Apple iPhone 12 (6,125 tweets), (2) Apple Watch 6 (553 tweets), (3) Samsung Galaxy Z Flip 2 (923 tweets), and (4) Samsung Galaxy Watch Active 3 (207 tweets). These 7,808 tweets were analysed using a combination of the Natural Language Processing Toolkit (NLTK) and sentiment analysis (SentiWordNet). The online community was quite vocal about topics such as design, camera and hardware specifications. For all the forthcoming gadgets, the proportion of positive tweets exceeded that of negative tweets. The most prevalent sentiment expressed in Apple-related tweets was neutral, but in Samsung-related tweets it was positive. Additionally, it was found that the proportion of tweets echoing negative sentiment was lower for Apple than for Samsung. This paper is the earliest empirical work to examine the degree to which social media chatter can be used by project managers for IS development projects, specifically for the purpose of end-users’ expectation management.
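Lexicon-based tweet scoring of the kind described can be approximated in a few lines. The tiny hand-made lexicon and the tweets below are invented stand-ins; the study itself uses NLTK with the SentiWordNet corpus, which assigns per-synset positivity and negativity scores.

```python
# Tiny hand-made polarity lexicon standing in for SentiWordNet scores
LEXICON = {"love": 0.8, "great": 0.7, "disappointed": -0.6, "ugly": -0.5}

def sentiment(tweet):
    """Sum lexicon scores of a tweet's tokens; the sign gives the label."""
    score = sum(LEXICON.get(w.strip(".,!?").lower(), 0.0) for w in tweet.split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

tweets = [
    "Love the rumoured iPhone 12 camera, great design!",
    "Disappointed by the leaked Galaxy Z Flip 2 specs",
    "Apple Watch 6 event is in September",
]
labels = [sentiment(t) for t in tweets]
print(labels)  # → ['positive', 'negative', 'neutral']
```

Tweets containing no lexicon words default to neutral, which is one reason neutral tends to dominate in lexicon-based studies of product chatter.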
Singh, Asheen. "Social media analytics and the role of twitter in the 2014 South Africa general election: a case study." Thesis, 2018. https://hdl.handle.net/10539/25757.
Full text of source
Social network sites such as Twitter have created vibrant and diverse communities in which users express their opinions and views on a variety of topics such as politics. Extensive research has been conducted in countries such as Ireland, Germany and the United States, in which text mining techniques have been used to obtain information from politically oriented tweets. The purpose of this research was to determine if text mining techniques can be used to uncover meaningful information from a corpus of political tweets collected during the 2014 South African General Election. The Twitter Application Programming Interface was used to collect tweets that were related to the three major political parties in South Africa, namely: the African National Congress (ANC), the Democratic Alliance (DA) and the Economic Freedom Fighters (EFF). The text mining techniques used in this research are: sentiment analysis, clustering, association rule mining and word cloud analysis. In addition, a correlation analysis was performed to determine if there exists a relationship between the total number of tweets mentioning a political party and the total number of votes obtained by that party. The VADER (Valence Aware Dictionary for sEntiment Reasoning) sentiment classifier was used to determine the public’s sentiment towards the three main political parties. This revealed an overwhelming neutral sentiment of the public towards the ANC, DA and EFF. The result produced by the VADER sentiment classifier was significantly greater than any of the baselines in this research. The K-Means cluster algorithm was used to successfully cluster the corpus of political tweets into political-party clusters. Clusters containing tweets relating to the ANC and EFF were formed. However, tweets relating to the DA were scattered across multiple clusters.
A fairly strong relationship was discovered between the number of positive tweets that mention the ANC and the number of votes the ANC received in the election. Due to the lack of data, no conclusions could be drawn for the DA or the EFF. The Apriori algorithm uncovered numerous association rules, some of which were found to be interesting. The results have also demonstrated the usefulness of word cloud analysis in providing easy-to-understand information from the tweet corpus used in this study. This research has highlighted the many ways in which text mining techniques can be used to obtain meaningful information from a corpus of political tweets. This case study can be seen as a contribution to a research effort that seeks to unlock the information contained in textual data from social network sites.
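The correlation analysis between tweet counts and votes can be reproduced in miniature with a Pearson coefficient. The figures below are invented for illustration, not the study's data:

```python
import numpy as np

# Hypothetical counts: positive tweets mentioning each party vs. votes received
positive_tweets = np.array([5200, 3100, 1800])
votes = np.array([11_400_000, 4_100_000, 1_200_000])

# Pearson correlation coefficient between the two series
r = np.corrcoef(positive_tweets, votes)[0, 1]
print(round(float(r), 2))  # → 0.99
```

With only three data points a high coefficient is easy to obtain by chance, which is one reason the study could not draw conclusions for parties with sparse data.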
MT 2018
Zaza, Imad. "Ontological knowledge-base for railway control system and analytical data platform for Twitter." Doctoral thesis, 2018. http://hdl.handle.net/2158/1126141.
Повний текст джерелаSingh, P., Y. K. Dwivedi, K. S. Kahlon, R. S. Sawhney, A. A. Alalwan, and Nripendra P. Rana. "Smart monitoring and controlling of government policies using social media and cloud computing." 2019. http://hdl.handle.net/10454/17468.
Full text of source
Governments throughout the world are nowadays increasingly dependent on public opinion when framing and implementing policies for the welfare of the general public. The role of social media is vital to this emerging trend. Traditionally, lack of public participation in policy-making decisions was a major cause of concern, particularly when formulating and evaluating such policies. However, the exponential rise in the use of social media platforms by the general public has given governments a wider insight with which to overcome this long-pending dilemma. Cloud-based e-governance is currently being realized owing to the availability of IT infrastructure, along with a change of mindset among government advisors towards realizing the various policies in the best possible manner. This paper presents a pragmatic approach that combines the capabilities of both cloud computing and social media analytics for efficient monitoring and controlling of government policies through public involvement. The proposed system provided encouraging results when tested on the Goods and Services Tax (GST) implementation by the Indian government, and established that it can be successfully applied for efficient policy making and implementation.
Gröbe, Mathias. "Konzeption und Entwicklung eines automatisierten Workflows zur geovisuellen Analyse von georeferenzierten Textdaten(strömen) / Microblogging Content." Master's thesis, 2015. https://tud.qucosa.de/id/qucosa%3A29848.
Full text of source
This Master's thesis deals with the conception and exemplary implementation of a workflow for georeferenced microblogging content. Data from Twitter is used as an example and as a starting point for designing the workflow. In the fields of data mining and text mining, a whole range of useful software modules already exists; mostly they only need to be lined up into a processing pipeline with appropriate settings. Although a logical order can be defined, further adjustments according to the research question and the data are required. The process is supported by different forms of visualization such as histograms, tag clouds and maps, so that new knowledge can be discovered and the preparation options improved. This way of knowledge discovery is already known as Geovisual Analytics. After a review of multiple existing software tools, the programming language R was chosen to implement the workflow, as this language is optimized for solving statistical problems. Finally, the workflow was tested using data from Twitter and Flickr.
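The tag-cloud step of such a workflow rests on simple term frequencies. The thesis implements its pipeline in R; the minimal sketch below uses Python with invented posts and a toy stopword list:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "in", "and", "to", "is", "this"}

def term_frequencies(posts):
    """Tokenize microblog posts and count terms, the raw input of a tag cloud."""
    counts = Counter()
    for post in posts:
        for token in re.findall(r"[a-zA-Z]+", post.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return counts

posts = [
    "Flood warning in Dresden, stay safe",
    "Dresden marathon this weekend",
    "Weekend weather in Dresden looks fine",
]
print(term_frequencies(posts).most_common(2))
# → [('dresden', 3), ('weekend', 2)]
```

The resulting counts feed directly into a tag-cloud renderer, where each term's font size is scaled by its frequency.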