Follow this link to see other types of publications on the topic: Online content moderation.

Journal articles on the topic "Online content moderation"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles.

See the top 50 journal articles for research on the topic "Online content moderation".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read the abstract of the work online, if it is included in the metadata.

Browse journal articles from many scholarly fields and compile an accurate bibliography.

1. Jhaver, Shagun, Sucheta Ghoshal, Amy Bruckman, and Eric Gilbert. "Online Harassment and Content Moderation". ACM Transactions on Computer-Human Interaction 25, no. 2 (26 April 2018): 1–33. http://dx.doi.org/10.1145/3185593.

2. Goldman, Eric. "Content Moderation Remedies". Michigan Technology Law Review, no. 28.1 (2021): 1. http://dx.doi.org/10.36645/mtlr.28.1.content.

Abstract:
This Article addresses a critical but underexplored aspect of content moderation: if a user’s online content or actions violate an Internet service’s rules, what should happen next? The longstanding expectation is that Internet services should remove violative content or accounts from their services as quickly as possible, and many laws mandate that result. However, Internet services have a wide range of other options—what I call “remedies”—they can use to redress content or accounts that violate the applicable rules. This Article describes dozens of remedies that Internet services have actually imposed. It then provides a normative framework to help Internet services and regulators navigate these remedial options to address the many difficult tradeoffs involved in content moderation. By moving past the binary remove-or-not remedy framework that dominates the current discourse about content moderation, this Article helps to improve the efficacy of content moderation, promote free expression, promote competition among Internet services, and improve Internet services’ community-building functions.
3. Vlachopoulos, Panos. "Reconsidering the role of online tutors in asynchronous online discussions". Proceedings of the International Conference on Networked Learning 6 (5 May 2008): 401–8. http://dx.doi.org/10.54337/nlc.v6.9341.

Abstract:
A number of publications in the field of e-learning highlight the importance of the "moderator's" approach to developing students' online learning. They identify that the major challenges for online teachers arise from the diversity of roles which moderators are required to undertake. However, little is reported about the roles e-moderators actually adopt in different learning contexts, and how these range between 'teaching' and 'facilitating'. This research focused on the ways in which several different e-moderators in higher education approached online learning with students. Four case studies were carried out in two research settings of blended learning, which involved fully online sessions. The first three case studies, as part of the first research setting, offered an insight into the moderation practices of a novice moderator and two guest expert moderators and the reaction of seventeen students to these moderation practices. The fourth explored, in a different research setting, the moderation of a group of twenty-five students by a moderator who was the face-to-face lecturer of the same group of students. A grounded theory approach was used to analyse and interpret the data. This generated a comparative insight into diverse moderation practices, and the consequent actions and reactions of e-moderators and students. The study found that there were pre-established relationships between the various actors involved in the discussions, which directly influenced how moderators intervened, and how students reacted. Distinct differences were identified in the ways individual moderators decided when and how to intervene. This resulted in a learner- or teacher-centred approach with a concentration on process or content. One of the main aspects of the moderation practice was therefore identified as 'the dichotomy of moderation', which is discussed in this paper.
4. De Gregorio, Giovanni. "Democratising online content moderation: A constitutional framework". Computer Law & Security Review 36 (April 2020): 105374. http://dx.doi.org/10.1016/j.clsr.2019.105374.

5. Hartmann, Ivar A. "A new framework for online content moderation". Computer Law & Security Review 36 (April 2020): 105376. http://dx.doi.org/10.1016/j.clsr.2019.105376.

6. Li, Hanlin, Brent Hecht, and Stevie Chancellor. "All That's Happening behind the Scenes: Putting the Spotlight on Volunteer Moderator Labor in Reddit". Proceedings of the International AAAI Conference on Web and Social Media 16 (31 May 2022): 584–95. http://dx.doi.org/10.1609/icwsm.v16i1.19317.

Abstract:
Online volunteers are an uncompensated yet valuable labor force for many social platforms. For example, volunteer content moderators perform a vast amount of labor to maintain online communities. However, as social platforms like Reddit favor revenue generation and user engagement, moderators are under-supported to manage the expansion of online communities. To preserve these online communities, developers and researchers of social platforms must account for and support as much of this labor as possible. In this paper, we quantitatively characterize the publicly visible and invisible actions taken by moderators on Reddit, using a unique dataset of private moderator logs for 126 subreddits and over 900 moderators. Our analysis of this dataset reveals the heterogeneity of moderation work across both communities and moderators. Moreover, we find that analyzing only visible work – the dominant way that moderation work has been studied thus far – drastically underestimates the amount of human moderation labor on a subreddit. We discuss the implications of our results on content moderation research and social platforms.
7. B, Ravinarayana. "Semantic Conversational Content Moderation". INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (5 April 2024): 1–5. http://dx.doi.org/10.55041/ijsrem30161.

Abstract:
In today's digital environment, social media platforms have become an integral part of our daily lives, facilitating global communication, knowledge sharing and community building. However, these platforms are increasingly vulnerable to the spread of offensive and toxic content, including misinformation, harassment and hate speech. Such content poses a significant threat to the safety and well-being of Internet users. In response to this immediate problem, we set out to develop an AI-based conversation moderation service aimed at effectively detecting and removing offensive or semantically toxic information in real time. Our solutions strive to make the internet safer and more user-friendly, thereby promoting a more positive and inclusive online environment. Keywords: Natural Language Processing (NLP), Moderation, Semantic Analysis, Mistral.
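The paper's pipeline is not reproduced on this page, but the basic shape of such a real-time moderation gate can be sketched in a few lines. The seed lexicon, the 0.15 threshold, and the token-overlap scoring below are illustrative stand-ins, not the authors' embedding-based semantic analysis:

```python
# Toy moderation gate: score each incoming message against a small lexicon of
# toxic seed terms and hold anything above a threshold for human review.
# TOXIC_SEEDS and the threshold are invented for illustration.
TOXIC_SEEDS = {"idiot", "stupid", "hate", "trash"}

def toxicity_score(message: str) -> float:
    """Fraction of tokens that match the seed lexicon."""
    tokens = [t.strip(".,!?").lower() for t in message.split()]
    if not tokens:
        return 0.0
    return sum(t in TOXIC_SEEDS for t in tokens) / len(tokens)

def moderate(message: str, threshold: float = 0.15) -> str:
    """Route a message either to publication or to a review queue."""
    return "held_for_review" if toxicity_score(message) >= threshold else "published"
```

A semantic system would replace the lexicon lookup with sentence embeddings or a language model such as the Mistral model named in the keywords, so that paraphrased or context-dependent toxicity is also caught.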
8. Saputra, Rian, M. Zaid M Zaid, and Silaas Oghenemaro Emovwodo. "The Court Online Content Moderation: A Constitutional Framework". Journal of Human Rights, Culture and Legal System 2, no. 3 (1 December 2022): 139–48. http://dx.doi.org/10.53955/jhcls.v2i3.54.

Abstract:
This study aims to see and describe the practice of electronic justice in Indonesia based on the digital constitutionalism approach; as a concept that tends to be new, Digital Constitutionalism in its development also accommodates the due process online in scientific discourse. This research is normative legal research using a statutory and conceptual approach. Based on the research results, it is known that the practice of electronic justice in Indonesia still uses procedural law guidelines, which are conventional procedural law and internal judicial regulations. In contrast, the development of electronic justice that utilizes technological advances is insufficient to use conventional procedural law in its implementation because it is annulled. It has not been oriented to the protection of Human Rights as conceptualized in the Digital Constitutionalism discourse, which includes due process online. So the regulation of electronic justice in the future must be based on Digital Constitutionalism, which includes knowing the due process online by prioritizing the protection of human rights in a virtual scope from the provider of electronic judicial technology facilities.
9. Frey, Seth, Maarten W. Bos, and Robert W. Sumner. "Can you moderate an unreadable message? 'Blind' content moderation via human computation". Human Computation 4, no. 1 (1 July 2017): 78–106. http://dx.doi.org/10.15346/hc.v4i1.83.

Abstract:
User-generated content (UGC) is fundamental to online social engagement, but eliciting and managing it come with many challenges. The special features of UGC moderation highlight many of the general challenges of human computation. They also emphasize how moderation and privacy interact: people have rights to both privacy and safety online, but it is difficult to provide one without violating the other: scanning a user's inbox for potentially malicious messages seems to imply access to all safe ones as well. Are privacy and safety opposed, or is it possible in some circumstances to guarantee the safety of anonymous content without access to that content? We demonstrate that such "blind content moderation" is possible in certain domains. Additionally, the methods we introduce offer safety guarantees, an expressive content space, and require no human moderation load: they are safe, expressive, and scalable. Though it may seem preposterous to try moderating UGC without human- or machine-level access to it, human computation makes blind moderation possible. We establish this existence claim by defining two very different human computational methods, behavioral thresholding and reverse correlation. Each leverages the statistical and behavioral properties of so-called "inappropriate content" in different decision settings to moderate UGC without access to a message's meaning or intention. The first, behavioral thresholding, is shown to generalize the well-known ESP game.
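As a rough illustration of the thresholding idea, consider publishing a user-generated item only when enough independent users produced the same item, without ever inspecting what it means: content that many people converge on independently (as in the ESP game) is statistically unlikely to be inappropriate. The submissions and the support threshold below are invented for the sketch, not taken from the paper:

```python
from collections import Counter

def blind_moderate(submissions: list[str], min_support: int = 3) -> set[str]:
    """Approve only items independently submitted by at least min_support
    users, deciding without any access to the items' meaning."""
    counts = Counter(submissions)
    return {item for item, n in counts.items() if n >= min_support}

# Four users type the same greeting; one types a string nobody else produces.
approved = blind_moderate(["hi there", "hi there", "xQz#9!", "hi there", "hi there"])
```

The paper's actual methods operate on richer behavioral statistics and decision settings, but the property is the same: moderation decisions fall out of aggregate behavior rather than message content.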
10. Bastian, Reese D. "Content Moderation Issues Online: Section 230 Is Not to Blame". Student Articles Edition 8, no. 2 (February 2022): 42–71. http://dx.doi.org/10.37419/jpl.v8.i2.1.

Abstract:
Section 230 of the Communications Decency Act (“Section 230”) is the glue that holds the Internet—as we know it today—together. Section 230 says, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Simply put, Section 230 says that websites or platforms are not liable for content posted by third parties. There are many critics who attribute the maladies of the online world to Section 230. Section 230 presents issues such as over-moderation by Interactive Computer Service (“ICS”) providers that can go as far as to be considered censorship and under-moderation that leads to uncomely and even unsafe cyberspaces. Repealing or weakening Section 230 will not fix over-moderation—or even under-moderation—online but allowing and fostering competition in the tech sector will.
11. Omoyeni, Oladele Emmanuel, Chibunna Victor Enyinnaya, Peter Bardi, Clement A. Ogbaini, and John O. Ajayi. "The Role of Technology in Combating Fake and Malicious Contents". Advances in Multidisciplinary & Scientific Research Journal Publications 10, no. 1 (30 March 2024): 17–24. http://dx.doi.org/10.22624/aims/sij/v10n1p4.

Abstract:
Technology presents a double-edged sword in the fight against fake content online. This paper investigates the multifaceted roles of technology in the dissemination of fake content online. It explores how advancements like automation, social media algorithms, and deepfakes facilitate the spread of misinformation. Conversely, it examines how technologies like artificial intelligence (AI), fact-checking tools, and content moderation can be harnessed to mitigate this challenge. Finally, the discussion turns to policy considerations and potential future technological solutions for fostering a more trustworthy online environment (Shu et al., 2017; Tandoc et al., 2018). Keywords: Fake Content, Misinformation, Disinformation, Technology, AI, Fact-Checking, Content Moderation, Online Trust.
12. Wu, Yi. "Privacy, Free Speech and Content Moderation: A Literature Review and Constitutional Framework Analysis". Innovation in Science and Technology 1, no. 4 (November 2022): 30–39. http://dx.doi.org/10.56397/ist.2022.11.04.

Abstract:
Content moderation is part of the lifeblood of Internet platform companies. Concealment and innovation coexist. The United States and the European Union are leading the world in the field of human rights protection in content moderation. Both the text of the Constitution and the jurisprudence have formed a relatively complete framework of protection. Public and private sectors, online and offline communities, and the balance between ex-ante and ex-post measures sit at the intersection of content moderation with privacy and free speech. Further research is needed into the design of the online moderation model, balancing the arguments around policy, human rights law, and the need to make online spaces safer for a worldwide, diverse population.
13. Seering, Joseph, Tony Wang, Jina Yoon, and Geoff Kaufman. "Moderator engagement and community development in the age of algorithms". New Media & Society 21, no. 7 (11 January 2019): 1417–43. http://dx.doi.org/10.1177/1461444818821316.

Abstract:
Online communities provide a forum for rich social interaction and identity development for billions of Internet users worldwide. In order to manage these communities, platform owners have increasingly turned to commercial content moderation, which includes both the use of moderation algorithms and the employment of professional moderators, rather than user-driven moderation, to detect and respond to anti-normative behaviors such as harassment and spread of offensive content. We present findings from semi-structured interviews with 56 volunteer moderators of online communities across three platforms (Twitch, Reddit, and Facebook), from which we derived a generalized model categorizing the ways moderators engage with their communities and explaining how these communities develop as a result. This model contains three processes: being and becoming a moderator; moderation tasks, actions, and responses; and rules and community development. In this work, we describe how moderators contribute to the development of meaningful communities, both with and without algorithmic support.
14. Seering, Joseph, Manas Khadka, Nava Haghighi, Tanya Yang, Zachary Xi, and Michael Bernstein. "Chillbot: Content Moderation in the Backchannel". Proceedings of the ACM on Human-Computer Interaction 8, CSCW2 (7 November 2024): 1–26. http://dx.doi.org/10.1145/3686941.

Abstract:
Moderating online spaces effectively is not a matter of simply taking down content: moderators also provide private feedback and defuse situations before they cross the line into harm. However, moderators have little tool support for these activities, which often occur in the backchannel rather than in front of the entire community. In this paper, we introduce Chillbot, a moderation tool for Discord designed to facilitate backchanneling from moderators to users. With Chillbot, moderators gain the ability to send rapid anonymous feedback responses to situations where removal or formal punishment is too heavy-handed to be appropriate, helping educate users about how to improve their behavior while avoiding direct confrontations that can put moderators at risk. We evaluated Chillbot through a two-week field deployment on eleven Discord servers ranging in size from 25 to over 240,000 members. Moderators in these communities used Chillbot more than four hundred times during the study, and moderators from six of the eleven servers continued using the tool past the end of the formal study period. Based on this deployment, we describe implications for the design of a broader variety of means by which moderation tools can help shape communities' norms and behavior.
15. Ghadirian, Hajar, Keyvan Salehi, and Ahmad Fauzi Mohd Ayub. "Assessing the Effectiveness of Role Assignment on Improving Students' Asynchronous Online Discussion Participation". International Journal of Distance Education Technologies 17, no. 1 (January 2019): 31–51. http://dx.doi.org/10.4018/ijdet.2019010103.

Abstract:
Taking into account prior research suggesting a lack of student participation in online discussions, this study examines the influence of peer moderator (PM) role assignment on students' participation and that of their peers in online discussions. Eighty-four participants operated in a moderator role, reciprocally. Moreover, the study examines the differences in the level of e-moderation supports enacted by PMs of high- and low-density online discussions. Online participation was assessed using log files of seven-week discussions and social network analysis techniques. Quantitative content analysis was applied to the online interaction transcripts of PMs for two groups of online discussions. The results indicated that students in the PM role reached a significantly higher level of participation quantity and patterns, and their non-posting participation significantly influenced all indicators of group participation. Further, high- and low-density online discussions differed significantly with regard to the frequency of PMs' e-moderation supports.
16. Vahed, Sarah, Catalina Goanta, Pietro Ortolani, and Alan G. Sanfey. "Moral judgment of objectionable online content: Reporting decisions and punishment preferences on social media". PLOS ONE 19, no. 3 (25 March 2024): e0300960. http://dx.doi.org/10.1371/journal.pone.0300960.

Abstract:
Harmful and inappropriate online content is prevalent, necessitating the need to understand how individuals judge and wish to mitigate the spread of negative content on social media. In an online study with a diverse sample of social media users (n = 294), we sought to elucidate factors that influence individuals' evaluation of objectionable online content. Participants were presented with images varying in moral valence, each accompanied by an indicator of intention from an ostensible content poster. Half of the participants were assigned the role of user content moderator, while the remaining participants were instructed to respond as they normally would online. The study aimed to establish whether moral imagery, the intention of a content poster, and the perceived responsibility of social media users, affect judgments of objectionability, operationalized through both decisions to flag content and preferences to seek punishment of other users. Our findings reveal that moral imagery strongly influences users' assessments of what is appropriate online content, with participants almost exclusively choosing to report and punish morally negative images. Poster intention also plays a significant role in users' decisions, with greater objection shown to morally negative content when it has been shared by another user for the purpose of showing support for it. Bestowing a content moderation role affected reporting behaviour but not punishment preferences. We also explore individual user characteristics, finding a negative association between trust in social media platforms and reporting decisions. Conversely, a positive relationship was identified between trait empathy and reporting rates. Collectively, our insights highlight the complexity of social media users' moderation decisions and preferences. The results advance understanding of moral judgments and punishment preferences online, and offer insights for platforms and regulatory bodies aiming to better understand social media users' role in content moderation.
17. Zhu, Junzhe, Elizabeth Wickes, and John R. Gallagher. "A machine learning algorithm for sorting online comments via topic modeling". Communication Design Quarterly 9, no. 2 (July 2021): 4–14. http://dx.doi.org/10.1145/3453460.3453462.

Abstract:
This article uses a machine learning algorithm to demonstrate a proof-of-concept case for moderating and managing online comments as a form of content moderation, which is an emerging area of interest for technical and professional communication (TPC) researchers. The algorithm sorts comments by topical similarity to a reference comment/article rather than display comments by linear time and popularity. This approach has the practical benefit of enabling TPC researchers to reconceptualize content display systems in dynamic ways.
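The article's algorithm is not reproduced on this page, but its core idea (ranking comments by topical similarity to a reference comment or article rather than by recency or popularity) can be sketched with plain term-frequency vectors; the vectorization here is a deliberate simplification of the topic-model representations the authors use:

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-frequency vectors."""
    dot = sum(n * b[t] for t, n in a.items())
    norm = (math.sqrt(sum(n * n for n in a.values()))
            * math.sqrt(sum(n * n for n in b.values())))
    return dot / norm if norm else 0.0

def sort_by_topic(reference: str, comments: list[str]) -> list[str]:
    """Display the comments most topically similar to the reference first."""
    ref = Counter(reference.lower().split())
    return sorted(comments,
                  key=lambda c: cosine(ref, Counter(c.lower().split())),
                  reverse=True)

ranked = sort_by_topic(
    "comment moderation policy",
    ["great weather today",
     "the moderation policy on this forum is unclear",
     "moderation and comment policy need work"],
)
```

Off-topic comments sink to the bottom of the display rather than being removed, which is the reconceptualization of content display the abstract describes.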
18. Barker, Kim. "The AVMSD as a harmonized approach to moderating hate speech?" Journal of Digital Media & Policy 12, no. 3 (1 November 2021): 387–405. http://dx.doi.org/10.1386/jdmp_00072_1.

Abstract:
While the EU Commission has set its sights on much broader online content regulation through the Digital Services Act (DSA), and the ongoing reforms to the eCommerce Directive (eCD), the AudioVisual Media Services Directive (AVMSD) is set to play a role in online hate regulation for content on video-sharing platforms (VSPs). The challenges though are greater than fitting within the DSA agenda and include particular barriers from national content moderation laws across various member-states, together with a fragmented umbrella of pan-European content regulation mechanisms. The convergence in timing of the AVMSD transposition, the draft DSA and the reform of the eCD (and its liability shield) offers a unique opportunity to assess the implications of harmonized content moderation responsibilities for hate speech from 2020 and beyond. This article explores the AVMSD, DSA and eCD as a ‘trend’, before discussing the impact of the reformulated AVMSD on VSPs and their efforts to tackle online hate. It argues that the AVMSD is an underappreciated tool in the increasingly harmonized approach to moderating online hate speech, but also a trigger-point for the new EU trend towards tackling platform power.
19. Jiang, Jialun Aaron, Morgan Klaus Scheuerman, Casey Fiesler, and Jed R. Brubaker. "Understanding international perceptions of the severity of harmful content online". PLOS ONE 16, no. 8 (27 August 2021): e0256762. http://dx.doi.org/10.1371/journal.pone.0256762.

Abstract:
Online social media platforms constantly struggle with harmful content such as misinformation and violence, but how to effectively moderate and prioritize such content for billions of global users with different backgrounds and values presents a challenge. Through an international survey of 1,696 internet users in 8 different countries around the world, this empirical study examines how international users perceive harmful content online and the similarities and differences in their perceptions. We found that across countries, perceived severity consistently followed exponential growth as the harmful content became more severe, but which types of harmful content were perceived as more or less severe varied significantly. Our results challenge platform content moderation's status quo of using a one-size-fits-all approach to govern international users, and provide guidance on how platforms may wish to prioritize and customize their moderation of harmful content.
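The "exponential growth" finding means that log-severity is roughly linear in category rank, i.e. the ratio between consecutive mean ratings stays roughly constant as categories get more severe. A quick check of that property on hypothetical ratings (the numbers below are invented for illustration, not the study's data):

```python
import math

# Hypothetical mean severity ratings for five increasingly severe
# content categories (invented values, not the survey's results).
ratings = [1.2, 2.1, 3.8, 6.9, 12.5]

# Under exponential growth, consecutive ratios are near-constant...
ratios = [b / a for a, b in zip(ratings, ratings[1:])]

# ...equivalently, log-ratings climb by a near-constant step.
log_steps = [math.log(b / a) for a, b in zip(ratings, ratings[1:])]
```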
20. Soraya, Serin Himatus, and Wahyu Tri Wibowo. "Construction of Public Opinion about Religious Moderation on NU Online Instagram Accounts (@nuonline_id)". KOMUNIKA: Jurnal Dakwah dan Komunikasi 15, no. 1 (30 June 2021): 111–23. http://dx.doi.org/10.24090/komunika.v15i1.4572.

Abstract:
This study tries to describe how NU Online constructs religious moderation content to influence public opinion. Starting from the development of social media in the digital era, which plays a significant role in building public opinion and culture, NU Online uses Instagram (@nuonline_id) to spread the concept of moderation. The spread of the moderation message aims to maintain the country's integrity and prevent radicalism or extremism in religion. This is descriptive qualitative research. The study was conducted through online observations of the @nuonline_id Instagram page to detect religious moderation content. This study succeeded in finding the construction of religious moderation on the @nuonline_id account using text analysis (images and text). The study results explain that NU Online carries out the construction of religious moderation in six aspects of life: (a) inter-religious relations, in which NU Online invites Muslims to maintain harmony with people of different religions; (b) social life, suggesting that the Indonesian people live in harmony with each other as Indonesian citizens; (c) politics and state management, asking the government to apply moderation in carrying out its duties; (d) education, through the inculcation of the moderation concept in the curriculum; (e) law and the understanding of religious texts, encouraging Islamic scholars to consider the religious context in establishing laws; and (f) economics, taking economic equity into account.
21. Bhasme, Amisha Subhashrao. "Generative AI for Ethical and Bias-Free Content Moderation". INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 02 (8 February 2025): 1–9. https://doi.org/10.55041/ijsrem41387.

Abstract:
The growth of online platforms has led to an increase in harmful content, such as hate speech, fake news, and explicit images. While traditional content moderation techniques are human-centric, they struggle to scale effectively. Generative AI presents an opportunity to automate and enhance content moderation, offering efficiency at scale. However, generative AI models must be designed to detect harmful content while ensuring fairness and ethical behavior, avoiding biases and over-censorship. This paper explores the challenges of using generative AI for content moderation, focusing on bias detection, fairness frameworks, and solutions to prevent harm.
22. Yüksel, Hilal Müleyke, Arma Deger Mut, and Alper Ozpinar. "Enhancing Image Content Analysis in B2C Online Marketplaces". European Journal of Research and Development 3, no. 4 (31 December 2023): 229–39. http://dx.doi.org/10.56038/ejrnd.v3i4.381.

Abstract:
The automation of image analysis in Business-to-Consumer (B2C) online marketplaces is critical, especially when managing vast quantities of supplier-uploaded product images that may contain various forms of objectionable content. This study addresses the automated detection of diverse content types, including sexual, political, and disturbing content, as well as prohibited items like alcohol, tobacco, drugs, and weapons. Furthermore, the identification of competing brand logos and related imagery is examined for competition and ethical reasons. The research integrates custom transfer learning models with the established Microsoft and Google Vision APIs to enhance the precision of content analysis in e-commerce settings. The introduced transfer learning model, trained on a comprehensive dataset, exhibited a significant improvement in identifying and categorizing the specified content types, achieving a notable true positive rate that surpasses traditional API performances. The findings reveal that the "Pazarama Model", with its transfer learning framework, not only delivers a more accurate and cost-effective content moderation solution but also demonstrates enhanced efficiency by reducing image processing time and associated costs. These results support a shift toward specialized transfer learning models for content moderation, advocating for their adoption to maintain content integrity and enhance user trust within e-commerce platforms. The study advocates for continued refinement of these models, suggesting the integration of multimodal data to further advance content analysis capabilities in B2C environments.
23. Fanshoby, Muhammad, and Khofifah Nur Hidayat. "Optimalisasi Pesan Moderasi Beragama di Era Digital: Studi Website NU Online". El Madani: Jurnal Dakwah dan Komunikasi Islam 5, no. 01 (12 June 2024): 1–22. http://dx.doi.org/10.53678/elmadani.v5i01.1722.

Abstract:
This research focuses on optimizing religious moderation messages through Search Engine Optimization (SEO) on NU Online, an Islamic website platform in Indonesia. Using a constructivist paradigm with a descriptive qualitative approach, this study explores how religious moderation messages are constructed and disseminated through Computer-Mediated Communication (CMC) and SEO. Data was collected through interviews, observations, and documentation, with a specific analysis on the popular article titled "Moderation of Religion and Its Urgency" on NU Online. The results show that NU Online has effectively implemented keyword research and On-Page SEO but is lacking in Off-Page SEO implementation. This indicates that the religious moderation messages are not yet fully optimized on this website. These findings underscore the importance of comprehensive SEO strategies to enhance the visibility and dissemination of religious moderation messages. The study recommends increasing the use of Off-Page SEO on Islamic websites. This includes the development of models or practical guides to optimize online religious content, encompassing aspects of keyword research, inclusive content, and effective campaign strategies. These implications are crucial for expanding the reach and impact of religious moderation messages, especially in the current digital era. The study also opens opportunities for further research on the influence of SEO on the dissemination of religious messages on the internet. The implementation of effective SEO strategies is expected to enhance the effectiveness of disseminating religious moderation messages through NU Online and other Islamic websites in Indonesia.
24

Vera, Valerie. "Nonsuicidal Self‐Injury and Content Moderation on TikTok". Proceedings of the Association for Information Science and Technology 60, no. 1 (October 2023): 1164–66. http://dx.doi.org/10.1002/pra2.979.

Abstract:
Online nonsuicidal self‐injury communities commonly create and share information on harm reduction strategies and exchange social support on social media platforms, including the short‐form video sharing platform TikTok. While TikTok's Community Guidelines permit users to share personal experiences with mental health topics, TikTok explicitly bans content depicting, promoting, normalizing, or glorifying activities that could lead to self‐harm. As such, TikTok may moderate user‐generated content, leading to exclusion and marginalization in this digital space. Through semi‐structured interviews with eight TikTok users with a history of nonsuicidal self‐injury, this pilot study explores how users experience TikTok's algorithm to create and engage with content on nonsuicidal self‐injury. Findings demonstrate that users understand how to circumnavigate TikTok's algorithm through algospeak (i.e., codewords or turns of phrases) and signaling to maintain visibility on the platform. Further, findings emphasize that users actively engage in self‐surveillance and self‐censorship to create a safe online community. In turn, content moderation can ultimately hinder progress toward the destigmatization of nonsuicidal self‐injury and restrict social support exchanged within online nonsuicidal self‐injury communities.
25

Dias Oliva, Thiago. "Content Moderation Technologies: Applying Human Rights Standards to Protect Freedom of Expression". Human Rights Law Review 20, no. 4 (December 2020): 607–40. http://dx.doi.org/10.1093/hrlr/ngaa032.

Abstract:
With the increase in online content circulation, new challenges have arisen: the dissemination of defamatory content, non-consensual intimate images, hate speech, fake news, the increase of copyright violations, among others. Due to the huge amount of work required in moderating content, internet platforms are developing artificial intelligence to automate decision-making content removal. This article discusses the reported performance of current content moderation technologies from a legal perspective, addressing the following question: what risks do these technologies pose to freedom of expression, access to information and diversity in the digital environment? The legal analysis developed by the article focuses on international human rights law standards. Despite recent improvements, content moderation technologies still fail to understand context, thereby posing risks to users’ free speech, access to information and equality. Consequently, it is concluded, these technologies should not be the sole basis for reaching decisions that directly affect user expression.
26

Horta Ribeiro, Manoel, Justin Cheng, and Robert West. "Post Approvals in Online Communities". Proceedings of the International AAAI Conference on Web and Social Media 16 (May 31, 2022): 335–46. http://dx.doi.org/10.1609/icwsm.v16i1.19296.

Abstract:
In many online communities, community leaders (i.e., moderators and administrators) can proactively filter undesired content by requiring posts to be approved before publication. But although many communities adopt post approvals, there has been little research on its impact on community behavior. Through a longitudinal analysis of 233,402 Facebook Groups, we examined 1) the factors that led to a community adopting post approvals and 2) how the setting shaped subsequent user activity and moderation in the group. We find that communities that adopted post approvals tended to do so following sudden increases in user activity (e.g., comments) and moderation (e.g., reported posts). This adoption of post approvals led to fewer but higher-quality posts. Though fewer posts were shared after adoption, not only did community members write more comments, use more reactions, and spend more time on the posts that were shared, they also reported these posts less. Further, post approvals did not significantly increase the average time leaders spent in the group, though groups that enabled the setting tended to appoint more leaders. Last, the impact of post approvals varied with both group size and how the setting was used, e.g., group size mediates whether leaders spent more or less time in the group following the adoption of the setting. Our findings suggest ways that proactive content moderation may be improved to better support online communities.
27

Scheffler, Sarah, and Jonathan Mayer. "SoK: Content Moderation for End-to-End Encryption". Proceedings on Privacy Enhancing Technologies 2023, no. 2 (April 2023): 403–29. http://dx.doi.org/10.56553/popets-2023-0060.

Abstract:
Popular messaging applications now enable end-to-end encryption (E2EE) by default, and E2EE data storage is becoming common. These important advances for security and privacy create new content moderation challenges for online services, because services can no longer directly access plaintext content. While ongoing public policy debates about E2EE and content moderation in the United States and European Union emphasize child sexual abuse material and misinformation in messaging and storage, we identify and synthesize a wealth of scholarship that goes far beyond those topics. We bridge literature that is diverse in both content moderation subject matter, such as malware, spam, hate speech, terrorist content, and enterprise policy compliance, as well as intended deployments, including not only privacy-preserving content moderation for messaging, email, and cloud storage, but also private introspection of encrypted web traffic by middleboxes. In this work, we systematize the study of content moderation in E2EE settings. We set out a process pipeline for content moderation, drawing on a broad interdisciplinary literature that is not specific to E2EE. We examine cryptography and policy design choices at all stages of this pipeline, and we suggest areas of future research to fill gaps in literature and better understand possible paths forward.
28

Urman, Aleksandra, Aniko Hannak, and Mykola Makhortykh. "User Attitudes to Content Moderation in Web Search". Proceedings of the ACM on Human-Computer Interaction 8, CSCW1 (April 17, 2024): 1–27. http://dx.doi.org/10.1145/3637423.

Abstract:
Internet users highly rely on and trust web search engines, such as Google, to find relevant information online. However, scholars have documented numerous biases and inaccuracies in search outputs. To improve the quality of search results, search engines employ various content moderation practices such as interface elements informing users about potentially dangerous websites and algorithmic mechanisms for downgrading or removing low-quality search results. While the reliance of the public on web search engines and their use of moderation practices is well-established, user attitudes towards these practices have not yet been explored in detail. To address this gap, we first conducted an overview of content moderation practices used by search engines, and then surveyed a representative sample of the US adult population (N=398) to examine the levels of support for different moderation practices applied to potentially misleading and/or potentially offensive content in web search. We also analyzed the relationship between user characteristics and their support for specific moderation practices. We find that the most supported practice is informing users about potentially misleading or offensive content, and the least supported one is the complete removal of search results. More conservative users and users with lower levels of trust in web search results are more likely to be against content moderation in web search.
29

Trio Mashuri, Akbar, Abdul Rojak Lubis, and Agoes Moh Moefad. "Construction of Religious Moderation at Nahdlatul Ulama Online Media in East Java". MUHARRIK: Jurnal Dakwah dan Sosial 6, no. 1 (May 26, 2023): 71–86. http://dx.doi.org/10.37680/muharrik.v6i1.2814.

Abstract:
This research focuses on how journalists at Nahdlatul Ulama (NU) Online Media in East Java construct news presented to the public about religious moderation. It showcases news framing on religious moderation from the perspective of Nahdlatul Ulama through online media in East Java. This research aims to understand how NU's religious moderation is disseminated in society through media produced by journalists. The research method employed is qualitative, utilizing media text analysis with the framing model of Zhongdang Pan and Gerald M. Kosicki, analyzing four structures: syntactic, script, thematic, and rhetorical. The research also involves validation of the truth and news construction with the editorial board of NU Online East Java. The results of this research explain that journalists at NU Online East Java construct news on religious moderation by presenting the teachings of Ahlus Sunnah Wal Jamaah Nahdliyah through the practice of Hubbul Watan Minal Iman (Love of the Homeland is Part of Faith), with the vision of Islam as a mercy to all creations (rahmat lil 'alamin). Conflicts arise in online media concerning narratives of violence, liberalization, and radicalism. Consequently, journalists construct the values of religious moderation from the perspective of NU figures, namely Alisha Wahid and Gus Miftah, to create alternative narratives in online media that promote positive content rather than content that divides intergroup relationships.
30

Romadlan, Said. "Pola Konten Pemberitaan Pemilu 2024 dan Ideologi Moderatisme di Media Keislaman Online". Jurnal Komunikasi Islam 14, no. 1 (June 21, 2024): 46–70. http://dx.doi.org/10.15642/jki.2024.14.1.46-70.

Abstract:
This study aims to comprehend the pattern of news content of Indonesia's 2024 presidential election and the ideology of the media suaramuhammadiyah.id and NU Online during the presidential election campaign. Using a qualitative content analysis method, this study found that the news content of the two online Islamic media had different patterns in issue selection and source determination. Meanwhile, in the category of content tendencies, the two online media showed relatively the same pattern, that is, both were neutral. This study also found that the news content of the media suaramuhammadiyah.id and NU Online was influenced by the ideology of moderation. This can be seen from the tendency of the content of the two online media, which disseminated the moderate Islamic ideology developed by Muhammadiyah and NU. The tendency of neutrality is in accordance with the values of Muhammadiyah and NU moderation, as contained in the doctrine of Muhammadiyah's "Islam Berkemajuan" (Progressive Islam) and NU's "Islam Nusantara" (Islam of the Archipelago).
31

Udupa, Sahana, Antonis Maronikolakis, and Axel Wisiorek. "Ethical scaling for content moderation: Extreme speech and the (in)significance of artificial intelligence". Big Data & Society 10, no. 1 (January 2023): 205395172311724. http://dx.doi.org/10.1177/20539517231172424.

Abstract:
In this article, we present new empirical evidence to demonstrate the severe limitations of existing machine learning content moderation methods to keep pace with, let alone stay ahead of, hateful language online. Building on the collaborative coding project “AI4Dignity,” we outline the ambiguities and complexities of annotating problematic text in AI-assisted moderation systems. We diagnose the shortcomings of the content moderation and natural language processing approach as emerging from a broader epistemological trapping wrapped in the liberal-modern idea of “the human.” Presenting a decolonial critique of the “human vs machine” conundrum and drawing attention to the structuring effects of coloniality on extreme speech, we propose “ethical scaling” to highlight moderation process as political praxis. As a normative framework for platform governance, ethical scaling calls for a transparent, reflexive, and replicable process of iteration for content moderation with community participation and global parity, which should evolve in conjunction with addressing algorithmic amplification of divisive content and resource allocation for content moderation.
32

Schluger, Charlotte, Jonathan P. Chang, Cristian Danescu-Niculescu-Mizil, and Karen Levy. "Proactive Moderation of Online Discussions: Existing Practices and the Potential for Algorithmic Support". Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (November 7, 2022): 1–27. http://dx.doi.org/10.1145/3555095.

Abstract:
To address the widespread problem of uncivil behavior, many online discussion platforms employ human moderators to take action against objectionable content, such as removing it or placing sanctions on its authors. This reactive paradigm of taking action against already-posted antisocial content is currently the most common form of moderation, and has accordingly underpinned many recent efforts at introducing automation into the moderation process. Comparatively less work has been done to understand other moderation paradigms, such as proactively discouraging the emergence of antisocial behavior rather than reacting to it, and the role algorithmic support can play in these paradigms. In this work, we investigate such a proactive framework for moderation in a case study of a collaborative setting: Wikipedia Talk Pages. We employ a mixed methods approach, combining qualitative and design components for a holistic analysis. Through interviews with moderators, we find that despite a lack of technical and social support, moderators already engage in a number of proactive moderation behaviors, such as preemptively intervening in conversations to keep them on track. Further, we explore how automation could assist with this existing proactive moderation workflow by building a prototype tool, presenting it to moderators, and examining how the assistance it provides might fit into their workflow. The resulting feedback uncovers both strengths and drawbacks of the prototype tool and suggests concrete steps towards further developing such assisting technology so it can most effectively support moderators in their existing proactive moderation workflow.
33

Crosset, Valentine, and Benoît Dupont. "Cognitive assemblages: The entangled nature of algorithmic content moderation". Big Data & Society 9, no. 2 (July 2022): 205395172211433. http://dx.doi.org/10.1177/20539517221143361.

Abstract:
This article examines algorithmic content moderation, using the moderation of violent extremist content as a specific case. In recent years, algorithms have increasingly been mobilized to perform essential moderation functions for online social media platforms such as Facebook, YouTube, and Twitter, including limiting the proliferation of extremist speech. Drawing on Katherine Hayles’ concept of “cognitive assemblages” and the Critical Security Studies literature, we show how algorithmic regulation operates within larger assemblages of humans and non-humans to influence the surveillance and regulation of information flows. We argue that the dynamics of algorithmic regulation are more liquid, cobbled together and distributed than they appear. It is characterized by a set of shifting human and machine entities, which mix traditional surveillance methods with more sophisticated tools, and whose linkages and interactions are transient. The processes that enable the consolidation of knowledge about risky profiles and contents are, therefore, collective and distributed among humans and machines. This allows us to argue that the cognitive assemblages involved in content moderation become a cobbled space of preemptive calculation.
34

Wagner, Ben, Matthias C. Kettemann, Anna Sophia Tiedeke, Felicitas Rachinger, and Marie-Therese Sekwenz. "Mapping interpretations of the law in online content moderation in Germany". Computer Law & Security Review 55 (November 2024): 106054. http://dx.doi.org/10.1016/j.clsr.2024.106054.

35

Horta Ribeiro, Manoel, Shagun Jhaver, Savvas Zannettou, Jeremy Blackburn, Gianluca Stringhini, Emiliano De Cristofaro, and Robert West. "Do Platform Migrations Compromise Content Moderation? Evidence from r/The_Donald and r/Incels". Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–24. http://dx.doi.org/10.1145/3476057.

Abstract:
When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated websites. Previous work suggests that within mainstream platforms, community-level moderation is effective in mitigating the harm caused by the moderated communities. It is, however, unclear whether these results also hold when considering the broader Web ecosystem. Do toxic communities continue to grow in terms of their user base and activity on the new platforms? Do their members become more toxic and ideologically radicalized? In this paper, we report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures. We analyze data from r/The_Donald and r/Incels, two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community. Overall, our results paint a nuanced portrait of the consequences of community-level moderation and can inform their design and deployment.
36

Ma, Renkai, and Yubo Kou. ""Defaulting to boilerplate answers, they didn't engage in a genuine conversation": Dimensions of Transparency Design in Creator Moderation". Proceedings of the ACM on Human-Computer Interaction 7, CSCW1 (April 14, 2023): 1–26. http://dx.doi.org/10.1145/3579477.

Abstract:
Transparency matters a lot to people who experience moderation on online platforms; much CSCW research has viewed offering explanations as one of the primary solutions to enhance moderation transparency. However, relatively little attention has been paid to unpacking what transparency entails in moderation design, especially for content creators. We interviewed 28 YouTubers to understand their moderation experiences and analyze the dimensions of moderation transparency. We identified four primary dimensions: participants desired the moderation system to present moderation decisions saliently, explain the decisions profoundly, afford communication with the users effectively, and offer repairment and learning opportunities. We discuss how these four dimensions are mutually constitutive and conditioned in the context of creator moderation, where the target of governance mechanisms extends beyond the content to creator careers. We then elaborate on how a dynamic transparency perspective could value content creators' digital labor, how transparency design could support creators' learning, as well as implications for transparency design of other creator platforms.
37

Boberg, Svenja, Tim Schatto-Eckrodt, Lena Frischlich, and Thorsten Quandt. "The Moral Gatekeeper? Moderation and Deletion of User-Generated Content in a Leading News Forum". Media and Communication 6, no. 4 (November 8, 2018): 58–69. http://dx.doi.org/10.17645/mac.v6i4.1493.

Abstract:
Participatory formats in online journalism offer increased options for user comments to reach a mass audience, also enabling the spreading of incivility. As a result, journalists feel the need to moderate offensive user comments in order to prevent the derailment of discussion threads. However, little is known about the principles on which forum moderation is based. The current study aims to fill this void by examining 673,361 user comments (including all incoming and rejected comments) of the largest newspaper forum in Germany (Spiegel Online) in terms of the moderation decision, the topic addressed, and the use of insulting language using automated content analysis. The analyses revealed that the deletion of user comments is a frequently used moderation strategy. Overall, more than one-third of comments studied were rejected. Further, users mostly engaged with political topics. The usage of swear words was not a reason to block a comment, except when offenses were used in connection with politically sensitive topics. We discuss the results in light of the necessity for journalists to establish consistent and transparent moderation strategies.
38

Lupu, Yonatan, Richard Sear, Nicolas Velásquez, Rhys Leahy, Nicholas Johnson Restrepo, Beth Goldberg, and Neil F. Johnson. "Offline events and online hate". PLOS ONE 18, no. 1 (January 25, 2023): e0278511. http://dx.doi.org/10.1371/journal.pone.0278511.

Abstract:
Online hate speech is a critical and worsening problem, with extremists using social media platforms to radicalize recruits and coordinate offline violent events. While much progress has been made in analyzing online hate speech, no study to date has classified multiple types of hate speech across both mainstream and fringe platforms. We conduct a supervised machine learning analysis of 7 types of online hate speech on 6 interconnected online platforms. We find that offline trigger events, such as protests and elections, are often followed by increases in types of online hate speech that bear seemingly little connection to the underlying event. This occurs on both mainstream and fringe platforms, despite moderation efforts, raising new research questions about the relationship between offline events and online speech, as well as implications for online content moderation.
39

Li, Hanlin, Brent Hecht, and Stevie Chancellor. "Measuring the Monetary Value of Online Volunteer Work". Proceedings of the International AAAI Conference on Web and Social Media 16 (May 31, 2022): 596–606. http://dx.doi.org/10.1609/icwsm.v16i1.19318.

Abstract:
Online volunteers are a crucial labor force that keeps many for-profit systems afloat (e.g. social media platforms and online review sites). Despite their substantial role in upholding highly valuable technological systems, online volunteers have no way of knowing the value of their work. This paper uses content moderation as a case study and measures its monetary value to make apparent volunteer labor’s value. Using a novel dataset of private logs generated by moderators, we use linear mixed-effect regression and estimate that Reddit moderators worked a minimum of 466 hours per day in 2020. These hours are worth 3.4 million USD based on the median hourly wage for comparable content moderation services in the U.S. We discuss how this information may inform pathways to alleviate the one-sided relationship between technology companies and online volunteers.
40

Katsaros, Matthew, Kathy Yang, and Lauren Fratamico. "Reconsidering Tweets: Intervening during Tweet Creation Decreases Offensive Content". Proceedings of the International AAAI Conference on Web and Social Media 16 (May 31, 2022): 477–87. http://dx.doi.org/10.1609/icwsm.v16i1.19308.

Abstract:
The proliferation of harmful and offensive content is a problem that many online platforms face today. One of the most common approaches for moderating offensive content online is via the identification and removal after it has been posted, increasingly assisted by machine learning algorithms. More recently, platforms have begun employing moderation approaches which seek to intervene prior to offensive content being posted. In this paper, we conduct an online randomized controlled experiment on Twitter to evaluate a new intervention that aims to encourage participants to reconsider their offensive content and, ultimately, seeks to reduce the amount of offensive content on the platform. The intervention prompts users who are about to post harmful content with an opportunity to pause and reconsider their Tweet. We find that users in our treatment prompted with this intervention posted 6% fewer offensive Tweets than non-prompted users in our control. This decrease in the creation of offensive content can be attributed not just to the deletion and revision of prompted Tweets - we also observed a decrease in both the number of offensive Tweets that prompted users create in the future and the number of offensive replies to prompted Tweets. We conclude that interventions allowing users to reconsider their comments can be an effective mechanism for reducing offensive content online.
41

Meggyesfalvi, Boglárka. "Policing harmful content on social media platforms". Belügyi Szemle 69, no. 6 (special issue) (December 1, 2021): 26–38. http://dx.doi.org/10.38146/bsz.spec.2021.6.2.

Abstract:
Social media content moderation is an important area to explore, as the number of users and the amount of content are rapidly increasing every year. As an effect of the COVID-19 pandemic, people of all ages around the world spend proportionately more time online. While the internet undeniably brings many benefits, the need for effective online policing is even greater now, as the risk of exposure to harmful content grows. In this paper, the aim is to understand the context of how harmful content - such as posts containing child sexual abuse material, terrorist propaganda or explicit violence - is policed online on social media platforms, and how it could be improved. It is intended in this assessment to outline the difficulties in defining and regulating the growing amount of harmful content online, which includes looking at relevant current legal frameworks at development. It is noted that the subjectivity and complexity in moderating content online will remain by the very nature of the subject. It is discussed and critically analysed whose responsibility it should be to manage toxic online content. It is argued that an environment in which all stakeholders (including supranational organisations, states, law enforcement agencies, companies and users) maximise their participation and cooperation should be created in order to effectively ensure online safety. Acknowledging the critical role human content moderators play in keeping social media platforms safe online spaces, considerations about their working conditions are raised. They are essential stakeholders in policing (legal and illegal) harmful content; therefore, they have to be treated better for humanistic and practical reasons. Recommendations are outlined such as trying to prevent harmful content from entering social media platforms in the first place, providing moderators better access to mental health support, and using more available technological tools.
42

Tran, Chau, Kaylea Champion, Benjamin Mako Hill, and Rachel Greenstadt. "The Risks, Benefits, and Consequences of Prepublication Moderation: Evidence from 17 Wikipedia Language Editions". Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (November 7, 2022): 1–25. http://dx.doi.org/10.1145/3555225.

Abstract:
Many online communities rely on postpublication moderation, where contributors, even those perceived as risky, are allowed to publish material immediately and where moderation takes place after the fact. An alternative arrangement involves moderating content before publication. A range of communities have argued against prepublication moderation by suggesting that it makes contributing less enjoyable for new members and that it will distract established community members with extra moderation work. We present an empirical analysis of the effects of a prepublication moderation system called FlaggedRevs that was deployed by several Wikipedia language editions. We used panel data from 17 large Wikipedia editions to test a series of hypotheses related to the effect of the system on activity levels and contribution quality. We found that the system was very effective at keeping low-quality contributions from ever becoming visible. Although there is some evidence that the system discouraged participation among users without accounts, our analysis suggests that the system's effects on contribution volume and quality were moderate at most. Our findings imply that concerns regarding the major negative effects of prepublication moderation systems on contribution quality and project productivity may be overstated.
43

Liya Nikmah Jazhila, Imam Bonjol Juhari, Kun Wazis, and Mohd Aashif bin Ismail. "Nahdlatul Ulama's Dedication to Promoting Religious Moderation: A Virtual Ethnographic Study of the NU Online Website". Fikri: Jurnal Kajian Agama, Sosial dan Budaya 9, no. 1 (June 30, 2024): 88–103. http://dx.doi.org/10.25217/jf.v9i1.4613.

Abstract:
The rise of hate speech, extremism, radicalism, and the deterioration of interfaith relations has become a prevalent issue within communities, especially in online media. Given the potential for this to cause division, it is crucial to approach the issue with moderation. This research focuses on exploring how the Nahdlatul Ulama Organization presents religious moderation on the NU Online website to promote values of goodness and demonstrate tolerance. The study used qualitative research methods and a virtual ethnography approach to analyze virtual texts and online media records. Data collection centered on selecting content related to religious moderation on NU Online media, and the analysis involved data reduction, display, and drawing conclusions. The research revealed that NU Online's moderate da'wah entails: Narrative of religious moderation as an option to gain Islamic insights, creating a virtual space to promote a calming religious spirit and present Islam as a universal religion ‘raḥmatan lil ālamīn’, and kindly promoting moderate Islam amidst the rapid influx of information in the media disruption era.
44

McInnis, Brian, Leah Ajmani, Lu Sun, Yiwen Hou, Ziwen Zeng, and Steven P. Dow. "Reporting the Community Beat: Practices for Moderating Online Discussion at a News Website". Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–25. http://dx.doi.org/10.1145/3476074.

Abstract:
Due to challenges around low-quality comments and misinformation, many news outlets have opted to turn off commenting features on their websites. The New York Times (NYT), on the other hand, has continued to scale up its online discussion resources to reach large audiences. Through interviews with the NYT moderation team, we present examples of how moderators manage the first ~24 hours of online discussion after a story breaks, while balancing concerns about journalistic credibility. We discuss how managing comments at the NYT is not merely a matter of content regulation, but can involve reporting from the "community beat" to recognize emerging topics and synthesize the multiple perspectives in a discussion to promote community. We discuss how other news organizations, including those lacking moderation resources, might appropriate the strategies and decisions offered by the NYT. Future research should investigate strategies to share and update the information generated about topics in the news through the course of content moderation.
APA, Harvard, Vancouver, ISO, and other styles
45

Moran, Rachel E., Izzi Grasso and Kolina Koltai. "Folk Theories of Avoiding Content Moderation: How Vaccine-Opposed Influencers Amplify Vaccine Opposition on Instagram". Social Media + Society 8, no. 4 (October 2022): 205630512211442. http://dx.doi.org/10.1177/20563051221144252.

Full text
Abstract:
This study analyzes how vaccine-opposed users on Instagram share anti-vaccine content despite facing growing moderation attempts by the platform. Through a thematic analysis of Instagram content (in-feed and ephemeral "stories") of a sample of vaccine-opposed Instagram users, we explore the observable tactics deployed by vaccine-opposed users in their attempts to avoid content moderation and amplify anti-vaccination content. Tactics range from lexical variations to encode vaccine-related keywords, to the creative use of Instagram features and affordances. The emergence of such tactics exists as a type of "folk theorization": the cultivation of non-professional knowledge of how visibility on the platform works. Findings highlight the complications of content moderation as a route to minimizing misinformation, the consequences of algorithmic opacity, and knowledge-building within problematic online communities.
APA, Harvard, Vancouver, ISO, and other styles
46

Link, Daniel, Jie Ling, Jannik Hoffjann and Bernd Hellingrath. "A Semi-Automated Content Moderation Workflow for Humanitarian Situation Assessments". International Journal of Information Systems for Crisis Response and Management 8, no. 2 (April 2016): 31–49. http://dx.doi.org/10.4018/ijiscram.2016040103.

Full text
Abstract:
Although online social media has been recognized as a source of information that is potentially relevant for humanitarian organizations, it has yet to demonstrate a positive impact. The authors argue that relevant information isn't yet incorporated effectively into decision-making because the key role of humanitarian situation assessment experts and their methodologies hasn't been sufficiently recognized and incorporated into information systems design. In particular, the authors focus on the content moderation process (i.e., examining, correcting, and enriching data and controlling its dissemination) and argue that existing systems, which often follow a human-in-the-loop approach, lack either automation support or flexibility. In contrast, they present an interactive, semi-automated content moderation workflow and an instantiating prototype that follows the human-is-the-loop approach and centers on assessment experts. The evaluation of the new system through practitioner interviews and serious games suggests that it offers good compatibility with experts' work practices, moderation quality, and flexibility.
APA, Harvard, Vancouver, ISO, and other styles
47

Langvardt, Kyle. "Regulating Online Content Moderation". SSRN Electronic Journal, 2017. http://dx.doi.org/10.2139/ssrn.3024739.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Frost-Arnold, Karen. "EXPLOITATION IN ONLINE CONTENT MODERATION". AoIR Selected Papers of Internet Research, March 29, 2023. http://dx.doi.org/10.5210/spir.v2022i0.13003.

Full text
Abstract:
This paper presents a normative framework for evaluating the moral and epistemic exploitation that online content moderation workers experience while working for social media companies (often as subcontractors or as Mechanical Turk workers). I argue that the current labor model of commercial content moderation (CCM) is exploitative in ways that inflict profound moral harm and epistemic injustices on the workers. This detailed account of exploitation enables us to see more clearly the contours and causes of the moral and epistemic injustice involved in CCM, and helps us understand precisely why these forms of exploitation are unjust. It also suggests some practical solutions for a more just labor model for the moderation work that shapes our online ecosystem.
APA, Harvard, Vancouver, ISO, and other styles
49

He, Qinglai, Yili Hong and T. S. Raghu. "Platform Governance with Algorithm-Based Content Moderation: An Empirical Study on Reddit". Information Systems Research, August 21, 2024. http://dx.doi.org/10.1287/isre.2021.0036.

Full text
Abstract:
Practice- and Policy-oriented Abstract: Volunteer (human) moderators have been the essential workforce for content moderation to combat growing inappropriate online content. Because volunteer-based content moderation faces challenges in achieving scalable, desirable, and sustainable moderation, many online platforms have started to adopt algorithm-based content moderation tools (bots). However, it is unclear how volunteer moderators react to bot adoption in terms of their community-policing and community-nurturing efforts. Our research collected public moderation records by bots and volunteer moderators from Reddit. Our analysis suggests that bots can augment volunteer moderators. Augmentation results in volunteers shifting their efforts from simple policing work to a broader set of moderation tasks, including policing over subjective rule violations and satisfying the increased needs for community-nurturing activities following the policing actions. This paper has implications for online platform managers looking to scale online activities and explains how volunteers can achieve more effective and sustainable content moderation with the assistance of bots.
APA, Harvard, Vancouver, ISO, and other styles
50

Lee, Edward. "Moderating Content Moderation: A Framework for Nonpartisanship in Online Governance". SSRN Electronic Journal, 2020. http://dx.doi.org/10.2139/ssrn.3705466.

Full text
APA, Harvard, Vancouver, ISO, and other styles