Selected scientific literature on the topic "Explicit content detection"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Choose the source type:

Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Explicit content detection".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is included in the metadata.

Journal articles on the topic "Explicit content detection":

1

Marcial Basilio, Jorge Alberto, Gualberto Aguilar Torres, Gabriel Sanchez Perez, Linda Karina Toscano Medina, Hector Manuel Perez Meana, and Enrique Escamilla Hernadez. "Explicit Content Image Detection". Signal & Image Processing: An International Journal 1, no. 2 (December 29, 2010): 47–58. http://dx.doi.org/10.5121/sipij.2010.1205.

2

Qamar Bhatti, Ali, Muhammad Umer, Syed Hasan Adil, Mansoor Ebrahim, Daniyal Nawaz, and Faizan Ahmed. "Explicit Content Detection System: An Approach towards a Safe and Ethical Environment". Applied Computational Intelligence and Soft Computing 2018 (July 4, 2018): 1–13. http://dx.doi.org/10.1155/2018/1463546.

Abstract:
An explicit content detection (ECD) system to detect Not Suitable For Work (NSFW) media (i.e., image/video) content is proposed. The proposed ECD system is based on a residual network (i.e., a deep learning model) which returns a probability indicating the explicitness of the media content. The value is further compared with a defined threshold to decide whether the content is explicit or non-explicit. The proposed system not only differentiates between explicit and non-explicit content but also indicates the degree of explicitness in any media content, i.e., high, medium, or low. In addition, the system identifies media files with tampered extensions and labels them as suspicious. The experimental results show that the proposed model provides an accuracy of ~95% when tested on our image and video datasets.
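The thresholding step described in this abstract can be sketched in a few lines. The cut-off values and degree bands below are illustrative assumptions, not figures from the paper:

```python
def classify_explicitness(prob, threshold=0.5):
    """Turn a model's explicitness probability into a decision plus a degree.

    `threshold` and the degree bands are assumed values for illustration;
    the paper defines its own threshold.
    """
    if prob < threshold:
        return ("non-explicit", None)
    # Degree bands within the explicit range (assumed for this sketch).
    if prob < 0.7:
        degree = "low"
    elif prob < 0.9:
        degree = "medium"
    else:
        degree = "high"
    return ("explicit", degree)
```

In a real system, `prob` would come from the residual network's output layer for a given image or video frame.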
3

Appati, Justice Kwame, Kennedy Yaw Lodonu, and Richmond Chris-Koka. "A Review of Image Analysis Techniques for Adult Content Detection". International Journal of Software Innovation 9, no. 2 (April 2021): 102–21. http://dx.doi.org/10.4018/ijsi.2021040106.

Abstract:
The rapid, unrestricted growth of internet access globally has had negative impacts on children, who are exposed to pornographic content daily. Many parental control strategies have been put in place to protect these children; however, these strategies are usually inspired by political and social interventions. With the availability of computational tools, many automated explicit content detection methods, though having their flaws, have been proposed to support these social interventions. In this study, a review of current automated adult content detectors is presented, with open issues for future research work.
4

Chen, Xiaoyuan, Turki Aljrees, Muhammad Umer, Hanen Karamti, Saba Tahir, Nihal Abuzinadah, Khaled Alnowaiser, Ala’ Abdulmajid Eshmawi, Abdullah Mohamed, and Imran Ashraf. "A novel approach for explicit song lyrics detection using machine and deep ensemble learning models". PeerJ Computer Science 9 (August 30, 2023): e1469. http://dx.doi.org/10.7717/peerj-cs.1469.

Abstract:
The content of music is not always suitable for all ages. Industries that manage music content are looking for ways to help adults determine what is appropriate for children. Lyrics of songs have become increasingly inappropriate for kids and can negatively impact their mental development. However, it is difficult to filter explicit musical content because it is mostly done manually, which is time-consuming and prone to errors. Existing approaches lack the desired accuracy and are complex. This study suggests using a combination of machine learning and deep learning models to automatically screen song lyrics. The proposed model, called ELSTM-VC, combines an extra trees classifier and long short-term memory (LSTM), and its performance is compared to other models. The ELSTM-VC can detect explicit content in English lyrics and can be useful for the music industry. The study used a dataset of 100 songs from Spotify for training, and the results show that the proposed approach effectively detects explicit lyrics. It can censor offensive content for children with 96% accuracy. The performance of the proposed approach is better than existing approaches, including machine learning models and encoding-decoding models.
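The ensemble idea here, combining the outputs of several models (an extra trees classifier and an LSTM in the paper), can be sketched generically. The function below is a plain soft-voting rule over per-model probabilities, not the paper's exact scheme:

```python
def soft_vote(model_probs, threshold=0.5):
    """Average per-model probabilities that a lyric is explicit, then threshold.

    `model_probs` would hold, e.g., the extra-trees and LSTM outputs for one
    song; the 0.5 threshold is an assumed default, not a value from the paper.
    """
    avg = sum(model_probs) / len(model_probs)
    return "explicit" if avg >= threshold else "clean"
```

Soft voting averages confidences rather than counting hard votes, so a very confident model can outweigh an uncertain one.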
5

Muhammad Fadzli, Muhammad Arif Haikal, Mohd Fadzil Abu Hassan, and Norazlin Ibrahim. "Explicit kissing scene detection in cartoon using convolutional long short-term memory". Bulletin of Electrical Engineering and Informatics 11, no. 1 (February 1, 2022): 213–20. http://dx.doi.org/10.11591/eei.v11i1.3542.

Abstract:
The main concern of this study is that certain cartoon content contains explicit scenes, such as kissing, sex, and violence, that are not suitable for kids and may contradict some religions and cultures. There are reasons the film industry does not expel the kissing scene from a cartoon movie: it is categorized as a romance sequence and love scene. These could be a double-edged weapon that ruins an individual's childhood through excessive exposure to explicit content. This paper proposes a deep learning-based classifier to detect kissing scenes in cartoons, using Darknet-19 for frame-level feature extraction, while feature aggregation in the temporal domain uses convolutional long short-term memory (conv-LSTM). The paper also discusses the steps taken to evaluate and analyze the performance of the models. Extensive experiments show that the proposed system achieves excellent results, with 96.43% accuracy in detecting kissing scenes in cartoons. Given this high accuracy, the model is suitable as a kissing-scene filter feature in a digital video player, which may decrease kids' excessive exposure to explicit content.
6

Marcial Basilio, Jorge A., Gualberto Aguilar Torres, Gabriel Sánchez Pérez, Karina Toscano Medina, and Héctor M. Pérez Meana. "Novel method for pornographic image detection using HSV and YCbCr color models". Revista Facultad de Ingeniería Universidad de Antioquia, no. 64 (October 3, 2012): 79–90. http://dx.doi.org/10.17533/udea.redin.13117.

Abstract:
In this paper a novel method for detecting explicit content or pornographic images is proposed. It uses the transformation from RGB (the most common format for images on the Internet) to the HSV or YCbCr color model, together with a proposed threshold for skin detection in the HSV and YCbCr color models. Using the proposed threshold, the image is segmented, and the amount of skin localized in the image is then calculated. The results obtained with the proposed system are compared with two programs that pursue the same goal, Forensic Toolkit 3.1 Explicit Image Detection (FTK 3.1 EID) and Paraben's Porn Detection Stick, two of the most widely used commercial solutions for pornographic image detection. The results reported in this paper were obtained using three sets of images, each consisting of 800 randomly chosen images, of which 400 are natural images and the rest explicit content images; these sets were used to test the proposed system and the two commercial tools. The proposed system achieved 78.75% recognition, with 28% false positives and 14.50% false negatives; FTK 3.1 Explicit Image Detection obtained 72.12% recognition, with 38.50% false positives and 17.25% false negatives; Paraben's Porn Detection Stick achieved 74.25% recognition, with 16% false positives and 35.50% false negatives. Finally, the proposed system proved able to detect the images under study better than two of the software solutions most used by forensic researchers, so the proposed method can be applied in computer forensics and in the detection of pornographic images stored on mass storage devices.
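The skin-segmentation idea is straightforward to reproduce. The sketch below converts RGB to YCbCr with the standard ITU-R BT.601 formulas and uses the Cb/Cr skin box commonly cited in the literature (Chai & Ngan); the paper's own thresholds, and its HSV variant, may differ:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range RGB -> YCbCr conversion (ITU-R BT.601 coefficients)."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin_ycbcr(r, g, b):
    """Skin test using a widely cited Cb/Cr box; not the paper's thresholds."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

def skin_ratio(pixels):
    """Fraction of skin pixels; an image is flagged when this ratio is high."""
    return sum(1 for p in pixels if is_skin_ycbcr(*p)) / len(pixels)
```

A detector in this style thresholds `skin_ratio` over the whole image to decide whether it is a candidate explicit image.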
7

Zhang, Linhao, Li Jin, Xian Sun, Guangluan Xu, Zequn Zhang, Xiaoyu Li, Nayu Liu, Qing Liu, and Shiyao Yan. "TOT: Topology-Aware Optimal Transport for Multimodal Hate Detection". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 4884–92. http://dx.doi.org/10.1609/aaai.v37i4.25614.

Abstract:
Multimodal hate detection, which aims to identify harmful online content such as memes, is crucial for building a wholesome internet environment. Previous work has made enlightening explorations in detecting explicit hate remarks. However, most of these approaches neglect the analysis of implicit harm, which is particularly challenging as explicit text markers and demographic visual cues are often twisted or missing. The leveraged cross-modal attention mechanisms also suffer from the distributional modality gap and lack logical interpretability. To address these semantic gap issues, we propose TOT: a topology-aware optimal transport framework to decipher the implicit harm in the meme scenario, which formulates the cross-modal aligning problem as solutions for optimal transportation plans. Specifically, we leverage an optimal transport kernel method to capture complementary information from multiple modalities. The kernel embedding provides a non-linear transformation into a reproducing kernel Hilbert space (RKHS), which is significant for eliminating the distributional modality gap. Moreover, we perceive the topology information based on aligned representations to conduct bipartite graph path reasoning. The newly achieved state-of-the-art performance on two publicly available benchmark datasets, together with further visual analysis, demonstrates the superiority of TOT in capturing implicit cross-modal alignment.
8

Bekaryan, Lilit. "Lost in “Transl-Hation”: Exploring the Impact of Machine Translation as an Intermediary Tool in Detecting Armenian Hate Speech". Translation Studies: Theory and Practice 3, no. 2 (6) (December 25, 2023): 40–47. http://dx.doi.org/10.46991/tstp/2023.3.2.040.

Abstract:
As the pervasive spread of hate speech continues to pose significant challenges to online communities, detecting and countering hateful content on social media has become a priority. Social media platforms typically use machine translation to identify the hateful content of posts made in languages other than English. While this approach works effectively in identifying explicit hateful content in languages that are predominantly used on social media, its effect is almost insignificant when it comes to Armenian. The present research investigates the effectiveness of machine translation as an intermediary tool in accurately identifying and addressing instances of Armenian hate speech posts retrieved from social networking websites. The study of hate speech posts and comments made by Armenian users in Armenian helps identify that it is often the absence of intricate cultural and linguistic nuances, as well as insufficient contextualized understanding, that impedes hate speech detection in Armenian.
9

Foucambert, Denis, and Jacques Baillé. "Evolution of the missing-letter effect among young readers between ages 5 and 8". Applied Psycholinguistics 32, no. 1 (October 7, 2010): 1–17. http://dx.doi.org/10.1017/s0142716410000263.

Abstract:
In light of the numerous studies on the detection of target letters among adults, it is generally accepted that the missing-letter effect depends both on a given word's frequency in its language and on its role (function vs. content) in a sentence. Following a presentation of several models explaining these observations we analyze the results of a letter-detection task given to 886 French students from kindergarten to second grade. The purpose of the present study is to determine the moment when the sensitivity to content/function word distinction emerges. The results of this study reveal that even if word frequency plays a role in letter detection, the emergence of an ability to extract sentence structure, along the lines of the structural model of reading, is significantly linked to the initial stages of explicit reading instruction.
10

Cela-Conde, Camilo J., Gisèle Marty, Enric Munar, Marcos Nadal, and Lucrecia Burges. "The “Style Scheme” Grounds Perception of Paintings". Perceptual and Motor Skills 95, no. 1 (August 2002): 91–100. http://dx.doi.org/10.2466/pms.2002.95.1.91.

Abstract:
We studied the formation of the style scheme (identification of the style that characterizes an artist) by presenting aesthetic visual stimuli to 100 participants. Participants were Spanish university students who volunteered: 72 women and 28 men of mean age 22.8 yr. Among them, 50 were enrolled in History of Art and 50 in Psychology. Stimuli belonged to different categories—High Art (pictures by well-known artists, like Van Gogh)/Popular Art (decorative pictures like Christmas postcards) and Representational (pictures with explicit meaning content, like a landscape)/Abstract (pictures without explicit meaning content, like Pollock's colored stains). Analysis using Signal Detection Theory techniques focused on how participants discriminate representational and abstract pictures. With High Art stimuli, participants can better discriminate representational paintings than abstract ones. However, the difference in discrimination between representational and abstract pictures diminishes among participants studying History of Art. It seems that prior education in art favors forming style schemes and to some extent enables the participant to detect the "meaning" in High Art abstract paintings.
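The Signal Detection Theory analysis mentioned above rests on the sensitivity index d′ = z(hits) − z(false alarms). A minimal sketch using Python's standard library (the example rates are made up, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index from Signal Detection Theory: z(H) - z(FA)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# A participant who discriminates well has a high d'; chance performance gives 0.
good = d_prime(0.90, 0.10)
chance = d_prime(0.50, 0.50)
```

Rates of exactly 0 or 1 must be corrected (e.g. with a small epsilon) before applying `inv_cdf`, since the inverse CDF is unbounded at the extremes.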

Theses on the topic "Explicit content detection":

1

Vaglio, Andrea. "Leveraging lyrics from audio for MIR". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT027.

Abstract:
Lyrics provide a lot of information about music since they encapsulate much of the semantics of songs. Such information could help users navigate easily through a large collection of songs and be used to recommend new music to them. However, this information is often unavailable in its textual form. To get around this problem, singing voice recognition systems could be used to obtain transcripts directly from the audio. These approaches are generally adapted from speech recognition ones. Speech transcription is a decades-old domain that has lately seen significant advancements due to developments in machine learning techniques. When applied to the singing voice, however, these algorithms provide poor results, and for a number of reasons the process of lyrics transcription remains difficult. In this thesis, we investigate several scientifically and industrially difficult 'Music Information Retrieval' problems by utilizing lyrics information generated straight from audio. The emphasis is on making approaches as relevant in real-world settings as possible. This entails testing them on vast and diverse datasets and investigating their scalability. To do so, a huge publicly available annotated lyrics dataset is used, and several state-of-the-art lyrics recognition algorithms are successfully adapted. We notably present, for the first time, a system that detects explicit content directly from audio. The first research on the creation of a multilingual lyrics-to-audio alignment system is also described. The lyrics-to-audio alignment task is further studied in two experiments quantifying the perception of audio and lyrics synchronization. A novel phonotactic method for language identification is also presented. Finally, we provide the first cover song detection algorithm that makes explicit use of lyrics information extracted from audio.

Book chapters on the topic "Explicit content detection":

1

Vajda, Peter, Ivan Ivanov, Lutz Goldmann, Jong-Seok Lee, and Touradj Ebrahimi. "Robust Duplicate Detection of 2D and 3D Objects". In Methods and Innovations for Multimedia Database Content Management, 96–117. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-1791-9.ch007.

Abstract:
In this paper, the authors analyze their graph-based approach for 2D and 3D object duplicate detection in still images. A graph model is used to represent the 3D spatial information of the object based on the features extracted from training images to avoid explicit and complex 3D object modeling. Therefore, improved performance can be achieved in comparison to existing methods in terms of both robustness and computational complexity. Different limitations of this approach are analyzed by evaluating performance with respect to the number of training images and calculation of optimal parameters in a number of applications. Furthermore, effectiveness of object duplicate detection algorithm is measured over different object classes. The authors’ method is shown to be robust in detecting the same objects even when images with objects are taken from different viewpoints or distances.
2

Khribi, Mohamed Koutheaïr, Mohamed Jemni, and Olfa Nasraoui. "Automatic Personalization in E-Learning Based on Recommendation Systems". In Intelligent and Adaptive Learning Systems, 19–33. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-60960-842-2.ch002.

Abstract:
Web-based learning environments are being increasingly used at a large scale in the education area. This situation has brought a dramatic growth in the amount of educational resources and services incorporated continuously into these systems, and in the related access and usage of this educational content by a diversity of learners. However, the delivery of this educational content is generally done in the same way for all learners, without giving any special attention to their different consumption styles or the differences between their profiles and individual needs. Therefore, providing personalization in e-learning systems has to be considered a necessity and not an option. Recommending suitable links represents an instance of adaptive navigation support technology. E-learning recommender systems are used to locate relevant educational Web objects that better match the learner's profile and interests; this requires the ability of a system to predict the learner's needs and preferences. Therefore, recommendation systems need to use Web mining techniques in one or more phases of the recommendation process, especially in the modelling and pattern discovery phase. Most emergent recommendation systems in e-learning tend to rely on automated detection of students' preferences and needs, since it is more efficient and attractive to provide needed support to students without requesting any explicit information from them. In this chapter, we present an overview of personalization in e-learning based on recommendation systems and Web mining techniques.
3

Wadhwani, Arun Kumar, Sulochana Wadhwani, and Tripty Singh. "Computer Aided Diagnosis System for Breast Cancer Detection". In Medical Imaging, 1002–19. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-0571-6.ch040.

Abstract:
Management of breast cancer in elderly patients is challenging due to a lack of good-quality evidence regarding the role of adjuvant chemotherapy. Mammograms can depict most of the significant changes of breast disease. The primary radiographic signs of breast cancer are masses (their density, site, shape, and borders), spicular lesions, and calcification content. The basic idea is to convert the mammogram image into a 3-D matrix, which is then used to convert the mammogram into a binary image. Several techniques, like detecting cells, filling gaps, dilating gaps, removing borders, smoothing objects, finding structures, and extracting large objects, have been used. Finally, the granulometry of tissues in an image is found without explicitly segmenting (detecting) each object. Compared to existing multiscale enhancement approaches, images processed with this method appear more familiar to radiologists and naturally close to the original mammogram.
4

Wadhwani, Arun Kumar, Sulochana Wadhwani, and Tripty Singh. "Computer Aided Diagnosis System for Breast Cancer Detection". In Advances in Medical Technologies and Clinical Practice, 378–95. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9530-6.ch015.

Abstract:
Management of breast cancer in elderly patients is challenging due to a lack of good-quality evidence regarding the role of adjuvant chemotherapy. Mammograms can depict most of the significant changes of breast disease. The primary radiographic signs of breast cancer are masses (their density, site, shape, and borders), spicular lesions, and calcification content. The basic idea is to convert the mammogram image into a 3-D matrix, which is then used to convert the mammogram into a binary image. Several techniques, like detecting cells, filling gaps, dilating gaps, removing borders, smoothing objects, finding structures, and extracting large objects, have been used. Finally, the granulometry of tissues in an image is found without explicitly segmenting (detecting) each object. Compared to existing multiscale enhancement approaches, images processed with this method appear more familiar to radiologists and naturally close to the original mammogram.
5

Pierzchajlo, Stephen, and Jonas K. Olofsson. "Human olfaction: A view from the top". In Sensory Individuals, 209–24. Oxford University Press, 2023. http://dx.doi.org/10.1093/oso/9780198866305.003.0013.

Abstract:
Abstract Although our understanding of human olfactory perception has increased dramatically, it remains less well understood how olfaction interacts with cognitive processes. This chapter reviews the claim that olfaction predominantly supports ‘novelty detection’, a framework that leaves little room for cognitive representations and processes. It is argued that this framework has several possible limitations. Instead, it is proposed that human olfaction is best understood in terms of its significant reliance on top-down processes; a fundamental role of olfaction is to evaluate predictions that are constructed based on the context, and these predictions may be explicit and cognitive in nature. Olfaction is, according to this view, not limited to passively responding to environmental changes, but actively interacts with cognitive systems to shape goal-directed behaviours. It is proposed that cue-target matching tasks provide useful and flexible experimental designs to study these effects.
6

G. Miathali, Poornima. "Efficient Machine Learning Classifier for Fault Detection in Wireless Sensor Networks". In Wireless Sensor Networks - Research Issues and Effective Smart Solutions [Working Title]. IntechOpen, 2023. http://dx.doi.org/10.5772/intechopen.111462.

Abstract:
The deployment of wireless sensor networks in unpredictable and dangerous conditions makes them prone to software, hardware, and communication errors. Sensors are physical devices deployed in inaccessible environments, which leaves them exposed to malicious interference. Faults occur in the sensed data, and their detection should be precise and rapid to limit the loss. The status of the sensed data should be explicitly determined to guarantee the normal functioning of the sensor network. For fault detection, machine learning classifiers are employed because they are effective at classifying sensed data into faulty and non-faulty data. The faults due to DoS, Probe, R2L, and U2R are considered for implementation. The KDD CUP 99 dataset is chosen for training and testing; it contains 41 features, categorized as content, basic, and TCP features. The required features for each fault category are selected through a recursive feature elimination technique. The performance of the classifiers is measured and evaluated in terms of accuracy, precision, recall, and F-measure. From the experimental results, it is observed that the Random Forest classifier is best suited for wireless sensor network fault detection. The simulation results show that the multi-layer perceptron outperforms the other classifiers with 92% accuracy.
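The recursive feature elimination step mentioned above can be sketched generically: repeatedly score the surviving features and drop the weakest. The scoring callback below stands in for a real importance measure (e.g. refit from a random forest each round); the feature names and scores in the usage note are illustrative, not the KDD CUP 99 results:

```python
def recursive_feature_elimination(features, importance, n_keep):
    """Drop the least important feature one at a time until n_keep remain.

    `importance(selected)` must return one score per surviving feature;
    a real pipeline would refit a classifier here on each round.
    """
    selected = list(features)
    while len(selected) > n_keep:
        scores = importance(selected)
        worst = min(range(len(selected)), key=lambda i: scores[i])
        selected.pop(worst)
    return selected
```

For example, with static scores `{"duration": 0.40, "flag": 0.10, "src_bytes": 0.30, "count": 0.20}` and `n_keep=2`, the loop removes "flag" and then "count", keeping the two highest-scoring features.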
7

Marks-Tarlow, Terry. "Intuition in a Nonlinear World". In Chaos and Nonlinear Psychology, 243–61. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780190465025.003.0013.

Abstract:
This chapter argues that intuition is an emergent, nonverbal form of awareness that arises immediately, effortlessly, and automatically from bottom-up (nonconscious to conscious) processing within the implicit realm of perception, memory, and learning. Intuition likely evolved from social instincts within the mammalian brain that enable us to cope with a chaotic world filled with incomplete information and unpredictable situations. Conscious deliberation, which involves slower processing within the explicit realm, operates by focusing in on detail, stripping away context, and reducing wholes into component parts. By contrast, intuition is holistic and integrative, considering the big picture and full context. Intuition also facilitates fractal-like pattern detection by which the whole of things is perceived within the parts of experience. Within the field of clinical psychology, intuition is a central ingredient for deep, emergent, and unpredictable transformation. Clinical intuition is especially effective under conditions of uncertainty, urgency, and emergency. More generally, intuition promotes creativity, nuance, and fluidity in dealing with the unique dynamics of this person, with this developmental history, in this moment, with this therapist.
8

Kruger-Robbins, Benjamin. "“Holy Fruit Salad, Batman!”". In Camp TV of the 1960s, 154—C7P100. Oxford University Press, New York, 2023. http://dx.doi.org/10.1093/oso/9780197650745.003.0008.

Abstract:
Abstract This chapter considers how Batman (1966–68) functioned as a part of the American Broadcasting Company’s (ABC) competitive drive against the other two major networks, the National Broadcasting Company (NBC) and the Columbia Broadcasting System (CBS), to push boundaries of acceptable denotative and connotative queer visibility in prime-time television. While academics and popular critics frequently cite William Dozier’s TV adaptation of Bob Kane’s comic series as a canonically camp, gay-inflected text, this chapter analyzes reception, production, and distribution artifacts surrounding Batman in conjunction with those relating to explicitly gay-themed episodes of the short-lived detective procedural N.Y.P.D. (1967–69). Such comparison reveals a dual branding strategy that positioned ABC as an emerging venue for “quality” adult gay content while tightening the network’s grip on queer-tinged “adolescent” entertainment. Ultimately, ABC’s camp stylization across its programming formed part of a complex and occasionally affirmative, if also socially problematic, overture to “new” audience demographics.
9

Vinayakumar, R., K. P. Soman, and Prabaharan Poornachandran. "Evaluation of Recurrent Neural Network and its Variants for Intrusion Detection System (IDS)". In Deep Learning and Neural Networks, 295–316. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0414-7.ch018.

Abstract:
This article describes how sequential data modeling is a relevant task in cybersecurity. Sequences are attributed temporal characteristics either explicitly or implicitly. Recurrent neural networks (RNNs) are a subset of artificial neural networks (ANNs) which have emerged as a powerful, principled approach to learning dynamic temporal behaviors over arbitrary-length, large-scale sequence data. Furthermore, stacked recurrent neural networks (S-RNNs) have the potential to learn complex temporal behaviors quickly, including sparse representations. To leverage this, the authors model network traffic as a time series, particularly transmission control protocol / internet protocol (TCP/IP) packets in a predefined time range, with a supervised learning method, using millions of known good and bad network connections. To find the best architecture, the authors complete a comprehensive review of various RNN architectures, with their network parameters and network structures. As a test bed, they use the existing benchmark Defense Advanced Research Projects Agency (DARPA) / Knowledge Discovery and Data Mining (KDD) Cup '99 intrusion detection (ID) contest data set to show the efficacy of these various RNN architectures. All the experiments with deep learning architectures are run up to 1000 epochs with a learning rate in the range [0.01-0.5] on GPU-enabled TensorFlow, and experiments with traditional machine learning algorithms are done using Scikit-learn. Experiments with families of RNN architectures achieved a low false positive rate in comparison to the traditional machine learning classifiers. The primary reason is that RNN architectures are able to store information for long-term dependencies over time-lags and to adjust with successive connection sequence information. In addition, the effectiveness of RNN architectures is shown for the UNSW-NB15 data set.
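Modeling traffic as a time series, as described here, amounts to slicing a stream of per-packet feature vectors into fixed-length sequences before feeding them to an RNN. A minimal windowing sketch (window size and step are tuning choices, not values from the article):

```python
def windows(seq, size, step=1):
    """Slice a long per-packet feature sequence into fixed-length windows.

    Each window becomes one training sequence for the RNN; `size` and
    `step` are arbitrary illustrative parameters.
    """
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, step)]
```

With `step=1` the windows overlap heavily (maximizing training sequences); setting `step=size` yields disjoint windows instead.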
10

Dickscheid, Timo, Sarah Haas, Sebastian Bludau, Philipp Glock, Marcel Huysegoms and Katrin Amunts. "Towards 3D Reconstruction of Neuronal Cell Distributions from Histological Human Brain Sections". In Future Trends of HPC in a Disruptive Scenario. IOS Press, 2019. http://dx.doi.org/10.3233/apc190016.

Abstract:
Mapping the microscopical organization of the human cerebral cortex provides a basis for multimodal brain atlases, and is indispensable for allocating functional imaging, physiological, connectivity, molecular, or genetic data to anatomically well specified structural entities of human brain organization at micrometer resolution. The analysis of histological sections is still considered a “gold standard” in brain mapping, and compared with other maps, e.g. from neuroimaging studies [1]. But while the spatial patterns of neuronal cells are inherently three-dimensional, such microscopic analysis is usually performed in individual 2D sections. Here we propose an HPC-based workflow that aims to recover the three-dimensional context from a stack of histological sections stained for neuronal cell bodies, imaged under a light microscope. Our aim is to align image data in consecutive sections at micrometer resolution, where the texture is dominated by small objects like cell bodies that often do not extend across sections. Therefore we cannot apply classical intensity-based image registration, where similarity of neighboring images is optimized at the pixel level. Our main contribution is a procedure to explicitly detect and match vessel-like structures in the brain tissue, guiding a feature-based image registration algorithm to reconstruct regions of interest in the brain in 3D and recover the distribution of neuronal cells. To replace erroneous information in corrupted tissue areas, we further propose a simple predictive algorithm which generates realistic cell detections by learning from intact tissue parts in the local surroundings.
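The feature-based registration step described above can be reduced to a toy example: match landmark descriptors between two sections by nearest neighbour, then fit a least-squares affine transform to the matched point pairs. The "vessel" landmarks and descriptors below are synthetic stand-ins, not the authors' detectors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "vessel" landmarks in section A and their descriptors.
pts_a = rng.uniform(0, 100, size=(20, 2))
desc_a = rng.normal(size=(20, 16))

# Section B: same landmarks under a known affine map; descriptors slightly noisy.
A_true = np.array([[0.98, -0.05], [0.05, 0.98]])
t_true = np.array([3.0, -2.0])
pts_b = pts_a @ A_true.T + t_true
desc_b = desc_a + rng.normal(scale=0.01, size=desc_a.shape)

# 1) Match by nearest descriptor (brute force).
d = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
match = d.argmin(axis=1)                    # index in B for each landmark in A

# 2) Fit an affine transform [A | t] to the matched pairs by least squares.
X = np.hstack([pts_a, np.ones((len(pts_a), 1))])   # homogeneous coordinates
params, *_ = np.linalg.lstsq(X, pts_b[match], rcond=None)
A_est, t_est = params[:2].T, params[2]

residual = np.abs(pts_a @ A_est.T + t_est - pts_b[match]).max()
```

A production pipeline would add outlier rejection (e.g. RANSAC) and likely non-rigid refinement; this sketch only shows why matched structures that persist across sections make registration tractable where pixel-level similarity fails.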

Conference papers on the topic "Explicit content detection":

1

Vaglio, Andrea, Romain Hennequin, Manuel Moussallam, Gael Richard and Florence d'Alche-Buc. "Audio-Based Detection of Explicit Content in Music". In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9054278.

2

Chin, Hyojin, Jayong Kim, Yoonjong Kim, Jinseop Shin and Mun Y. Yi. "Explicit Content Detection in Music Lyrics Using Machine Learning". In 2018 IEEE International Conference on Big Data and Smart Computing (BigComp). IEEE, 2018. http://dx.doi.org/10.1109/bigcomp.2018.00085.

3

Dubettier, Adrien, Tanguy Gernot, Emmanuel Giguet and Christophe Rosenberger. "A Comparative Study of Tools for Explicit Content Detection in Images". In 2023 International Conference on Cyberworlds (CW). IEEE, 2023. http://dx.doi.org/10.1109/cw58918.2023.00077.

4

Wang, Supeng, Yuxi Li, Ming Xie, Mingmin Chi, Yabiao Wang, Chengjie Wang and Wenbing Zhu. "Align, Perturb and Decouple: Toward Better Leverage of Difference Information for RSI Change Detection". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/166.

Abstract:
Change detection is a widely adopted technique in remote sensing imagery (RSI) analysis for the discovery of long-term geomorphic evolution. To highlight areas of semantic change, previous efforts mostly pay attention to learning representative feature descriptors of a single image, while the difference information is either modeled with simple difference operations or implicitly embedded via feature interactions. Nevertheless, such difference modeling can be noisy, since it suffers from non-semantic changes and lacks explicit guidance from image content or context. In this paper, we revisit the importance of feature difference for change detection in RSI, and propose a series of operations to fully exploit the difference information: Alignment, Perturbation and Decoupling (APD). Firstly, alignment leverages contextual similarity to compensate for the non-semantic difference in feature space. Next, a difference module trained with semantic-wise perturbation is adopted to learn more generalized change estimators, which in turn bootstraps feature extraction and prediction. Finally, a decoupled dual-decoder structure is designed to predict semantic changes in both content-aware and content-agnostic manners. Extensive experiments are conducted on the LEVIR-CD, WHU-CD and DSIFN-CD benchmarks, demonstrating that our proposed operations bring significant improvement and achieve competitive results under similar comparative conditions. Code is available at https://github.com/wangsp1999/CD-Research/tree/main/openAPD
5

Zheng, Li, Zhenpeng Li, Jian Li, Zhao Li and Jun Gao. "AddGraph: Anomaly Detection in Dynamic Graph Using Attention-based Temporal GCN". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/614.

Abstract:
Anomaly detection in dynamic graphs is critical in many application scenarios, e.g., recommender systems, but it also raises huge challenges due to the highly flexible nature of anomalies and the lack of sufficient labelled data. It is better to learn anomaly patterns by considering all possible features, including structural, content and temporal features, rather than utilizing heuristic rules over partial features. In this paper, we propose AddGraph, a general end-to-end anomalous-edge detection framework using an extended temporal GCN (Graph Convolutional Network) with an attention model, which can capture both the long-term and the short-term patterns in dynamic graphs. In order to cope with insufficient explicit labelled data, we employ negative sampling and margin loss in the training of AddGraph in a semi-supervised fashion. We conduct extensive experiments on real-world datasets, and illustrate that AddGraph can significantly outperform the state-of-the-art competitors in anomaly detection.
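The negative-sampling-with-margin-loss trick mentioned in the abstract can be sketched independently of the GCN itself: score an observed edge and a corrupted (negatively sampled) edge from node embeddings, and incur a hinge penalty unless the corrupted edge looks more anomalous than the real one by a margin. The embeddings and scoring function below are illustrative stand-ins, not AddGraph's actual components.

```python
import numpy as np

rng = np.random.default_rng(2)

n_nodes, dim = 50, 16
emb = rng.normal(size=(n_nodes, dim))        # stand-in node embeddings

def anomaly_score(u, v):
    """Higher = more anomalous; here simply embedding dissimilarity."""
    return float(np.linalg.norm(emb[u] - emb[v]))

def margin_loss(pos_edge, neg_edge, gamma=1.0):
    """Hinge loss: we want score(neg) >= score(pos) + gamma."""
    u, v = pos_edge
    u2, v2 = neg_edge
    return max(0.0, gamma + anomaly_score(u, v) - anomaly_score(u2, v2))

# Negative sampling: corrupt one endpoint of an observed edge.
pos = (0, 1)
neg = (0, int(rng.integers(2, n_nodes)))     # replace v with a random node
loss = margin_loss(pos, neg)
```

Minimizing this loss over many (observed, corrupted) pairs pushes real edges toward low anomaly scores without requiring any explicitly labelled anomalies, which is the semi-supervised aspect the paper exploits.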
6

Yu, Zhiyuan, and Kwun-Lon Ting. "Explicit Dynamics Analysis for Harmonic Drives". In ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/detc2014-34759.

Abstract:
A harmonic drive can exhibit highly nonlinear dynamic behavior. In order to find a way to simulate its operating process and evaluate its performance, this paper introduces different methods of tooth contact analysis (TCA) and finds that explicit dynamics, as a newly used tool for TCA, is the most suitable one, because the harmonic drive’s high contact ratio and uncertain contact boundary match the features of explicit dynamics: an explicit algorithm and trajectory detection to handle contact. A harmonic drive with a new tooth profile has been modeled in Ansys Workbench and solved by the explicit dynamics solver. According to the results, the deformation of the simulation model has been compared with theoretical calculation and experimental observation to make sure that the model reflects the harmonic drive’s elastic behavior correctly, and the nonlinear behavior of the harmonic drive, including the high contact ratio and the output’s hysteresis effect, can be predicted by explicit dynamics. Explicit dynamics can thus offer a new way to simulate a harmonic drive’s working process. As a general case of gear drive, this method could be widely adopted by the gear industry in the future. Furthermore, the contact ratio and root fillet stress results show that the new tooth profile can significantly reduce stress concentration and increase the fatigue life of the harmonic drive.
7

Acharya, Manoj, Anirban Roy, Kaushik Koneripalli, Susmit Jha, Christopher Kanan and Ajay Divakaran. "Detecting Out-Of-Context Objects Using Graph Contextual Reasoning Network". In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/89.

Abstract:
This paper presents an approach for detecting out-of-context (OOC) objects in images. Given an image with a set of objects, our goal is to determine if an object is inconsistent with the contextual relations and to detect the OOC object with a bounding box. In this work, we consider common contextual relations such as co-occurrence relations, the relative size of an object with respect to other objects, and the position of the object in the scene. We posit that contextual cues are useful for determining object labels for in-context objects, and that inconsistent context cues are detrimental to determining object labels for out-of-context objects. To realize this hypothesis, we propose a graph contextual reasoning network (GCRN) to detect OOC objects. GCRN consists of two separate graphs to predict object labels based on the contextual cues in the image: 1) a representation graph to learn object features based on the neighboring objects and 2) a context graph to explicitly capture contextual cues from the neighboring objects. GCRN explicitly captures the contextual cues to improve the detection of in-context objects and identify objects that violate contextual relations. In order to evaluate our approach, we create a large-scale dataset by adding OOC object instances to COCO images. We also evaluate on the recent OCD benchmark. Our results show that GCRN outperforms competitive baselines in detecting OOC objects and correctly detecting in-context objects. Code and data: https://nusci.csl.sri.com/project/trinity-ooc
8

Yu, Zhe, Di Jin, Xiaobao Wang, Yawen Li, Longbiao Wang and Jianwu Dang. "Commonsense Knowledge Enhanced Sentiment Dependency Graph for Sarcasm Detection". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/269.

Abstract:
Sarcasm is widely utilized on social media platforms such as Twitter and Reddit. Sarcasm detection is required for analyzing people's true feelings, since sarcasm is commonly used to convey an emotion opposite to the literal meaning. Syntactic structure is the key to making better use of commonsense when detecting sarcasm. However, it is extremely challenging to effectively and explicitly explore the information implied in syntactic structure and commonsense simultaneously. In this paper, we apply the pre-trained COMET model to generate relevant commonsense knowledge, and explore a novel scenario of constructing a commonsense-augmented sentiment graph and a commonsense-replaced dependency graph for each text. Based on this, a Commonsense Sentiment Dependency Graph Convolutional Network (CSDGCN) framework is proposed to explicitly depict the role of external commonsense and inconsistent expressions over the context for sarcasm detection by interactively modeling the sentiment and dependency information. Experimental results on several benchmark datasets reveal that our proposed method beats the state-of-the-art methods in sarcasm detection and has stronger interpretability.
9

Lall, Pradeep, Prashant Gupta and Kai Goebel. "Failure Mode Clustering in Electronic Assemblies Using Sammon's Mapping With Supervised Training of Perceptrons". In ASME 2011 International Mechanical Engineering Congress and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/imece2011-65833.

Abstract:
An anomaly detection and failure mode classification method has been developed for electronic assemblies with multiple failure modes. The presented prognostic health management method targets the pre-failure space of the electronic assembly life to trigger repair or replacement of impending failures. Presently, health monitoring systems focus on reactive diagnostic detection of failure modes. Examples of diagnostic detection include the built-in self test and on-board diagnostics. In this paper, damage pre-cursors from time-spectral measurements of the electronic assemblies have been measured under applied vibration and shock stimulus. The time-evolution of the spectral content of the damage pre-cursors has been studied using joint time-frequency analysis in a full-field manner on the printed circuit assembly. Frequency moments have been used to build a feature vector. Evolution of the feature vector with damage initiation and progression has been studied under shock and vibration. The feature vector from multiple locations in the board assemblies has been mapped into a de-correlated feature space using Sammon’s mapping. Several chip-scale packages have been studied, with SAC305 and SAC405 lead-free second-level interconnects. Transient strain has been measured during the drop-event using digital image correlation and high-speed cameras operating at 100,000 fps. Continuity has been monitored simultaneously for failure identification. In addition, explicit finite element models have been developed and various kinds of failure modes have been simulated, such as solder ball cracking, trace fracture, package fall-off and solder ball failure. The neural net has been trained using simulated data sets created from error-seeded models with specific failure modes. The neural net has then been used to identify and classify the failure modes in board assemblies experimentally. Supervised learning of a multilayer neural net in conjunction with parity has been used to identify the hard separation boundaries between failure-mode clusters in the de-correlated feature space. The assemblies have been cross-sectioned to verify the identified failure modes. Cross-sections indicate that the experimentally measured failure modes correlate well with the positions of the clusters in the de-correlated feature space.
10

Du, Jiachen, Ruifeng Xu, Yulan He and Lin Gui. "Stance Classification with Target-specific Neural Attention". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/557.

Abstract:
Stance classification, which aims at detecting the stance expressed in text towards a specific target, is an emerging problem in sentiment analysis. A major difference between stance classification and traditional aspect-level sentiment classification is that the identification of stance depends on a target which might not be explicitly mentioned in the text. This indicates that, apart from the text content, the target information is important to stance detection. To this end, we propose a neural network-based model which incorporates target-specific information into stance classification through a novel attention mechanism. Specifically, the attention mechanism is expected to locate the critical parts of the text which are related to the target. Our evaluations on both the English and Chinese Stance Detection datasets show that the proposed model achieves state-of-the-art performance.
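The target-specific attention idea above can be sketched in a few lines: score each word vector against a target vector, softmax the scores into weights, and pool the text as the weighted sum of word vectors. The vectors below are random stand-ins for learned embeddings, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(3)

n_words, dim = 7, 12
words = rng.normal(size=(n_words, dim))     # word representations of the text
target = rng.normal(size=dim)               # representation of the stance target

# Attention: score each word against the target, normalize with softmax.
scores = words @ target
weights = np.exp(scores - scores.max())     # subtract max for numerical stability
weights /= weights.sum()

# Target-aware text representation: attention-weighted sum of word vectors.
text_repr = weights @ words
```

Words whose vectors align with the target receive larger weights, so the pooled representation emphasizes target-relevant parts of the text even when the target word itself never appears; a classifier layer on `text_repr` would then predict the stance label.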

Reports of organizations on the topic "Explicit content detection":

1

Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.

Abstract:
Background Lung cancer is the number one cause of cancer death worldwide.(1) It is the fifth most commonly diagnosed cancer in Australia (12,741 cases diagnosed in 2018) and the leading cause of cancer death.(2) The number of years of potential life lost to lung cancer in Australia is estimated to be 58,450, similar to that of colorectal and breast cancer combined.(3) While tobacco control strategies are most effective for disease prevention in the general population, early detection via low dose computed tomography (LDCT) screening in high-risk populations is a viable option for detecting asymptomatic disease in current (13%) and former (24%) Australian smokers.(4) The purpose of this Evidence Check review is to identify and analyse existing and emerging evidence for LDCT lung cancer screening in high-risk individuals to guide future program and policy planning. Evidence Check questions This review aimed to address the following questions: 1. What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? 2. What is the evidence of potential harms from lung cancer screening for higher-risk individuals? 3. What are the main components of recent major lung cancer screening programs or trials? 4. What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Summary of methods The authors searched the peer-reviewed literature across three databases (MEDLINE, PsycINFO and Embase) for existing systematic reviews and original studies published between 1 January 2009 and 8 August 2019. Fifteen systematic reviews (of which 8 were contemporary) and 64 original publications met the inclusion criteria set across the four questions. Key findings Question 1: What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? 
There is sufficient evidence from systematic reviews and meta-analyses of combined (pooled) data from screening trials (of high-risk individuals) to indicate that LDCT examination is clinically effective in reducing lung cancer mortality. In 2011, the landmark National Lung Screening Trial (NLST, a large-scale randomised controlled trial [RCT] conducted in the US) reported a 20% (95% CI 6.8% – 26.7%; P=0.004) relative reduction in mortality among long-term heavy smokers over three rounds of annual screening. High-risk eligibility criteria were defined as people aged 55–74 years with a smoking history of ≥30 pack-years (years in which a smoker has consumed 20-plus cigarettes each day) and, for former smokers, ≥30 pack-years having quit within the past 15 years.(5) All-cause mortality was reduced by 6.7% (95% CI, 1.2% – 13.6%; P=0.02). Initial data from the second landmark RCT, the NEderlands-Leuvens Longkanker Screenings ONderzoek (known as the NELSON trial), have found an even greater reduction of 26% (95% CI, 9% – 41%) in lung cancer mortality, with full trial results yet to be published.(6, 7) Pooled analyses, including several smaller-scale European LDCT screening trials insufficiently powered in their own right, collectively demonstrate a statistically significant reduction in lung cancer mortality (RR 0.82, 95% CI 0.73–0.91).(8) Despite the reduction in all-cause mortality found in the NLST, pooled analyses of seven trials found no statistically significant difference in all-cause mortality (RR 0.95, 95% CI 0.90–1.00).(8) However, cancer-specific mortality is currently the most relevant outcome in cancer screening trials. These seven trials demonstrated a significantly greater proportion of early stage cancers in LDCT groups compared with controls (RR 2.08, 95% CI 1.43–3.03). Thus, when considering results across mortality outcomes and early stage cancers diagnosed, LDCT screening is considered to be clinically effective.
Question 2: What is the evidence of potential harms from lung cancer screening for higher-risk individuals? The harms of LDCT lung cancer screening include false positive tests and the consequences of unnecessary invasive follow-up procedures for conditions that are eventually diagnosed as benign. While LDCT screening leads to an increased frequency of invasive procedures, it does not result in greater mortality soon after an invasive procedure (in trial settings when compared with the control arm).(8) Overdiagnosis, exposure to radiation, psychological distress and an impact on quality of life are other known harms. Systematic review evidence indicates the benefits of LDCT screening are likely to outweigh the harms. The potential harms are likely to be reduced as refinements are made to LDCT screening protocols through: i) the application of risk predication models (e.g. the PLCOm2012), which enable a more accurate selection of the high-risk population through the use of specific criteria (beyond age and smoking history); ii) the use of nodule management algorithms (e.g. Lung-RADS, PanCan), which assist in the diagnostic evaluation of screen-detected nodules and cancers (e.g. more precise volumetric assessment of nodules); and, iii) more judicious selection of patients for invasive procedures. Recent evidence suggests a positive LDCT result may transiently increase psychological distress but does not have long-term adverse effects on psychological distress or health-related quality of life (HRQoL). With regards to smoking cessation, there is no evidence to suggest screening participation invokes a false sense of assurance in smokers, nor a reduction in motivation to quit. The NELSON and Danish trials found no difference in smoking cessation rates between LDCT screening and control groups. Higher net cessation rates, compared with general population, suggest those who participate in screening trials may already be motivated to quit. 
Question 3: What are the main components of recent major lung cancer screening programs or trials? There are no systematic reviews that capture the main components of recent major lung cancer screening trials and programs. We extracted evidence from original studies and clinical guidance documents and organised this into key groups to form a concise set of components for potential implementation of a national lung cancer screening program in Australia: 1. Identifying the high-risk population: recruitment, eligibility, selection and referral 2. Educating the public, people at high risk and healthcare providers; this includes creating awareness of lung cancer, the benefits and harms of LDCT screening, and shared decision-making 3. Components necessary for health services to deliver a screening program: a. Planning phase: e.g. human resources to coordinate the program, electronic data systems that integrate medical records information and link to an established national registry b. Implementation phase: e.g. human and technological resources required to conduct LDCT examinations, interpretation of reports and communication of results to participants c. Monitoring and evaluation phase: e.g. monitoring outcomes across patients, radiological reporting, compliance with established standards and a quality assurance program 4. Data reporting and research, e.g. audit and feedback to multidisciplinary teams, reporting outcomes to enhance international research into LDCT screening 5. Incorporation of smoking cessation interventions, e.g. specific programs designed for LDCT screening or referral to existing community or hospital-based services that deliver cessation interventions. Most original studies are single-institution evaluations that contain descriptive data about the processes required to establish and implement a high-risk population-based screening program. 
Across all studies there is a consistent message as to the challenges and complexities of establishing LDCT screening programs to attract people at high risk who will receive the greatest benefits from participation. With regards to smoking cessation, evidence from one systematic review indicates the optimal strategy for incorporating smoking cessation interventions into a LDCT screening program is unclear. There is widespread agreement that LDCT screening attendance presents a ‘teachable moment’ for cessation advice, especially among those people who receive a positive scan result. Smoking cessation is an area of significant research investment; for instance, eight US-based clinical trials are now underway that aim to address how best to design and deliver cessation programs within large-scale LDCT screening programs.(9) Question 4: What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Assessing the value or cost-effectiveness of LDCT screening involves a complex interplay of factors including data on effectiveness and costs, and institutional context. A key input is data about the effectiveness of potential and current screening programs with respect to case detection, and the likely outcomes of treating those cases sooner (in the presence of LDCT screening) as opposed to later (in the absence of LDCT screening). Evidence about the cost-effectiveness of LDCT screening programs has been summarised in two systematic reviews. We identified a further 13 studies—five modelling studies, one discrete choice experiment and seven articles—that used a variety of methods to assess cost-effectiveness. Three modelling studies indicated LDCT screening was cost-effective in the settings of the US and Europe. Two studies—one from Australia and one from New Zealand—reported LDCT screening would not be cost-effective using NLST-like protocols. 
We anticipate that, following the full publication of the NELSON trial, cost-effectiveness studies will likely be updated with new data that reduce uncertainty about factors that influence modelling outcomes, including the findings of indeterminate nodules. Gaps in the evidence There is a large and accessible body of evidence as to the effectiveness (Q1) and harms (Q2) of LDCT screening for lung cancer. Nevertheless, there are significant gaps in the evidence about the program components that are required to implement an effective LDCT screening program (Q3). Questions about LDCT screening acceptability and feasibility were not explicitly included in the scope. However, as the evidence is based primarily on US programs and UK pilot studies, the relevance to the local setting requires careful consideration. The Queensland Lung Cancer Screening Study provides feasibility data about clinical aspects of LDCT screening but little about program design. The International Lung Screening Trial is still in the recruitment phase and findings are not yet available for inclusion in this Evidence Check. The Australian Population Based Screening Framework was developed to “inform decision-makers on the key issues to be considered when assessing potential screening programs in Australia”.(10) As the Framework is specific to population-based, rather than high-risk, screening programs, there is a lack of clarity about transferability of criteria. However, the Framework criteria do stipulate that a screening program must be acceptable to “important subgroups such as target participants who are from culturally and linguistically diverse backgrounds, Aboriginal and Torres Strait Islander people, people from disadvantaged groups and people with a disability”.(10) An extensive search of the literature highlighted that there is very little information about the acceptability of LDCT screening to these population groups in Australia. 
Yet they are part of the high-risk population.(10) There are also considerable gaps in the evidence about the cost-effectiveness of LDCT screening in different settings, including Australia. The evidence base in this area is rapidly evolving and is likely to include new data from the NELSON trial and incorporate data about the costs of targeted- and immuno-therapies as these treatments become more widely available in Australia.
