Academic literature on the topic 'Defacement Detection'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Defacement Detection.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Defacement Detection"

1

Hoang, Xuan Dau, and Ngoc Tuong Nguyen. "Detecting Website Defacements Based on Machine Learning Techniques and Attack Signatures." Computers 8, no. 2 (May 8, 2019): 35. http://dx.doi.org/10.3390/computers8020035.

Abstract:
Defacement attacks have long been considered one of the prime threats to websites and web applications of companies, enterprises, and government organizations. Defacement attacks can bring serious consequences to website owners, including the immediate interruption of website operations and damage to the owner's reputation, which may result in huge financial losses. Many solutions have been researched and deployed for monitoring and detecting website defacement attacks, such as those based on checksum comparison, diff comparison, DOM tree analysis, and complicated algorithms. However, some solutions only work on static websites and others demand extensive computing resources. This paper proposes a hybrid defacement detection model based on the combination of machine learning-based detection and signature-based detection. The machine learning-based detection first constructs a detection profile using training data of both normal and defaced web pages. Then, it uses the profile to classify monitored web pages as either normal or attacked. The machine learning-based component can effectively detect defacements on both static and dynamic pages. On the other hand, the signature-based detection is used to boost the model's processing performance for common types of defacements. Extensive experiments show that our model produces an overall accuracy of more than 99.26% and a false positive rate of about 0.27%. Moreover, our model is suitable for implementing a real-time website defacement monitoring system because it does not demand extensive computing resources.
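To make the hybrid idea concrete, the following minimal sketch combines a cheap signature check with a machine-learning fallback; the regex signatures and the character n-gram features are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch (not the paper's exact design) of a hybrid defacement detector:
# a cheap signature check first, then a machine-learning classifier as a fallback.
# The regex signatures and the character n-gram features are illustrative assumptions.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

DEFACEMENT_SIGNATURES = [
    re.compile(r"hacked\s+by", re.IGNORECASE),
    re.compile(r"owned\s+by", re.IGNORECASE),
]

def train_ml_detector(pages, labels):
    """Train the ML component on HTML of normal (0) and defaced (1) pages."""
    model = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(3, 5), max_features=20000),
        RandomForestClassifier(n_estimators=200, random_state=0),
    )
    model.fit(pages, labels)
    return model

def is_defaced(html, ml_model):
    if any(sig.search(html) for sig in DEFACEMENT_SIGNATURES):
        return True                                # stage 1: known defacement signature
    return bool(ml_model.predict([html])[0])       # stage 2: ML classification
```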
2

Bergadano, Francesco, Fabio Carretto, Fabio Cogno, and Dario Ragno. "Defacement Detection with Passive Adversaries." Algorithms 12, no. 8 (July 29, 2019): 150. http://dx.doi.org/10.3390/a12080150.

Abstract:
A novel approach to defacement detection is proposed in this paper, addressing explicitly the possible presence of a passive adversary. Defacement detection is an important security measure for websites and web applications, aimed at avoiding unwanted modifications that would result in significant reputational damage. As in many other anomaly detection contexts, the algorithm used to identify possible defacements is obtained via an Adversarial Machine Learning process. We consider an exploratory setting, where the adversary can observe the detector's alarm-generating behaviour, with the purpose of devising and injecting defacements that will pass undetected. It is then necessary to make the learning process unpredictable, so that the adversary will be unable to replicate it and predict the classifier's behaviour. We achieve this goal by introducing a secret key that our adversary does not know. The key influences the learning process in a number of different ways that are precisely defined in this paper, including the subset of examples and features that are actually used, the time of learning and testing, as well as the learning algorithm's hyper-parameters. This learning methodology is successfully applied in this context, by using the system with both real and artificially modified websites. A year-long experiment is also described, concerning the monitoring of the new website of a major manufacturing company.
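As a rough illustration of key-dependent learning (not the paper's exact procedure), the sketch below uses a secret key to seed which examples, features, and hyper-parameters are used when fitting an anomaly detector; all names, fractions, and the choice of a one-class SVM are assumptions.

```python
# Rough illustration of key-dependent ("keyed") learning: a secret key, unknown to the
# adversary, seeds which training examples, which features, and which hyper-parameter
# values are used, so the detector cannot be replicated from observations alone.
# Names, fractions, and the one-class SVM are assumptions, not the paper's procedure.
import hashlib
import random
import numpy as np
from sklearn.svm import OneClassSVM

def keyed_detector(X, secret_key, feature_fraction=0.7, sample_fraction=0.8):
    seed = int.from_bytes(hashlib.sha256(secret_key.encode()).digest()[:4], "big")
    rng = random.Random(seed)
    n_samples, n_features = X.shape
    feat_idx = sorted(rng.sample(range(n_features), int(feature_fraction * n_features)))
    row_idx = sorted(rng.sample(range(n_samples), int(sample_fraction * n_samples)))
    nu = rng.choice([0.01, 0.05, 0.1])                 # key-dependent hyper-parameter
    model = OneClassSVM(kernel="rbf", nu=nu, gamma="scale")
    model.fit(X[np.ix_(row_idx, feat_idx)])            # fit on the keyed subset
    return model, feat_idx                             # feat_idx is needed at scoring time
```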
3

Albalawi, Mariam, Rasha Aloufi, Norah Alamrani, Neaimh Albalawi, Amer Aljaedi, and Adel R. Alharbi. "Website Defacement Detection and Monitoring Methods: A Review." Electronics 11, no. 21 (November 1, 2022): 3573. http://dx.doi.org/10.3390/electronics11213573.

Abstract:
Web attacks and web defacement attacks are issues in the web security world. Recently, website defacement attacks have become the main security threat for many organizations and governments that provide web-based services. Website defacement attacks can cause huge financial and data losses that badly affect users and website owners and can lead to political and economic problems. Several detection techniques and tools are used to detect and monitor website defacement attacks. However, some of the techniques work only on static web pages, others on dynamic web pages or both, and false alarms remain a concern. Many techniques can detect web defacement; some are based on available online tools and others on comparison and classification techniques, and they are typically evaluated on detection accuracy (approaching 100% for some methods) and false-alarm rates (staying below 1.5%, and never reaching 2%). This paper presents a literature review of previous works related to website defacement, comparing the works based on their accuracy results, the techniques used, and the most efficient techniques.
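Because the surveyed methods are compared mainly on detection accuracy and false-alarm rate, a small worked example of the two metrics may help; the confusion-matrix counts below are invented for illustration.

```python
# Worked example of the two evaluation metrics most reviews compare: overall accuracy
# and the false-alarm (false-positive) rate. The confusion-matrix counts are invented.
tp, fp, tn, fn = 480, 3, 510, 7              # illustrative counts only

accuracy = (tp + tn) / (tp + tn + fp + fn)   # (480 + 510) / 1000 = 0.99
false_alarm_rate = fp / (fp + tn)            # 3 / 513, roughly 0.6%

print(f"accuracy = {accuracy:.3f}, false-alarm rate = {false_alarm_rate:.4f}")
```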
4

Mondragón, Oscar, Andrés Felipe Mera Arcos, Christian Urcuqui, and Andrés Navarro Cadavid. "Security control for website defacement." Sistemas y Telemática 15, no. 41 (August 1, 2017): 45–55. http://dx.doi.org/10.18046/syt.v15i41.2442.

Abstract:
Cyber-attacks on websites are increasing steadily, affecting the integrity and availability of information, so safeguards are needed to mitigate or reduce the resulting risks to acceptable levels. Computer incidents produce economic and reputational impacts on different organizations. An increase in computer attacks on different organizations has been identified; one attack with a high reputational impact is the "defacement" attack, which consists of unauthorized modification or alteration of websites (for example, sites using the WordPress CMS), affecting the integrity of information. This article proposes the development of a model for establishing a security control that contains and reports this type of attack, which currently focuses on the websites of government entities. The model allows online control of attacks on websites by constantly reading certain parts of the source code, enabling detection and preserving the integrity of information.
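A control of this kind can be approximated by periodically hashing selected regions of a page's source and alerting on change, as in the sketch below; the monitored selectors, the polling interval, and the alert hook are assumptions, not the article's implementation.

```python
# Sketch of an integrity control that periodically reads selected parts of a page's
# source and raises an alert when their hash changes. The monitored CSS selectors,
# the polling interval, and the alert hook are all placeholders.
import hashlib
import time
import requests
from bs4 import BeautifulSoup

MONITORED_SELECTORS = ["title", "div#main-content"]    # assumed critical regions

def fingerprint(url):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    parts = []
    for selector in MONITORED_SELECTORS:
        parts.extend(el.get_text(" ", strip=True) for el in soup.select(selector))
    return hashlib.sha256("\n".join(parts).encode("utf-8")).hexdigest()

def monitor(url, interval_seconds=300):
    baseline = fingerprint(url)
    while True:
        time.sleep(interval_seconds)
        if fingerprint(url) != baseline:
            print(f"ALERT: possible defacement of {url}")   # report / contain here
```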
5

Cho, Youngho. "Intelligent On-Off Web Defacement Attacks and Random Monitoring-Based Detection Algorithms." Electronics 8, no. 11 (November 13, 2019): 1338. http://dx.doi.org/10.3390/electronics8111338.

Abstract:
Recent cyberattacks armed with various ICT (information and communication technology) techniques are becoming advanced, sophisticated and intelligent. In the security research field and in practice, it is a common and reasonable assumption that attackers are intelligent enough to discover security vulnerabilities of security defense mechanisms and thus avoid the defense systems' detection and prevention activities. Web defacement attacks refer to a series of attacks that illegally modify web pages for malicious purposes, and are one of the serious ongoing cyber threats that occur globally. Detection methods against such attacks can be classified into either server-based approaches or client-based approaches, and there are pros and cons for each approach. From our extensive survey of existing client-based defense methods, we found a critical security vulnerability which can be exploited by intelligent attackers. In this paper, we report the security vulnerability in existing client-based detection methods with a fixed monitoring cycle and present novel intelligent on-off web defacement attacks exploiting this vulnerability. Next, we propose to use a random monitoring strategy as a promising countermeasure against such attacks, and design two random monitoring defense algorithms: (1) Uniform Random Monitoring Algorithm (URMA), and (2) Attack Damage-Based Random Monitoring Algorithm (ADRMA). In addition, we present extensive experiment results to validate our idea and show the detection performance of our random monitoring algorithms. According to our experiment results, our random monitoring detection algorithms can quickly detect various intelligent web defacement on-off attacks (AM1, AM2, and AM3), and thus do not allow huge attack damage in terms of the number of defaced slots when compared with an existing fixed periodic monitoring algorithm (FPMA).
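The contrast between fixed periodic monitoring and URMA-style random monitoring can be illustrated as follows; check_page() and the interval bounds are placeholders, and this is only a sketch of the scheduling idea, not the paper's algorithms.

```python
# Illustration of the scheduling difference only: a fixed monitoring period can be
# learned by an on-off attacker, whereas a uniformly random gap (URMA-style) cannot
# be predicted. check_page() is a placeholder for any page-integrity check.
import random
import time

def fixed_periodic_monitor(check_page, period=60):
    while True:
        check_page()
        time.sleep(period)              # predictable: attacker can deface between checks

def uniform_random_monitor(check_page, min_gap=10, max_gap=110):
    while True:
        check_page()
        time.sleep(random.uniform(min_gap, max_gap))   # unpredictable next check time
```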
6

Davanzo, G., E. Medvet, and A. Bartoli. "Anomaly detection techniques for a web defacement monitoring service." Expert Systems with Applications 38, no. 10 (September 2011): 12521–30. http://dx.doi.org/10.1016/j.eswa.2011.04.038.

7

Kung, Ren-Yi, Nai-Hsin Pan, Charles C. N. Wang, and Pin-Chan Lee. "Application of Deep Learning and Unmanned Aerial Vehicle on Building Maintenance." Advances in Civil Engineering 2021 (April 19, 2021): 1–12. http://dx.doi.org/10.1155/2021/5598690.

Abstract:
Several natural and human factors are responsible for the defacement of the external walls and tiles of buildings, and the related deterioration can be a public safety hazard. Therefore, active building maintenance and repair processes are essential for ensuring building sustainability. However, conventional inspection methods are time-, cost-, and labor-intensive. Therefore, this study proposes a convolutional neural network (CNN) model for image-based automated detection and localization of key building defects (efflorescence, spalling, cracking, and defacement). Based on a pretrained VGG-16 CNN classifier, the model applies class activation mapping for object localization. After identifying its limitations in real-life applications, this study determined the model's robustness and its ability to accurately detect and localize defects in the external wall tiles of buildings. For real-time detection and localization, this study applied the model using mobile devices and drones. The results show that the application of deep learning with UAVs can effectively detect various kinds of external wall defects and improve detection efficiency.
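As a rough sketch of the kind of transfer-learning setup described (not the authors' exact model), the snippet below fine-tunes a pretrained VGG-16 for four assumed defect classes; class names, hyper-parameters, and the training loop are placeholders.

```python
# Rough transfer-learning sketch (not the authors' exact model): fine-tune a pretrained
# VGG-16 to classify facade images into four assumed defect classes. Class names,
# hyper-parameters, and the training loop are placeholders.
import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["efflorescence", "spalling", "cracking", "defacement"]   # assumed labels

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
for p in model.features.parameters():
    p.requires_grad = False                         # freeze the pretrained convolutional base
model.classifier[6] = nn.Linear(4096, len(CLASSES))   # replace the final 1000-way layer

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# ...train on labelled facade images; localization can then be added, e.g. with
# class activation mapping over the last convolutional feature maps.
```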
8

R, Newlin Shebiah, and Arivazhagan S. "Versatile Defacement Detection by Monitoring Video Sequences Using Deep Learning." European Journal of Engineering Research and Science 4, no. 7 (July 19, 2019): 37–41. http://dx.doi.org/10.24018/ejers.2019.4.7.1396.

Abstract:
The main objective of this paper is to detect vandals and vandalism by monitoring recorded video sequences. Vandalism is one of the most commonly occurring crimes in society and indirectly affects the economy of the country. The proposed algorithm takes as input video from surveillance cameras in public places. The video is converted into frames, and the background is subtracted to detect foreground objects. The background-subtracted image contains both human and non-human moving objects. To differentiate human pixels from other moving objects in the video sequence, discriminative features are extracted using a deep architecture and classified using an SVM classifier. Deep features proved to be highly discriminative when compared with handcrafted Histogram of Oriented Gradients features. By analyzing the dwell time of a person in a restricted scene, their motion pattern over time, and significant changes in the background, an act of vandalism is declared and the person is considered a vandal. The proposed method was evaluated on videos collected from YouTube, including night-time footage, multiple vandals, and car vandalism.
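A simplified version of the described pipeline (background subtraction, deep features per moving region, SVM classification) might look like the sketch below; thresholds, the choice of backbone, and the training data for the SVM are assumptions.

```python
# Simplified version of the described pipeline: background subtraction to isolate moving
# regions, deep features for each region from a pretrained CNN, and an SVM separating
# human from non-human movers. Thresholds, the backbone choice, and the SVM training
# data are assumptions.
import cv2
import torch
from torchvision import models, transforms
from sklearn.svm import SVC

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
to_tensor = transforms.Compose([transforms.ToTensor(), transforms.Resize((224, 224))])

def moving_patches(frame, min_area=1500):
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
    return [frame[y:y + h, x:x + w] for x, y, w, h in boxes]

def deep_feature(patch):
    with torch.no_grad():
        x = to_tensor(cv2.cvtColor(patch, cv2.COLOR_BGR2RGB)).unsqueeze(0)
        return backbone(x).flatten().numpy()

# An SVM trained on labelled human / non-human patches then classifies each mover, e.g.
# svm = SVC(kernel="linear").fit([deep_feature(p) for p in train_patches], train_labels)
```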
9

R, Newlin Shebiah, and Arivazhagan S. "Versatile Defacement Detection by Monitoring Video Sequences Using Deep Learning." European Journal of Engineering and Technology Research 4, no. 7 (July 19, 2019): 37–41. http://dx.doi.org/10.24018/ejeng.2019.4.7.1396.

Abstract:
The main objective of this paper is to detect vandals and vandalism by monitoring recorded video sequences. Vandalism is one of the most commonly occurring crimes in society and indirectly affects the economy of the country. The proposed algorithm takes as input video from surveillance cameras in public places. The video is converted into frames, and the background is subtracted to detect foreground objects. The background-subtracted image contains both human and non-human moving objects. To differentiate human pixels from other moving objects in the video sequence, discriminative features are extracted using a deep architecture and classified using an SVM classifier. Deep features proved to be highly discriminative when compared with handcrafted Histogram of Oriented Gradients features. By analyzing the dwell time of a person in a restricted scene, their motion pattern over time, and significant changes in the background, an act of vandalism is declared and the person is considered a vandal. The proposed method was evaluated on videos collected from YouTube, including night-time footage, multiple vandals, and car vandalism.
10

Ghaleb, Fuad A., Mohammed Alsaedi, Faisal Saeed, Jawad Ahmad, and Mohammed Alasli. "Cyber Threat Intelligence-Based Malicious URL Detection Model Using Ensemble Learning." Sensors 22, no. 9 (April 28, 2022): 3373. http://dx.doi.org/10.3390/s22093373.

Abstract:
Web applications have become ubiquitous in many business sectors due to their platform independence and low operation cost. Billions of users visit these applications to accomplish their daily tasks. However, many of these applications are either vulnerable to web defacement attacks or created and managed by hackers, such as fraudulent and phishing websites. Detecting malicious websites is essential to prevent the spread of malware and to protect end-users from becoming victims. However, most existing solutions rely on extracting features from the website's content, which can be harmful to the detection machines themselves and subject to obfuscation. Detecting malicious Uniform Resource Locators (URLs) is safer and more efficient than content analysis. However, the detection of malicious URLs is still not well addressed due to insufficient features and inaccurate classification. This study aims to improve the accuracy of malicious URL detection by designing and developing a cyber threat intelligence-based malicious URL detection model using two-stage ensemble learning. The cyber threat intelligence-based features are extracted from web searches to improve detection accuracy. Reports from cybersecurity analysts and users around the globe can provide important information regarding malicious websites. Therefore, cyber threat intelligence-based (CTI) features extracted from Google searches and Whois websites are used to improve detection performance. The study also proposes a two-stage ensemble learning model that combines the random forest (RF) algorithm for preclassification with a multilayer perceptron (MLP) for final decision making. The trained MLP classifier replaces the majority voting scheme of the three trained random forest classifiers for decision making. The probabilistic outputs of the weak classifiers of the random forests are aggregated and used as input for the MLP classifier for adequate classification. Results show that the extracted CTI-based features with the two-stage classification outperform other studies' detection models. The proposed CTI-based detection model achieved a 7.8% accuracy improvement and a 6.7% reduction in false-positive rates compared with the traditional URL-based model.
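In the spirit of the two-stage design described (random forests for preclassification, an MLP on their probabilistic outputs for the final decision), here is a minimal sketch; feature extraction from URLs and CTI sources is omitted, and X, y are assumed numeric features and labels.

```python
# Minimal sketch in the spirit of the two-stage design: random forests preclassify, and
# an MLP trained on their probabilistic outputs makes the final decision. Feature
# extraction from URLs and CTI sources is omitted; X and y are assumed numeric features
# and labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def train_two_stage(X, y, n_forests=3, seed=0):
    X_a, X_b, y_a, y_b = train_test_split(X, y, test_size=0.5, random_state=seed)
    forests = [RandomForestClassifier(n_estimators=100, random_state=seed + i).fit(X_a, y_a)
               for i in range(n_forests)]
    meta = np.hstack([rf.predict_proba(X_b) for rf in forests])   # stacked probabilities
    mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                        random_state=seed).fit(meta, y_b)
    return forests, mlp

def predict_two_stage(forests, mlp, X_new):
    meta = np.hstack([rf.predict_proba(X_new) for rf in forests])
    return mlp.predict(meta)
```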

Dissertations / Theses on the topic "Defacement Detection"

1

Medvet, Eric. "Techniques for large-scale automatic detection of web site defacements." Doctoral thesis, Università degli studi di Trieste, 2008. http://hdl.handle.net/10077/2579.

Abstract:
Web site defacement, the process of introducing unauthorized modifications to a web site, is a very common form of attack. This thesis describes the design and experimental evaluation of a framework that may constitute the basis for a defacement detection service capable of monitoring thousands of remote web sites systematically and automatically. With this framework an organization may join the service by simply providing the URL of the resource to be monitored along with the contact point of an administrator. The monitored organization may thus take advantage of the service with just a few mouse clicks, without installing any software locally or changing its own daily operational processes. The main proposed approach is based on anomaly detection and allows monitoring the integrity of many remote web resources automatically while remaining fully decoupled from them, in particular, without requiring any prior knowledge about those resources. During a preliminary learning phase a profile of the monitored resource is built automatically. Then, while monitoring, the remote resource is retrieved periodically and an alert is generated whenever something "unusual" shows up. The thesis discusses the effectiveness of the approach in terms of accuracy of detection, i.e., missed detections and false alarms. The thesis also considers the problem of misclassified readings in the learning set. The effectiveness of the anomaly detection approach, and hence of the proposed framework, rests on the assumption that the profile is computed from a learning set which is not corrupted by attacks; this assumption is often taken for granted. The influence of learning set corruption on the framework's effectiveness is assessed, and a procedure aimed at discovering when a given unknown learning set is corrupted by positive readings is proposed and evaluated experimentally. An approach to automatic defacement detection based on Genetic Programming (GP), an automatic method for creating computer programs by means of artificial evolution, is also proposed and evaluated experimentally. Moreover, a set of techniques that have been used in the literature for designing several host-based or network-based Intrusion Detection Systems are considered and evaluated experimentally in comparison with the proposed approach. Finally, the thesis presents the findings of a large-scale study on reaction time to web site defacement. Several statistics indicate the number of incidents of this sort, but a crucial piece of information is still lacking: the typical duration of a defacement. A two-month monitoring activity was performed over more than 62000 defacements in order to figure out whether and when a reaction to the defacement is taken. It is shown that such time tends to be unacceptably long, in the order of several days, and with a long-tailed distribution.
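The learn-then-monitor idea can be sketched as follows (a simplification of the thesis approach): build a statistical profile of numeric page features during learning, then flag readings that deviate too much; the feature representation and threshold are assumptions.

```python
# Simplified learn-then-monitor sketch: build a statistical profile of numeric page
# features during the learning phase, then flag readings that deviate too much. The
# feature representation and the threshold are assumptions, not the thesis's sensors.
import numpy as np

def build_profile(learning_readings):
    X = np.asarray(learning_readings, dtype=float)
    return X.mean(axis=0), X.std(axis=0) + 1e-9     # per-feature mean and std

def is_anomalous(profile, reading, z_threshold=4.0):
    mean, std = profile
    z = np.abs((np.asarray(reading, dtype=float) - mean) / std)
    return bool((z > z_threshold).any())            # alert if any feature looks unusual
```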
2

Davanzo, Giorgio. "Machine learning in engineering applications." Doctoral thesis, Università degli studi di Trieste, 2011. http://hdl.handle.net/10077/4520.

Abstract:
Nowadays the available computing and information-storage resources have grown to a level that allows huge amounts of data to be easily collected and preserved. However, several organizations still lack the knowledge or the tools to process these data into useful information. In this thesis work we will investigate several issues that can be solved effectively by means of machine learning techniques, ranging from web defacement detection to electricity price forecasting, from Support Vector Machines to Genetic Programming. We will investigate a framework for web defacement detection meant to allow any organization to join the service by simply providing the URLs of the resources to be monitored along with the contact point of an administrator. Our approach is based on anomaly detection and allows monitoring the integrity of many remote web resources automatically while remaining fully decoupled from them, in particular, without requiring any prior knowledge about those resources, thus being an unsupervised system. Furthermore, we will test several machine learning algorithms normally used for anomaly detection on the web defacement detection problem. We will present a scrolling system to be used on mobile devices to provide a more natural and effective user experience on small screens. We detect device motion by analyzing the video stream generated by the camera and then we transform the motion into a scrolling of the content rendered on the screen. This way, the user experiences the device screen like a small movable window on a larger virtual view, without requiring any dedicated motion-detection hardware. As regards information retrieval, we will present an approach to information extraction for multi-page printed documents; the approach is designed for scenarios in which the set of possible document classes, i.e., documents sharing similar content and layout, is large and may evolve over time. Our approach is based on probability: we derived a general form for the probability that a sequence of blocks contains the searched information. A key step in the understanding of printed documents is their classification based on the nature of the information they contain and their layout; we will consider both a static and a dynamic scenario, in which document classes are/are not known a priori and new classes can/cannot appear at any time. Finally, we will move to the edge of machine learning: Genetic Programming. The electric power market is increasingly relying on competitive mechanisms taking the form of day-ahead auctions, in which buyers and sellers submit their bids in terms of prices and quantities for each hour of the next day. We propose a novel forecasting method based on Genetic Programming; a key feature of our proposal is the handling of outliers, i.e., regions of the input space rarely seen during learning.

Book chapters on the topic "Defacement Detection"

1

Hoang, Xuan Dau. "A Website Defacement Detection Method Based on Machine Learning." In Advances in Engineering Research and Application, 116–24. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04792-4_17.

2

Masango, Mfundo, Francois Mouton, Palesa Antony, and Bokang Mangoale. "An Approach for Detecting Web Defacement with Self-healing Capabilities." In Transactions on Computational Science XXXII, 29–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 2018. http://dx.doi.org/10.1007/978-3-662-56672-5_3.

3

Medvet, Eric, and Alberto Bartoli. "On the Effects of Learning Set Corruption in Anomaly-Based Detection of Web Defacements." In Detection of Intrusions and Malware, and Vulnerability Assessment, 60–78. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-73614-1_4.

4

Aikins, Stephen K. "Practical Measures for Securing Government Networks." In Handbook of Research on Public Information Technology, 386–94. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-857-4.ch037.

Abstract:
The modern network and Internet security vulnerabilities expose state and local government networks to numerous threats such as denial of service (DoS) attacks, computer viruses, unauthorized access, confidentiality breaches, and so forth. For example, in June 2005, the state of Delaware saw a spike of 141,000 instances of “suspicious activity” due to a variant of the Mytob worm, which could have brought the state’s network to its knees had appropriate steps not been taken (Jarrett, 2005; National Association of State Chief Information Officers [NASCIO], 2006b). On an average day, the state of Michigan blocks 22,059 spam e-mails, 21,702 e-mail viruses, 4,239 Web defacements, and six remote computer takeover attempts. Delaware fends off nearly 3,000 attempts at entering the state’s network daily (NASCIO, 2006b). Governments have the obligation to manage their information security risks by securing mission-critical internal resources such as financial records and taxpayer sensitive information on their networks. Consequently, public-sector information security officers are faced with the challenge to contain damage from compromised systems, prevent internally and Internet-launched attacks, provide systems for logging and intrusion detection, and build frameworks for administrators to securely manage government networks (Oxlenhandler, 2003). This chapter discusses some of the cost-effective measures needed to address government agency information security vulnerabilities and related threats.

Conference papers on the topic "Defacement Detection"

1

Hoang, Xuan Dau, and Ngoc Tuong Nguyen. "A Multi-layer Model for Website Defacement Detection." In the Tenth International Symposium. New York, New York, USA: ACM Press, 2019. http://dx.doi.org/10.1145/3368926.3369730.

2

Hoang, Xuan Dau. "A Website Defacement Detection Method Based on Machine Learning Techniques." In the Ninth International Symposium. New York, New York, USA: ACM Press, 2018. http://dx.doi.org/10.1145/3287921.3287975.

3

Sendi, Razan. "The Classification and Detection of Malicious URLs Using a Machine Learning Based Approach." In International Petroleum Technology Conference. IPTC, 2022. http://dx.doi.org/10.2523/iptc-22661-ms.

Abstract:
The world wide web has become the most cardinal and salient source of information, and in today's technological era web pages are incessantly being utilized to broadcast pernicious activity over the web. Perpetual technological advancements have made it easy for cybercriminals to exploit the web environment by embedding malicious code in web pages, turning oblivious and innocent users into victims when they visit these harmful pages with the click of a button. Such detrimental and malicious web pages are created by cyber-attackers to promote and host viruses and to carry out frauds, attacks, and scams. Consequently, within an organization, emails containing Uniform Resource Locators (URLs) pose a considerable risk, as these links can give cyber-attackers systematic control. There is a critical need to filter employees' incoming emails according to the maliciousness of the URLs they contain, to protect web users from such threats. Accordingly, the main objective of this paper is to classify and detect malicious URLs using a machine learning-based approach. This will help decrease the number of URL-based attacks that may target organizations via email, protecting web users from phishing, scam, defacement, and malware attacks. The proposed AI framework constructs a multi-classification model that protects organizations from detrimental and malicious web-based attacks by classifying a URL as a defacement, benign, malware, spam, or phishing URL. The general framework of the proposed malicious URL detection system is displayed in Figure 1.
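A minimal sketch of such a multi-class URL classifier is shown below; character n-grams over the raw URL stand in for the richer lexical features a real system would use, and the label set and training data are assumptions.

```python
# Minimal sketch of a multi-class URL classifier of the kind described. Character
# n-grams over the raw URL string stand in for the richer lexical and threat-intelligence
# features a real system would use; the label set and training data are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

CLASSES = ["benign", "defacement", "malware", "phishing", "spam"]   # assumed label set

def train_url_classifier(urls, labels):
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), max_features=50000),
        LogisticRegression(max_iter=1000),
    )
    return model.fit(urls, labels)

# Usage (with assumed training data):
#   model = train_url_classifier(train_urls, train_labels)
#   model.predict(["http://example.com/index.php?page=defaced"])
```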
4

Medvet, Eric, Cyril Fillon, and Alberto Bartoli. "Detection of Web Defacements by means of Genetic Programming." In Third International Symposium on Information Assurance and Security. IEEE, 2007. http://dx.doi.org/10.1109/isias.2007.4299779.

5

Medvet, Eric, Cyril Fillon, and Alberto Bartoli. "Detection of Web Defacements by means of Genetic Programming." In Third International Symposium on Information Assurance and Security. IEEE, 2007. http://dx.doi.org/10.1109/ias.2007.13.

6

Wu, Siyan, Xiaojun Tong, Wei Wang, Guodong Xin, Bailing Wang, and Qi Zhou. "Website Defacements Detection Based on Support Vector Machine Classification Method." In ICCDE 2018: 2018 International Conference on Computing and Data Engineering. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3219788.3219804.

