Journal articles on the topic 'Defacement Detection'

Consult the top 18 journal articles for your research on the topic 'Defacement Detection.'

1

Hoang, Xuan Dau, and Ngoc Tuong Nguyen. "Detecting Website Defacements Based on Machine Learning Techniques and Attack Signatures." Computers 8, no. 2 (May 8, 2019): 35. http://dx.doi.org/10.3390/computers8020035.

Abstract:
Defacement attacks have long been considered one of the prime threats to websites and web applications of companies, enterprises, and government organizations. Defacement attacks can bring serious consequences to website owners, including immediate interruption of website operations and damage to the owner's reputation, which may result in huge financial losses. Many solutions have been researched and deployed for monitoring and detecting website defacement attacks, such as those based on checksum comparison, diff comparison, DOM tree analysis, and complicated algorithms. However, some solutions only work on static websites and others demand extensive computing resources. This paper proposes a hybrid defacement detection model based on the combination of machine learning-based detection and signature-based detection. The machine learning-based detection first constructs a detection profile using training data of both normal and defaced web pages. Then, it uses the profile to classify monitored web pages as either normal or attacked. The machine learning-based component can effectively detect defacements on both static and dynamic pages. On the other hand, the signature-based detection is used to boost the model's processing performance for common types of defacements. Extensive experiments show that our model produces an overall accuracy of more than 99.26% and a false positive rate of about 0.27%. Moreover, our model is suitable for implementing a real-time website defacement monitoring system because it does not demand extensive computing resources.
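
As a rough illustration of how such a hybrid could be wired together (not the authors' implementation), the sketch below runs a cheap signature pass over the page source and falls back to a machine-learning classifier; the example patterns, the tf-idf features, and the random-forest model are assumptions.

```python
# Hedged sketch of a hybrid defacement detector: a signature pass first,
# then a machine-learning classifier trained on normal and defaced pages.
# Patterns, features, and model choice are assumptions for illustration.
import re

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Hypothetical signatures for common defacement markers.
SIGNATURE_PATTERNS = [re.compile(p, re.I) for p in
                      (r"hacked\s+by", r"owned\s+by", r"defaced\s+by")]

def signature_hit(html: str) -> bool:
    """Fast first stage: match well-known defacement strings."""
    return any(p.search(html) for p in SIGNATURE_PATTERNS)

def train_classifier(pages, labels):
    """Second stage: classifier built from normal (0) and defaced (1) page text."""
    model = make_pipeline(TfidfVectorizer(max_features=5000),
                          RandomForestClassifier(n_estimators=100))
    return model.fit(pages, labels)

def is_defaced(html: str, model) -> bool:
    if signature_hit(html):                  # cheap path for common defacements
        return True
    return bool(model.predict([html])[0])    # ML path for everything else
```
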
2

Bergadano, Francesco, Fabio Carretto, Fabio Cogno, and Dario Ragno. "Defacement Detection with Passive Adversaries." Algorithms 12, no. 8 (July 29, 2019): 150. http://dx.doi.org/10.3390/a12080150.

Abstract:
A novel approach to defacement detection is proposed in this paper, addressing explicitly the possible presence of a passive adversary. Defacement detection is an important security measure for websites and applications, aimed at avoiding unwanted modifications that would result in significant reputational damage. As in many other anomaly detection contexts, the algorithm used to identify possible defacements is obtained via an adversarial machine learning process. We consider an exploratory setting, where the adversary can observe the detector's alarm-generating behaviour, with the purpose of devising and injecting defacements that will pass undetected. It is then necessary to make the learning process unpredictable, so that the adversary will be unable to replicate it and predict the classifier's behaviour. We achieve this goal by introducing a secret key that our adversary does not know. The key influences the learning process in a number of different ways that are precisely defined in this paper, including the subset of examples and features that are actually used, the time of learning and testing, and the learning algorithm's hyper-parameters. This learning methodology is successfully applied in this context by using the system with both real and artificially modified websites. A year-long experiment is also described, concerning the monitoring of the new website of a major manufacturing company.
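
The sketch below illustrates the general idea of key-dependent learning rather than the paper's exact construction: a secret key seeds the choices of training examples, features, and a hyper-parameter, so a passive observer cannot reproduce the resulting detector. The isolation-forest model and the 50% subsampling are assumptions.

```python
# Hedged sketch: a secret key seeds the random choices of the learning
# process so a passive adversary cannot replicate the trained detector.
# The isolation-forest model and the 50% subsampling are assumptions.
import hashlib
import random

from sklearn.ensemble import IsolationForest

def keyed_rng(secret_key: bytes) -> random.Random:
    """Derive a deterministic RNG from the secret key."""
    seed = int.from_bytes(hashlib.sha256(secret_key).digest()[:8], "big")
    return random.Random(seed)

def train_keyed_detector(X, secret_key: bytes):
    """X: list of numeric feature vectors extracted from normal pages."""
    rng = keyed_rng(secret_key)
    n_features = len(X[0])
    rows = rng.sample(range(len(X)), k=max(1, len(X) // 2))          # keyed example subset
    cols = rng.sample(range(n_features), k=max(1, n_features // 2))  # keyed feature subset
    X_sub = [[X[i][j] for j in cols] for i in rows]
    model = IsolationForest(n_estimators=rng.choice([50, 100, 200]),  # keyed hyper-parameter
                            random_state=rng.randrange(2**31))
    model.fit(X_sub)
    return model, cols    # remember the feature subset for monitoring time
```
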
3

Albalawi, Mariam, Rasha Aloufi, Norah Alamrani, Neaimh Albalawi, Amer Aljaedi, and Adel R. Alharbi. "Website Defacement Detection and Monitoring Methods: A Review." Electronics 11, no. 21 (November 1, 2022): 3573. http://dx.doi.org/10.3390/electronics11213573.

Abstract:
Web attacks, and web defacement attacks in particular, are pressing issues in web security. Website defacement attacks have recently become a main security threat for many organizations and governments that provide web-based services. They can cause huge financial and data losses that badly affect users and website owners and can lead to political and economic problems. Several detection techniques and tools are used to detect and monitor website defacement attacks. However, the existing techniques work on static web pages, dynamic web pages, or both, and still need to address false alarms. Many techniques can detect web defacement: some are based on available online tools and some on comparison and classification techniques, and they are typically evaluated on detection accuracy, against a standard approaching 100%, and on false-alarm rates that should not exceed 1.5% (and never 2%). This paper presents a literature review of previous work related to website defacement, comparing the works based on their accuracy results, the techniques used, and the most efficient techniques.
4

Mondragón, Oscar, Andrés Felipe Mera Arcos, Christian Urcuqui, and Andrés Navarro Cadavid. "Security control for website defacement." Sistemas y Telemática 15, no. 41 (August 1, 2017): 45–55. http://dx.doi.org/10.18046/syt.v15i41.2442.

Abstract:
Cyber-attacks on websites are increasing steadily, affecting the integrity and availability of information, so safeguards that mitigate or reduce the resulting risks to acceptable levels are necessary. Computer incidents have economic and reputational impacts on different organizations. An increase in computer attacks on different organizations has been identified; one of them, with a high reputational impact, is the defacement attack, which consists of the unauthorized modification or alteration of websites (for example, those built on the WordPress CMS), affecting the integrity of their information. This article proposes the development of a model for establishing a security control that performs the containment and reporting of this attack type, which has recently focused on the websites of government entities. The model enables online control of the attack on websites by constantly reading certain parts of the source code, supporting detection and the maintenance of the integrity of information.
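
A minimal sketch of the kind of control described here, assuming a polling monitor that hashes selected parts of the page source and alerts on unexpected changes; the selectors, the interval, and the alerting are placeholders rather than the authors' implementation.

```python
# Hedged sketch of an integrity control: hash selected parts of the page
# source at intervals and report when the fingerprint changes. Selectors,
# interval, and alerting are placeholders.
import hashlib
import time

import requests
from bs4 import BeautifulSoup

WATCHED_SELECTORS = ["title", "div#content"]     # hypothetical monitored regions

def fingerprint(url: str) -> str:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    parts = [str(tag) for sel in WATCHED_SELECTORS for tag in soup.select(sel)]
    return hashlib.sha256("".join(parts).encode("utf-8")).hexdigest()

def monitor(url: str, interval_s: int = 300) -> None:
    baseline = fingerprint(url)
    while True:
        time.sleep(interval_s)
        if fingerprint(url) != baseline:
            print(f"ALERT: possible defacement detected on {url}")  # contain/report here
```
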
5

Cho, Youngho. "Intelligent On-Off Web Defacement Attacks and Random Monitoring-Based Detection Algorithms." Electronics 8, no. 11 (November 13, 2019): 1338. http://dx.doi.org/10.3390/electronics8111338.

Abstract:
Recent cyberattacks armed with various ICT (information and communication technology) techniques are becoming advanced, sophisticated, and intelligent. In the security research field and in practice, it is a common and reasonable assumption that attackers are intelligent enough to discover security vulnerabilities of security defense mechanisms and thus avoid the defense systems' detection and prevention activities. Web defacement attacks refer to a series of attacks that illegally modify web pages for malicious purposes, and they are one of the serious ongoing cyber threats that occur globally. Detection methods against such attacks can be classified into either server-based approaches or client-based approaches, and there are pros and cons for each approach. From our extensive survey of existing client-based defense methods, we found a critical security vulnerability which can be exploited by intelligent attackers. In this paper, we report the security vulnerability in existing client-based detection methods with a fixed monitoring cycle and present novel intelligent on-off web defacement attacks exploiting this vulnerability. Next, we propose to use a random monitoring strategy as a promising countermeasure against such attacks, and design two random monitoring defense algorithms: (1) the Uniform Random Monitoring Algorithm (URMA), and (2) the Attack Damage-Based Random Monitoring Algorithm (ADRMA). In addition, we present extensive experiment results to validate our idea and show the detection performance of our random monitoring algorithms. According to our experiment results, our random monitoring detection algorithms can quickly detect various intelligent web defacement on-off attacks (AM1, AM2, and AM3), and thus do not allow huge attack damage in terms of the number of defaced slots when compared with an existing fixed periodic monitoring algorithm (FPMA).
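
To make the fixed-versus-random monitoring trade-off concrete, the toy simulation below compares a fixed periodic schedule with a uniformly random one (in the spirit of URMA) against an attacker who defaces pages just after each periodic check; the attack model and parameters are illustrative assumptions, not the paper's AM1-AM3 definitions.

```python
# Toy comparison of fixed periodic monitoring with uniform random monitoring
# against an on-off attacker. Parameters and the attack model are assumptions.
import random

def defaced_slots(checks, attack_on, horizon):
    """Count time slots the page stays defaced before a check restores it."""
    check_set, damage, defaced = set(checks), 0, False
    for t in range(horizon):
        if attack_on(t):
            defaced = True
        if t in check_set:
            defaced = False              # detection triggers restoration
        damage += defaced
    return damage

horizon, n_checks = 1000, 50
period = horizon // n_checks
fixed_schedule = list(range(0, horizon, period))
random_schedule = sorted(random.sample(range(horizon), n_checks))
# On-off attacker tuned to deface right after every fixed periodic check.
attack = lambda t: t % period == 1
print("fixed periodic:", defaced_slots(fixed_schedule, attack, horizon))
print("uniform random:", defaced_slots(random_schedule, attack, horizon))
```
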
6

Davanzo, G., E. Medvet, and A. Bartoli. "Anomaly detection techniques for a web defacement monitoring service." Expert Systems with Applications 38, no. 10 (September 2011): 12521–30. http://dx.doi.org/10.1016/j.eswa.2011.04.038.

7

Kung, Ren-Yi, Nai-Hsin Pan, Charles C. N. Wang, and Pin-Chan Lee. "Application of Deep Learning and Unmanned Aerial Vehicle on Building Maintenance." Advances in Civil Engineering 2021 (April 19, 2021): 1–12. http://dx.doi.org/10.1155/2021/5598690.

Abstract:
Several natural and human factors are responsible for the defacement of the external walls and tiles of buildings, and the related deterioration can be a public safety hazard. Active building maintenance and repair processes are therefore essential for ensuring building sustainability. However, conventional inspection methods are time-, cost-, and labor-intensive. This study therefore proposes a convolutional neural network (CNN) model for image-based automated detection and localization of key building defects (efflorescence, spalling, cracking, and defacement). Based on a pretrained VGG-16 CNN classifier, the model applies class activation mapping for object localization. After identifying its limitations in real-life applications, this study determined the model's robustness and its ability to accurately detect and localize defects in the external wall tiles of buildings. For real-time detection and localization, this study applied the model using mobile devices and drones. The results show that the application of deep learning with a UAV can effectively detect various kinds of external wall defects and improve detection efficiency.
8

R, Newlin Shebiah, and Arivazhagan S. "Versatile Defacement Detection by Monitoring Video Sequences Using Deep Learning." European Journal of Engineering Research and Science 4, no. 7 (July 19, 2019): 37–41. http://dx.doi.org/10.24018/ejers.2019.4.7.1396.

Abstract:
The main objective of this paper is to detect vandals and vandalism by monitoring recorded video sequences. Vandalism is one of the most common crimes in society and indirectly affects a country's economy. The proposed algorithm takes as input video from surveillance cameras installed in public places. The video is converted into frames, and the background is subtracted to detect foreground objects. The background-subtracted image contains both human and non-human moving objects. To differentiate human pixels from other moving objects in the video sequence, discriminative features are extracted using a deep architecture and classified with an SVM classifier. Deep features proved to be highly discriminative compared with handcrafted Histogram of Oriented Gradients features. By analyzing a person's dwell time in the restricted scene, their motion pattern over time, and significant changes in the background, an act of vandalism is declared and the person is considered a vandal. The proposed method was evaluated on videos collected from YouTube, including night-time footage, multiple vandals, and car vandals.
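
The sketch below covers only the front end of such a pipeline, i.e. background subtraction to isolate moving objects plus a simple dwell-time rule; the deep feature extractor, the SVM stage, and all thresholds are assumed or omitted.

```python
# Hedged sketch: background subtraction to find moving objects and a simple
# dwell-time rule. Deep features, the SVM stage, and thresholds are assumptions.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def foreground_boxes(frame, min_area=800):
    """Return bounding boxes of moving objects in a single frame."""
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]

def dwell_time_alarm(frames_present: int, fps: float, max_seconds: float = 60.0) -> bool:
    """Flag a possible vandal when a tracked person lingers in the scene too long."""
    return frames_present / fps > max_seconds
```
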
9

R, Newlin Shebiah, and Arivazhagan S. "Versatile Defacement Detection by Monitoring Video Sequences Using Deep Learning." European Journal of Engineering and Technology Research 4, no. 7 (July 19, 2019): 37–41. http://dx.doi.org/10.24018/ejeng.2019.4.7.1396.

Abstract:
The main objective of this paper is to detect vandals and vandalism by monitoring recorded video sequences. Vandalism is one of the most common crimes in society and indirectly affects a country's economy. The proposed algorithm takes as input video from surveillance cameras installed in public places. The video is converted into frames, and the background is subtracted to detect foreground objects. The background-subtracted image contains both human and non-human moving objects. To differentiate human pixels from other moving objects in the video sequence, discriminative features are extracted using a deep architecture and classified with an SVM classifier. Deep features proved to be highly discriminative compared with handcrafted Histogram of Oriented Gradients features. By analyzing a person's dwell time in the restricted scene, their motion pattern over time, and significant changes in the background, an act of vandalism is declared and the person is considered a vandal. The proposed method was evaluated on videos collected from YouTube, including night-time footage, multiple vandals, and car vandals.
10

Ghaleb, Fuad A., Mohammed Alsaedi, Faisal Saeed, Jawad Ahmad, and Mohammed Alasli. "Cyber Threat Intelligence-Based Malicious URL Detection Model Using Ensemble Learning." Sensors 22, no. 9 (April 28, 2022): 3373. http://dx.doi.org/10.3390/s22093373.

Abstract:
Web applications have become ubiquitous for many business sectors due to their platform independence and low operation cost. Billions of users visit these applications to accomplish their daily tasks. However, many of these applications are either vulnerable to web defacement attacks or are created and managed by hackers, as with fraudulent and phishing websites. Detecting malicious websites is essential to prevent the spreading of malware and protect end-users from becoming victims. However, most existing solutions rely on extracting features from the website's content, which can be harmful to the detection machines themselves and is subject to obfuscation. Detecting malicious Uniform Resource Locators (URLs) is safer and more efficient than content analysis. However, the detection of malicious URLs is still not well addressed due to insufficient features and inaccurate classification. This study aims to improve the accuracy of malicious URL detection by designing and developing a cyber threat intelligence-based malicious URL detection model using two-stage ensemble learning. The cyber threat intelligence-based features are extracted from web searches to improve detection accuracy. Reports from cybersecurity analysts and users around the globe can provide important information regarding malicious websites. Therefore, cyber threat intelligence (CTI)-based features extracted from Google searches and Whois websites are used to improve detection performance. The study also proposes a two-stage ensemble learning model that combines the random forest (RF) algorithm for preclassification with a multilayer perceptron (MLP) for final decision making. The trained MLP classifier replaces the majority voting scheme of the three trained random forest classifiers for decision making. The probabilistic output of the weak classifiers of the random forest is aggregated and used as input for the MLP classifier for adequate classification. Results show that the extracted CTI-based features with the two-stage classification outperform other studies' detection models. The proposed CTI-based detection model achieved a 7.8% accuracy improvement and a 6.7% reduction in false-positive rates compared with the traditional URL-based model.
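
A small sketch of the two-stage idea follows: several random forests produce class probabilities that a multilayer perceptron combines for the final decision. The CTI feature extraction from Google and Whois lookups is out of scope, and the bootstrap splits and layer sizes are assumptions.

```python
# Hedged sketch of a two-stage ensemble: several random forests produce
# class probabilities that an MLP combines for the final decision. The
# forests' training splits and the MLP size are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

def train_two_stage(X, y, n_forests=3, seed=0):
    """X: NumPy array of URL feature vectors, y: NumPy array of labels."""
    rng = np.random.default_rng(seed)
    forests = []
    for _ in range(n_forests):
        idx = rng.choice(len(X), size=len(X), replace=True)       # bootstrap sample
        forests.append(RandomForestClassifier(n_estimators=100).fit(X[idx], y[idx]))
    meta_X = np.hstack([rf.predict_proba(X) for rf in forests])   # stacked probabilities
    mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(meta_X, y)
    return forests, mlp

def predict_two_stage(forests, mlp, X):
    meta_X = np.hstack([rf.predict_proba(X) for rf in forests])
    return mlp.predict(meta_X)
```
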
11

Chandak, Rakhi, Manoj Chandak, Pranali Thakare, Ramhari Sathawane, Swapnil Mohod, Runal Bansod, Pranada Deshmukh, and Zareesh Akhtar. "Trending Breakthroughs in the Advances of Detection of Oral Premalignant and Malignant Lesions - A Review." Journal of Evolution of Medical and Dental Sciences 10, no. 28 (July 12, 2021): 2122–27. http://dx.doi.org/10.14260/jemds/2021/433.

Abstract:
Oral cancer is the sixth most common malignant tumour, and it is the leading cause of morbidity and mortality due to its capacity to spread and invade. Oral cancer occurs at different rates in different areas of the world, ranging from 2 to 10 per 100,000 people each year. Oral cancer is prevalent in South Asian nations such as Sri Lanka, India, Pakistan, and Bangladesh. In India, the frequency is 7-17 per 100,000 people each year, with 75,000-80,000 new cases per year. Identifying oral cancer in its early stages has a significant impact on survival rates compared to detecting it later. Despite this, almost half of all diagnosed patients die within five years. A variety of well-established cancer screening programmes have been demonstrated to lower patient morbidity and mortality dramatically. Regular check-ups, which include a thorough inspection of the whole mouth, are critical for detecting malignant and pre-cancerous problems early on. Unfortunately, early detection of oral precancerous and cancerous lesions has proved difficult due to the lesions' asymptomatic nature, doctors' casual approach to benign lesions, and the fact that 50% of patients have regional or distant metastases at the time of diagnosis. Oral cancer is one of the most common cancers that leads to defacement and death. Despite recent advancements in therapeutic modalities, the prognosis has not improved. Patients' mortality rates are positively associated with the point of presentation, with 60% of people diagnosed with late-stage illness. Early diagnosis is important for oral cancer patients' survival, as it decreases morbidity and mortality. According to the World Health Organization, the bulk of oral cancer patients are diagnosed late in the disease's progression, with a mediocre 5-year survival rate of 50%. As a result, careful treatment of oral cancer necessitates early diagnosis and intervention. Surgical biopsy is the gold standard for medical purposes, but it requires clinical assistance. Ease of use, non-invasiveness, and low cost are the norms for any other screening method to be accepted as an alternative to histopathology. The older cancer diagnosis modalities took longer, had more inter-observer bias, and were less descriptive. A standard oral examination with digital palpation is used in traditional techniques of screening for oral potentially malignant disorders and oral cancers. Conventional inspection has been shown to be a poor discriminator of oral mucosal lesions. A variety of visual aids have been developed to help clinicians spot anomalies in the oral mucosa, and in recent years, scientific and clinical developments have aided in the early detection and treatment of this disease. This review reflects on some of the older diagnostic modalities and screening methods for oral cancer diagnosis, as well as some of the recent, more sophisticated techniques. Keywords: Diagnostic Aids, Oral Cancer, Premalignant Lesions.
12

Bartoli, Alberto, Giorgio Davanzo, and Eric Medvet. "A Framework for Large-Scale Detection of Web Site Defacements." ACM Transactions on Internet Technology 10, no. 3 (October 2010): 1–37. http://dx.doi.org/10.1145/1852096.1852098.

13

WEI, Wenhan, and Yigui DENG. "Detection model and method of website defacements based on attributes partial changes." Journal of Computer Applications 33, no. 2 (September 24, 2013): 430–33. http://dx.doi.org/10.3724/sp.j.1087.2013.00430.

14

Kavyashree, K., C. N. Sowmyarani, and P. Dayananda. "Transmission Control Protocol Off Path Exploits in Web." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 3995–98. http://dx.doi.org/10.1166/jctn.2020.9006.

Abstract:
In the current scenario, the network community has faced many potential threats, such as unauthorized access to restricted networks, break-ins through other organizations, making systems unavailable, and so on. An off-path attacker can perform various attacks such as browser page reads, web phishing, website spoofing, and defacement in order to learn the four tuples of a Transmission Control Protocol connection. The attacker can also carry out Cross-Site Scripting to obtain sensitive information on websites, and Cross-Site Request Forgery, which performs further exploits on the web. This helps in detecting the four tuples: sequence number, acknowledgement number, the global IPID counter, and ports.
15

Shi, Wenzhong, Wael Ahmed, Na Li, Wenzheng Fan, Haodong Xiang, and Muyang Wang. "Semantic Geometric Modelling of Unstructured Indoor Point Cloud." ISPRS International Journal of Geo-Information 8, no. 1 (December 27, 2018): 9. http://dx.doi.org/10.3390/ijgi8010009.

Abstract:
A method capable of automatically reconstructing 3D building models with semantic information from the unstructured 3D point cloud of indoor scenes is presented in this paper. This method has three main steps: 3D segmentation using a new hybrid algorithm, room layout reconstruction, and wall-surface object reconstruction using an enriched approach. Unlike existing methods, this method aims to detect, cluster, and model complex structures without having prior scanner or trajectory information. In addition, this method enables the accurate detection of wall-surface “defacements”, such as windows, doors, and virtual openings. Beyond the detection of wall-surface apertures, the detection of closed objects, such as doors, is also possible. Hence, for the first time, the whole 3D modelling process of an indoor scene from a backpack laser scanner (BLS) dataset was achieved and recorded. This novel method was validated using both synthetic data and real data acquired by a developed BLS system for indoor scenes. Evaluating our approach on synthetic datasets achieved a precision of around 94% and a recall of around 97%, while for BLS datasets our approach achieved a precision of around 95% and a recall of around 89%. The results reveal this method to be robust and accurate for 3D indoor modelling.
16

Du, Ruizhong, Yan Gao, and Cui Liu. "Fine-grained Web Service Trust Detection: A Joint Method of Machine Learning and Blockchain." Journal of Web Engineering, July 30, 2022. http://dx.doi.org/10.13052/jwe1540-9589.2157.

Abstract:
Current website defacement detection methods often ignore security and credibility in the detection process. Furthermore, with the gradual development of dynamic websites, false positives and underreports of website defacement have periodically occurred. Therefore, to enhance the credibility of website defacement detection and reduce the false-positive rate and the false-negative rate of website defacement, this paper proposes a fine-grained trust detection scheme called WebTD, which combines machine learning and blockchain. WebTD consists of two parts: an analysis layer and a verification layer. The analysis layer is the key to improving the success rate of website defacement detection. This layer mainly uses the naive Bayes (NB) algorithm to decouple and segment different types of web page content, and then preprocesses the segmented data to establish a complete analysis model. The verification layer is the key to establishing a credible detection mechanism. WebTD develops a new blockchain model and proposes a multi-value verification algorithm to achieve a multilayer detection mechanism for the blockchain. In addition, to quickly locate and repair the defaced data of the website, the Merkle tree (MT) algorithm is used to calculate the preprocessed data. Finally, we evaluate WebTD against two state-of-the-art research schemes. The experimental results and the security analysis show that WebTD not only establishes a credible web service detection mechanism but also keeps the detection success rate above 98%, which can effectively ensure the integrity of the website.
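
As an illustration of the Merkle-tree component only (the NB segmentation and the blockchain layers are not reproduced), the sketch below hashes page segments so a monitor can verify a single root and pinpoint which segment changed; the segment boundaries are assumptions.

```python
# Hedged sketch: per-segment hashes and a Merkle root let a monitor verify a
# page and locate the defaced segment. Segmentation is assumed, not the
# paper's NB-based decoupling.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(segments):
    level = [h(s) for s in segments]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def changed_segments(baseline_segments, current_segments):
    """Indices of segments whose hashes no longer match the baseline."""
    return [i for i, (a, b) in enumerate(zip(baseline_segments, current_segments))
            if h(a) != h(b)]

ok = [b"<header>", b"<main>welcome</main>", b"<footer>"]
now = [b"<header>", b"<main>HACKED</main>", b"<footer>"]
assert merkle_root(ok) != merkle_root(now)       # root check flags tampering
print(changed_segments(ok, now))                 # -> [1], the defaced segment
```
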
17

R, Shalini, and Sasikala S. "Gaussian Kernel Prompted Fuzzy C Means Algorithm with Multi- Object Contouring Method for Segmenting NPDR Features in Diabetic Retinopathy Fundus Images." Global Journal of Science Frontier Research, December 23, 2019, 1–17. http://dx.doi.org/10.34257/gjsfrfvol19is5pg1.

Abstract:
Diabetic retinopathy is an ophthalmic inflammation caused by diabetes that ends in visual defacement if not diagnosed early, and it has two types, namely Non-Proliferative Diabetic Retinopathy (NPDR) and Proliferative Diabetic Retinopathy (PDR). NPDR features are present in the earliest stage, and systematic detection of these features can improve early diagnosis of the disease severity. Several detection methods exist, but their performance is lacking on large datasets. The objective of this study is to detect NPDR features from diabetic retinopathy fundus images in large datasets with a good performance level. The study investigated different fuzzy-based systems and, to meet this objective, proposes the GK_FCM approach, which integrates a Gaussian kernel function into conventional FCM. The execution has four phases: initially, the input image undergoes preprocessing using green channel extraction and a median filter to enhance image quality, and background removal is performed with the extended minima transform technique, mathematical arithmetic operations, and a pixel replacement method to remove the outlier called the fovea (FV).
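
For readers unfamiliar with kernelized fuzzy clustering, the sketch below shows a generic Gaussian-kernel fuzzy c-means update on a vector of pixel intensities; it is a textbook-style kernel FCM under assumed parameters, not the paper's exact GK_FCM formulation.

```python
# Hedged sketch of Gaussian-kernel fuzzy c-means on 1-D pixel intensities
# (e.g. a median-filtered green channel). Generic kernel FCM, not GK_FCM itself.
import numpy as np

def gaussian_kernel(x, v, sigma):
    """K(x_k, v_i) = exp(-(x_k - v_i)^2 / sigma^2), shape (n_pixels, c)."""
    return np.exp(-((x[:, None] - v[None, :]) ** 2) / sigma ** 2)

def kernel_fcm(x, c=3, m=2.0, sigma=50.0, iters=50, seed=0):
    """x: 1-D float array of pixel intensities; returns hard labels and centres."""
    rng = np.random.default_rng(seed)
    v = rng.choice(x, size=c, replace=False)          # initial cluster centres
    for _ in range(iters):
        K = gaussian_kernel(x, v, sigma)
        d = np.maximum(1.0 - K, 1e-12)                # kernel-space distance term
        u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (1.0 / (m - 1.0))).sum(axis=2)
        w = (u ** m) * K                              # membership-weighted kernel
        v = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)   # centre update
    return u.argmax(axis=1), v
```
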
18

Nguyen, Trong Hung, Xuan Dau Hoang, and Duc Dung Nguyen. "Detecting Website Defacement Attacks using Web-page Text and Image Features." International Journal of Advanced Computer Science and Applications 12, no. 7 (2021). http://dx.doi.org/10.14569/ijacsa.2021.0120725.
