
Journal articles on the topic 'Webpage'


Consult the top 50 journal articles for your research on the topic 'Webpage.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Liu, Jie, Ling Bai, and Xiang Yang Huang. "A Method Shielding the Chinese Game Webpage Based on Ontology." Advanced Materials Research 219-220 (March 2011): 1454–58. http://dx.doi.org/10.4028/www.scientific.net/amr.219-220.1454.

Abstract:
As online gaming addiction has a serious impact on the healthy growth of young people, this paper presents a method that uses an ontology to shield Chinese game webpages. The method first calculates the weights of feature words from positive example (game) webpages and counter-example webpages in order to establish a domain feature thesaurus and build the ontology; next, based on the positive example webpages, it calculates the weight of each ontology element in the different parts of a webpage to obtain an ontology-element weight database; it then intercepts webpages at the network application layer and identifies candidate webpages using the domain feature thesaurus; finally, it computes semantic relevancy and shields the game webpages among the candidates. Experiments showed that the shielding accuracy reached 98.8%. The principle of the method can be extended to other domains.
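The feature-word weighting at the heart of this approach can be illustrated compactly. The sketch below is a loose, assumption-laden simplification rather than the paper's ontology pipeline: it estimates word weights from positive and counter-example pages and shields a page when its average weighted score crosses a threshold; the example pages, vocabulary, and the 0.5 threshold are invented for illustration.

# A minimal sketch of the keyword-weighting idea behind domain-based page
# shielding: feature-word weights are estimated from positive (game) and
# counter-example pages, and a candidate page is shielded when its weighted
# score crosses a threshold. Word lists, weights, and the 0.5 threshold are
# illustrative assumptions, not the paper's values.
from collections import Counter
import re

def train_feature_weights(positive_pages, negative_pages):
    """Weight each word by how much more often it appears in positive pages."""
    pos = Counter(w for p in positive_pages for w in re.findall(r"\w+", p.lower()))
    neg = Counter(w for p in negative_pages for w in re.findall(r"\w+", p.lower()))
    return {w: c / (c + neg.get(w, 0) + 1e-9) for w, c in pos.items()}

def should_shield(page_text, weights, threshold=0.5):
    words = re.findall(r"\w+", page_text.lower())
    if not words:
        return False
    score = sum(weights.get(w, 0.0) for w in words) / len(words)
    return score >= threshold

weights = train_feature_weights(
    ["play this online game now free server login"],   # positive (game) examples
    ["latest news about the weather and economy"],      # counter-examples
)
print(should_shield("free online game server", weights))   # True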
2

Wang, Dong Mei, Ming Ma, and Yan Sun. "Sensitive Webpage Filter Based on Multiple Filtering." Applied Mechanics and Materials 241-244 (December 2012): 2891–96. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.2891.

Abstract:
In order to improve the accuracy and real-time performance of webpage filtering, a sensitive webpage filter based on multiple filtering stages was designed. First, the URL is obtained from the IE browser’s address bar using BHO technology; second, the webpage text is matched against a sensitive vocabulary database using the SMA algorithm; finally, a sensitive-image detection algorithm that combines face detection, skin detection, skin-texture detection and classification is used to filter sensitive images in the webpage. Simulation results showed that the sensitive webpage filter can effectively intercept and filter sensitive webpages, meeting the accuracy and real-time requirements of webpage filtering.
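Only the text-matching stage lends itself to a short illustration. The snippet below is a simplified stand-in for the SMA multi-pattern matcher described above: it scans page text against a sensitive vocabulary and flags the page once enough terms are found. The vocabulary and the one-hit threshold are assumptions, and the image-filtering stage is not reproduced.

# A minimal sketch of the text-matching stage of a sensitive-webpage filter.
# The sensitive vocabulary and the hit threshold are illustrative assumptions.
SENSITIVE_WORDS = {"gambling", "violence", "narcotics"}   # assumed examples

def sensitive_hits(page_text, vocabulary=SENSITIVE_WORDS):
    text = page_text.lower()
    return [w for w in vocabulary if w in text]

def is_sensitive(page_text, min_hits=1):
    return len(sensitive_hits(page_text)) >= min_hits

print(is_sensitive("an article about online gambling sites"))   # True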
3

Yi, WANG, and DENG Jiamin. "A Technical Communication Approach to Website Technical Translation of Chongqing Manufacturing Enterprises: A Case Study of About Us." Asia-Pacific Journal of Humanities and Social Sciences 2, no. 3 (2022): 130–43. http://dx.doi.org/10.53789/j.1653-0465.2022.0203.016.p.

Abstract:
In the era of globalization and internetization, a growing number of manufacturing enterprises in Chongqing plan to “go global”, and the enterprise webpage, as the online portal of the enterprise, plays an important role. At present, domestic translators’ understanding of webpage translation is still limited to text translation due to the influence of traditional translation ideas, resulting in poor readability and usability of the English webpages of Chongqing manufacturing enterprises. Therefore, this paper compares the English webpages of local manufacturing enterprises in Chongqing with those of well-known foreign manufacturing enterprises. From the perspective of technical communication, and taking the “About Us” page as an example, it explores the relationship between technical communication and the English webpages of manufacturing enterprises, the presentation of webpage text, the translation of text content, and the standards for the logical structure of the text, in order to help translators understand the task and process of webpage technical translation more accurately.
4

Feng, Jian, Yuqiang Qiao, Ou Ye, and Ying Zhang. "Detecting phishing webpages via homology analysis of webpage structure." PeerJ Computer Science 8 (February 1, 2022): e868. http://dx.doi.org/10.7717/peerj-cs.868.

Abstract:
Phishing webpages are often generated by phishing kits or evolved from existing kits, so homology analysis of phishing webpages can help curb their proliferation at the source. Based on the observation that phishing webpages belonging to the same family have similar page structures, a homology detection method based on clustering webpages by structural similarity is proposed. The method consists of two stages. The first stage builds the model: it extracts the structural features and style attributes of webpages from the document structure and vectorizes them, assigns different weights to different features, and measures webpage similarity with a webpage difference index that guides the clustering. The second stage detects the webpages under test: a fingerprint generation algorithm based on double compression generates fingerprints for the cluster centres and for the webpages under test, and bitwise comparison of the fingerprints accelerates detection. Experiments show that, compared with existing methods, the proposed method can accurately locate the family of phishing webpages and detect phishing webpages efficiently.
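The core idea, representing a page by its document structure and grouping pages by similarity, can be sketched briefly. The snippet below is a simplified stand-in for the paper's weighted feature vectors and difference index: it counts HTML tags with the standard-library parser and compares two pages by cosine similarity; the toy pages and any clustering threshold are assumptions.

# A minimal sketch of structural similarity between webpages: each page is
# represented by its HTML tag counts and pages are compared with cosine
# similarity. The weighting, difference index, and fingerprinting of the
# paper are not reproduced.
from html.parser import HTMLParser
from collections import Counter
import math

class TagCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = Counter()
    def handle_starttag(self, tag, attrs):
        self.tags[tag] += 1

def structure_vector(html):
    parser = TagCounter()
    parser.feed(html)
    return parser.tags

def cosine(a, b):
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

page_a = "<html><body><form><input><input></form></body></html>"
page_b = "<html><body><form><input><input><input></form></body></html>"
print(cosine(structure_vector(page_a), structure_vector(page_b)))   # close to 1.0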
5

Belfedhal, Alaa Eddine. "Multi-Modal Deep Learning for Effective Malicious Webpage Detection." Revue d'Intelligence Artificielle 37, no. 4 (2023): 1005–13. http://dx.doi.org/10.18280/ria.370422.

Abstract:
The pervasive threat of malicious webpages, which can lead to financial loss, data breaches, and malware infections, underscores the need for effective detection methods. Conventional techniques for detecting malicious web content primarily rely on URL-based features or features extracted from various webpage components, employing a single feature vector input into a machine learning model for classifying webpages as benign or malicious. However, these approaches insufficiently address the complexities inherent in malicious webpages. To overcome this limitation, a novel Multi-Modal Deep Learning method for malicious webpage detection is proposed in this study. Three types of automatically extracted features, specifically those derived from the URL, the JavaScript code, and the webpage text, are leveraged. Each feature type is processed by a distinct deep learning model, facilitating a comprehensive analysis of the webpage. The proposed method demonstrates a high degree of effectiveness, achieving an accuracy rate of 97.90% and a false negative rate of a mere 2%. The results highlight the advantages of utilizing multi-modal features and deep learning techniques for detecting malicious webpages. By considering various aspects of web content, the proposed method offers improved accuracy and a more comprehensive understanding of malicious activities, thereby enhancing web user security and effectively mitigating the risks associated with malicious webpages.
6

Xu, Shanshan. "The Study of Interpersonal Function of Yellow River Culture Tourism Webpage from the Perspective of the Appraisal Theory." Higher Education and Practice 2, no. 1 (2025): 67–73. https://doi.org/10.62381/h251111.

Abstract:
With the development of multimedia technology, people’s channels for obtaining tourism information have changed tremendously: the tourism webpage has become an important window for understanding Yellow River culture, as well as the fastest and most convenient carrier for communicating with sellers of tourism products. Taking the construction of interpersonal functions as its perspective, appraisal theory as its framework, and Yellow River cultural tourism webpages and foreign cultural tourism webpages as its research objects, this paper builds its own corpus and compares the similarities and differences in how Chinese and foreign cultural tourism webpages construct interpersonal functions, so as to reveal the problems in the interpersonal-function construction of Yellow River cultural tourism webpages and to propose corresponding solution strategies based on the methods used by foreign cultural tourism webpages.
7

Wang, Yilin, Siqing Xue, and Jun Song. "A Malicious Webpage Detection Method Based on Graph Convolutional Network." Mathematics 10, no. 19 (2022): 3496. http://dx.doi.org/10.3390/math10193496.

Abstract:
In recent years, with the rapid development of the Internet and information technology, video websites, shopping websites, and other portals have grown rapidly. However, malicious webpages can disguise themselves as benign websites and steal users’ private information, which seriously threatens network security. Current detection methods for malicious webpages do not fully utilize the syntactic and semantic information in the web source code. In this paper, we propose a GCN-based malicious webpage detection method (GMWD), which constructs a text graph to describe webpage source code and then uses a GCN model to learn the syntactic and semantic correlations within and between webpage source codes. We replace word nodes in the text graph with phrase nodes to better maintain the syntactic and semantic integrity of the webpage source code. In addition, we use the URL links appearing in the source code as auxiliary detection information to further improve the detection accuracy. The experiments showed that the proposed method can achieve 99.86% accuracy and a 0.137% false negative rate, outperforming other related malicious webpage detection methods.
8

Yuk, Simun, and Youngho Cho. "A Time-Based Dynamic Operation Model for Webpage Steganography Methods." Electronics 9, no. 12 (2020): 2113. http://dx.doi.org/10.3390/electronics9122113.

Abstract:
The webpage steganography technique has been used as a covert communication method for various purposes: a sender embeds a secret message into a plain webpage file, such as an HTML file, using one of various steganography methods. To the human eye it is very difficult to distinguish the original webpage (cover webpage) from the modified webpage carrying the secret data (stego webpage), because both are displayed alike in a web browser. In this approach, when two communicating entities want to share a secret message, the sender uploads a stego webpage to a web server or modifies an existing webpage on the server using a webpage steganography method, and the receiver then accesses the stego webpage to download and extract the embedded secret data. Meanwhile, according to our extensive survey, most webpage steganography methods focus on proposing or improving steganography algorithms but do not adequately address how to operate a stego webpage as time passes. If a stego webpage is used statically, so that it does not change and is constantly exposed to web clients until the sender removes it, such a static operation approach will limit or degrade the hiding capacity and undetectability of a webpage steganography method. Motivated by this, we propose a time-based dynamic operation model (TDOM) that improves the performance of existing webpage steganography methods in terms of hiding capacity and undetectability by dynamically replacing the stego webpage with other stego webpages or with the original webpage. In addition, we designed two time-based dynamic operation algorithms, TDOA-C for improving the hiding capacity of existing methods and TDOA-U for improving their undetectability. To validate our model and show the performance of the proposed algorithms, we conducted extensive comparative experiments and numerical analysis by implementing two webpage steganography methods with our TDOM (CCL with TDOA-C and COA with TDOA-C) and testing them in the web environment. According to our experiments and analysis, the proposed algorithms can significantly improve the hiding capacity and undetectability of the two existing webpage steganography methods.
9

Yuan, Xiao Yan. "Research on Search Sorting Algorithm Based on Multi-Dimensional Matching." Advanced Materials Research 926-930 (May 2014): 3195–99. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.3195.

Abstract:
Since current search sorting algorithms cannot find the desired webpages quickly and accurately, a novel search sorting algorithm based on multi-dimensional matching is proposed in this study. The algorithm computes the semantic similarity of search terms based on ontology concepts, and then the relevance between the temporal information of the search terms and the time of the webpage. From these, the relevance of the search term to the content of the webpage is calculated, yielding the most appropriate webpage ordering. Finally, several methods are compared in terms of their average precision and average recall ratios.
10

Jayalakshmi, N., P. Padmaja, and G. Jaya Suma. "Webpage Recommendation System Using Interesting Subgraphs and Laplace Based k-Nearest Neighbor." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 03 (2019): 2053003. http://dx.doi.org/10.1142/s0218001420530031.

Abstract:
An interesting research area that permits the user to mine significant information, called frequent subgraphs, is Graph-Based Data Mining (GBDM). One of the well-known algorithms developed to extract frequent patterns is the GASTON algorithm. Retrieving interesting webpages from log files contributes heavily to various applications. In this work, a webpage recommendation system is proposed that introduces the Chronological Cuckoo Search (Chronological-CS) algorithm and Laplace-correction-based k-Nearest Neighbor (LKNN) to retrieve useful webpages from the interesting webpages. Initially, the W-Gaston algorithm extracts the interesting subgraphs from the log files and provides them to the proposed webpage recommendation system. The interesting subgraphs are then clustered with the proposed Chronological-CS algorithm, developed by integrating the chronological concept into the Cuckoo Search (CS) algorithm, producing various cluster groups. The proposed LKNN algorithm then recommends webpages from the clusters. Simulation of the proposed webpage recommendation algorithm uses data from the MSNBC and weblog databases. The results are compared with various existing webpage recommendation models and analyzed in terms of precision, recall, and F-measure. The proposed webpage recommendation model achieved better performance than the existing models, with values of 0.9194, 0.8947, and 0.86736 for precision, recall, and F-measure, respectively.
11

Sharra, Mae B. Fernandez, Gertrude B. Alpasan Bella, Jane V. Esimos Mary, Karyl P. Maligang Audrey, Mae S. Pagayonan Sheila, and Rose A. Zaragosa April. "A Comparative Study on Webpage Browsing Performance between Proprietary and Open Source Operating Systems on Wireless Networks." INTERNATIONAL JOURNAL OF MULTIDISCIPLINARY RESEARCH AND ANALYSIS 05, no. 01 (2022): 53–62. https://doi.org/10.47191/ijmra/v5-i1-07.

Abstract:
This experimental research study determined and compared the webpage browsing performance of proprietary and open source operating systems on wireless networks. It was intended to reveal the significant differences in webpage browsing performance between proprietary and open source operating systems on wireless networks when classified by hardware specifications and type of web content. The researchers used the JavaScript console of the Google Chrome web browser to determine the time taken for a webpage to fully load. The operating system was the independent variable; the hardware specifications, classified as old system and new system, and the type of web content, classified as static and dynamic webpages, were the intervening variables; and webpage browsing performance was the dependent variable. The statistical tools used were the arithmetic mean and the t-test. The results revealed significant differences in webpage browsing performance between proprietary and open source operating systems on wireless networks when classified by hardware specification and web content: the proprietary and open source operating systems were statistically different when classified by hardware specifications and type of web content.
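For reference, page-load times of the kind measured above can also be captured programmatically. The sketch below reads the browser's Navigation Timing data through Selenium (the study read the values from Chrome's JavaScript console directly); it assumes Selenium and a Chrome driver are installed, and the URL is a placeholder.

# A minimal sketch of recording full page-load time via Navigation Timing.
# Assumes Selenium and a matching Chrome driver are available.
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")   # placeholder URL
load_ms = driver.execute_script(
    "const t = window.performance.timing;"
    "return t.loadEventEnd - t.navigationStart;"   # milliseconds to the load event
)
print(f"Page fully loaded in {load_ms} ms")
driver.quit()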
12

Maity, Ranjan, and Samit Bhattacharya. "A Quantitative Approach to Measure Webpage Aesthetics." International Journal of Technology and Human Interaction 16, no. 2 (2020): 53–68. http://dx.doi.org/10.4018/ijthi.2020040105.

Abstract:
Aesthetics measurement is important in determining and improving the usability of a webpage. Wireframe models, collections of rectangular objects, can approximate the size and positions of the different webpage elements, and studies have shown that the positional geometry of these objects is primarily responsible for determining aesthetics. In this work, the authors propose a computational model for predicting webpage aesthetics based on positional-geometry features. They found that ten out of the thirteen reported features are statistically significant for webpage aesthetics, and using these ten features they developed a computational model for webpage aesthetics prediction based on support vector regression. The wireframe models of 209 webpages were rated by 150 participants, and the average user ratings together with the ten significant feature values were used to train and test the aesthetics prediction model. Five-fold cross-validation shows that the model can predict aesthetics with a Root Mean Square Error (RMSE) of only 0.42.
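The modelling setup described above maps onto a standard regression pipeline. The sketch below shows support vector regression with five-fold cross-validated RMSE; the random matrix stands in for the 209 rated wireframes, and the ten feature columns and SVR hyperparameters are assumptions rather than the authors' values.

# A minimal sketch of SVR-based aesthetics prediction with 5-fold CV RMSE.
# Random data stands in for the wireframe features and user ratings.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((209, 10))      # ten positional-geometry features per webpage
y = rng.random(209) * 10       # average user aesthetics ratings

scores = cross_val_score(SVR(kernel="rbf", C=1.0), X, y,
                         cv=5, scoring="neg_root_mean_squared_error")
print("5-fold RMSE:", -scores.mean())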
13

Cai, Zengyu, Chunchen Tan, Jianwei Zhang, Tengteng Xiao, and Yuan Feng. "An Unhealthy Webpage Discovery System Based on Convolutional Neural Network." International Journal of Digital Crime and Forensics 14, no. 3 (2022): 1–15. http://dx.doi.org/10.4018/ijdcf.315614.

Abstract:
Currently, with the popularity of the internet, people are surrounded by a large number of unhealthy pages, which have a serious impact on the physical and mental health of visitors. To protect the legitimate rights and interests of internet users from infringement and maintain the harmonious and stable development of society, a new unhealthy webpage discovery system is needed. This paper first introduces background knowledge on unhealthy webpages and web crawlers, and then presents the plan and design of the whole system. The system uses a CNN algorithm to classify text and completes the collection and classification of unhealthy information through URL acquisition and URL filtering. The test results show that the unhealthy webpage discovery system based on a convolutional neural network can greatly improve the accuracy of unhealthy webpage discovery and reduce the omission rate, meeting users' needs for unhealthy webpage discovery.
14

Lybrand, Evan, Mary Micciche, Burton Tienken, et al. "1426. Increased Visits to Respiratory Protection Webpages during COVID-19." Open Forum Infectious Diseases 7, Supplement_1 (2020): S719. http://dx.doi.org/10.1093/ofid/ofaa439.1608.

Abstract:
Background: COVID-19 transmission is thought to occur mainly via respiratory droplets produced when an infected person coughs or sneezes. Respiratory protection devices, when properly fitted and used, can prevent this type of illness transmission. Methods: The Centers for Disease Control and Prevention's National Personal Protective Technology Laboratory (NPPTL) within the National Institute of Occupational Safety and Health (NIOSH) is responsible for the certification and approval of respirators for use in occupational settings. NPPTL maintains webpages with information on respiratory protection devices. We monitored the number of webpage views for several NPPTL webpages from January 1, 2020 to May 8, 2020. The number of webpage visits was then compared to significant events during the COVID-19 outbreak as well as the previous year's webpage visits. Results: The landing page for NIOSH-approved N95 respirators received the most visits. This page received a total of 1,637,250 webpage visits with a peak of 63,715 on February 26, 2020, a 38,989% increase from the average daily page visits for that same month in 2019. This occurred on the same day that the White House gave its first televised briefing on COVID-19. The page providing general information on filtering facepiece respirators, including N95 respirators, received 834,186 webpage visits with a peak of 20,520 on April 3, 2020, a 13,659% increase from the average daily webpage visits for that same month in 2019. This occurred on the same date that new CDC recommendations were issued for using cloth face masks in public places. The counterfeit respirator page maintained a steady but small number of webpage visits, which increased from 4,261 on March 27, 2020 to 68,145 on March 29, 2020, as the first reports of fraudulent respirators were being published in the media, and then fell to 18,562 on March 30, 2020 and 13,093 on March 31, 2020. While most webpage visits were from the United States, visits from China and Canada were higher than in previous years. Conclusion: During COVID-19 there was a large increase in the number of webpage visits for respiratory protection information, which coincided with major events and media reports. Accurate, accessible respiratory protection information is an important resource during public health emergencies. Disclosures: All authors: no reported disclosures.
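As a quick sanity check on the percentages quoted above, the arithmetic below works backwards from the reported peak and percent increase to the implied 2019 baseline using the standard percent-increase formula; only the two reported figures are taken from the abstract.

# Percent-increase check: a peak of 63,715 visits described as a 38,989%
# increase over the 2019 daily average implies a baseline of roughly 163
# visits per day.
peak = 63_715
pct_increase = 38_989   # percent

baseline = peak / (1 + pct_increase / 100)
print(f"Implied 2019 daily average: {baseline:.0f} visits")        # ~163
print(f"Check: {(peak - baseline) / baseline * 100:.0f}% increase")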
15

Thangasamy, Vasantha. "Efficacious Hyperlink Based Similarity Measure Using Heterogeneous Propagation of PageRank Scores." International Journal of Information Retrieval Research 9, no. 4 (2019): 36–49. http://dx.doi.org/10.4018/ijirr.2019100104.

Abstract:
Information available on the internet is wide, diverse, and dynamic. Since an enormous amount of information is available online, finding the similarity between webpages using efficient hyperlink analysis is a challenging task. In this article, the researcher proposes an improved PageSim algorithm which measures the importance of a webpage based on the PageRank values of connected webpages. The proposed algorithm therefore uses heterogeneous propagation of the PageRank score, based on the prestige measure of each webpage. The existing and improved PageSim algorithms are implemented on a sample web graph, and real-world citation networks, namely the ZEWAIL Citation Network and the DBLP Citation Network, are used to test and compare them. Using the proposed algorithm, it was found that the similarity score between two different webpages significantly increases with common information features and significantly decreases with distinct factors.
16

Rozi, Muhammad Fakhrur, Seiichi Ozawa, Tao Ban, Sangwook Kim, Takeshi Takahashi, and Daisuke Inoue. "Understanding the Influence of AST-JS for Improving Malicious Webpage Detection." Applied Sciences 12, no. 24 (2022): 12916. http://dx.doi.org/10.3390/app122412916.

Abstract:
JavaScript-based attacks injected into a webpage to perpetrate malicious activities are still the main problem in web security. Recent works have leveraged advances in artificial intelligence by considering many feature representations to improve the performance of malicious webpage detection. However, they did not focus on extracting the intention of the JavaScript content, which is crucial for detecting the maliciousness of a webpage. In this study, we introduce an additional feature extraction process that can capture the intention of the JavaScript content of the webpage. In particular, we developed a framework for obtaining a JavaScript representation based on the abstract syntax tree for JavaScript (AST-JS), which enriches the webpage features for a better detection model. Moreover, we investigated the influence of our proposed feature on improving the model's performance by using the Shapley additive explanation method to define the significance of each feature category compared to our proposed feature. The evaluation shows that adding the AST-JS feature can improve the performance of malicious webpage detection compared to previous work. We also found that AST significantly influences performance, especially for webpages with JavaScript content.
17

Christoper, Yoyogi. "Pengaruh Laman Resmi Kementerian Terhadap Reputasi [The Influence of the Ministry's Official Webpage on Reputation]." Brand Communication 3, no. 2 (2024): 125–37. https://doi.org/10.70704/bc.v3i2.279.

Abstract:
The Ministry of Industry (Kemenperin) is a governmental institution responsible for developing industries in Indonesia. One of the programs initiated by Kemenperin is the National Industrial Information System (SIINAS), aimed at providing industry-related information to the public. This study aims to analyze the influence of the SIINAS Ministry of Industry webpage on reputation through a regression study of the SIINAS program, using a data collection technique involving questionnaires. The research method employed is quantitative, distributing online questionnaires to respondents who have used or accessed the Ministry of Industry webpage and have an understanding of the SIINAS Program. Regression analysis is used to test the relationship between the independent variable (use of the SIINAS Ministry of Industry webpage) and the dependent variable (reputation). This research is expected to contribute to understanding the influence of government webpages, particularly the SIINAS Ministry of Industry webpage, on public perception of institutional reputation.
18

Shen, M., Y. Liu, L. Zhu, X. Du, and J. Hu. "Fine-Grained Webpage Fingerprinting Using Only Packet Length Information of Encrypted Traffic." IEEE Transactions on Information Forensics and Security 16 (2021): 2046–59. https://doi.org/10.1109/TIFS.2020.3046876. Zenodo deposit: https://doi.org/10.5281/zenodo.10963553.

Abstract:
Encrypted web traffic can reveal sensitive information about users, such as their browsing behaviors. Existing studies on encrypted traffic analysis focus on website fingerprinting. We claim that fine-grained webpage fingerprinting, which infers the specific webpages on the same website visited by a victim, allows exploiting more private user information, e.g., shopping interests in an online shopping mall. Since webpages from the same website usually have very similar traffic traces that make them indistinguishable, existing solutions may end up with low accuracy. In this paper, we propose FineWP, a novel fine-grained webpage fingerprinting method. We make the observation that the length information of packets in bidirectional client-server interactions can provide distinctive features for webpage fingerprinting. The extracted features are then fed into traditional machine learning models to train classifiers, which achieve both high accuracy and low training overhead. We collect two real-world traffic datasets and construct closed- and open-world evaluations to verify the effectiveness of FineWP. The experimental results demonstrate that FineWP is superior to the state-of-the-art methods in terms of accuracy, time complexity and stability.
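The general recipe described above, turning the packet-length sequence of one page load into a fixed-length feature vector for a traditional classifier, can be sketched briefly. The statistics, synthetic traces, and random-forest choice below are illustrative assumptions, not FineWP's actual feature set.

# A minimal sketch of packet-length-based webpage fingerprinting with a
# traditional classifier. Positive lengths are outgoing packets, negative
# lengths are incoming; the traces are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def trace_features(lengths):
    out = [l for l in lengths if l > 0]
    inc = [-l for l in lengths if l < 0]
    return [len(out), len(inc), sum(out), sum(inc),
            np.mean(out) if out else 0, np.mean(inc) if inc else 0]

# Two toy "webpages", three captured traces each.
traces = [[310, -1460, -1460, 220], [300, -1460, -1448, 230], [305, -1460, -1460, 215],
          [280, -900, 250, -900],   [290, -880, 240, -910],   [285, -905, 245, -895]]
labels = [0, 0, 0, 1, 1, 1]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit([trace_features(t) for t in traces], labels)
print(clf.predict([trace_features([308, -1460, -1455, 225])]))   # -> [0]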
19

Pleasants, Elizabeth, Sylvia Guendelman, Karen Weidert, and Ndola Prata. "Quality of top webpages providing abortion pill information for Google searches in the USA: An evidence-based webpage quality assessment." PLOS ONE 16, no. 1 (2021): e0240664. http://dx.doi.org/10.1371/journal.pone.0240664.

Abstract:
Background In the United States, the internet is widely used to seek health information. Despite an estimated 18 million Google searches on abortion per year and the demonstrated importance of the abortion pill as an option for pregnancy termination, the top webpage search results for abortion pill searches, as well as the content and quality of those webpages, are not well understood. Methods We used Google’s Custom Search Application Programming Interface (API) to identify the top 10 webpages presented for “abortion pill” searches on August 06, 2018. We developed a comprehensive, evidence-based Family Planning Webpage Quality Assessment Tool (FPWQAT), which was used to assess webpage quality for the five top webpages presenting text-based educational content. Results Of the top webpages for “abortion pill” searches, a plannedparenthood.com page was the top result and scored highest on our assessment (81%), providing high-quality and useable information. The other four webpages, a Wikipedia.com page and three anti-abortion information webpages, scored much lower on our assessment (14%-43%). These four webpages had lower quality of information in less useable formats. The anti-abortion pages also presented a variety of disinformation about the abortion pill. Conclusions Both the lack of accurate clinical content on the majority of top webpages and the concerning disinformation they contained raise concerns about the quality of online abortion pill information, while underlining challenges posed by Google search results to informed choice for consumers. Healthcare providers and consumers must be informed of online abortion pill content that is not based in current clinical evidence, while advocates and policymakers should push for online information that is credible and useable. These changes are imperative given the importance of sound abortion pill information for reproductive decision-making at a time when in-person abortion services are further challenged in the US.
20

Chapman, Lara, Charlotte Brooks, Jem Lawson, Cynthia Russell, and Jo Adams. "Accessibility of online self-management support websites for people with osteoarthritis: A text content analysis." Chronic Illness 15, no. 1 (2017): 27–40. http://dx.doi.org/10.1177/1742395317746471.

Abstract:
Objectives This study assessed accessibility of online self-management support webpages for people with osteoarthritis by considering readability of text and inclusion of images and videos. Methods Eight key search terms developed and agreed with patient and public involvement representatives were entered into the Google search engine. Webpages from the first page of Google search results were identified. Readability of webpage text was assessed using two standardised readability indexes, and the number of images and videos included on each webpage was recorded. Results Forty-nine webpages met the inclusion criteria and were assessed. Only five of the webpages met the recommended reading level for health education literature. Almost half (44.9%) of webpages did not include any informative images to support written information. A minority of the webpages (6.12%) included relevant videos. Discussion Information provided on health webpages aiming to support patients to self-manage osteoarthritis may not be read, understood or used effectively by many people accessing it. Recommendations include using accessible language in health information, supplementing written information with visual resources and reviewing content and readability in collaboration with patient and public involvement representatives.
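For reference, the sketch below implements one standardised readability index of the kind applied in the study, the Flesch-Kincaid grade level; the abstract does not name the two indexes that were used, so this formula is an illustrative stand-in, and the syllable counter is a rough vowel-group heuristic.

# A minimal sketch of a standardised readability index (Flesch-Kincaid grade).
# The syllable count is approximated by counting vowel groups.
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = "Osteoarthritis is a joint condition. Gentle exercise can ease pain."
print(round(flesch_kincaid_grade(sample), 1))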
21

Guercio, Angela, Kathleen A. Stirbens, Joseph Williams, and Charles Haiber. "Addressing Challenges in Web Accessibility for the Blind and Visually Impaired." International Journal of Distance Education Technologies 9, no. 4 (2011): 1–13. http://dx.doi.org/10.4018/ijdet.2011100101.

Abstract:
Searching for relevant information on the web is an important aspect of distance learning. This activity is a challenge for visually impaired distance learners. While sighted people have the ability to filter information in a fast and non sequential way, blind persons rely on tools that process the information in a sequential way. Learning is slowed by screen readers which do not interact well with web pages. This paper introduces WAVES, a tool for the fast retrieval of information in a web page for blind and visually impaired people. The paper describes the WAVES prototype, a system that performs a page restructuring of webpages. The system analyzes webpages, identifies elements of interests from a webpage, evaluates their importance by using semantic information and visual cues, sorts them by importance and uses them to restructure the webpage so that data from the original webpage are presented to the reader in a concise format. A preliminary evaluation test of the prototype system has been performed with a sample set of users. The results of the preliminary test show an increase in speed and accuracy when the WAVES system has been used.
22

Sun, Huiyou, Shuangyuan Li, and Mingqian Jia. "Design and Implementation of 3D Effect Web Page Based on JavaScript Technology." ITM Web of Conferences 25 (2019): 02004. http://dx.doi.org/10.1051/itmconf/20192502004.

Abstract:
With the development of Internet technology, people's production and daily life are greatly influenced, and web design is widely used throughout the Internet. Web design and user experience are inseparable: besides pursuing webpage layout and aesthetics, designers also need to add user-interaction functions, and dynamic-effect technology based on JavaScript can satisfy these interaction requirements. Combining JavaScript dynamic effects with interactive features is an effective way to achieve webpage effects. This paper expounds the application methods of JavaScript and studies, through examples, the application of JavaScript special effects in webpages.
23

Schmidt, Kristi E., and Yili Liu. "Design of Consumer Product Webpages: Experimental Investigations of Aesthetic and Performance Factors." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 18 (2005): 1743–46. http://dx.doi.org/10.1177/154193120504901814.

Abstract:
Consumer products are increasingly being purchased online, yet the surge in e-commerce for the sale of retail products was not paired with widespread design savvy or consideration for usability. In this paper, two experimental investigations of aesthetic and performance factors affecting the design of consumer product webpages yield results that designers can use to help create webpages that are useful, usable, and desirable in the competitive e-commerce environment. This research shows how aesthetic and overall preference, ease of use, and interaction speed vary as a function of webpage link color and style, quantity of webpage information, loading speed, and display complexity. Aesthetic preference increases as column width, display complexity, and loading speed increase, and varies with link style. Ease of use and interaction time also vary with link style.
24

Sankpal, Lata Jaywant, and Suhas H. Patil. "Rider-Rank Algorithm-Based Feature Extraction for Re-ranking the Webpages in the Search Engine." Computer Journal 63, no. 10 (2020): 1479–89. http://dx.doi.org/10.1093/comjnl/bxaa032.

Abstract:
Webpage re-ranking is a challenging task when retrieving webpages based on a user's query. Even though the webpages in search engines are ordered according to the importance of their content, retrieving the necessary documents for an input query is quite difficult. Hence, the webpages available on websites need to be re-ranked based on page features in search engines such as Google and Bing. Thus, an effective Rider-Rank algorithm based on the Rider Optimization Algorithm (ROA) is proposed to re-rank the webpages. The input queries are forwarded to different search engines, and the webpages generated by the search engines for the input query are gathered. Initially, keywords are generated for the webpages. Then, the top keyword is selected, and features are extracted from it using the factor-based, text-based and rank-based features of the webpage. Finally, the webpages are re-ranked using the Rider-Rank algorithm. The performance of the proposed approach is analyzed using metrics such as F-measure, recall and precision. The analysis shows that the proposed algorithm obtains an F-measure, recall and precision of 0.90, 0.98 and 0.84, respectively.
25

Manugunta, Ramya Krishna, Rytis Maskeliūnas, and Robertas Damaševičius. "Deep Learning Based Semantic Image Segmentation Methods for Classification of Web Page Imagery." Future Internet 14, no. 10 (2022): 277. http://dx.doi.org/10.3390/fi14100277.

Abstract:
Semantic segmentation is the task of clustering together parts of an image that belong to the same object class. Semantic segmentation of webpages is important for inferring contextual information from the webpage. This study examines and compares deep learning methods for classifying webpages based on imagery that is obscured by semantic segmentation. Fully convolutional neural network architectures (UNet and FCN-8) with defined hyperparameters and loss functions are used to demonstrate how they can support an efficient method of this type of classification scenario in custom-prepared webpage imagery data that are labeled multi-class and semantically segmented masks using HTML elements such as paragraph text, images, logos, and menus. Using the proposed Seg-UNet model achieved the best accuracy of 95%. A comparison with various optimizer functions demonstrates the overall efficacy of the proposed semantic segmentation approach.
26

Nie, Zaiqing, Ji-Rong Wen, and Wei-Ying Ma. "Webpage understanding." ACM SIGMOD Record 37, no. 4 (2009): 48–54. http://dx.doi.org/10.1145/1519103.1519111.

27

Hasan, Fares, Koo Kwong Ze, Rozilawati Razali, Abudhahir Buhari, and Elisha Tadiwa. "An Improved PageRank Algorithm Based on a Hybrid Approach." Science Proceedings Series 2, no. 1 (2020): 17–21. http://dx.doi.org/10.31580/sps.v2i1.1215.

Abstract:
PageRank is an algorithm that brings order to the Internet by returning the best results for a user's search query. The algorithm ranks results by counting the outgoing links of each webpage, reflecting whether the webpage is relevant or not. However, problems remain concerning the time needed to calculate the PageRank of all webpages: the turnaround time is long because the number of webpages on the Internet is large and keeps increasing. Furthermore, the results returned by the algorithm are biased towards old webpages, so newly created webpages receive lower page ranks even though they may contain comparatively more relevant information. To overcome these setbacks, this research proposes an alternative hybrid algorithm based on an optimized normalization technique and a content-based approach. The proposed algorithm reduces the number of iterations required to calculate the page rank, and hence improves efficiency, by calculating the mean of all page rank values and normalising each value by that mean. This is complemented by scoring the links of web pages based on their validity rather than conventional popularity.
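The two ingredients named above, the iterative PageRank update and normalisation by the mean rank, can be sketched compactly. The snippet below is a plain illustration under assumed values (a four-page link graph and a damping factor of 0.85); the content-based link-validity weighting of the hybrid approach is not reproduced.

# A minimal sketch of iterative PageRank followed by mean normalisation.
# The link graph and damping factor are illustrative assumptions.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    pr = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        pr = {p: (1 - damping) / len(pages)
                 + damping * sum(pr[q] / len(links[q]) for q in pages if p in links[q])
              for p in pages}
    return pr

def mean_normalise(pr):
    mean = sum(pr.values()) / len(pr)
    return {p: v / mean for p, v in pr.items()}

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(mean_normalise(pagerank(links)))   # scores expressed relative to the mean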
28

Mylsami, T., and B. L. Shivakumar. "Effectiveness of Ant Colony Optimization for Weighted Page Rank Algorithm in Web Access." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 7 (2017): 375. http://dx.doi.org/10.23956/ijarcsse/v7i7/0223.

Abstract:
In general, the web is growing very rapidly and data generation is vast. Search engines play an eminent role in retrieving data from the web: a user searching for a topic retrieves hundreds of results as websites, and it is difficult for the user to visit all of these web pages to find relevant information. Weighted PageRank algorithms play a dominant role in making navigation easier for the user. The popularity of a web page depends on the number of its in-links and out-links, and each webpage gets a proportional page rank value. However, this algorithm considers only the link structure, not the content of the page, so it returns less significant pages for the user's query. To overcome these issues, this study focuses on Ant Colony Optimization and proposes applying an ant colony algorithm to a modified weighted PageRank algorithm. The ACO approach discovers redundant components and uses clustering based on structural similarity or user web behaviour to match similar webpages. User and webpage similarity matching with Ant Colony Optimization-based clustering leads to better access to the required webpage in less time.
29

Lu, Ronghua. "Construction of Unhealthy Webpage Filtering Mode Based on Data Mining Technology." Academic Journal of Science and Technology 3, no. 3 (2022): 211–14. http://dx.doi.org/10.54097/ajst.v3i3.2984.

Abstract:
The unhealthy-category eigenvector library is constructed from an adaptive sample library, and the unhealthy-category model is built on this basis to filter unhealthy webpages. Our experiment shows that this mode can filter unhealthy webpages at higher speed and with satisfactory precision.
30

Gupta, Brij B., and Ankit Kumar Jain. "Phishing Attack Detection using a Search Engine and Heuristics-based Technique." Journal of Information Technology Research 13, no. 2 (2020): 94–109. http://dx.doi.org/10.4018/jitr.2020040106.

Abstract:
The language used in the textual content of a webpage is a barrier for most existing anti-phishing methods, since most of them can identify fake webpages written in English only. Therefore, we present a search engine-based method in this article, which identifies phishing webpages accurately regardless of the textual language used within the webpage. The proposed method uses a lightweight, consistent and language-independent search query to check the legitimacy of the suspicious URL. We have also integrated five heuristics with the search engine-based mechanism to improve the detection accuracy, as some newly created legitimate sites may not yet appear in the search engine. The proposed method can also correctly classify newly created legitimate sites that are not classified by available search engine-based methods. Evaluation results show that our method outperforms the available search-based techniques and achieves a TPR of 98.15% and an FPR of only 0.05%.
31

Rijal, Yusron, and Awalia Nofitasari. "Filter Halaman Web Pornografi Menggunakan Kecocokan Kata dan Deteksi Warna Kulit [Filtering Pornographic Webpages Using Word Matching and Skin Color Detection]." CAUCHY 1, no. 4 (2011): 207. http://dx.doi.org/10.18860/ca.v1i4.1795.

Abstract:
This paper presents an effort to detect pornographic webpages. A positive relationship has been reported between the percentage of human skin color in an image and the nature of the image itself (Jones et al., 1998). Based on that observation, rather than using the traditional method of text filtering, this paper proposes a new approach that detects pornographic images using skin color detection. Skin color detection is performed using the RGB, HSI, and YCbCr color models. Using the algorithm described by Ap-apid (2005), the system classifies images as nude or not nude; if one or more nude images are found, the system blocks the webpage. Keywords: Webpage Filtering, Image Processing, Pornography, Nudity, Skin Color, Nude Images
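The skin-colour stage of such a filter is easy to illustrate. The sketch below classifies pixels with a widely used explicit RGB skin rule and blocks a page when the skin ratio of any of its images is high; the 0.4 ratio cut-off and the synthetic pixels are assumptions, and the paper's HSI/YCbCr models and nudity classifier are not reproduced.

# A minimal sketch of skin-colour-based image screening using the common
# explicit RGB rule (R>95, G>40, B>20, spread>15, |R-G|>15, R>G, R>B).
def is_skin(r, g, b):
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def skin_ratio(pixels):
    skin = sum(1 for p in pixels if is_skin(*p))
    return skin / len(pixels) if pixels else 0.0

def block_page(image_pixel_lists, ratio_threshold=0.4):
    return any(skin_ratio(px) > ratio_threshold for px in image_pixel_lists)

sample_image = [(220, 170, 140)] * 70 + [(30, 60, 200)] * 30   # 70% skin-like pixels
print(block_page([sample_image]))   # True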
32

Yang, Shih-Ting, and Chia-Wei Huang. "A Two-Dimensional Webpage Classification Model." International Journal of Data Warehousing and Mining 13, no. 2 (2017): 13–44. http://dx.doi.org/10.4018/ijdwm.2017040102.

Abstract:
Regarding webpage classification, most classification mechanisms may lack consideration of the webpage writer's perspective and of the display characteristics of the webpage (color, graphic layout). Hence, this paper develops a Two-dimensional Webpage Classification model that analyzes webpage textual information and display characteristics from the perspectives of both webpage users and designers. The model consists of a Webpage Block Distribution Analysis (WBDA) module, a Webpage Emotion Category Determination (WECD) module and a Webpage Specialty Category Determination (WSCD) module. Firstly, in the WBDA module, user and designer habits (such as web-browsing eye movements and the writing perspective of the webpage) are considered by combining eye-movement tracking with tag-region judgment to determine the critical blocks and information of the webpage. Secondly, in the WECD module, the webpage color codes are acquired to calculate the major colors of the webpage and thereby determine its emotional category. Thirdly, the WSCD module analyzes the webpage textual information, integrating keyword acquisition technology to identify the specialty category of the webpage. The two-dimensional category of the webpage is then obtained. In addition, this paper develops a corresponding web-based system for case verification to confirm the feasibility of the methodology. The verification results show, first, that for webpage emotion category judgment, when 128 webpage files are imported into the system for training, the respondents' emotion evaluation score increases to above Level 5 and the system recommendation success rate increases to 75.78%; second, for specialty category determination, when the system uses 1010 to 1120 webpage files for training, system performance increases to above 80%. Hence, the developed system performs well in determining both webpage emotion category and specialty category. That is, this paper proposes a Two-dimensional Webpage Classification methodology that classifies both the informational content of webpage files and their emotional effects on users, helping webpage providers supply webpages suited to users' needs together with the generated two-dimensional information of the webpage.
33

Yu, Chii-Zen, and Fong-Gong Wu. "Influences of viewing angle, product position, and consumers’ physical characteristics on their Kansei images." PLOS ONE 17, no. 10 (2022): e0276421. http://dx.doi.org/10.1371/journal.pone.0276421.

Abstract:
Changes in consumer behavior in recent years have led to a steady increase in the number of online shoppers. The viewing angle of a product and its position on a webpage can affect consumers’ Kansei images and purchase intentions. This study aimed to determine the influences of product viewing angle and position on a webpage on consumers’ Kansei images; the influences of consumers’ physical characteristics (i.e., sex, dominant hand, and dominant eye) on their Kansei images were also explored. An experiment was designed to evaluate the influences of viewing angle and position on consumers’ Kansei images. A product’s viewing angle and position on the webpage in question served as independent variables, and the participants’ Kansei images served as the dependent variable. Seven representative viewing angles were selected. A predetermined product was placed in nine positions in a 3×3 grid on a webpage. A total of 63 combinations were obtained, and an experiment and interview were designed to investigate the participants’ Kansei images for all 63. The following conclusions were drawn: 1. Viewing angle affected the participants’ Kansei images; 2. the position of the product on the webpage affected the participants’ Kansei images; and 3. some physical characteristics affected the participants’ Kansei images. In summary, online marketing platforms could document shoppers’ physical characteristics to provide them with personalized product displays in order to cater to their Kansei images preferences. These research findings could be applied to online shopping platforms, which could attract more diverse groups of clients by adjusting product display angles and product positions on webpages based on consumers’ physical characteristics and preferences.
34

Nikhila, T. Bhuvan, and Sudheep Elayidom M. "A Multimodal Learning to Rank model for Web Pages." International Journal of Engineering and Advanced Technology (IJEAT) 9, no. 6 (2020): 308–13. https://doi.org/10.35940/ijeat.F1442.089620.

Abstract:
“Learning-to-rank” (LTR) utilizes machine learning technologies to optimally combine many features to solve the ranking problem, and web search is one of its prominent applications. To improve the ranking of webpages, a multimodality-based learning-to-rank model is proposed and implemented. Multimodality is the fusion, or the process of integrating, multiple unimodal representations into one compact representation. The main problem with web search is that the links that appear at the top of the search list may be irrelevant, or less relevant to the user than links appearing at a lower rank. Research has shown that a multimodality-based search improves the populated rank list. The modalities considered here are the text on a webpage and the images on a webpage. The textual features of the webpages are extracted from the LETOR dataset, and the image features are extracted from the images inside the webpages using transfer learning, with a VGG-16 model pre-trained on ImageNet used as the image feature extractor. A baseline model trained only on textual features is compared against the multimodal LTR. The multimodal LTR, which integrates the visual and textual features, shows an improvement of 10-15% in web search accuracy.
35

Miller, James, Abhimanyu Panwar, and Iosif Viorel Onut. "Towards Building a New Age Commercial Contextual Advertising System." International Journal of Systems and Service-Oriented Engineering 7, no. 3 (2017): 1–14. http://dx.doi.org/10.4018/ijssoe.2017070101.

Abstract:
Advertising via the Internet is a significant industry; however, in many ways, the industry is still in its infancy and still requires significant refinement to achieve its full potential. In contextual advertising (CA), the ad-network places ads related to the content of the publishers' webpages. In this article, the authors introduce an approach to implement a CA system for an ad-network. Their contributions are threefold: First, they propose schemes to prepare feature vectors of a webpage for the purpose of classification by its subject. To do so, the authors extract information from its peer webpages as well. Secondly, they prepare a suitable taxonomy from ODP. This taxonomy fulfils the requirements of a CA system such as broad coverage of semantically relevant topics etc. Thirdly, they conduct experiments on the proposed CA system architecture. The results establish the competence of the proposed approach. The authors empirically establish that the scheme which extracts information from the intersection of cues from web accessibility and search engine optimisation, of the target webpage provides the best accuracy among all the CA systems.
36

Elahi, Ehsan, Jorge Morato, and Ana Iglesias. "Improving Web Readability Using Video Content: A Relevance-Based Approach." Applied Sciences 14, no. 23 (2024): 11055. http://dx.doi.org/10.3390/app142311055.

Abstract:
With the increasing integration of multimedia elements into webpages, videos have emerged as a popular medium for enhancing user engagement and knowledge retention. However, irrelevant or poorly placed videos can hinder readability and distract users from the core content of a webpage. This paper proposes a novel approach leveraging natural language processing (NLP) techniques to assess the relevance of video content on educational websites, thereby enhancing readability and user engagement. By using a cosine similarity-based relevance scoring method, we measured the alignment between video transcripts and webpage text, aiming to improve the user’s comprehension of complex topics presented on educational platforms. Our results demonstrated a strong correlation between automated relevance scores and user ratings, with an improvement of over 35% in relevance alignment. The methodology was evaluated across 50 educational websites representing diverse subjects, including science, mathematics, and language learning. We conducted a two-phase evaluation process: an automated scoring phase using cosine similarity, followed by a user study with 100 participants who rated the relevance of videos to webpage content. The findings support the significance of integrating NLP-driven video relevance assessments for enhanced readability on educational websites, highlighting the potential for broader applications in e-learning.
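The relevance-scoring step described above reduces to a familiar pattern: embed the transcript and the page text as TF-IDF vectors and take their cosine similarity. The sketch below shows only that pattern; the sample texts and the 0.5 relevance cut-off are illustrative assumptions, not the study's data or calibration.

# A minimal sketch of cosine-similarity relevance scoring between a webpage's
# text and a video transcript.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

page_text = "Photosynthesis converts light energy into chemical energy in plants."
transcript = "In this video we explain how plants use light to make chemical energy."

tfidf = TfidfVectorizer(stop_words="english")
vectors = tfidf.fit_transform([page_text, transcript])
score = cosine_similarity(vectors[0], vectors[1])[0, 0]

print(f"relevance score: {score:.2f}",
      "-> keep video" if score >= 0.5 else "-> flag video")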
37

Daszyńska-Daszkiewicz, J. "Wrocław HELAS Webpage." Communications in Asteroseismology 153 (2008): 106–7. http://dx.doi.org/10.1553/cia_153s106.

38

Okoroafo, Sam C. "African Webpage Review." Journal of African Business 1, no. 1 (2000): 123–24. http://dx.doi.org/10.1300/j156v01n01_11.

39

Nazar, Shenara, Neethu Babu, Mariya K T, and Shyam Krishna K. "Multiuser Webpage Revisitation." International Journal of Engineering Trends and Technology 67, no. 5 (2019): 7–12. http://dx.doi.org/10.14445/22315381/ijett-v67i5p202.

40

Gayathri Devi, Mrs. "Result Analysis Webpage." International Scientific Journal of Engineering and Management 04, no. 05 (2025): 1–9. https://doi.org/10.55041/isjem03599.

Abstract:
The semester result analysis project is designed to analyze and interpret student performance data over a given academic term. It provides a structured approach to evaluating semester examination results, identifying trends, success rates, subject-wise performance, and individual student progress. By automating the collection and analysis of results, the project enables institutions to pinpoint areas of academic excellence as well as subjects or courses needing improvement, and it offers a comprehensive platform for semester result analysis with the goal of enhancing institutional performance monitoring.
APA, Harvard, Vancouver, ISO, and other styles
41

Qi, Tao, Bo Wang, and Su Juan Zhao. "The Research of Website Tamper-Resistant Technology." Advanced Materials Research 850-851 (December 2013): 475–78. http://dx.doi.org/10.4028/www.scientific.net/amr.850-851.475.

Full text
Abstract:
Webpage tamperproof technology is a website technology that prevents tampered content from being displayed and carries out real-time recovery. A webpage tamperproof system uses advanced Web-server core embedded technology and cryptography-based tamper detection to comprehensively protect both the static and the dynamic webpages of a website. The system supports automatic release, tamper detection, application protection, warning, and real-time recovery of webpages; it guarantees the safety of the various links of transmission, identification, address access, form submission, and auditing; it eliminates, in real time, any possibility of access to tampered webpages; and it prevents any tampering with the backstage database via the Web.
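The abstract describes cryptography-based tamper detection only in general terms. The sketch below shows one common form of it, a SHA-256 digest baseline compared against the live site with restoration from a trusted backup; the directory paths are hypothetical, and the Web-server core hooks mentioned in the abstract are not modelled here.

```python
# Minimal sketch: hash-based tamper detection with restore-from-backup.
import hashlib
import shutil
from pathlib import Path

WEB_ROOT = Path("/var/www/html")       # hypothetical deployment directory
BACKUP_ROOT = Path("/srv/web-backup")  # hypothetical trusted copy

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_baseline(root: Path) -> dict[str, str]:
    # Record a digest for every file of the trusted release.
    return {str(p.relative_to(root)): digest(p)
            for p in root.rglob("*") if p.is_file()}

def scan_and_recover(baseline: dict[str, str]) -> None:
    for rel, expected in baseline.items():
        live = WEB_ROOT / rel
        if not live.exists() or digest(live) != expected:
            print(f"Tamper detected: {rel}; restoring from backup")
            shutil.copy2(BACKUP_ROOT / rel, live)

baseline = build_baseline(BACKUP_ROOT)
scan_and_recover(baseline)
```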
APA, Harvard, Vancouver, ISO, and other styles
42

Koo Kwong Ze, Fares Hasan, Rozilawati Razali, Abudhahir Buhari, and Elisha Tadiwa. "An Enhanced PageRank Algorithm based on Optimized Normalized Technique and Content-based Approach." Open Journal of Science and Technology 3, no. 2 (2020): 78–86. http://dx.doi.org/10.31580/ojst.v3i2.1468.

Full text
Abstract:
PageRank is an algorithm for ranking the results of search queries over the Internet. It returns the best results to the user based on webpage relevancy, calculated from the outgoing links of each webpage. Although useful, the algorithm consumes a considerable amount of time because it must process all available webpages, whose number keeps increasing. Moreover, its results are biased towards old webpages, which have accumulated link volume over their lifetime, so newly created webpages receive lower page ranks even when they contain more relevant and useful information. To overcome these issues, this paper proposes an alternative hybrid PageRank algorithm based on an optimized normalization technique and a content-based approach. The proposed algorithm reduces the number of iterations required to calculate the page rank, and hence improves efficiency, by computing the mean of all page rank values and normalizing them by that mean. Through this approach, the algorithm is also able to determine the relevancy of webpages based on the validity of links rather than popularity. These claims are demonstrated by an experiment on the proposed algorithm using a dummy web structure consisting of 12 webpages. The results showed that the traditional PageRank algorithm required 74% more iterations than the proposed algorithm, and that the proposed algorithm returned a mean value of 1.00 compared with 1.32 for the traditional algorithm. These results confirm that the proposed algorithm saves a substantial amount of computing power while being more precise and not biased.
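To make the mean-based normalization concrete, the sketch below runs a standard PageRank iteration on a small dummy link graph and divides the ranks by their mean after each pass, so the ranks converge around 1.0 (consistent with the mean value of 1.00 reported above). The abstract does not spell out the exact normalization rule, so this mean-division step should be read as an illustrative assumption, not the paper's implementation.

```python
# Minimal sketch: PageRank with a mean-based normalization step.
import numpy as np

# Adjacency: page i links to the pages in links[i] (a 4-page dummy structure).
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, d = len(links), 0.85

ranks = np.ones(n)
for _ in range(50):
    new = np.full(n, 1 - d)
    for src, outs in links.items():
        for dst in outs:
            new[dst] += d * ranks[src] / len(outs)
    new /= new.mean()                  # normalize by the mean of all ranks
    delta = np.abs(new - ranks).sum()
    ranks = new
    if delta < 1e-6:                   # stop once the ranks have converged
        break

print({page: round(r, 3) for page, r in enumerate(ranks)})
```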
APA, Harvard, Vancouver, ISO, and other styles
43

Choi, Ben. "Knowledge Engineering the Web." International Journal of Machine Learning and Computing 11, no. 1 (2021): 68–76. http://dx.doi.org/10.18178/ijmlc.2021.11.1.1016.

Full text
Abstract:
This paper focuses on the largest source of human knowledge: the Web. It presents the state of the art and patented technologies on search engines, automatic organization of webpages, and knowledge-based automatic webpage summarization. For the patented search engine technology, it describes new methods to present search results to users and, through browsers, to allow users to customize and organize webpages. For the patented classification technology, it describes new methods to automatically organize webpages into categories. For the knowledge-based summarization technology, it presents new techniques for computers to "read" webpages and then to "write" a summary by creating new sentences that describe the contents of the webpages. Together, these search engine, classification, and summarization technologies build a strong framework for knowledge engineering the Web.
APA, Harvard, Vancouver, ISO, and other styles
44

Zhang, Shuai, Guang Hong, and Bing Xu. "An Improved Semantic Annotation Method." Applied Mechanics and Materials 198-199 (September 2012): 495–99. http://dx.doi.org/10.4028/www.scientific.net/amm.198-199.495.

Full text
Abstract:
Semantic annotation is the foundation for the progress and realization of the semantic web; it provides a formatted description of the knowledge in web pages and of its semantic meaning in the field. This paper presents a method for semantic annotation of webpages guided by a domain ontology. The semantic correlation degree is measured from two aspects of a word's meaning, using edit distance and WordNet distance, and the mapping relation between webpage and ontology is then built. Moreover, after the webpages are semantically annotated, the ontology is effectively expanded with the annotation results, adapting it to the domain. Experimental results show that the tagging method based on a weight coefficient obtained from edit distance, WordNet distance, and extended ontology concepts provides the best performance, and that the method is effective and applicable.
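The sketch below shows one way to combine the two distances mentioned in this abstract into a single correlation score between a webpage term and an ontology concept. The equal weighting (0.5/0.5) is an assumption for illustration, whereas the paper derives its own weight coefficient; the example requires the NLTK WordNet corpus to be installed.

```python
# Minimal sketch: blend edit-distance similarity with WordNet path similarity.
from nltk.corpus import wordnet as wn

def edit_distance(a: str, b: str) -> int:
    # Classic Levenshtein distance, row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def string_similarity(a: str, b: str) -> float:
    return 1 - edit_distance(a, b) / max(len(a), len(b))

def wordnet_similarity(a: str, b: str) -> float:
    scores = [sa.path_similarity(sb) or 0.0
              for sa in wn.synsets(a) for sb in wn.synsets(b)]
    return max(scores, default=0.0)

def correlation(a: str, b: str, w: float = 0.5) -> float:
    # w is an illustrative weight coefficient, not the paper's learned value.
    return w * string_similarity(a, b) + (1 - w) * wordnet_similarity(a, b)

print(round(correlation("automobile", "car"), 3))
```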
APA, Harvard, Vancouver, ISO, and other styles
45

Xiong, Aiping, Robert W. Proctor, Weining Yang, and Ninghui Li. "Embedding Training Within Warnings Improves Skills of Identifying Phishing Webpages." Human Factors: The Journal of the Human Factors and Ergonomics Society 61, no. 4 (2018): 577–95. http://dx.doi.org/10.1177/0018720818810942.

Full text
Abstract:
Objective: Evaluate the effectiveness of training embedded within security warnings to identify phishing webpages. Background: More than 20 million malware and phishing warnings are shown to users of Google Safe Browsing every week. Substantial click-through rate is still evident, and a common issue reported is that users lack understanding of the warnings. Nevertheless, each warning provides an opportunity to train users about phishing and how to avoid phishing attacks. Method: To test use of phishing-warning instances as opportunities to train users’ phishing webpage detection skills, we conducted an online experiment contrasting the effectiveness of the current Chrome phishing warning with two training-embedded warning interfaces. The experiment consisted of three phases. In Phase 1, participants made login decisions on 10 webpages with the aid of warning. After a distracting task, participants made legitimacy judgments for 10 different login webpages without warnings in Phase 2. To test the long-term effect of the training, participants were invited back a week later to participate in Phase 3, which was conducted similarly as Phase 2. Results: Participants differentiated legitimate and fraudulent webpages better than chance. Performance was similar for all interfaces in Phase 1 for which the warning aid was present. However, training-embedded interfaces provided better protection than the Chrome phishing warning on both subsequent phases. Conclusion: Embedded training is a complementary strategy to compensate for lack of phishing webpage detection skill when phishing warning is absent. Application: Potential applications include development of training-embedded warnings to enable security training at scale.
APA, Harvard, Vancouver, ISO, and other styles
46

Zheng, Yingying. "Design and Implementation of Webpage Addition Technology in Aerobics Courses." International Journal of Emerging Technologies in Learning (iJET) 11, no. 09 (2016): 25. http://dx.doi.org/10.3991/ijet.v11i09.6118.

Full text
Abstract:
The rapid development of modern information technology facilitates the reform and innovation of college teaching. A series of technologies, including webpage addition technology, can meet the need of Aerobics and other shape-related teaching for animation display; webpage addition technology thus offers a new viewpoint in the education modernization process. Combining webpage addition technology with Aerobics courses can support Aerobics teaching. Starting from the teaching features and the website-based learning status of Aerobics courses, this paper carried out an application design for webpage addition technology. An Aerobics course then served as the experimental course, and a controlled experiment was used to explore the application of webpage addition technology in practice. Furthermore, this paper conducted a contrastive analysis of the difference in teaching effect with and without webpage addition technology and drew some conclusions, in the hope of offering a reference for combining webpage addition technology with Aerobics courses.
APA, Harvard, Vancouver, ISO, and other styles
47

Yang, Xue, Jian Xu, and Guojun Li. "Efficient Fingerprinting Attack on Web Applications: An Adaptive Symbolization Approach." Electronics 12, no. 13 (2023): 2948. http://dx.doi.org/10.3390/electronics12132948.

Full text
Abstract:
Website fingerprinting is valuable for many security solutions as it provides insights into the applications that are active on a network. Unfortunately, existing techniques primarily focus on fingerprinting individual webpages rather than webpage transitions, even though following hyperlinks is a common way for users to carry out their actions. In this paper, an adaptive symbolization method based on packet distribution information is proposed to represent network traffic. The Profile Hidden Markov Model (PHMM), which exploits the positional information contained in network traffic sequences and is sensitive to webpage transition information, is used to construct users' action patterns. We also construct user role models to represent different kinds of users and apply them in our web application identification framework to uncover more information. The experimental results demonstrate that, compared with the equal-interval and K-means symbolization algorithms, the adaptive symbolization method retains the maximum amount of information while being less time-consuming, and that the PHMM-based user action identification method achieves higher accuracy than existing traditional classifiers.
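The abstract contrasts adaptive symbolization with fixed equal-interval binning but does not give the exact rule, so the sketch below uses quantile-based bins derived from the observed packet-size distribution as an illustrative stand-in. The resulting symbol string is the kind of sequence that could then be fed to a PHMM; the packet sizes and symbol count are assumptions for the example.

```python
# Minimal sketch: symbolize a packet-size sequence using distribution-driven
# (quantile) bin edges instead of fixed equal-interval bins.
import numpy as np

packet_sizes = np.array([64, 1500, 580, 60, 1500, 420, 1448, 52, 300, 1500])

def adaptive_symbolize(sizes: np.ndarray, n_symbols: int = 4) -> str:
    # Bin edges follow the observed distribution rather than fixed widths.
    edges = np.quantile(sizes, np.linspace(0, 1, n_symbols + 1)[1:-1])
    symbols = np.digitize(sizes, edges)            # values in 0..n_symbols-1
    return "".join(chr(ord("A") + s) for s in symbols)

print(adaptive_symbolize(packet_sizes))  # e.g. a symbol string such as 'BDCA...'
```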
APA, Harvard, Vancouver, ISO, and other styles
48

Radomski, Ashley D., Alexa Bagnell, Sarah Curtis, Lisa Hartling, and Amanda S. Newton. "Examining the Usage, User Experience, and Perceived Impact of an Internet-Based Cognitive Behavioral Therapy Program for Adolescents With Anxiety: Randomized Controlled Trial." JMIR Mental Health 7, no. 2 (2020): e15795. http://dx.doi.org/10.2196/15795.

Full text
Abstract:
Background Internet-based cognitive behavioral therapy (iCBT) increases treatment access for adolescents with anxiety; however, completion rates of iCBT programs are typically low. Understanding adolescents’ experiences with iCBT, what program features and changes in anxiety (minimal clinically important difference [MCID]) are important to them, may help explain and improve iCBT program use and impact. Objective Within a randomized controlled trial comparing a six-session iCBT program for adolescent anxiety, Being Real, Easing Anxiety: Tools Helping Electronically (Breathe), with anxiety-based resource webpages, we aimed to (1) describe intervention use among adolescents allocated to Breathe or webpages and those who completed postintervention assessments (Breathe or webpage respondents); (2) describe and compare user experiences between groups; and (3) calculate an MCID for anxiety and explore relationships between iCBT use, experiences, and treatment response among Breathe respondents. Methods Enrolled adolescents with self-reported anxiety, aged 13 to 19 years, were randomly allocated to Breathe or webpages. Self-reported demographics and anxiety symptoms (Multidimensional Anxiety Scale for Children—2nd edition [MASC-2]) were collected preintervention. Automatically-captured Breathe or webpage use and self-reported symptoms and experiences (User Experience Questionnaire for Internet-based Interventions) were collected postintervention. Breathe respondents also reported their perceived change in anxiety (Global Rating of Change Scale [GRCS]) following program use. Descriptive statistics summarized usage and experience outcomes, and independent samples t tests and correlations examined relationships between them. The MCID was calculated using the mean MASC-2 change score among Breathe respondents reporting somewhat better anxiety on the GRCS. Results Adolescents were mostly female (382/536, 71.3%), aged 16.6 years (SD 1.7), with very elevated anxiety (mean 92.2, SD 18.1). Intervention use was low for adolescents allocated to Breathe (mean 2.2 sessions, SD 2.3; n=258) or webpages (mean 2.1 visits, SD 2.7; n=278), but was higher for Breathe (median 6.0, range 1-6; 81/258) and webpage respondents (median 2.0, range 1-9; 148/278). Total user experience was significantly more positive for Breathe than webpage respondents (P<.001). Breathe respondents reported program design and delivery factors that may have challenged (eg, time constraints and program support) or facilitated (eg, demonstration videos, self-management activities) program use. The MCID was a mean MASC-2 change score of 13.8 (SD 18.1). Using the MCID, a positive treatment response was generated for 43% (35/81) of Breathe respondents. Treatment response was not correlated with respondents’ experiences or use of Breathe (P=.32 to P=.88). Conclusions Respondents reported positive experiences and changes in their anxiety with Breathe; however, their reports were not correlated with program use. Breathe respondents identified program design and delivery factors that help explain their experiences and use of iCBT and inform program improvements. Future studies can apply our measures to compare user experiences between internet-based interventions, interpret treatment outcomes and improve treatment decision making for adolescents with anxiety. Trial Registration ClinicalTrials.gov Identifier: NCT02970734 https://clinicaltrials.gov/ct2/show/NCT02970734
APA, Harvard, Vancouver, ISO, and other styles
49

Wan, Hongyan, Wanting Ji, Guoqing Wu, et al. "A novel webpage layout aesthetic evaluation model for quantifying webpage layout design." Information Sciences 576 (October 2021): 589–608. http://dx.doi.org/10.1016/j.ins.2021.06.071.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Levine, Emma C., James E. Fanning, Shayan McGee, Lauren Okamoto, Samantha Steinmetz-Wood, and Matthew Philip Gilbert. "LBODP043 Analysis Of Plastic Surgery Websites And Their Recommendations For Diabetics." Journal of the Endocrine Society 6, Supplement_1 (2022): A269. http://dx.doi.org/10.1210/jendso/bvac150.553.

Full text
Abstract:
Abstract Diabetic patients are predisposed to adverse complications after surgery, especially when their A1c is above 8.0% (1). Despite the heightened risk for post-operative complications such as wound dehiscence and infection (2), no formal recommendations by the American Council of Academic Plastic Surgeons (ACAPS) exist regarding preoperative A1c thresholds. This study reviews the current online recommendations for diabetic patients undergoing plastic surgery and examines the readability and technical metrics presented on these webpages. We hypothesized that these webpages would enforce stricter preoperative A1c levels (<8%) in comparison to the 8% threshold put forth by Diabetes Care in 2014. An anonymous, depersonalized Google search with the search term "diabetes and plastic surgery" was run. The initial 50 results were analyzed, and 11 webpages meeting the inclusion criteria were extracted. 45.45% (5/11) of the webpages stated specific A1c recommendations, with four of the webpages recommending an A1c <7% and one webpage recommending an A1c <6%. A number of the websites, 63.6% (7/11), recommended that patients consult their primary care physician (PCP) or endocrinologist. 100% (11/11) of webpages discussed poor wound healing, and 45% (5/11) discussed the heightened risk of postoperative infection in diabetic surgical patients. Average webpage readability scores for seven readability measures greatly exceeded the 6th-grade reading level recommended for medical information. This study determined that online resources for diabetic patients undergoing plastic surgery applied stricter-than-standard preoperative criteria and exceeded recommended readability levels. Standardizing requirements for diabetic patients and improving readability may help patients better understand preoperative expectations and achieve better surgical outcomes. References: (1) Underwood, Patricia et al. "Preoperative A1C and clinical outcomes in patients with diabetes undergoing major noncardiac surgical procedures." Diabetes Care vol. 37,3 (2014): 611-6. doi: 10.2337/dc13-1929. (2) Goltsman, David et al. "Defining the Association between Diabetes and Plastic Surgery Outcomes: An Analysis of Nearly 40,000 Patients." Plastic and Reconstructive Surgery Global Open vol. 5,8 e1461. 17 Aug. 2017, doi: 10.1097/GOX.0000000000001461.
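The abstract refers to seven readability measures without naming them, so the sketch below illustrates just one widely used measure, the Flesch-Kincaid grade level, computed from its published formula with a rough syllable heuristic. The sample sentence and the heuristic are assumptions for the example, not data from the study.

```python
# Minimal sketch: Flesch-Kincaid grade level as one example readability measure.
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count contiguous vowel groups, minimum of one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

sample = ("Patients with diabetes mellitus require preoperative glycemic "
          "optimization to reduce postoperative complications.")
print(round(flesch_kincaid_grade(sample), 1))  # well above a 6th-grade level
```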
APA, Harvard, Vancouver, ISO, and other styles