Journal articles on the topic 'Web page'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Web page.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Lei, Shi. "Modeling an web community discovery method with web page attraction." Journal of Intelligent & Fuzzy Systems 40, no. 6 (June 21, 2021): 11159–69. http://dx.doi.org/10.3233/jifs-202366.

Abstract:
An improved Web community discovery algorithm is proposed in this paper, based on the attraction between Web pages, to effectively reduce the complexity of Web community discovery. By analogy with the theory of universal gravitation, the proposed algorithm treats each Web page in the collection as an individual that exerts attraction, traces the discovery and evolution of a Web community starting from a single Web page in the collection, defines priority rules for Web community size and Web page similarity, and gives the formula for calculating the change in Web page similarity. Finally, an experimental platform is built to analyze the discovery process of the Web community in detail, and the changes in the cumulative distribution of Web page similarity are discussed. The results show that the change in the similarity of a new page satisfies a power-law distribution, and that the similarity of a new page is proportional to the size of the Web community the new page chooses to join.
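
A rough sketch of the gravitation analogy may help. The scoring function below is a hypothetical stand-in for the paper's formula: community size plays the role of mass and page similarity the role of proximity, so a new page joins the community that attracts it most strongly.

```python
# A gravity-style attraction score: a sketch of the paper's analogy, not its
# exact formula. Community "mass" is its page count; similarity stands in for
# inverse distance.
def attraction(community_size: int, similarity: float, g: float = 1.0) -> float:
    """Attraction between a new page and a community (hypothetical form)."""
    return g * community_size * similarity

def choose_community(new_page_sim: dict[str, float], sizes: dict[str, int]) -> str:
    """Join the community exerting the strongest attraction on the new page."""
    return max(sizes, key=lambda c: attraction(sizes[c], new_page_sim.get(c, 0.0)))

# Example: a page similar to community "b" joins it despite "a" being larger.
print(choose_community({"a": 0.1, "b": 0.8}, {"a": 50, "b": 20}))  # -> "b"
```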
2

Apandi, Siti Hawa, Jamaludin Sallim, Rozlina Mohamed, and Norkhairi Ahmad. "Automatic Topic-Based Web Page Classification Using Deep Learning." JOIV : International Journal on Informatics Visualization 7, no. 3-2 (November 30, 2023): 2108. http://dx.doi.org/10.30630/joiv.7.3-2.1616.

Abstract:
People frequently surf the internet on smartphones, laptops, or desktop computers to search for information on the web. The growth of online information has made the number of web pages increase day by day. Automatic topic-based web page classification is used to manage this excessive number of web pages by assigning them to different categories based on their content. Different machine learning algorithms have been employed as web page classifiers, but there is a lack of studies reviewing the classification of web pages using deep learning. In this study, automatic topic-based classification of web pages utilising deep learning, as proposed by key researchers, is reviewed. The relevant research papers were selected from reputable research databases. The review examined the dataset, features, algorithm, and pre-processing used in the classification of web pages, the document representation technique, and the performance of the web page classification model. The document representation technique used to represent web page features is an important aspect of web page classification, as it affects the performance of the classification model. The integral web page feature is the textual content. Based on the review, it was found that image-based web page classification showed higher performance than text-based classification. Due to the lack of a matrix representation that can effectively handle long web page text content, a new document representation technique, the word cloud image, can be used to visualise the words extracted from a page's text content.
3

Klushyn, Y., and Y. Zakharchin. "INCREASE THE SPEED OF WEB APPLICATIONS." Computer systems and network 2, no. 1 (March 23, 2017): 33–43. http://dx.doi.org/10.23939/csn2020.01.033.

Abstract:
The article presents a method of creating a web application based on SPA technology (a single-page web application) as a way of increasing the speed of web applications, drawing on modern frameworks and tools for developing the client and server parts of a single-page web application. Single-page web applications consist of a single web page that interacts with the user by dynamically re-rendering the current page rather than downloading entire new pages from the server. Based on this technique, we developed our own web application and measured its response time, which is below the reference value for single-page web applications. An explanation is given of which solutions increase response speed and performance in a single-page web application, and why creating a multi-page site is not the best idea.
Keywords: single-page web application, database, multi-page web application, non-relational database, relational database, Backend technologies, server, JavaScript
4

Chen, Yuanchao, Yuliang Lu, Zulie Pan, Juxing Chen, Fan Shi, Yang Li, and Yonghui Jiang. "APIMiner: Identifying Web Application APIs Based on Web Page States Similarity Analysis." Electronics 13, no. 6 (March 18, 2024): 1112. http://dx.doi.org/10.3390/electronics13061112.

Abstract:
Modern web applications offer various APIs for data interaction. However, as the number of these APIs increases, so does the potential for security threats. Essentially, more APIs in an application can lead to more detectable vulnerabilities. Thus, it is crucial to identify APIs as comprehensively as possible in web applications. However, this task faces challenges due to the increasing complexity of web development techniques and the abundance of similar web pages. In this paper, we propose APIMiner, a framework for identifying APIs in web applications by dynamically traversing web pages based on web page state similarity analysis. APIMiner first builds a web page model based on the HTML elements of the current web page. APIMiner then uses this model to represent the state of the page. Then, APIMiner evaluates each element’s similarity in the page model and determines the page state similarity based on these similarity values. From the different states of the page, APIMiner extracts the data interaction APIs on the page. We conduct extensive experiments to evaluate APIMiner’s effectiveness. In the similarity analysis, our method surpasses state-of-the-art methods like NDD and mNDD in accurately distinguishing similar pages. We compare APIMiner with state-of-the-art tools (e.g., Enemy of the State, Crawlergo, and Wapiti3) for API identification. APIMiner excels in the number of identified APIs (average 1136) and code coverage (average 28,470). Relative to these tools, on average, APIMiner identifies 7.96 times more APIs and increases code coverage by 142.72%.
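
As a toy illustration of page-state comparison, the sketch below reduces each page to the set of its HTML element signatures and compares them with Jaccard similarity; APIMiner's actual page model and similarity measure are richer than this.

```python
# A simplified stand-in for page-state similarity: compare the sets of HTML
# element signatures of two pages with Jaccard similarity.
from html.parser import HTMLParser

class ElementCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.signatures = set()
    def handle_starttag(self, tag, attrs):
        # signature = tag name plus its attribute names (values ignored)
        self.signatures.add((tag, tuple(sorted(name for name, _ in attrs))))

def state_similarity(html_a: str, html_b: str) -> float:
    a, b = ElementCollector(), ElementCollector()
    a.feed(html_a)
    b.feed(html_b)
    union = a.signatures | b.signatures
    return len(a.signatures & b.signatures) / len(union) if union else 1.0

# Two structurally identical pages score 1.0 even though their text differs.
print(state_similarity("<div><a href='/x'>x</a></div>",
                       "<div><a href='/y'>y</a></div>"))
```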
5

Apandi, Siti Hawa, Jamaludin Sallim, and Rozlina Mohamed. "A Convolutional Neural Network (CNN) Classification Model for Web Page: A Tool for Improving Web Page Category Detection Accuracy." JITSI : Jurnal Ilmiah Teknologi Sistem Informasi 4, no. 3 (September 7, 2023): 110–21. http://dx.doi.org/10.30630/jitsi.4.3.181.

Abstract:
Game and online video streaming pages are among the most viewed web pages. Users who spend too much time on these types of web pages may suffer from internet addiction, so access to them should be restricted to combat it. A tool is required to recognise the category of a web page based on its text content. Due to the unavailability of a matrix representation that can handle long web page text content, this study employs a document representation known as the word cloud image to visualise the words extracted from the page text after data pre-processing. The most popular words are shown in large type and appear in the centre of the word cloud image; these are the words that appear frequently in the page text and describe what the page content is about. A Convolutional Neural Network (CNN) recognises the pattern of words presented in the core portions of the word cloud image to determine the category to which the web page belongs. The proposed web page classification model has been compared with other web page classification models and shows good results, achieving an accuracy of 85.6%. It can be used as a tool that helps identify the category of web pages more accurately.
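
The word-cloud representation is straightforward to prototype. The sketch below, which assumes the third-party wordcloud package, renders page text to an image array of the kind a CNN classifier could consume; the paper's exact preprocessing and network are not reproduced.

```python
# Sketch of the word-cloud document representation (assumes the third-party
# "wordcloud" package; page_text is a toy stand-in for preprocessed content).
from wordcloud import WordCloud

page_text = "web page classification game streaming video online content"
cloud = WordCloud(width=256, height=256, background_color="white").generate(page_text)
image = cloud.to_array()  # numpy array, usable as input to a CNN classifier
print(image.shape)        # (256, 256, 3)
```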
6

Nandanwar, Amit Kumar, and Jaytrilok Choudhary. "Semantic Features with Contextual Knowledge-Based Web Page Categorization Using the GloVe Model and Stacked BiLSTM." Symmetry 13, no. 10 (September 23, 2021): 1772. http://dx.doi.org/10.3390/sym13101772.

Abstract:
Internet technologies are evolving very fast nowadays, and web pages are being generated exponentially as a result. Web page categorization is required for searching and exploring relevant web pages based on users' queries, and it is a tedious task. The majority of web page categorization techniques ignore the semantic features and contextual knowledge of the web page. This paper proposes a web page categorization method that categorizes web pages based on semantic features and contextual knowledge. Initially, the GloVe model is applied to capture the semantic features of the web pages. Thereafter, a stacked bidirectional long short-term memory (BiLSTM) network with a symmetric structure is applied to extract the contextual and latent symmetry information from the semantic features for web page categorization. The performance of the proposed model has been evaluated on the publicly available WebKB dataset. The proposed model shows superiority over the existing state-of-the-art machine learning and deep learning methods.
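
A minimal Keras sketch of such a pipeline is shown below. The vocabulary size, sequence length, and the random matrix standing in for pretrained GloVe vectors are placeholder assumptions, not the paper's settings.

```python
# GloVe embeddings feeding a stacked BiLSTM classifier: a sketch, not the
# paper's architecture or hyperparameters.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, embed_dim, seq_len, n_classes = 20000, 100, 300, 4  # placeholders
glove_matrix = np.random.rand(vocab_size, embed_dim)  # stand-in for GloVe vectors

model = keras.Sequential([
    layers.Input(shape=(seq_len,)),
    layers.Embedding(vocab_size, embed_dim,
                     embeddings_initializer=keras.initializers.Constant(glove_matrix),
                     trainable=False),                             # semantic features
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),  # first BiLSTM layer
    layers.Bidirectional(layers.LSTM(32)),                         # stacked second layer
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```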
7

Li, Xin Li. "Web Page Ranking Algorithm Based on the Meta-Information." Applied Mechanics and Materials 596 (July 2014): 292–96. http://dx.doi.org/10.4028/www.scientific.net/amm.596.292.

Abstract:
PageRank algorithms consider only hyperlink information and ignore other page information such as page hit frequency, page update time, and page category. As a result, the algorithms rank many advertising pages and stale pages highly and cannot meet users' needs. This paper further studies page meta-information such as category, hit frequency, and update time. A Web page with a high hit frequency and a smaller age should receive a high rank, while both factors depend to some extent on the page category. Experimental results show that the algorithm performs well.
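
One hypothetical way to fold such meta-information into a rank score is sketched below; the weighting scheme is an illustrative assumption, not the formula from the paper.

```python
# Combining PageRank with page meta-information (hypothetical weighting).
import math

def meta_rank(pagerank: float, hits: int, age_days: float,
              category_boost: float = 1.0) -> float:
    freshness = 1.0 / (1.0 + age_days / 30.0)  # newer pages score higher
    popularity = math.log1p(hits)              # diminishing returns on hits
    return pagerank * category_boost * (1 + popularity) * freshness

print(meta_rank(pagerank=0.01, hits=500, age_days=10))  # fresh, popular page
print(meta_rank(pagerank=0.01, hits=5, age_days=800))   # stale, rarely visited page
```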
8

Arase, Yuki, Takahiro Hara, Toshiaki Uemukai, and Shojiro Nishio. "Annotation and Auto-Scrolling for Web Page Overview in Mobile Web Browsing." International Journal of Handheld Computing Research 1, no. 4 (October 2010): 63–80. http://dx.doi.org/10.4018/jhcr.2010100104.

Abstract:
Due to advances in mobile phones, mobile Web browsing has become increasingly popular. However, the small screens and poor input capabilities of mobile phones prevent users from comfortably browsing Web pages designed for desktop PCs. One serious problem of mobile Web browsing is that users often get lost in a Web page: they can view only a small portion of the page at a time and cannot grasp the entire page's structure to decide in which direction their information of interest is located. An effective technique for solving this problem is to present an overview of the page. Prior studies adopted the conventional style of overview, a scaled-down image of the page, but this is not sufficient because users cannot see details of the contents. Therefore, in this paper, the authors present annotations on a Web page together with a function that automatically scrolls the page. Results of a user experiment show that annotations are informative for users who want to find content in a large Web page.
9

Lingaraju, Dr G. M., and Dr S. Jagannatha. "Review of Web Page Classification and Web Content Mining." Journal of Advanced Research in Dynamical and Control Systems 11, no. 10 (October 31, 2019): 142–47. http://dx.doi.org/10.5373/jardcs/v11i10/20193017.

10

Meara, J. "Web page." Age and Ageing 32, no. 3 (May 1, 2003): 355. http://dx.doi.org/10.1093/ageing/32.3.355.

11

Satish Babu, J., T. Ravi Kumar, and Dr Shahana Bano. "Optimizing webpage relevancy using page ranking and content based ranking." International Journal of Engineering & Technology 7, no. 2.7 (March 18, 2018): 1025. http://dx.doi.org/10.14419/ijet.v7i2.7.12220.

Abstract:
Web information mining systems can be divided into several categories according to the kind of data mined and the goals each category pursues: Web structure mining, Web usage mining, and Web content mining. This paper proposes a new Web content mining system for page relevance ranking based on analysis of page content. The method, called Page Content Rank (PCR), combines several heuristics that appear to be important for analysing the content of Web pages. Page importance is determined on the basis of the importance of the terms the page contains, and the importance of a term is computed with respect to a given query q, based on its statistical and linguistic features. As a source set of pages for mining, we use the set of pages returned by a search engine in response to the query q. PCR uses a neural network as its internal classification structure. We describe an implementation of the proposed method and compare its results with an existing ranking approach, the PageRank algorithm.
12

Muneeb Ahmed Farooqi, Muhammad Arslan Ashraf, and Muhammad Umer Shaukat. "Google Page Rank Site Structure Strategies for Marketing Web Pages." Journal of Computing & Biomedical Informatics 2, no. 02 (September 15, 2021): 140–57. http://dx.doi.org/10.56979/202/2021/30.

Abstract:
There are several search engines that categorise web content and display it in response to our search queries. These search engines continuously visit pages and sites and gather information using techniques called crawling or spidering, and on the basis of this daily content collection they maintain their own search indexes. Every business needs its pages to be top-ranked, by making them better structurally and content-wise, so that any crawler can easily crawl them and rank them among the top 10 results. This work discusses only structural behaviour: the internal graph of relationships between pages and the loading time of pages with all their supporting content. The structural overview covers the HTML tag structure, which works as a tree starting from the root tag and moving towards child nodes. Page speed measures the loading time of a page and helps search engines categorise pages for mobile devices as well; sometimes a lightning ("thunder") icon appears next to a ranked result in mobile search, indicating that the page loads very fast. Page loading includes the loading of all content except Ajax-based content. According to Google, page rank is based on page content, page structure, and page loading time. Google has already given some instructions related to page structure and page loading speed, but those instructions are not enough, and new dimensions need to be discovered to place business pages among the top-rated results.
13

Putra Eka Prismana, Gusti Lanang. "Automatic Web News Content Extraction." Journal Research of Social, Science, Economics, and Management 1, no. 7 (February 18, 2022): 785–94. http://dx.doi.org/10.36418/jrssem.v1i7.107.

Abstract:
The extraction of the main content of web pages is widely used in search engines, but web pages also include much irrelevant information, such as advertisements, navigation, and junk content. Such irrelevant information reduces the efficiency of web content processing in content-based applications. This study aimed to extract web page content using the DOM tree, assessing the rationality of the segmentation results and their efficiency based on the information entropy of DOM tree nodes. The first step was to classify web page tags and process only the tags that affect the structure of the page. The second step was to consider both the content features and the structural features of DOM tree nodes. The next step was to perform node fusion to obtain the segmentation results. Segmentation was tested on several web pages with different structures, showing that the proposed method accurately and quickly segments web page content and removes noise from it. After the DOM tree was formed, it was matched against a database to eliminate information noise using the Firefly Optimization algorithm. The Firefly Optimization method was then tested and evaluated for its effectiveness in detecting and eliminating web page noise and producing clean documents.
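
The tag-classification step described above can be sketched as follows, assuming the third-party beautifulsoup4 package; the structural-tag whitelist is an illustrative assumption.

```python
# Keep only tags that shape page structure, dropping noise-only elements.
from bs4 import BeautifulSoup

STRUCTURAL_TAGS = {"html", "body", "div", "table", "tr", "td", "ul", "ol", "li", "p"}

def structural_nodes(html: str) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    # drop noise-only elements outright
    for tag in soup(["script", "style"]):
        tag.decompose()
    return [el.name for el in soup.find_all(True) if el.name in STRUCTURAL_TAGS]

html = "<html><body><div><p>news</p><script>ad()</script></div></body></html>"
print(structural_nodes(html))  # ['html', 'body', 'div', 'p']
```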
14

Putra Eka Prismana, Gusti Lanang. "Automatic Web News Content Extraction." Journal Research of Social Science, Economics, and Management 1, no. 7 (February 18, 2022): 785–94. http://dx.doi.org/10.59141/jrssem.v1i7.107.

Abstract:
The extraction of the main content of web pages is widely used in search engines, but web pages also include much irrelevant information, such as advertisements, navigation, and junk content. Such irrelevant information reduces the efficiency of web content processing in content-based applications. This study aimed to extract web page content using the DOM tree, assessing the rationality of the segmentation results and their efficiency based on the information entropy of DOM tree nodes. The first step was to classify web page tags and process only the tags that affect the structure of the page. The second step was to consider both the content features and the structural features of DOM tree nodes. The next step was to perform node fusion to obtain the segmentation results. Segmentation was tested on several web pages with different structures, showing that the proposed method accurately and quickly segments web page content and removes noise from it. After the DOM tree was formed, it was matched against a database to eliminate information noise using the Firefly Optimization algorithm. The Firefly Optimization method was then tested and evaluated for its effectiveness in detecting and eliminating web page noise and producing clean documents.
15

Kapusta, Jozef, Michal Munk, and Martin Drlik. "Website Structure Improvement Based on the Combination of Selected Web Structure and Web Usage Mining Methods." International Journal of Information Technology & Decision Making 17, no. 06 (November 2018): 1743–76. http://dx.doi.org/10.1142/s0219622018500402.

Abstract:
Different web mining methods and techniques can help solve typical issues of contemporary websites, contribute to more effective personalization, improve a website's structure, and reorganize its web pages. However, only a few papers have tried to combine web structure and web usage mining (WUM) methods with this aim. This paper investigates whether and how a combination of selected web structure and WUM methods can identify misplaced web pages and contribute to improving the website structure. The paper analyzes the relationship between the estimated importance of a web page from the creator's point of view, using a web structure mining method based on PageRank, and visitors' real perception of the importance of that page, using a WUM method based on sequence pattern analysis, which eliminates the problem of repeated visits to the same web page during one session. The results prove that the expected probability of accesses to an individual web page correlates with the observed visit rate obtained from the log files using the WUM method. Furthermore, the website can be improved by applying residual analysis to the obtained results. The applicability of the proposed combination of web structure and WUM methods is presented in two case studies from different application domains of the contemporary web. In both cases, the web pages that are underestimated or overestimated by their creators are successfully identified.
16

Chaithra, Dr G. M. Lingaraju, and Dr S. Jagannatha. "Automatic Web Page Classification System with Improved Accuracy." Webology 18, no. 2 (December 23, 2021): 225–42. http://dx.doi.org/10.14704/web/v18i2/web18318.

Abstract:
Nowadays, the Internet contains a wide variety of online documents, which makes it hard to find useful information about a given subject without also retrieving irrelevant pages. Web document and page recognition software is useful in a variety of fields, including news, medicine, fitness, research, and information technology. To enhance search capability, a large number of web page classification methods have been proposed, especially for news web pages. Existing classification approaches seek to distinguish news web pages while also reducing the high dimensionality of the features derived from these pages. This paper focuses on the classification of news web pages based on their scarcity and importance, and establishes different models for identifying and classifying web pages. The data sets used in this paper were collected from popular news websites; specifically, we used the BBC dataset, which has five predefined categories. Initially, the input source is preprocessed and errors are eliminated. Features are then extracted from the web page text using term frequency-inverse document frequency (TF-IDF) vectorization: 2225 documents are represented with 15,286 features, the TF-IDF scores of different unigrams and bigrams. This representation is used not only for the classification task but is also helpful for analysing the dataset. Feature selection is done using the chi-squared test, which finds the terms most correlated with each category; the selected features are then passed to the classifier, which assigns a category to each web page. The results showed a high percentage of correct classifications, which reflects the effectiveness of the approach for the classification of web pages.
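
The TF-IDF plus chi-squared pipeline maps directly onto scikit-learn primitives. The sketch below uses a toy corpus in place of the BBC dataset and an arbitrary classifier, since the abstract does not name the final one.

```python
# TF-IDF unigrams/bigrams -> chi-squared feature selection -> classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["stock markets fall", "team wins final",
        "new phone released", "election results in"]
labels = ["business", "sport", "tech", "politics"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),  # unigrams and bigrams
    SelectKBest(chi2, k=10),  # keep terms most correlated with the categories
    LogisticRegression(max_iter=1000),
)
clf.fit(docs, labels)
print(clf.predict(["phone sales rise"]))
```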
17

Mani Sekhar, S. R., G. M. Siddesh, Sunilkumar S. Manvi, and K. G. Srinivasa. "Optimized Focused Web Crawler with Natural Language Processing Based Relevance Measure in Bioinformatics Web Sources." Cybernetics and Information Technologies 19, no. 2 (June 1, 2019): 146–58. http://dx.doi.org/10.2478/cait-2019-0021.

Abstract:
With the fast growth of digital technologies, crawlers and search engines face unpredictable challenges. Focused web crawlers are essential for mining the boundless data available on the internet, but they face an indeterminate latency problem due to differences in response time. The proposed work attempts to optimize the design and implementation of focused web crawlers for bioinformatics web sources using a master-slave architecture. Focused crawlers should ideally crawl only relevant pages, but the relevance of a page can only be estimated after crawling the genomics pages. A solution for predicting page relevance, based on natural language processing, is proposed in the paper: the frequency of keywords in the top-ranked sentences of the page determines its relevance within genomics sources. The proposed solution uses the TextRank algorithm to rank the sentences, as well as ensuring the correct classification of bioinformatics web pages. Finally, the model is validated by comparison with a breadth-first-search web crawler. The comparison shows a significant reduction in run time for the same harvest rate.
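
A compact TextRank-style sentence ranker is sketched below, assuming the networkx package; the paper's relevance measure additionally counts topic keywords in the top-ranked sentences.

```python
# TextRank over sentences: PageRank on a word-overlap similarity graph.
import itertools
import networkx as nx

def rank_sentences(sentences: list[str]) -> list[str]:
    def overlap(a: str, b: str) -> float:
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / (1 + len(wa | wb))
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        w = overlap(sentences[i], sentences[j])
        if w > 0:
            g.add_edge(i, j, weight=w)
    scores = nx.pagerank(g, weight="weight")
    return [sentences[i] for i in sorted(scores, key=scores.get, reverse=True)]

doc = ["Gene expression drives the analysis.",
       "The weather was fine.",
       "We analyse gene expression data."]
print(rank_sentences(doc)[0])  # the sentence most connected to the rest
```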
18

Agyapong, Kwame, J. B. Hayfron Acquah, and M. Asante. "AN OPTIMIZED PAGE RANK ALGORITHM WITH WEB MINING, WEB CONTENT MINING AND WEB STRUCTURE MINING." International Journal of Engineering Technologies and Management Research 4, no. 8 (February 1, 2020): 22–27. http://dx.doi.org/10.29121/ijetmr.v4.i8.2017.91.

Abstract:
With the rapid growth of internet technology, users easily get confused in large hypertext structures. The primary goal of a website owner is to provide relevant information to users to fulfil their needs, and web mining is used to achieve this goal. Web mining categorizes users and pages by analysing users' behaviour, the content of the pages, and the order in which URLs tend to be accessed. Most search engines rank their search results in response to users' queries to make navigation easier. With a web browser, one can view web pages that may contain text, images, videos, and other multimedia, and navigate between them via hyperlinks, but it is very difficult for a user to find the high-quality information they want. A page ranking algorithm is therefore needed that gives higher rankings to the important pages. In this paper, we discuss an improvement of the page ranking algorithm that provides higher rankings to important pages.
19

Om Prakash, P. G., K. Suresh Kumar, Balajee Maram, and C. Priya. "Deep Fuzzy Clustering and Deep Residual Network for Prediction of Web Pages from Weblog Data with Fractional Order Based Ranking." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 31, no. 03 (June 2023): 413–36. http://dx.doi.org/10.1142/s0218488523500216.

Abstract:
Web page recommendation systems have attracted increasing attention in recent decades. Web page recommendation has different characteristics from classical recommenders: it is the process of predicting the next web page a user is significantly interested in while searching the web, and it helps users find relevant pages in the field of web mining, where a user may otherwise spend considerable time identifying the expected information. To understand the behaviour of users and the pages they visit based on their interests at a specific time, an effective web page recommendation method is developed using a Deep Residual network based on the proposed Multi-Verse Sailfish Optimization (MVSFO), derived by integrating the Multi-Verse Optimizer (MVO) and the Sailfish Optimizer (SFO). The recommendation process uses weblog data and web page images. Sequential patterns are acquired from the weblog data and grouped with deep fuzzy clustering based on cosine similarity, and matching between the test pattern and the sequential patterns is performed using the Canberra distance. The recommended web pages obtained from the weblog data, together with the pages obtained from web page images using the Deep Residual network, generate the output using fractional-order-based ranking. The developed scheme achieved an F-measure of 85.30%, a precision of 86.59%, and a recall of 86.04% on the MSNBC dataset.
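
The Canberra-distance matching step is easy to illustrate with SciPy; the feature vectors below are illustrative stand-ins for encoded access patterns.

```python
# Match a test pattern to the closest stored sequential pattern by Canberra
# distance (scipy implementation; vectors are toy data).
from scipy.spatial.distance import canberra

test_pattern = [0.2, 0.8, 0.1, 0.5]
stored = {
    "news-path":   [0.1, 0.9, 0.0, 0.4],
    "sports-path": [0.9, 0.1, 0.7, 0.2],
}
best = min(stored, key=lambda k: canberra(test_pattern, stored[k]))
print(best)  # pattern closest to the test pattern -> "news-path"
```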
20

GAO, XIAOYING, MENGJIE ZHANG, and PETER ANDREAE. "AUTOMATIC PATTERN CONSTRUCTION FOR WEB INFORMATION EXTRACTION." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 12, no. 04 (August 2004): 447–70. http://dx.doi.org/10.1142/s0218488504002928.

Abstract:
This paper describes a domain independent approach for automatically constructing information extraction patterns for semi-structured web pages. Given a randomly chosen page from a web site of similarly structured pages, the system identifies a region of the page that has a regular "tabular" structure, and then infers an extraction pattern that will match the "rows" of the region and identify the data elements. The approach was tested on three corpora containing a series of tabular web sites from different domains and achieved a success rate of at least 80%. A significant strength of the system is that it can infer extraction patterns from a single training page and does not require any manual labeling of the training page.
21

Li, Xingchen, Weizhe Zhang, Desheng Wang, Bin Zhang, and Hui He. "Algorithm of web page similarity comparison based on visual block." Computer Science and Information Systems 16, no. 3 (2019): 815–30. http://dx.doi.org/10.2298/csis180915028l.

Abstract:
Phishing pages often deceive users through their close layout similarity to genuine pages and lead to considerable losses for society, so detecting phishing sites is an urgent task. By studying phishing web pages that use web page screenshots, we discovered that this kind of page uses numerous screenshots to achieve close similarity to the genuine page and to avoid text- and structure-based similarity detection. This study introduces a new similarity matching algorithm based on visual blocks. First, the RenderLayer tree of the web page is obtained to extract the visual blocks. Second, an algorithm is designed to settle the jumbled visual blocks, including deleting small visual blocks and merging overlapping ones. Finally, the similarity between the two web pages is assessed. The proposed algorithm sets different thresholds to achieve the optimal miss and false-alarm rates.
22

Sakamoto, Kohei, and Chieko Kato. "Effects on Memorized Information Quantity in Web Pages Using Bicolor Design-from the Perspective of Color Blind People and Non-Color Blind People." Indian Journal of Public Health Research & Development 11, no. 1 (January 31, 2020): 1839–43. http://dx.doi.org/10.37506/ijphrd.v11i1.1389.

Abstract:
In Japan, according to the Japanese Ophthalmological Society, color blindness affects about 5% of men and about 0.2% of women. Colorblind people often experience inconvenient situations in their lives because they cannot distinguish colors, and web pages are no exception: companies and governments use web pages to transmit information to people. In this study, we created a web page with a color scheme obtained from previous research and confirmed that colorblind people remembered its contents better after viewing it. From these results, this study analyzed the role of information transmission on the web page. The viewers' color vision and the hue of the colors on the web page had no effect on their memory, while colors of high chroma had a negative effect on their memory.
23

El Louadi, Mohamed, and Imen Ben Ali. "Perceived and Actual Web Page Loading Delay." Journal of Information Technology Research 3, no. 2 (April 2010): 50–66. http://dx.doi.org/10.4018/jitr.2010040104.

Abstract:
The major complaint users have about using the Web is that they must wait for information to load onto their screen. This is more acute in countries where bandwidth is limited and fees are high. Given bandwidth limitations, Web pages are often hard to accelerate. Predictive feedback information is assumed to distort Internet users’ perception of time, making them more tolerant of low speed. This paper explores the relationship between actual Web page loading delay and perceived Web page loading delay and two aspects of user satisfaction: the Internet user’s satisfaction with the Web page loading delay and satisfaction with the Web page displayed. It also investigates whether predictive feedback information can alter Internet user’s perception of time. The results show that, though related, perceived time and actual time differ slightly in their effect on satisfaction. In this case, it is the perception of time that counts. The results also show that the predictive feedback information displayed on the Web page has an effect on the Internet user’s perception of time, especially in the case of slow Web pages.
24

Ahmad Sabri, Ily Amalina, and Mustafa Man. "Improving Performance of DOM in Semi-structured Data Extraction using WEIDJ Model." Indonesian Journal of Electrical Engineering and Computer Science 9, no. 3 (March 1, 2018): 752. http://dx.doi.org/10.11591/ijeecs.v9.i3.pp752-763.

Abstract:
Web data extraction is the process of extracting user-required information from web pages. The information consists of semi-structured data rather than data in a structured format, and extraction involves web documents in HTML. Nowadays, most people use web data extractors because extraction involves large amounts of information, which makes manual extraction time-consuming and complicated. In this paper we present the WEIDJ approach for extracting images from the web, whose goal is to harvest images as objects from template-based HTML pages. WEIDJ (Web Extraction of Images using DOM (Document Object Model) and JSON (JavaScript Object Notation)) applies the DOM to build the structure and JSON as the programming environment. The extraction process takes as input both the web address and the structure to extract. WEIDJ then splits the DOM tree into small subtrees and applies a search algorithm over the visual blocks of each web page to find images. Our approach focuses on three levels of extraction: a single web page, multiple web pages, and the whole web site. Extensive experiments on several biodiversity web pages have been carried out to compare the time performance of image extraction using DOM, JSON, and WEIDJ for a single web page. The experimental results show that, with our model, WEIDJ image extraction can be done quickly and effectively.
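
The core harvesting idea, walking the DOM for images and serializing them as JSON, can be sketched as follows, assuming the requests and beautifulsoup4 packages; WEIDJ's subtree splitting and visual-block search are not reproduced.

```python
# DOM-based image harvesting serialized as JSON, in the spirit of WEIDJ.
import json
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def extract_images(url: str) -> str:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    records = [{"src": urljoin(url, img.get("src", "")),
                "alt": img.get("alt", "")}
               for img in soup.find_all("img")]
    return json.dumps(records, indent=2)

print(extract_images("https://example.com"))
```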
25

Khalil, Nida, Saniah Rehan, Abeer Javed Syed, Khalid Mahboob, Fayyaz Ali, and Fatima Waseem. "Optimizing the Efficiency of Web Mining through Comparative Web Ranking Algorithms." VFAST Transactions on Software Engineering 11, no. 4 (December 31, 2023): 105–23. http://dx.doi.org/10.21015/vtse.v11i4.1667.

Abstract:
Millions of web pages carrying massive amounts of data make up the World Wide Web, and real-time data is generated on a wide scale on websites. However, not every piece of data is relevant to the user: while scouring the web for information, a user may come upon a web page that contains irrelevant or incomplete information. In response, search engines can alleviate this issue by displaying the most relevant pages. Two web page ranking algorithms are proposed in this study along with the Dijkstra algorithm: the PageRank algorithm and the Weighted PageRank algorithm. These algorithms are used to evaluate a web page's importance or relevance within a network such as the Internet. PageRank evaluates a page's value based on the quantity and quality of the links leading to it; it is utilized by nearly all search engines around the world to rank web pages in order of relevance, including Google, the most widespread Internet search engine, and it carries considerable weight in web mining. The most important component of marketing is web usage mining, which investigates how people browse and operate on a company's website. The study presents two proposed models that try to optimize web links and improve the relevance of search engine results for users.
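
A small power-iteration sketch of the weighted variant is shown below; the per-link weights stand in for Weighted PageRank's in-link and out-link popularity terms rather than reproducing its exact formula.

```python
# Power-iteration PageRank with link weights (a sketch of the WPR idea).
def weighted_pagerank(links, d=0.85, iters=50):
    """links: {page: {target: weight}}; returns {page: rank}."""
    pages = set(links) | {t for outs in links.values() for t in outs}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for p, outs in links.items():
            total = sum(outs.values())
            for t, w in outs.items():
                new[t] += d * rank[p] * w / total  # share rank by link weight
        rank = new
    return rank

graph = {"a": {"b": 2, "c": 1}, "b": {"c": 1}, "c": {"a": 1}}
print(sorted(weighted_pagerank(graph).items(), key=lambda kv: -kv[1]))
```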
26

Papacharissi, Zizi. "The Presentation of Self in Virtual Life: Characteristics of Personal Home Pages." Journalism & Mass Communication Quarterly 79, no. 3 (September 2002): 643–60. http://dx.doi.org/10.1177/107769900207900307.

Abstract:
This study focused on how individuals used personal home pages to present themselves online. Content analysis was used to examine, record, and analyze the characteristics of personal home pages. Data interpretation revealed popular tools for self-presentation, a desire for virtual homesteaders to affiliate with online homestead communities, and significant relationships among home page characteristics. Web page design was influenced, to a certain extent, by the tools Web page space providers supplied. Further studies should consider personality characteristics, design templates, and Web author input to determine factors that influence self-presentation through personal home pages.
27

Kaur, Satinder, and Sunil Gupta. "PREDICTION OF DESIGN ASPECTS OF WEB PAGE BY HTML PARSER." International Journal of Engineering Technologies and Management Research 5, no. 2 (February 8, 2020): 143–58. http://dx.doi.org/10.29121/ijetmr.v5.i2.2018.157.

Abstract:
Information plays a very important role in life, and nowadays the world largely depends on the World Wide Web to obtain it. The Web comprises many websites from every discipline, and websites consist of web pages interlinked with each other by hyperlinks. The success of a website largely depends on the design aspects of its web pages, and researchers have done a lot of work to appraise web pages quantitatively. Keeping in mind the importance of the design aspects of a web page, this paper presents the design of an automated evaluation tool that evaluates these aspects for any web page. The tool takes the HTML code of the web page as input, then extracts and checks the HTML tags for uniformity. It comprises normalized modules that quantify measures of the design aspects. For validation, the tool has been applied to four web pages from distinct sites, and their design aspects have been reported for comparison. The tool will benefit web developers, who can predict the design quality of web pages and enhance it before and after implementation of a website without user interaction.
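
One design-aspect check such a tool could perform, verifying tag uniformity with the standard-library HTML parser, is sketched below; the actual tool's modules and metrics are not described in enough detail to reproduce.

```python
# Report tags opened but never closed (void elements like <img> are exempt).
from html.parser import HTMLParser

VOID = {"img", "br", "hr", "meta", "link", "input", "area", "base", "col",
        "embed", "source"}

class UniformityChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.open_tags = []
        self.problems = []
    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.open_tags.append(tag)
    def handle_endtag(self, tag):
        if tag in self.open_tags:
            self.open_tags.remove(tag)
        else:
            self.problems.append(f"stray </{tag}>")

checker = UniformityChecker()
checker.feed("<html><body><div><p>text</div></body></html>")
print(checker.open_tags)  # ['p'] -> opened but never closed
print(checker.problems)   # [] -> no stray closing tags
```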
28

Jaganathan, B., and Kalyani Desikan. "Enhanced Web Page Ranking Method Using Laplacian Centrality." International Journal of Engineering & Technology 7, no. 4.10 (October 2, 2018): 566. http://dx.doi.org/10.14419/ijet.v7i4.10.21282.

Abstract:
In today's era of computer technology, users want not only the most relevant data but also want it as quickly as possible; hence, ranking web pages becomes a crucial task. The purpose of this research is to find a centrality measure that can be used in place of the original PageRank. In this article, the concept of the Laplacian centrality measure for directed web graphs is introduced to determine web page ranks. A comparison between the original PageRank and Laplacian-centrality-based PageRank has been made, using Kendall's correlation coefficient to measure the correlation between the two rankings.
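
The Kendall comparison is a one-liner with SciPy; the two rank vectors below are toy data.

```python
# Comparing two rankings with Kendall's correlation coefficient.
from scipy.stats import kendalltau

original_pagerank = [1, 2, 3, 4, 5]  # ranks of five pages under PageRank
laplacian_rank    = [1, 3, 2, 4, 5]  # ranks under Laplacian centrality

tau, p_value = kendalltau(original_pagerank, laplacian_rank)
print(f"tau={tau:.2f}, p={p_value:.3f}")  # tau near 1 means similar rankings
```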
29

Guha, Sutirtha Kumar, Anirban Kundu, and Rana Duttagupta. "Introducing Link Based Weightage for Web Page Ranking." International Journal of Artificial Life Research 5, no. 1 (January 2015): 41–55. http://dx.doi.org/10.4018/ijalr.2015010103.

Abstract:
In this paper, the authors propose a new rank measurement technique that introduces a weightage factor based on the number of Web links available on a particular Web page. The available Web links are treated as an important indicator of importance. A distinct weightage factor is assigned to each Web page, calculated from its Web links. Different Web pages are evaluated more accurately due to the independence and uniqueness of the weightage factor, and better Web page ranking is achieved because it depends on this specific factor. The impact of unwanted intruders is also minimized by its introduction.
30

HAYASHI, Takahiro, Syo KATAHIRA, Atsushi INUZUKA, and Rikio ONAI. "Retrieval of Personal Web Pages Based on Web Page Clustering." Journal of Japan Society for Fuzzy Theory and Intelligent Informatics 18, no. 2 (2006): 161–72. http://dx.doi.org/10.3156/jsoft.18.161.

31

Massaro, Alessandro, Daniele Giannone, Vitangelo Birardi, and Angelo Maurizio Galiano. "An Innovative Approach for the Evaluation of the Web Page Impact Combining User Experience and Neural Network Score." Future Internet 13, no. 6 (May 31, 2021): 145. http://dx.doi.org/10.3390/fi13060145.

Abstract:
The proposed paper introduces an innovative methodology for assigning intelligent scores to web pages. The approach is based on the simultaneous use of User eXperience (UX), Artificial Neural Network (ANN), and Long Short-Term Memory (LSTM) algorithms, providing the web page score and taking outlier conditions into account when constructing the training dataset. Specifically, the UX tool analyses different parameters that contribute to the score, such as navigation time, number of clicks, and mouse movements per page, and finds possible outliers; the ANN predicts outliers; and the LSTM processes the web page tags together with the UX and user scores. The final web page score is assigned by the LSTM model, corrected by the UX output and improved by the navigation user score. This final score is useful for the designer, suggesting the tag typologies for structuring a new web page layout on a specific topic. By using the proposed methodology, the web designer is guided in allocating content within the web page layout. The work was developed within the framework of an industry project oriented towards the formulation of an innovative AI interface for web designers.
32

Zhang, Zuping, Jing Zhao, and Xiping Yan. "A Web Page Clustering Method Based on Formal Concept Analysis." Information 9, no. 9 (September 6, 2018): 228. http://dx.doi.org/10.3390/info9090228.

Abstract:
Web page clustering is an important technology for sorting network resources. Through extraction and clustering based on Web page similarity, a large amount of information on a Web page can be organized effectively. In this paper, after describing the extraction of Web feature words, calculation methods for weighting feature words are studied in depth. Taking Web pages as objects and Web feature words as attributes, a formal context is constructed for formal concept analysis. An algorithm for constructing a concept lattice based on cross data links is proposed and successfully applied. This method can be used to cluster Web pages using the concept lattice hierarchy. Experimental results indicate that the proposed algorithm outperforms previous competitors with regard to time consumption and clustering quality.
33

Abdulrahman, Ayad. "Web Pages Ranking Algorithms: A Survey." Qubahan Academic Journal 1, no. 3 (July 1, 2021): 29–34. http://dx.doi.org/10.48161/qaj.v1n3a79.

Abstract:
Due to the daily expansion of the web, the amount of information has increased significantly, and with it the need for retrieving relevant information. To explore the internet, users depend on various search engines, which face a significant challenge in returning the most relevant results for a user's query. A search engine's performance is determined by the algorithm used to rank web pages, which prioritizes the most relevant pages so that they appear at the top of the result page. In this paper, various web page ranking algorithms such as PageRank, Time Rank, EigenRumor, Distance Rank, SimRank, etc. are analyzed and compared based on several parameters: the mining technique the algorithm belongs to (for instance, Web content mining, Web structure mining, or Web usage mining), the methodology used for ranking web pages, time complexity (the amount of time to run the algorithm), input parameters (parameters utilized in the ranking process such as InLink, OutLink, tag name, or keyword), and the relevance of the results to the user query.
34

Frikh, Bouchra, and Brahim Ouhbi. "Web Algorithms for Information Retrieval." International Journal of Mobile Computing and Multimedia Communications 6, no. 1 (January 2014): 1–16. http://dx.doi.org/10.4018/ijmcmc.2014010101.

Abstract:
The World Wide Web has emerged to become the biggest and most popular medium of communication and information dissemination. The Web is expanding every day, and people generally rely on search engines to explore it. Because of its rapid and chaotic growth, the resulting network of information lacks organization and structure, and it is a challenge for service providers to deliver proper, relevant, quality information to internet users by using web page contents and the hyperlinks between web pages. This paper analyses and compares web page ranking algorithms based on various parameters to find out their advantages and limitations and to indicate the further scope for research on web page ranking algorithms. Six important algorithms are presented and their performances discussed: PageRank, Query-Dependent PageRank, HITS, SALSA, Simultaneous Terms Query-Dependent PageRank (SQD-PageRank), and Onto-SQD-PageRank.
35

Van Horn, Royal. "Web Page Accessibility." Phi Delta Kappan 85, no. 2 (October 2003): 103–73. http://dx.doi.org/10.1177/003172170308500204.

36

Hyman, William A. "WEB PAGE REVIEWS." Journal of Clinical Engineering 22, no. 4 (July 1997): 209. http://dx.doi.org/10.1097/00004669-199707000-00008.

37

Hyman, William A. "WEB PAGE REVIEWS." Journal of Clinical Engineering 23, no. 1 (January 1998): 12. http://dx.doi.org/10.1097/00004669-199801000-00008.

38

Hyman, William A. "WEB PAGE REVIEWS." Journal of Clinical Engineering 23, no. 3 (May 1998): 155–57. http://dx.doi.org/10.1097/00004669-199805000-00012.

39

Hyman, William A. "WEB PAGE REVIEWS." Journal of Clinical Engineering 24, no. 1 (January 1999): 25–26. http://dx.doi.org/10.1097/00004669-199901000-00017.

40

Doyle, D. John. "Web page review." Canadian Journal of Anesthesia/Journal canadien d'anesthésie 48, no. 1 (January 2001): 99–100. http://dx.doi.org/10.1007/bf03019824.

41

Drife, James. "The web page." Obstetrician & Gynaecologist 2, no. 2 (April 2000): 56. http://dx.doi.org/10.1576/toag.2000.2.2.56.

42

Qi, Xiaoguang, and Brian D. Davison. "Web page classification." ACM Computing Surveys 41, no. 2 (February 2009): 1–31. http://dx.doi.org/10.1145/1459352.1459357.

43

Xing-Hua, Lu, Ye Wen-Quan, and Liu Ming-Yuan. "Personalized Recommendation Algorithm for Web Pages Based on Associ ation Rule Mining." MATEC Web of Conferences 173 (2018): 03020. http://dx.doi.org/10.1051/matecconf/201817303020.

Abstract:
In order to improve users' ability to access websites and web pages, a personalized recommendation design is carried out according to the interest preferences of the user, and a personalized recommendation model for web page visits is established to meet the user's personalized interests when browsing the web. A personalized web page recommendation algorithm based on association rule mining is proposed. Based on the semantic features of web pages, user browsing behavior is modelled by similarity computation, and a web crawler algorithm is constructed to extract the semantic features of web pages. An autocorrelation matching method is used to match web page features with user browsing behavior, and the association rule features of users' website browsing behavior are mined. According to the semantic relevance and semantic information of users' search words, fuzzy matching is applied and personalized Web recommendations are obtained that meet users' browsing needs. The simulation results show that the method is accurate and that user satisfaction is high.
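
A bare-bones support/confidence rule miner over browsing sessions is sketched below; the sessions and thresholds are illustrative assumptions, and the paper's semantic-feature matching is not reproduced.

```python
# Mine pairwise association rules (X -> Y) from page-visit sessions.
from collections import Counter
from itertools import combinations

sessions = [{"home", "news", "sport"}, {"home", "news"}, {"home", "shop"},
            {"news", "sport"}, {"home", "news", "shop"}]

def rules(sessions, min_support=0.4, min_conf=0.6):
    n = len(sessions)
    pair_counts, item_counts = Counter(), Counter()
    for s in sessions:
        item_counts.update(s)
        pair_counts.update(combinations(sorted(s), 2))
    out = []
    for (a, b), c in pair_counts.items():
        if c / n >= min_support:
            for x, y in ((a, b), (b, a)):
                conf = c / item_counts[x]
                if conf >= min_conf:
                    out.append((x, y, c / n, conf))
    return out

for lhs, rhs, sup, conf in rules(sessions):
    print(f"{lhs} -> {rhs}  support={sup:.2f} confidence={conf:.2f}")
```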
44

Zhou, Junzan, Yun Zhang, Bo Zhou, and Shanping Li. "Predicting web page performance level based on web page characteristics." International Journal of Web Engineering and Technology 10, no. 2 (2015): 152. http://dx.doi.org/10.1504/ijwet.2015.072338.

45

Lu, Houqing, Donghui Zhan, Lei Zhou, and Dengchao He. "An Improved Focused Crawler: Using Web Page Classification and Link Priority Evaluation." Mathematical Problems in Engineering 2016 (2016): 1–10. http://dx.doi.org/10.1155/2016/6406901.

Abstract:
A focused crawler is topic-specific and aims to selectively collect web pages relevant to a given topic from the Internet. However, the performance of current focused crawling can easily suffer from the environments of web pages and from multi-topic web pages: in the crawling process, a highly relevant region may be ignored owing to the low overall relevance of its page, and anchor text or link context may misguide crawlers. In order to solve these problems, this paper proposes a new focused crawler. First, we build a web page classifier based on an improved term weighting approach (ITFIDF) in order to obtain highly relevant web pages. In addition, the paper introduces a link evaluation approach, link priority evaluation (LPE), which combines a web page content block partition algorithm with a joint feature evaluation (JFE) strategy to better judge the relevance between URLs on a web page and the given topic. The experimental results demonstrate that the classifier using ITFIDF outperforms TFIDF, and that our focused crawler is superior to focused crawlers based on breadth-first, best-first, anchor text only, link-context only, and content block partition in terms of harvest rate and target recall. In conclusion, our methods are significant and effective for focused crawling.
46

Prieto, Víctor, Manuel Álvarez, Víctor Carneiro, and Fidel Cacheda. "Distributed and collaborative Web Change Detection system." Computer Science and Information Systems 12, no. 1 (2015): 91–114. http://dx.doi.org/10.2298/csis131120081p.

Abstract:
Search engines use crawlers to traverse the Web in order to download web pages and build their indexes. Maintaining these indexes up to date is an essential task for ensuring the quality of search results. However, changes in web pages are unpredictable, and identifying the moment a web page changes, as soon as possible and with minimal computational cost, is a major challenge. In this article we present the Web Change Detection system, which, in the best-case scenario, can detect almost in real time when a web page changes. In the worst-case scenario, it requires, on average, 12 minutes to detect a change on a web site with low PageRank and about one minute on a web site with high PageRank. Meanwhile, current search engines require more than a day, on average, to detect a modification to a web page in both cases.
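
The core "has this page changed?" check behind such a system can be sketched with content hashing, assuming the requests package; the distributed, collaborative machinery described above is not reproduced.

```python
# Polling change detector: fingerprint each page body and compare versions.
import hashlib

import requests

def fingerprint(url: str) -> str:
    body = requests.get(url, timeout=10).content
    return hashlib.sha256(body).hexdigest()

seen: dict[str, str] = {}

def changed(url: str) -> bool:
    fp = fingerprint(url)
    if seen.get(url) == fp:
        return False
    seen[url] = fp  # record the new version
    return True

print(changed("https://example.com"))  # True on first sight
print(changed("https://example.com"))  # False unless the page changed
```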
47

Scott, S. D., and Y. H. Koh. "Design Metrics and the Adaptation of Web-Page Content Chunks for PDAs." Journal of IT in Asia 1, no. 1 (July 21, 2017): 35–51. http://dx.doi.org/10.33736/jita.404.2005.

Abstract:
The majority of web-pages are unsuitable for viewing on PDAs, WAP phones, and similar devices without first being adapted. However, little empirical work has been done on what actually constitutes a good PDA or WAP web-page. This paper ranks a number of PDA web-pages from different categories empirically and correlates the results against the design metrics present. The findings are then compared against a similar set of experiments for PC web-pages. The results of this comparison suggest that, as well as omitting, summarizing, and converting individual multimedia objects in the web-page to a less resource-intensive form, the design metrics need to be changed during adaptation to enhance the presentation of web content on non-PC devices. The paper concludes by investigating the effect of applying some suitable changes to the design metrics of web-page content chunks, which form the basic units in automatic content adaptation systems.
48

Chen, Xue, Fang Tao, and Wu Chao. "Indexing Associated Knowledge Flow on the Web." Advanced Engineering Forum 1 (September 2011): 305–9. http://dx.doi.org/10.4028/www.scientific.net/aef.1.305.

Abstract:
An Associated Knowledge Flow (AKF) on the Web is an ordered sequence of Web pages that stand in an association relation, where the relation from page A to page B indicates that users who have browsed page A are likely to also browse page B. The motivation of this paper is to index the AKFs on the Web and provide users with AKFs instead of discrete resources. We build a scalable P2P-based Web resource-sharing system and design two kinds of ID spaces on it (a hash ID space and a semantic ID space) to index resources and facilitate AKF discovery. Theoretical analysis and simulations show that such a system can achieve logarithmic performance and cost.
49

Shen, Qi, Qing Ming Song, and Bo Chen. "Research of the Web Information Extraction Technology on Tourism Theme." Applied Mechanics and Materials 614 (September 2014): 503–6. http://dx.doi.org/10.4028/www.scientific.net/amm.614.503.

Abstract:
With the development of web technology, the use of dynamic web pages and the personalization of page content have become more and more popular. Page information is now protean, and the structures of different pages vary greatly, so traditional web information extraction techniques have difficulty adapting to the situation. This paper proposes a web information extraction method based on an extended XPath policy, developed through analysis of the structural features of tourism-themed web pages. The algorithm avoids the defects of traditional web information extraction technology: it is simple and practical, with high cleaning efficiency and accuracy, and it saves system overhead.
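
Plain XPath extraction with lxml is sketched below on a hypothetical tourism listing; the paper's extended-XPath policy is not reproduced.

```python
# XPath-based extraction from a tourism-style listing page.
from lxml import html

page = """
<div class="tours">
  <div class="tour"><h3>Old Town Walk</h3><span class="price">$20</span></div>
  <div class="tour"><h3>River Cruise</h3><span class="price">$45</span></div>
</div>
"""
tree = html.fromstring(page)
for tour in tree.xpath("//div[@class='tour']"):
    name = tour.xpath("./h3/text()")[0]
    price = tour.xpath("./span[@class='price']/text()")[0]
    print(name, price)
```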
50

Priya, V. Banu, T. Meyyapan, and S. M. Thamarai. "Page Ranking Algorithm for Ranking Web Pages." International Journal of Computer Sciences and Engineering 6, no. 7 (July 31, 2018): 1502–5. http://dx.doi.org/10.26438/ijcse/v6i7.15021505.
