
Journal articles on the topic "Huge Pages"

Create a correct reference in APA, MLA, Chicago, Harvard and many other citation styles.

Consult the top 50 scholarly journal articles on the topic "Huge Pages".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, where these details are available in the metadata.

Browse journal articles from a wide range of disciplines and compile an accurate bibliography.

1

Panwar, Ashish, Aravinda Prasad and K. Gopinath. "Making Huge Pages Actually Useful". ACM SIGPLAN Notices 53, no. 2 (30.11.2018): 679–92. http://dx.doi.org/10.1145/3296957.3173203.

2

Al-Kabi, Mohammed, Heider Wahsheh, Izzat Alsmadi, Emad Al-Shawakfa, Abdullah Wahbeh and Ahmed Al-Hmoud. "Content-based analysis to detect Arabic web spam". Journal of Information Science 38, no. 3 (19.04.2012): 284–96. http://dx.doi.org/10.1177/0165551512439173.

Abstract:
Search engines are important outlets for information query and retrieval. They have to deal with the continual increase of information available on the web, and provide users with convenient access to such huge amounts of information. Furthermore, with this huge amount of information, a more complex challenge that continuously gets more and more difficult to illuminate is the spam in web pages. For several reasons, web spammers try to intrude in the search results and inject artificially biased results in favour of their websites or pages. Spam pages are added to the internet on a daily basis, thus making it difficult for search engines to keep up with the fast-growing and dynamic nature of the web, especially since spammers tend to add more keywords to their websites to deceive the search engines and increase the rank of their pages. In this research, we have investigated four different classification algorithms (naïve Bayes, decision tree, SVM and K-NN) to detect Arabic web spam pages, based on content. The three groups of datasets used, with 1%, 15% and 50% spam contents, were collected using a crawler that was customized for this study. Spam pages were classified manually. Different tests and comparisons have revealed that the Decision Tree was the best classifier for this purpose.
3

Feliu, Josué, Julio Sahuquillo, Salvador Petit and José Duato. "Using Huge Pages and Performance Counters to Determine the LLC Architecture". Procedia Computer Science 18 (2013): 2557–60. http://dx.doi.org/10.1016/j.procs.2013.05.440.

4

Hioual, Ouided, Sofiane Mounine Hemam, Ouassila Hioual and Lyes Maif. "A Hybrid Approach for Web Pages Classification". Ingénierie des systèmes d information 27, no. 5 (31.10.2022): 747–55. http://dx.doi.org/10.18280/isi.270507.

Abstract:
Currently, the internet is growing at an exponential rate and can cover just some required data. However, the immense amount of web pages makes the discovery of the target data more difficult for the user. Therefore, an efficient method to classify this huge amount of data is essential where web pages can be exploited to their full potential. In this paper, we propose an approach to classify Web pages based on their textual content. This approach is based on an unsupervised statistical technique (TF-IDF) for keyword extraction (textual content) combined with a supervised machine learning approach, namely recurrent neural networks.
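To make the TF-IDF step described above concrete, here is a minimal Python sketch of term weighting over two invented page texts (the example pages, parameter choices and the scikit-learn vectorizer are illustrative assumptions, not details taken from the paper); the top-weighted terms per page would then feed the supervised recurrent classifier the abstract mentions.

# Illustrative sketch only: TF-IDF keyword extraction from hypothetical page texts.
from sklearn.feature_extraction.text import TfidfVectorizer

pages = [
    "huge pages reduce tlb misses in large memory workloads",
    "classifying web pages from their textual content with neural networks",
]

vectorizer = TfidfVectorizer(stop_words="english")
weights = vectorizer.fit_transform(pages)          # rows = pages, columns = terms
terms = vectorizer.get_feature_names_out()

for row in weights.toarray():
    top_terms = [terms[i] for i in row.argsort()[::-1][:3]]
    print(top_terms)                               # top-3 keywords for each page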
5

Kumar, Santosh, and Ravi Kumar. "WDPMA". International Journal of Information Technology and Web Engineering 16, no. 2 (April 2021): 1–24. http://dx.doi.org/10.4018/ijitwe.2021040101.

Abstract:
The internet is very huge in size and increasing exponentially. Finding any relevant information from such a huge information source is now becoming very difficult. Millions of web pages are returned in response to a user's ordinary query. Displaying these web pages without ranking makes it very challenging for the user to find the relevant results of a query. This paper has proposed a novel approach that utilizes web content, usage, and structure data to prioritize web documents. The proposed approach has applications in several major areas like web personalization, adaptive website development, recommendation systems, search engine optimization, business intelligence solutions, etc. Further, the proposed approach has been compared experimentally by other approaches, WDPGA, WDPSA, and WDPII, and it has been observed that with a little trade off time, it has an edge over these approaches.
6

Ahamed, B. Bazeer, D. Yuvaraj, S. Shitharth, Olfat M. Mizra, Aisha Alsobhi and Ayman Yafoz. "An Efficient Mechanism for Deep Web Data Extraction Based on Tree-Structured Web Pattern Matching". Wireless Communications and Mobile Computing 2022 (27.05.2022): 1–10. http://dx.doi.org/10.1155/2022/6335201.

Abstract:
The World Wide Web comprises of huge web databases where the data are searched using web query interface. Generally, the World Wide Web maintains a set of databases to store several data records. The distinct data records are extracted by the web query interface as per the user requests. The information maintained in the web database is hidden and retrieves deep web content even in dynamic script pages. In recent days, a web page offers a huge amount of structured data and is in need of various web-related latest applications. The challenge lies in extracting complicated structured data from deep web pages. Deep web contents are generally accessed by the web queries, but extracting the structured data from the web database is a complex problem. Moreover, making use of such retrieved information in combined structures needs significant efforts. No further techniques are established to address the complexity in data extraction of deep web data from various web pages. Despite the fact that several ways for deep web data extraction are offered, very few research address template-related issues at the page level. For effective web data extraction with a large number of online pages, a unique representation of page generation using tree-based pattern matches (TBPM) is proposed. The performance of the proposed technique TBPM is compared to that of existing techniques in terms of relativity, precision, recall, and time consumption. The performance metrics such as high relativity is about 17-26% are achieved when compared to FiVaTech approach.
7

Suleymanzade, Suleyman, and Fargana Abdullayeva. "Full Content-based Web Page Classification Methods by using Deep Neural Networks". Statistics, Optimization & Information Computing 9, no. 4 (30.07.2021): 963–73. http://dx.doi.org/10.19139/soic-2310-5070-1056.

Abstract:
The quality of the web page classification process has a huge impact on information retrieval systems. In this paper, we proposed to combine the results of text and image data classifiers to get an accurate representation of the web pages. To get and analyse the data we created the complicated classifier system with data miner, text classifier, and aggregator. The process of image and text data classification has been achieved by the deep learning models. In order to represent the common view onto the web pages, we proposed three aggregation techniques that combine the data from the classifiers.
8

T, Anuradha, and Tayyaba Nousheen. "Machine Learning Based Search Engine with Crawling, Indexing and Ranking". International Journal of Computer Science and Mobile Computing 10, no. 7 (30.07.2021): 76–83. http://dx.doi.org/10.47760/ijcsmc.2021.v10i07.011.

Abstract:
The web is a huge and ever-growing collection of sources of data. Search engines are used for retrieving information from the World Wide Web (WWW). Search engines are helpful for searching user keywords and provide accurate results in a fraction of a second. This paper proposes a machine learning based search engine that returns more relevant web pages for user searches. The search engine plays the major role of the basic interface for displaying results for the query entered by the user. Every site comprises many web pages that are created and deployed on the server.
9

Pronskikh, A. A., and Y. S. Fedorov. "Report on the IV Plenum of the Association of Traumatologists and Orthopedists of Russia and the conference "Diagnostics and Treatment of Polytraumas"". N.N. Priorov Journal of Traumatology and Orthopedics 6, no. 4 (15.11.1999): 72–74. http://dx.doi.org/10.17816/vto105624.

Abstract:
The problem of polytrauma is increasingly becoming the subject of discussion in various medical forums and pages of special publications. However, many of its questions remain unresolved. This dictates the need to identify priority areas, the development of which will help a huge army of practitioners who constantly face serious injuries in their work.
10

Deshmukh, Shilpa, et al. "Efficient Methodology for Deep Web Data Extraction". Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 1S (11.04.2021): 286–93. http://dx.doi.org/10.17762/turcomat.v12i1s.1769.

Abstract:
Deep Web contents are accessed through queries submitted to Web databases, and the returned data records are wrapped in dynamically generated Web pages (called deep Web pages in this paper). Extracting structured data from deep Web pages is a challenging problem because of the complex underlying structures of such pages. Until now, a large number of techniques have been proposed to address this problem, but all of them have inherent limitations because they depend on the programming language of the Web page. As a popular two-dimensional medium, the content on Web pages is always displayed regularly for users to read. This motivates us to seek a different route for deep Web data extraction that overcomes the limitations of previous work by exploiting some interesting common visual features of deep Web pages. In this paper, a novel vision-based methodology, the Visual Based Deep Web Data Extraction (VBDWDE) algorithm, is proposed. This methodology mainly uses the visual features of deep Web pages to perform deep Web data extraction, including data record extraction and data item extraction. We additionally propose a new evaluation measure, revision, to capture the amount of human effort required to produce perfect extraction. Our experiments on a large set of Web databases show that the proposed vision-based methodology is highly effective for deep Web data extraction.
11

Claridge, Andrew, and David Lloyd. "Gastrointestinal Emergencies". Acute Medicine Journal 8, no. 2 (1.04.2009): 90. http://dx.doi.org/10.52964/amja.0242.

Abstract:
Gastrointestinal Emergencies describes itself as “the definitive reference guide for the management of gastrointestinal emergencies and endoscopic complications”, The book covers the huge topic of acute gastroenterology in a succinct and easy to read format. At just over 200 pages it covers a lot more than what you might expect and makes for easy and enjoyable reading.
12

Palshin, N. A. "Memory Pages. Leonid Lvovich Vanyan (1932–2001)". Journal of Oceanological Research 50, no. 1 (28.04.2022): 100–107. http://dx.doi.org/10.29006/1564-2291.jor-2022.50(1).9.

Abstract:
March 2022 marks the 90th anniversary of the birth of the outstanding Soviet and Russian geophysicist, Professor Leonid Lvovich Vanyan. L. L. Vanyan was the author of bright scientific ideas and classic books, a talented scientist who brought up more than dozens followers. The works of L. L. Vanyan, together with the works of M. N. Berdichevsky, made a huge contribution to the theory of electromagnetic sounding, developing the fundamental works of the founders of electromagnetic methods – V. R. Bursian, A. P. Kraev, S. M. Sheinman, A. S. Semenov, L. M. Alpin. Most of the modern technologies and methods of electromagnetic studies of the deep structure of the Earth, including seafloor electromagnetic sounding, are based on this theoretical basis.
13

Sumathi, G., S. Sendhilkumar and G. S. Mahalakshmi. "Ranking Pages of Clustered Users using Weighted Page Rank Algorithm with User Access Period". International Journal of Intelligent Information Technologies 11, no. 4 (October 2015): 16–36. http://dx.doi.org/10.4018/ijiit.2015100102.

Abstract:
The World Wide Web comprises billions of web pages and a tremendous amount of information accessible inside web pages. To retrieve the required information from the World Wide Web, search engines perform a number of tasks according to their respective architecture. When a user submits a query to the search engine, it typically returns a large number of pages related to the user's query. To help users navigate the returned list, different ranking techniques are applied to the search results. Most of the ranking algorithms described in the related work are either link or content based, and the existing works do not consider user access patterns. In this paper, a page ranking approach, the Weighted Page Rank Score Algorithm with user access, is proposed for search engines; it builds on the weighted page rank method and takes the user access period of web pages into account. For this purpose, web users are clustered using the Particle Swarm Optimization (PSO) approach. From those groups, pages are ranked by extending the weighted page rank approach with a usage-based parameter, the user access period. This algorithm is used to find more relevant pages according to the user's query. In this way, the idea is very helpful for showing the most important pages at the top of the search list based on user searching behaviour, which shrinks the search space on a huge scale.
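As background for this entry, a minimal power-iteration sketch of a PageRank variant whose transition weights are scaled by a usage signal could look like the Python below; the link matrix, the access-period weights and the damping factor are illustrative assumptions, not the authors' exact formulation.

import numpy as np

def usage_weighted_pagerank(links, access_period, d=0.85, iters=100):
    """links[i, j] = 1 if page j links to page i; access_period[i] is a
    normalised measure of how long users spend on page i (assumed input)."""
    n = links.shape[0]
    weighted = links * access_period[:, None]      # favour links into well-used pages
    col_sums = weighted.sum(axis=0)
    col_sums[col_sums == 0] = 1.0                  # dangling pages: avoid division by zero
    transition = weighted / col_sums               # column-stochastic transition matrix
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1.0 - d) / n + d * (transition @ rank)
    return rank

# Tiny example: three mutually linked pages, page 2 is browsed the longest.
links = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
print(usage_weighted_pagerank(links, np.array([0.2, 0.3, 0.5])))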
14

Wang, Wei-Neng, Kai Ni, Jian-She Ma, Yi Zhao, Zong-Chao Wang and Long-Fa Pan. "An Efficient Dynamic Wear Leveling for Huge-Capacity Flash Storage Systems with Cache". Journal of Circuits, Systems and Computers 21, no. 04 (June 2012): 1250030. http://dx.doi.org/10.1142/s0218126612500302.

Abstract:
Flash memory won its edge than other storage media for its advantages, such as shock resistance, low power consumption and high data transmission speed. However, new data is written out-of-place due to the characteristics of flash memory, which is diverse from traditional magnetic media. Out-of-place update results in the wear-leveling issue over flash memory for erasing blocks to reclaim invalid pages. This paper proposed a dynamic wear (DW)-leveling design without substantially increasing overhead and without modifying Flash Translation Layer (FTL) for huge-capacity flash storage systems with cache, which is based on segmentation threshold and Least Recently Used (LRU). Experimental results show that our design levels the wear of different physical blocks, reduces extra page coping and block erasing, and improves the read/write performance. Additionally, different thresholds impacting wear leveling are also discussed.
15

Almukhtar, Firas, Nawzad Mahmoodd and Shahab Kareem. "Search Engine Optimization: A Review". Applied Computer Science 17, no. 1 (30.03.2021): 70–80. http://dx.doi.org/10.35784/acs-2021-07.

Abstract:
The Search Engine has a critical role in presenting the correct pages to the user because of the availability of a huge number of websites, Search Engines such as Google use the Page Ranking Algorithm to rate web pages according to the nature of their content and their existence on the world wide web. SEO can be characterized as methodology used to elevate site keeping in mind the end goal to have a high rank i.e., top outcome. In this paper the authors present the most search engine optimization like (Google, Bing, MSN, Yahoo, etc.), and compare by the performance of the search engine optimization. The authors also present the benefits, limitation, challenges, and the search engine optimization application in business.
16

Saraç, Esra, and Selma Ayşe Özel. "An Ant Colony Optimization Based Feature Selection for Web Page Classification". Scientific World Journal 2014 (2014): 1–16. http://dx.doi.org/10.1155/2014/649260.

Abstract:
The increased popularity of the web has caused the inclusion of huge amount of information to the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines’ performance. Web pages have a large number of features such as HTML/XML tags, URLs, hyperlinks, and text contents that should be considered during an automated classification process. The aim of this study is to reduce the number of features to be used to improve runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k-nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both accuracy and runtime performance of classification. We also showed that the proposed ACO based algorithm can select better features with respect to the well-known information gain and chi square feature selection methods.
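For readers unfamiliar with the idea, a heavily simplified Python sketch of ant-colony-driven feature selection follows; the pheromone update rule, the fixed subset size and the naive Bayes evaluator are assumptions made for illustration, and the paper's actual ACO design is more elaborate.

import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

def aco_feature_selection(X, y, n_ants=10, n_iters=20, subset_size=30, rho=0.1, seed=0):
    """X: dense, non-negative term-frequency matrix (pages x features); y: class labels."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    pheromone = np.ones(n_features)                    # one pheromone value per feature
    best_subset, best_score = None, -np.inf
    for _ in range(n_iters):
        for _ in range(n_ants):
            probs = pheromone / pheromone.sum()
            subset = rng.choice(n_features, size=subset_size, replace=False, p=probs)
            score = cross_val_score(MultinomialNB(), X[:, subset], y, cv=3).mean()
            if score > best_score:
                best_subset, best_score = subset, score
            pheromone[subset] += score                 # reinforce features this ant used
        pheromone *= (1.0 - rho)                       # pheromone evaporation
    return best_subset, best_score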
17

Abbasi, Burhan Ud Din, Iram Fatima, Hamid Mukhtar, Sharifullah Khan, Abdulaziz Alhumam and Hafiz Farooq Ahmad. "Autonomous schema markups based on intelligent computing for search engine optimization". PeerJ Computer Science 8 (8.12.2022): e1163. http://dx.doi.org/10.7717/peerj-cs.1163.

Abstract:
With advances in artificial intelligence and semantic technology, search engines are integrating semantics to address complex search queries to improve the results. This requires identification of well-known concepts or entities and their relationship from web page contents. But the increase in complex unstructured data on web pages has made the task of concept identification overly complex. Existing research focuses on entity recognition from the perspective of linguistic structures such as complete sentences and paragraphs, whereas a huge part of the data on web pages exists as unstructured text fragments enclosed in HTML tags. Ontologies provide schemas to structure the data on the web. However, including them in the web pages requires additional resources and expertise from organizations or webmasters and thus becoming a major hindrance in their large-scale adoption. We propose an approach for autonomous identification of entities from short text present in web pages to populate semantic models based on a specific ontology model. The proposed approach has been applied to a public dataset containing academic web pages. We employ a long short-term memory (LSTM) deep learning network and the random forest machine learning algorithm to predict entities. The proposed methodology gives an overall accuracy of 0.94 on the test dataset, indicating a potential for automated prediction even in the case of a limited number of training samples for various entities, thus, significantly reducing the required manual workload in practical applications.
18

Gu, Wei, and Lian Jun Chen. "User Behavior Prediction Analysis Based on Time Context". Advanced Materials Research 989-994 (July 2014): 4920–25. http://dx.doi.org/10.4028/www.scientific.net/amr.989-994.4920.

Abstract:
Analysis of user behavior management control has important significance for network users. This paper uses the clustering algorithm to cluster the user behavior for clicking certain software or Website in different times, through the time context of user behavior to analyze user behavior rules. Additionally, this paper analyzes the clustering result in detail, then, divides user behavior into different types. Finally, according to the clustering result, more targeted pages and applications are recommended to the network users to create huge business value.
19

Basaligheh, Parvaneh. "Mining Of Deep Web Interfaces Using Multi Stage Web Crawler". International Journal of New Practices in Management and Engineering 9, no. 04 (31.12.2020): 11–16. http://dx.doi.org/10.17762/ijnpme.v9i04.91.

Abstract:
As the deep web grows at a very high speed, there has been increased interest in techniques that help efficiently locate deep-web interfaces. However, due to the huge volume of web resources and the dynamic nature of the deep web, achieving wide coverage and high efficiency is a difficult issue. This project proposes a three-stage framework for efficiently harvesting deep web interfaces. In the first stage, the web crawler performs site-based searching for center pages with the help of search engines, avoiding visits to an enormous number of pages. To achieve more accurate results for a focused crawl, the web crawler ranks websites to prioritise highly relevant ones for a given topic. In the second stage, the proposed framework opens the web pages inside the application with the help of the Jsoup API and preprocesses them; it then counts the occurrences of the query words in the web pages. In the third stage, the proposed framework performs frequency analysis based on TF and IDF, and uses a combined TF*IDF score for ranking web pages. To eliminate bias towards visiting some highly relevant links in hidden web directories, we propose a link tree data structure to achieve wider coverage of a website. Experimental results on a set of representative domains show the agility and accuracy of our proposed crawler framework, which efficiently retrieves deep-web interfaces from large-scale sites and achieves higher harvest rates than other crawlers using a naive Bayes algorithm.
20

Önder, Irem, Ulrich Gunter and Stefan Gindl. "Utilizing Facebook Statistics in Tourism Demand Modeling and Destination Marketing". Journal of Travel Research 59, no. 2 (18.03.2019): 195–208. http://dx.doi.org/10.1177/0047287519835969.

Abstract:
Facebook is a popular social media platform used by both the demand and the supply sides of the tourism industry. Since there is a huge amount of information on the Internet, which can lead to information overload, individuals tend to apply the principle of least effort in attempting to obtain useful information as quickly and easily as possible. One of the easiest ways to retrieve travel information is by visiting the Facebook pages of destinations. This study investigates the foundations of the usefulness of Facebook Statistics: in particular of likes on DMO Facebook pages as a potential predictor of tourism demand, in addition to previous arrival numbers. In- and out-of-sample results show that the DMOs of Graz, Innsbruck, Salzburg, and Vienna can already utilize likes as an expedient leading indicator for demand, albeit not the only one. These findings are recommended to be incorporated into the DMOs’ marketing efforts.
21

Hoang, Xuan Dau, and Ngoc Tuong Nguyen. "Detecting Website Defacements Based on Machine Learning Techniques and Attack Signatures". Computers 8, no. 2 (8.05.2019): 35. http://dx.doi.org/10.3390/computers8020035.

Abstract:
Defacement attacks have long been considered one of prime threats to websites and web applications of companies, enterprises, and government organizations. Defacement attacks can bring serious consequences to owners of websites, including immediate interruption of website operations and damage of the owner reputation, which may result in huge financial losses. Many solutions have been researched and deployed for monitoring and detection of website defacement attacks, such as those based on checksum comparison, diff comparison, DOM tree analysis, and complicated algorithms. However, some solutions only work on static websites and others demand extensive computing resources. This paper proposes a hybrid defacement detection model based on the combination of the machine learning-based detection and the signature-based detection. The machine learning-based detection first constructs a detection profile using training data of both normal and defaced web pages. Then, it uses the profile to classify monitored web pages into either normal or attacked. The machine learning-based component can effectively detect defacements for both static pages and dynamic pages. On the other hand, the signature-based detection is used to boost the model’s processing performance for common types of defacements. Extensive experiments show that our model produces an overall accuracy of more than 99.26% and a false positive rate of about 0.27%. Moreover, our model is suitable for implementation of a real-time website defacement monitoring system because it does not demand extensive computing resources.
22

Chan, Marty. "Phoenix Rising". Canadian Theatre Review 136 (September 2008): 26–29. http://dx.doi.org/10.3138/ctr.136.005.

Abstract:
What started as a workshop of a play about a Chinatown fire was now ember and ashes, after the wildfire of dramaturgical questions from my actors and director. When a playwright neglects to fireproof his script, he's asking for trouble, and I had left a huge fire hazard in my script. I had tried to write a history lesson rather than a dramatic play and so, on the first day of the workshop, my cast and crew exposed the dry pages and frayed wires and sparked a four-alarm fire.
23

Gaikwad, Suchetadevi M., and Sanjay B. Thakare. "Enhanced Crawler with Multiple Search Techniques using Adaptive Link-Ranking and Pre-Query Processing". Circulation in Computer Science 1, no. 1 (24.08.2016): 40–44. http://dx.doi.org/10.22632/ccs-2016-251-24.

Abstract:
As the deep web enlarges, there has been increased interest in methods that help efficiently trace deep-web interfaces. However, because of the huge volume and varying nature of the deep web, achieving wide coverage and high efficiency is a difficult issue. We propose a three-stage framework, an Enhanced Crawler, for efficiently gathering deep web interfaces. In the first stage, the enhanced crawler performs site-based searching for center pages using automated search engines, avoiding visits to an oversized number of pages and saving time. In the second stage, the enhanced crawler achieves fast in-site browsing by fetching the most relevant links with adaptive link ranking. For further enhancement, our system ranks and prioritises websites and also uses a link tree data structure to achieve deep coverage. In the third stage, our system provides a pre-query processing mechanism to help users write their search query easily by offering character-by-character keyword search with ranked indexing.
24

Takale, Sheetal A., Prakash J. Kulkarni and Sahil K. Shah. "An Intelligent Web Search Using Multi-Document Summarization". International Journal of Information Retrieval Research 6, no. 2 (April 2016): 41–65. http://dx.doi.org/10.4018/ijirr.2016040103.

Abstract:
Information available on the internet is huge, diverse and dynamic. Current Search Engine is doing the task of intelligent help to the users of the internet. For a query, it provides a listing of best matching or relevant web pages. However, information for the query is often spread across multiple pages which are returned by the search engine. This degrades the quality of search results. So, the search engines are drowning in information, but starving for knowledge. Here, we present a query focused extractive summarization of search engine results. We propose a two level summarization process: identification of relevant theme clusters, and selection of top ranking sentences to form summarized result for user query. A new approach to semantic similarity computation using semantic roles and semantic meaning is proposed. Document clustering is effectively achieved by application of MDL principle and sentence clustering and ranking is done by using SNMF. Experiments conducted demonstrate the effectiveness of system in semantic text understanding, document clustering and summarization.
25

Boppana, Venugopal, and Sandhya P. "Focused crawling from the basic approach to context aware notification architecture". Indonesian Journal of Electrical Engineering and Computer Science 13, no. 2 (1.02.2019): 492. http://dx.doi.org/10.11591/ijeecs.v13.i2.pp492-498.

Abstract:
The large and wide range of information has become a tough time for crawlers and search engines to extract related information. This paper discusses about focused crawlers also called as topic specific crawler and variations of focused crawlers leading to distributed architecture, i.e., context aware notification architecture. To get the relevant pages from a huge amount of information available in the internet we use the focused crawler. This can bring out the relevant pages for the given topic with less number of searches in a short time. Here the input to the focused crawler is a topic specified using exemplary documents, but not using the keywords. Focused crawlers avoid the searching of all the web documents instead it searches over the links that are relevant to the crawler boundary. The Focused crawling mechanism helps us to save CPU time to large extent to keep the crawl up-to-date.
26

Zhao, Jie, Jianfei Wang, Jia Yang and Peiquan Jin. "Extracting Top-k Company Acquisition Relations From the Web". International Journal on Semantic Web and Information Systems 13, no. 4 (October 2017): 27–41. http://dx.doi.org/10.4018/ijswis.2017100102.

Abstract:
Company acquisition relation reflects a company's development intent and competitive strategies, which is an important type of enterprise competitive intelligence. In the traditional environment, the acquisition of competitive intelligence mainly relies on newspapers, internal reports, and so on, but the rapid development of the Web introduces a new way to extract company acquisition relation. In this paper, the authors study the problem of extracting company acquisition relation from huge amounts of Web pages, and propose a novel algorithm for company acquisition relation extraction. The authors' algorithm considers the tense feature of Web content and classification technology of semantic strength when extracting company acquisition relation from Web pages. It first determines the tense of each sentence in a Web page, which is then applied in sentences classification so as to evaluate the semantic strength of the candidate sentences in describing company acquisition relation. After that, the authors rank the candidate acquisition relations and return the top-k company acquisition relation. They run experiments on 6144 pages crawled through Google, and measure the performance of their algorithm under different metrics. The experimental results show that the algorithm is effective in determining the tense of sentences as well as the company acquisition relation.
27

Mututwa, Wishes, and Trust Matsilele. "COVID-19 infections on international celebrities: self presentation and tweeting down pandemic awareness". Journal of Science Communication 19, no. 05 (30.09.2020): A09. http://dx.doi.org/10.22323/2.19050209.

Abstract:
The novel coronavirus (COVID-19) which was first reported in China's Wuhan province in December 2019 became a global pandemic within a few months. The exponential rise in COVID-19 cases globally was accompanied by a spike in misinformation about the pandemic, particularly on social media. Employing Social Network Theory as a lens, this qualitative study explores how selected international celebrities appropriated their Twitter micro-blogging pages to announce their COVID-19 infection to the world. The study finds that these celebrities can take advantage of their huge social media following to counter disinfodemic and promote awareness about health pandemics.
28

Tselischevaya, A. D., and M. I. Mirkina. "Significance of the Sachs-Witebsky cytochol reaction for serodiagnosis of syphilis". Kazan medical journal 32, no. 10-11 (2.10.2021): 884–89. http://dx.doi.org/10.17816/kazmj80684.

Abstract:
Serology of syphilis, which has a relatively recent past (since 1907), in recent years has written into its pages a number of huge successes, which greatly facilitated the diagnosis of syphilis. Serology of syphilis owes such success to the so-called groove. flocculation reactions, the number of which is increasing every year, since despite the great successes in this area, still no other reaction has been proposed, in 100% of cases positive at different stages of syphilis and not possessing a groove. nonspecificity, i.e. the ability to give a positive result in the absence of syphilis.
29

Albalawi, Mariam, Rasha Aloufi, Norah Alamrani, Neaimh Albalawi, Amer Aljaedi and Adel R. Alharbi. "Website Defacement Detection and Monitoring Methods: A Review". Electronics 11, no. 21 (1.11.2022): 3573. http://dx.doi.org/10.3390/electronics11213573.

Abstract:
Web attacks and web defacement attacks are issues in the web security world. Recently, website defacement attacks have become the main security threats for many organizations and governments that provide web-based services. Website defacement attacks can cause huge financial and data losses that badly affect the users and website owners and can lead to political and economic problems. Several detection techniques and tools are used to detect and monitor website defacement attacks. However, some of the techniques can work on static web pages, dynamic web pages, or both, but need to focus on false alarms. Many techniques can detect web defacement. Some are based on available online tools and some on comparing and classification techniques; the evaluation criteria are based on detection accuracies with 100% standards and false alarms that cannot reach 1.5% (and never 2%); this paper presents a literature review of the previous works related to website defacement, comparing the works based on the accuracy results, the techniques used, as well as the most efficient techniques.
30

Hazaa, Muneer A. S., Fadl M. Ba-Alwi and Mohammed Albared. "A Proposed Model for Focused Crawling and Automatic Text Classification of Online Crime Web Pages". Thamar University Journal of Natural & Applied Sciences 6, no. 6 (28.01.2023): 65–81. http://dx.doi.org/10.59167/tujnas.v6i6.1329.

Abstract:
With the exponential growth of textual information available from the Internet, there has been an emergent need to find relevant, in-time and in-depth knowledge about crime topic. The huge size of such data makes the process of retrieving and analyzing and use of the valuable information in such texts manually a very difficult task. In this paper, we attempt to address a challenging task i.e. a crawling and classification of crime-specific knowledge on the Web. To do that, a model for online crime text crawling and classification is introduced. First, a crime-specific web crawler is designed to collect web pages of crime topic from the news websites. In this crawler, a binary Naive Bayes classifier is used for filtering crime web pages from others. Second, a multi-classes classification model is applied to categorize the crime pages into their appropriate crime types. In both steps, several feature selection methods are applied to select the most important features. Finally, the model has been evaluated on manually labeled corpus and also on online real world data. The experimental results on manually labeled corpus indicate that Naive Bayes with mutual information and odd ratio feature selection methods can accurately distinguish crime web pages from others with an F1 measure of 0.99. In addition, the experimental results also show that the Naive Bayes classification models can accurately classify crime documents to their appropriate crime types with Macro-F1 measure of 0.87. Our results also on online real word data show that the focused crawler with two-level classification is very effective for gathering high-quality collections of crime Web documents and also for classifying them.
31

Patel, Ketul, and A. R. Patel. "Process of Web Usage Mining to find Interesting Patterns from Web Usage Data". International Journal of Computers & Technology 3, no. 1 (1.08.2012): 144–48. http://dx.doi.org/10.24297/ijct.v3i1c.2767.

Abstract:
The traffic on World Wide Web is increasing rapidly and huge amount of data is generated due to users’ numerous interactions with web sites. Web Usage Mining is the application of data mining techniques to discover the useful and interesting patterns from web usage data. It supports to know frequently accessed pages, predict user navigation, improve web site structure etc. In order to apply Web Usage Mining, various steps are performed. This paper discusses the process of Web Usage Mining consisting steps: Data Collection, Pre-processing, Pattern Discovery and Pattern Analysis. It has also presented Web Usage Mining applications and some Web Mining software.
32

Hamasha, Mohammad M., Mohammad Al-Rabayah and Faisal Aqlan. "Standard tables of truncated standard normal distribution using a new summarizing method". World Journal of Engineering 15, no. 2 (9.04.2018): 216–47. http://dx.doi.org/10.1108/wje-02-2017-0041.

Abstract:
Purpose: The single- and double-sided truncated normal distributions have been used in a wide range of engineering fields. However, most of the previous research works have focused primarily on the non-truncated population distributions. The authors present reference tables to estimate the values of density and cumulative density functions of truncated normal distribution for practitioners. Finally, the authors explain how to use the tables to estimate other properties, such as mean, median and variance. The purpose of this paper is to provide an efficient method to summarize tables, and furthermore, to provide readers with statistical tables on truncated standard normal distribution. Design/methodology/approach: A new methodology is developed to summarize the tables with ordered values. The introduced method allows for the reduction of the number of pages required for such tables into a reasonable level by using linear interpolation. Moreover, it allows for the estimation of the required truncation values accurately with an error value less than 0.005. Findings: The data in the tables can be summarized into a significantly reduced amount. The new summarized table can be designed for any number of pages and/or level of error wanted. However, with reducing the level of error, the number of pages increases and vice versa. Originality/value: The value of this work is through two major points. First, all provided summarized tables in the literature are for single-sided and symmetry truncation cases. However, there is no attempt to summarize the tables of the asymmetry truncation normal distribution due to the requirement of huge number of pages. In this paper, the case of asymmetry truncation is included. Second, the methodology provided in this research can be used to summarize similar large tables.
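For context, the quantities such tables evaluate are the standard doubly truncated normal density and mean; with truncation points a < b, alpha = (a - mu)/sigma and beta = (b - mu)/sigma, they read (in LaTeX notation):

f(x;\mu,\sigma,a,b) = \frac{\phi\!\left(\frac{x-\mu}{\sigma}\right)}{\sigma\,\left[\Phi(\beta)-\Phi(\alpha)\right]}, \qquad a \le x \le b,

\mathbb{E}[X] = \mu + \sigma\,\frac{\phi(\alpha)-\phi(\beta)}{\Phi(\beta)-\Phi(\alpha)},

where \phi and \Phi denote the standard normal density and distribution function. The summarizing idea described in the abstract is that intermediate entries can be recovered by linear interpolation between tabulated rows, which is what keeps the page count of the tables manageable.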
33

Vavilov, Stanislav. "Was the Commander Able to Dance Tango? (Essay not for fans of Astor Piazzolla)". Muzykal'nyj al'manah Tomskogo gosudarstvennogo universiteta, no. 11 (2021): 22–35. http://dx.doi.org/10.17223/26188929/11/6.

Abstract:
The article continues the theme of musical associations, which are an integral part of the novels "The Twelve Chairs" and "The Golden Calf" by I. Ilf and E. Petrov. Musical images used in the novel allow creating a multifaceted world of events in which the characters of the novel live and act. On the pages of the novel there are musical images, covering a huge range of musical genres and forms. The article contains information about rare and forgotten music. Musical images, used by the authors of the novel, allow recreating a full view of the musical life of those years. The article has been illustrated with photos of rare musical editions from the author's collection.
34

Jones, Norman L. "Why Read the Classics?" Canadian Respiratory Journal 7, no. 1 (2000): 10–12. http://dx.doi.org/10.1155/2000/310198.

Abstract:
With the editorial staff of the Canadian Respiratory Journal, I extend our best wishes for the New Year, and heartfelt thanks to everyone who has helped the Journal establish itself in the competitive field of quality, peer-reviewed publications in chest medicine. It may seem odd to start the new millennium with an editorial eulogizing the past, but even in these 'postmodern' days of chaos, complexity and ordered unpredictability, the past can be seen to have a huge influence on the present and the future. The importance of looking back on work that has influenced our present views on chest medicine, and why, was the main reason for the series inaugurated in the present issue - 'Modern Classics Revisited' (pages 71-76).
35

Shaver, P., F. Bertola, J. Narlikar, S. Okamura, J. Peacock, A. Szalay and V. Trimble. "Division VIII: Galaxies and the Universe: (Les Galaxies Et L’Univers)". Transactions of the International Astronomical Union 24, no. 1 (2000): 297–98. http://dx.doi.org/10.1017/s0251107x00003229.

Abstract:
The fields of research covered by Division VIII and its two Commissions have experienced remarkable progress over the last several years. This is due at least in part to the proliferation of major new observational facilities, and the addition of the several 8-m class telescopes presently being completed and new space facilities which will have a huge impact in the years to come. Many of the important recent scientific developments are summarized on the following pages in the reports of Commission 28 and Commission 47. These reports have been prepared in the “short” form, and are intended both to present the major scientific highlights and the most important conference proceedings and reviews for further reading.
36

Joseph Kalayathankal, Sunny, Joseph Varghese Kureethara and John T. Abraham. "A modified fuzzy approach to prioritize project activities". International Journal of Engineering & Technology 7, no. 2.6 (11.03.2018): 158. http://dx.doi.org/10.14419/ijet.v7i2.6.10143.

Abstract:
Project management is an important task in business although project is not just confined to business. Due to the uncertainty of the various variables involved in a project, over several past decades research is going on in the search for an efficient project management model. Although numerous crisp models are easily implementable, the potential of fuzzy models are huge. In the case of software development, the variables involved are highly dynamic. In this paper, we propose a ranking based fuzzy model that can prioritize various activities. We use a popular crisp model to test the effectiveness of the fuzzy model proposed. Simulation is done through Java Server Pages (JSP). There is considerable computational and managerial advantage in implementing the fuzzy model.
37

Jones, Norman L. "Number 1: Chronic Obstructive Pulmonary Disease". Canadian Respiratory Journal 7, no. 1 (2000): 35–36. http://dx.doi.org/10.1155/2000/592581.

Abstract:
With the editorial staff of the Canadian Respiratory Journal, I extend our best wishes for the New Year, and heartfelt thanks to everyone who has helped the Journal establish itself in the competitive field of quality, peerreviewed publications in chest medicine. It may seem odd to start the new millennium with an editorial eulogizing the past, but even in these "postmodern" days of chaos, complexity and ordered unpredictability, the past can be seen to have a huge influence on the present and the future. The importance of looking back on work that has influenced our present views on chest medicine, and why, was the main reason for the series inaugurated in the present issue - "Modern Classics Revisited" (pages 71-76).
38

Shailesh, K. S., and Suresh Pachigolla Venkata. "Personalized Chunk Framework for High Performance Personalized Web". International Journal of Web Portals 9, no. 1 (January 2017): 52–63. http://dx.doi.org/10.4018/ijwp.2017010104.

Abstract:
Dividing the web site page content or web portal page into logical chunks is one of the prominent methods for better management of web site content and for improving web site's performance. While this works well for public web page scenarios, personalized pages have challenges with dynamic data, data caching, privacy and security concerns which pose challenges in creating and caching content chunks. Web portals has huge dependence on personalized data. In this paper the authors have introduced a novel concept called “personalized content chunk” and “personalized content spot” that can be used for segregating and efficiently managing the personalized web scenarios. The authors' experiments show that performance can be improved by 30% due to the personalized content chunk framework.
39

Katz, Frank. "The Unusual Case of Leslie Lapidus: The Purposes of the Remarkably Long Joke in William Styron's Sophie's Choice". Prospects 28 (October 2004): 543–76. http://dx.doi.org/10.1017/s0361233300001605.

Abstract:
In Discussing the humor of William Styron's humor-filled novel Sophie's Choice, I am particularly interested in focusing upon the nature of the joke that fills a huge portion of the novel, the Leslie Lapidus affair. Rarely (if ever) in the history of the written word, I'd be willing to venture, has a joke of the outrageous length of this one been set down. The Leslie Lapidus affair, from start to finish, actually takes up about a full fifth of a long novel. The reader first hears of Leslie as a “hot dish” promised to Stingo, the main character and the narrator, on page 82 of the 1992 Vintage edition, but the punch line doesn't come until page 193, followed by a few pages of denouement. What a buildup! That's a startlingly long joke. The over-length of the Leslie Lapidus affair, as well as its late-in-the-novel resurrection in the briefer “coda” that is the Mary Alice Grimball encounter, should be enough to make the reader take pause. What in the world is a joke of this size doing in a novel about the Holocaust? How does it relate to the major ideas of the novel? At 100+ pages, it's practically a major theme of its own.
40

Omri, Mohamed Nazih, and Fethi Fkih. "Dynamic Editing Distance-based Extracting Relevant Information Approach from Social Networks". International Journal of Computer Network and Information Security 14, no. 6 (8.12.2022): 1–13. http://dx.doi.org/10.5815/ijcnis.2022.06.01.

Abstract:
Online social networks, such as Facebook, Twitter, LinkedIn, etc., have grown exponentially in recent times with a large amount of information. These social networks have huge volumes of data especially in structured, textual, and unstructured forms which have often led to cyber-crimes like cyber terrorism, cyber bullying, etc., and extracting information from these data has now become a serious challenge in order to ensure the data safety. In this work, we propose a new, supervised approach for Information Extraction (IE) from Web resources based on remote dynamic editing, called EIDED. Our approach is part of the family of IE approaches based on masks extraction and is articulated around three algorithms: (i) a labeling algorithm, (ii) a learning and inference algorithm, and (iii) an extended edit distance algorithm. Our proposed approach is able to work even in the presence of anomalies in the tuples such as missing attributes, multivalued attributes, permutation of attributes, and in the structure of web pages. The experimental study, which we conducted, on a standard database of web pages, shows the performance of our EIDED approach compared to approaches based on the classic edit distance, and this with respect to the standard metrics recall coefficient, precision, and F1-measure.
41

Miri Rostami, Samaneh, Mohammad Reza Parsaei and Marzieh Ahmadzadeh. "A survey on predicting breast cancer survivability and its challenges". Journal of Research in Science, Engineering and Technology 4, no. 03 (13.09.2019): 37–42. http://dx.doi.org/10.24200/jrset.vol4iss03pp37-42.

Abstract:
Data mining is a powerful technology that can be used in all domains in order to detect hidden patterns from a large volume of data. A huge amount of medical data gives opportunities to health research community to extract new knowledge in different parts of medicine such as diagnosis, prognosis, and treatment by using data mining applications in order to improve the quality of patient care and reduce healthcare costs. Breast cancer is the most common cancer in women worldwide and it is the leading cause of death among women. Data mining can be used as a decision support system to predict survival of new patients. In this study, related works in the field of breast cancer survival prediction are reviewed and by compromising these works challenging issues are presented. Pages:37-42
42

Mandavkar, Omkar A., Tejas P. Komawar, Dipesh R. Gawad, Shubham V. Gupta and Shaikh Abdul Bari. "Foot Operated Washing Machine". International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (30.04.2022): 489–94. http://dx.doi.org/10.22214/ijraset.2022.41296.

Abstract:
Abstract: The foot operated washing machine is a huge innovation all by itself. Foot operated washing machine is especially designed for its use for washing laundry by means of foot application. Today, because of non-renewable energy cries out its basic need to use energy in another way or to save energy. This project involves the construction and use of the foot operated washing machine. The next pages in the paper include the constructions of foot operated washing machine, its raw material, its operation , benefits of the foot washing machine in terms of the actual electronic washing machine save time, water, electricity and not very expensive. His main expectation is exercises with the application of the foot to wash the cloths. Keywords: foot operated, pedal, chain, cloths, washing, rinsing
43

Gupta, Sonali, and Komal Kumar Bhatia. "Design of a Parallel and Scalable Crawler for the Hidden Web". International Journal of Information Retrieval Research 12, no. 1 (January 2022): 1–23. http://dx.doi.org/10.4018/ijirr.289612.

Abstract:
The WWW contains huge amount of information from different areas. This information may be present virtually in the form of web pages, media, articles (research journals / magazine), blogs etc. A major portion of the information is present in web databases that can be retrieved by raising queries at the interface offered by the specific database and is thus called the Hidden Web. An important issue is to efficiently retrieve and provide access to this enormous amount of information through crawling. In this paper, we present the architecture of a parallel crawler for the Hidden Web that avoids download overlaps by following a domain-specific approach. The experimental results further show that the proposed parallel Hidden web crawler (PSHWC), not only effectively but also efficiently extracts and download the contents in the Hidden web databases
44

Ghimire, Him Lal. "Tourism in Gorkha: A proposition to Revive Tourism After Devastating Earthquakes". Journal of Tourism and Hospitality Education 6 (10.05.2016): 67–94. http://dx.doi.org/10.3126/jthe.v6i0.14768.

Abstract:
Gorkha, the epicenter of devastating earthquake 2015 is one of the important tourist destinations of Nepal. Tourism is vulnerable sector that has been experiencing major crises from disasters. Nepal is one of the world’s 20 most disaster-prone countries where earthquakes are unique challenges for tourism. Nepal has to be very optimistic about the future of tourism as it has huge potentials to be the top class tourist destinations by implementing best practices and services. Gorkha tourism requires a strategy that will help manage crises and rapid recovery from the damages and losses. This paper attempts to explain tourism potentials of Gorkha, analyze the impacts of devastating earthquakes on tourism and outline guidelines to revive tourism in Gorkha.Journal of Tourism and Hospitality Education (Vol. 6), 2016, Pages: 67-94
45

P. P., Joby. "Expedient Information Retrieval System for Web Pages Using the Natural Language Modeling". June 2020 2, no. 2 (1.06.2020): 100–110. http://dx.doi.org/10.36548/jaicn.2020.2.003.

Abstract:
Retrieving of information from the huge set of data flowing due to the day to day development in the technologies has become more popular as it assists in searching for the valuable information in a structured, unstructured or a semi structured data set like text, database, multimedia, documents, and internet etc. The retrieval of information is performed employing any one of the models starting from the simple Boolean model for retrieving information, or using other frame works such as probabilistic, vector space and the natural language modelling. The paper is emphasis on using a natural language model based information retrieval to recover the meaning insights from the enormous amount of data. The method proposed in the paper uses the latent semantic analysis to retrieve significant information’s from the question raised by the user or the bulk documents. The carried out method utilizes the fundamentals of semantic factor occurring in the data set to identify the useful insights. The experiment analysis of the proposed method is carried out with few state of art dataset such as TIME, LISA, CACM and the NPL etc. and the results obtained demonstrate the superiority of the method proposed in terms of precision, recall and F-score.
Style APA, Harvard, Vancouver, ISO itp.
46

John, Jerrin Aleyamma. "Serial Killing as a Defence Mechanism: A Study of Thomas Harris’s “The Silence of the Lambs”". SMART MOVES JOURNAL IJELLH 7, nr 11 (28.11.2019): 8. http://dx.doi.org/10.24113/ijellh.v7i11.10123.

Full text source
Abstract:
The literary canon carries with it a huge array of writings exploring the various contours of fiction; the genre of detective fiction is one such umbrella term. The effects of mystery, suspense and surprise hidden away in its pages keep readers glued to detective fiction. This paper explores the plot line of one of the prominent detective stories, Thomas Harris’s ‘The Silence of the Lambs’, in search of certain existential questions regarding the named serial killer in the plot. The social evil of taking many lives purely for pleasure is viewed from multiple viewpoints, and a new reading of the plot is carried out by placing it within a relevant contextual framework. A traversal through the psychological, behavioural and social norms of the context is explored within the paper.
APA, Harvard, Vancouver, ISO and other citation styles
47

Shrivastava, Umesh Prasad. "Incorporating Bioinformatics into Biological Science in Nepal: Prospects and Challenges". Academic Voices: A Multidisciplinary Journal 2 (30.06.2013): 78–85. http://dx.doi.org/10.3126/av.v2i1.8294.

Full text source
Abstract:
The huge amount of data created by proteomics and genomics studies worldwide has caused bioinformatics to gain prominence and importance, given the urgency of processing and analyzing those data. However, its multidisciplinary nature has created a challenge: meeting the unique demand for specialists trained in both biology and computing. Several countries, in response to this challenge, have developed a number of manpower training programs. This review presents a description of the meaning, scope, history and development of bioinformatics, with a focus on prospects and challenges facing bioinformatics education worldwide. The paper also provides an overview of attempts at the introduction of bioinformatics, describes the existing bioinformatics scenario and suggests strategies for effective bioinformatics education for sustainable growth and development in Nepal. Academic Voices, Vol. 2, No. 1, 2012, Pages 78-85. DOI: http://dx.doi.org/10.3126/av.v2i1.8294
APA, Harvard, Vancouver, ISO and other citation styles
48

ZHANG, ZHI-QIANG. "The making of a mega-journal in taxonomy". Zootaxa 1385, no. 1 (21.12.2006): 67–68. http://dx.doi.org/10.11646/zootaxa.1385.1.5.

Full text source
Abstract:
We live in an era of elevated rates of extinction, yet about 90% of the Earth’s species of animals, plants and micro-organisms remain undescribed (Wilson, 2004). Although there are many journals that may publish taxonomic papers, it is increasingly difficult to publish papers on descriptive taxonomy in a timely and cost-effective manner. It is common for a taxonomist to wait eight to ten months, and sometimes years, to get a paper published. And unless there is access to an institutional monograph series, it is even more difficult to publish a large taxonomic revision or monograph, not only because of costs, but because most journals are of a fixed size and have limits on the length of papers. This impediment in publishing has a huge negative impact on taxonomy: the delay and difficulty in getting works published can discourage taxonomists who have worked for years, and unpublished works are a huge waste of talent and resources (often publicly funded). Large monographs are particularly important to the study of complex species-rich taxa, as taxonomy is about comparison, and closely related species must be compared together. Much needed is a rapid and efficient journal for descriptive papers and monographs in taxonomy. Published concurrently in print and online, Zootaxa was established as a rapid journal at the start of this century to remove these impediments in taxonomy. It has received overwhelming support from zoological taxonomists around the world, despite the fact that this diverse group of specialists is often perceived as too individualistic and fragmented into diverse subdisciplines to come together as a community. Zootaxa rapidly transformed itself from a small journal publishing 20 papers totalling 302 pages on 15 occasions in 2001 to a mega-journal publishing 1,020 papers in 22,052 pages as frequently as twice each week in 2006 (Fig. 1), a pattern of rapid growth that is unprecedented for any scholarly journal, in both the sciences and humanities. This is indeed a very promising sign for the rejuvenation of the zoological branch of one of the world’s oldest sciences (that of naming and describing nature) in a new era when its services are most needed.
APA, Harvard, Vancouver, ISO and other citation styles
49

Koubek, Tomáš, and David Procházka. "Empirical evaluation of augmented prototyping effectiveness". Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 60, no. 2 (2012): 143–50. http://dx.doi.org/10.11118/actaun201260020143.

Full text source
Abstract:
Augmented reality is a scientific field that has been well known for more than twenty years. Although there is a huge number of projects that present promising results, the real usage of augmented reality applications for fulfilling common tasks is almost negligible. We believe that one of the principal reasons is the insufficient usability of these applications. The situation is analogous to desktop, mobile or cloud application development, or even to web page design. The first phase of a technology’s adoption is the exploration of its potential; as soon as the technical problems are overcome and the technology is widely accepted, usability becomes a principal issue. Usability is also of utmost importance from the business point of view. The cost of implementing augmented reality in the production process is substantial; therefore, the usability that is directly responsible for the effectiveness of the implemented solution must be appropriately tested. Consequently, the benefit of the implemented solution can be measured. This article briefly outlines common techniques used for usability evaluation. The discussed techniques were designed especially for the evaluation of desktop applications, mobile solutions and web pages. In spite of this drawback, their application to augmented reality products is usually possible. Further, a review of existing augmented reality project evaluations is presented. Based on this review, a usability evaluation method for our augmented prototyping application is proposed. This method must overcome the fact that design is a creative process; therefore, it is not possible to take into account common criteria such as time consumption.
APA, Harvard, Vancouver, ISO and other citation styles
50

Zhang, Chen, and Xiaoxia Li. "Construction of Digital Art Education Platform under the “Internet+” Environment". Mobile Information Systems 2023 (10.02.2023): 1–13. http://dx.doi.org/10.1155/2023/8453791.

Full text source
Abstract:
With the development of society and the rapid growth of the Internet, there are currently about 100 million web pages and 100 million hyperlinks, and the number of web pages and hyperlinks will keep growing. How to make this huge Internet better used by people has become a common concern of the international community. In recent years, with the acceleration of the reform of China’s basic education curriculum, people have become more and more aware of the special status and value of art education within overall education, and of the general consensus that “lack of art education is an incomplete education.” Therefore, this paper proposes the construction of a platform for digital art education under the “Internet+” environment. The paper first introduces how to perform data mining in the Internet era and proposes an interactive data fusion algorithm and model for the Internet and crowdsourcing. Then, a statistical analysis was made of the preschool art education syllabi of 20 colleges and universities in different provinces and cities. The experimental results show that eight undergraduate colleges and universities have not written syllabi for the art courses of College Aesthetic Education majors, accounting for 40% of the total sample. This shows that some Chinese colleges and universities have not paid enough attention to the art courses of College Aesthetic Education majors, have loopholes in curriculum management and have neglected the teaching staff.
APA, Harvard, Vancouver, ISO and other citation styles