Academic literature on the topic 'HL. Databases and database Networking'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'HL. Databases and database Networking.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "HL. Databases and database Networking"

1

Zhang, Cui, and Richard F. Walters. "An Abstract, Shared and Persistent Data Structure for Supporting Database Management and Multilingual Natural Language Processing." International Journal of Software Engineering and Knowledge Engineering 3, no. 3 (September 1993): 369–82. http://dx.doi.org/10.1142/s0218194093000173.

Full text
Abstract:
Neither today’s general purpose programming environment nor high-level programming languages, including those designed for AI purposes, provide adequate support for database systems. Furthermore, non-English language databases are difficult to treat either in existing database systems or with current high-level languages, because they require culture-sensitive operations on multiple foreign character sets. In this paper, we present an abstract, shared and persistent data structure, called HL+, capable of supporting database management applications. We also describe the means for coping with the aforementioned problems by accessing HL+ features of database management from high-level programming languages with an extensible programmable high-level language interface. Extensions to the data structure to accommodate processing of multiple foreign character strings are also described, and examples of multilingual applications are given.
APA, Harvard, Vancouver, ISO, and other styles
2

Mukka, Milla, Samuli Pesälä, Charlotte Hammer, Pekka Mustonen, Vesa Jormanainen, Hanna Pelttari, Minna Kaila, and Otto Helve. "Analyzing Citizens’ and Health Care Professionals’ Searches for Smell/Taste Disorders and Coronavirus in Finland During the COVID-19 Pandemic: Infodemiological Approach Using Database Logs." JMIR Public Health and Surveillance 7, no. 12 (December 7, 2021): e31961. http://dx.doi.org/10.2196/31961.

Full text
Abstract:
Background The COVID-19 pandemic has prevailed over a year, and log and register data on coronavirus have been utilized to establish models for detecting the pandemic. However, many sources contain unreliable health information on COVID-19 and its symptoms, and platforms cannot characterize the users performing searches. Prior studies have assessed symptom searches from general search engines (Google/Google Trends). Little is known about how modeling log data on smell/taste disorders and coronavirus from the dedicated internet databases used by citizens and health care professionals (HCPs) could enhance disease surveillance. Our material and method provide a novel approach to analyze web-based information seeking to detect infectious disease outbreaks. Objective The aim of this study was (1) to assess whether citizens’ and professionals’ searches for smell/taste disorders and coronavirus relate to epidemiological data on COVID-19 cases, and (2) to test our negative binomial regression modeling (ie, whether the inclusion of the case count could improve the model). Methods We collected weekly log data on searches related to COVID-19 (smell/taste disorders, coronavirus) between December 30, 2019, and November 30, 2020 (49 weeks). Two major medical internet databases in Finland were used: Health Library (HL), a free portal aimed at citizens, and Physician’s Database (PD), a database widely used among HCPs. Log data from databases were combined with register data on the numbers of COVID-19 cases reported in the Finnish National Infectious Diseases Register. We used negative binomial regression modeling to assess whether the case numbers could explain some of the dynamics of searches when plotting database logs. Results We found that coronavirus searches drastically increased in HL (0 to 744,113) and PD (4 to 5375) prior to the first wave of COVID-19 cases between December 2019 and March 2020. Searches for smell disorders in HL doubled from the end of December 2019 to the end of March 2020 (2148 to 4195), and searches for taste disorders in HL increased from mid-May to the end of November (0 to 1980). Case numbers were significantly associated with smell disorders (P<.001) and taste disorders (P<.001) in HL, and with coronavirus searches (P<.001) in PD. We could not identify any other associations between case numbers and searches in either database. Conclusions Novel infodemiological approaches could be used in analyzing database logs. Modeling log data from web-based sources was seen to improve the model only occasionally. However, search behaviors among citizens and professionals could be used as a supplementary source of information for infectious disease surveillance. Further research is needed to apply statistical models to log data of the dedicated medical databases.
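The modeling step named in this abstract — negative binomial regression of weekly search counts on reported case counts — can be sketched in a few lines. A minimal illustration with statsmodels, assuming hypothetical column names and invented weekly figures; this is a sketch of the approach, not the authors' code:

# Sketch: regress weekly database search counts on weekly COVID-19 case counts
# with negative binomial regression, which tolerates overdispersed count data.
# Column names and values are invented for illustration.
import pandas as pd
import statsmodels.api as sm

weeks = pd.DataFrame({
    "cases":    [0, 2, 15, 120, 900, 700, 300, 150],    # weekly reported cases
    "searches": [5, 9, 40, 310, 2100, 1800, 760, 420],  # weekly search counts
})

X = sm.add_constant(weeks["cases"])                # intercept + case count
model = sm.GLM(weeks["searches"], X,
               family=sm.families.NegativeBinomial())
result = model.fit()
print(result.summary())  # the coefficient on 'cases' tests the association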
APA, Harvard, Vancouver, ISO, and other styles
3

Mao, Ying, Tao Xie, and Ning Zhang. "Chinese Students’ Health Literacy Level and Its Associated Factors: A Meta-Analysis." International Journal of Environmental Research and Public Health 18, no. 1 (December 29, 2020): 204. http://dx.doi.org/10.3390/ijerph18010204.

Full text
Abstract:
Health literacy (HL) is an important determinant of health, and many scholars have studied the HL level of Chinese students and its associated factors. However, previous findings on HL levels and their influencing factors have been contradictory. This systematic review and meta-analysis was therefore conducted to estimate the level of Chinese students' HL and its three dimensions (knowledge, behavior, and skills) and to identify factors associated with HL in Chinese students. Two investigators independently searched the literature, selected studies, and extracted data by comprehensively searching four international and three Chinese electronic databases for all relevant observational studies on factors affecting HL in Chinese students, published in English or Chinese between January 2010 and September 2020. In total, 61 articles were included in the study. The results showed that the level rates of HL and its three dimensions were 26%, 35%, 26%, and 51%, respectively. For Chinese students, the significant factors were urban residence, senior class standing, good performance at school, Han nationality, attention to health knowledge, less exposure to video games, highly educated parents, one-child family income, receiving health education, and having a medical background. This study offers some guidance for improving Chinese students' HL and health. First, the findings may help Chinese policy makers understand the overall HL of Chinese students and their levels across the three dimensions (knowledge, behavior, and skills). Second, the protective factors for Chinese students' HL identified in this research can help improve students' HL, raise their awareness of prevention, and lay the foundation for a healthy China.
APA, Harvard, Vancouver, ISO, and other styles
4

Burghardt, Kyle J., Bradley H. Howlett, Audrey S. Khoury, Stephanie M. Fern, and Paul R. Burghardt. "Three Commonly Utilized Scholarly Databases and a Social Network Site Provide Different, But Related, Metrics of Pharmacy Faculty Publication." Publications 8, no. 2 (April 1, 2020): 18. http://dx.doi.org/10.3390/publications8020018.

Full text
Abstract:
Scholarly productivity is a critical component of pharmacy faculty effort and is used for promotion and tenure decisions. Several databases are available to measure scholarly productivity; however, comparisons amongst these databases are lacking for pharmacy faculty. The objective of this work was to compare scholarly metrics from three commonly utilized databases and a social networking site focused on data from research-intensive colleges of pharmacy and to identify factors associated with database differences. Scholarly metrics were obtained from Scopus, Web of Science, Google Scholar, and ResearchGate for faculty from research-intensive (Carnegie Rated R1, R2, or special focus) United States pharmacy schools with at least two million USD in funding from the National Institutes of Health. Metrics were compared and correlations were performed. Regression analyses were utilized to identify factors associated with database differences. Significant differences in scholarly metric values were observed between databases despite the high correlations, suggestive of systematic variation in database reporting. Time since first publication was the most common factor that was associated with database differences. Google Scholar tended to have higher metrics than all other databases, while Web of Science had lower metrics relative to other databases. Differences in reported metrics between databases are apparent, which may be attributable to the time since first publication and database coverage of pharmacy-specific journals. These differences should be considered by faculty, reviewers, and administrative staff when evaluating scholarly performance.
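The database comparison described here reduces to paired metrics per faculty member and correlations between sources. A hedged sketch of that kind of analysis with pandas; the h-index values below are invented:

# Sketch: compare one scholarly metric (e.g., h-index) reported by different
# databases for the same five faculty members. Values are invented.
import pandas as pd

metrics = pd.DataFrame({
    "scopus":         [12, 25, 8, 30, 17],
    "web_of_science": [10, 22, 7, 27, 15],
    "google_scholar": [15, 31, 11, 36, 21],
})

print(metrics.corr(method="spearman"))  # high rank correlations despite offsets
print((metrics["google_scholar"] - metrics["web_of_science"]).describe())
# consistently positive differences would mirror the paper's finding that
# Google Scholar reports higher metrics than Web of Science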
APA, Harvard, Vancouver, ISO, and other styles
5

Ran, Xue, Yalan Chen, Kui Jiang, and Yaqin Shi. "The Effect of Health Literacy Intervention on Patients with Diabetes: A Systematic Review and Meta-Analysis." International Journal of Environmental Research and Public Health 19, no. 20 (October 12, 2022): 13078. http://dx.doi.org/10.3390/ijerph192013078.

Full text
Abstract:
Relevant studies published between January 2010 and June 2021 were identified through relevant databases, including the Science Citation Index Expanded (SCIE) database of Web of Science, PubMed, and Embase, in order to assess the effect of health literacy (HL) intervention on patients with diabetes. A total of 21 articles were eligible. The results showed that: (1) this review involved different HL assessment tools, most of which were self-designed scales and assessment tools focused on measuring functional HL. (2) The differences in glycosylated hemoglobin (HbA1c) (weighted mean difference [WMD] = −0.78, 95% confidence interval [CI]: −0.94, −0.62) and medication adherence (standardized mean difference [SMD] = 1.85, 95% CI: 0.19, 3.52) between the HL intervention group and the usual care group were statistically significant. There was no significant improvement in systolic blood pressure (SMD = −0.05, 95% CI: −0.34, 0.25). Furthermore, this review reported that self-efficacy (SMD = 0.85, 95% CI: 0.65, 1.04) was increased, and the level of HL was improved. In the assessments of risk of bias, 90% of the studies were classified as medium. The quality of the evidence of medication adherence was very low, and the reliability of the conclusions was not enough to confirm the effect of HL.
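For reference, the SMD figures pooled above are standardized mean differences. A small sketch of the single-study computation (Cohen's d with a normal-approximation 95% CI); the numbers are invented, and real meta-analytic pooling would additionally weight studies by inverse variance:

# Sketch: standardized mean difference and 95% CI for one hypothetical
# intervention-vs-control comparison. Not the review's pooling code.
import math

def smd_with_ci(m1, sd1, n1, m2, sd2, n2):
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                              # Cohen's d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# invented HbA1c-style example: 7.1 (SD 0.9, n=60) vs 7.9 (SD 1.0, n=60)
d, ci = smd_with_ci(7.1, 0.9, 60, 7.9, 1.0, 60)
print(f"SMD = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")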
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Ruihong, Jianguo Wang, Stratos Idreos, M. Tamer Özsu, and Walid G. Aref. "The case for distributed shared-memory databases with RDMA-enabled memory disaggregation." Proceedings of the VLDB Endowment 16, no. 1 (September 2022): 15–22. http://dx.doi.org/10.14778/3561261.3561263.

Full text
Abstract:
Memory disaggregation (MD) allows for scalable and elastic data center design by separating compute (CPU) from memory. With MD, compute and memory are no longer coupled into the same server box. Instead, they are connected to each other via ultra-fast networking such as RDMA. MD can bring many advantages, e.g., higher memory utilization, better independent scaling (of compute and memory), and lower cost of ownership. This paper makes the case that MD can fuel the next wave of innovation on database systems. We observe that MD revives the great debate of "shared what" in the database community. We envision that distributed shared-memory databases (DSM-DB, for short) - that have not received much attention before - can be promising in the future with MD. We present a list of challenges and opportunities that can inspire next steps in system design making the case for DSM-DB.
APA, Harvard, Vancouver, ISO, and other styles
7

Waheed, Anem, Angie Mae Rodday, Anita J. Kumar, Kenneth B. Miller, and Susan K. Parsons. "Hematopoietic Stem Cell Transplant Utilization in Relapsed/Refractory Hodgkin Lymphoma: A Population Level Analysis of Statewide Claims Data." Blood 132, Supplement 1 (November 29, 2018): 4771. http://dx.doi.org/10.1182/blood-2018-99-111921.

Full text
Abstract:
Introduction: In the era of novel therapies, the first and subsequent lines of therapy are rapidly evolving in the treatment of patients with Hodgkin lymphoma (HL) in order to optimize disease control and reduce long-term health risks. Hematopoietic stem cell transplant (HSCT) is often used following treatment failure. While utilization of HSCT can be ascertained from transplant-specific registries, the treatment path for patients with relapsed/refractory HL leading up to HSCT is largely unknown. We developed an algorithm to define a cohort of commercially insured patients with HL from 2009-2013 in the Massachusetts All Payer Claims Database (MA APCD) who received HSCT. Further, we describe treatment characteristics of this cohort. Methods: The Patient Protection and Affordable Care Act of 2010 established requirements for states to assess healthcare outcomes, which resulted in at least 16 states establishing All Payer Claims Databases. The MA APCD provides detailed medical claims data, physician provider data, and pharmacy data for all commercially insured patients in the state, regardless of site of care. Moreover, each patient is assigned a unique identifier, which allows us to follow patients even if they change insurer ("insurance churning"). To our knowledge, no studies exist using an APCD for HL from any state. We identified a cohort with HL who underwent HSCT during the study period from among 7,613 cases with ICD-9 diagnostic codes for HL; of those, 695 had ICD-9 codes for both HL and HSCT. To identify incident HSCT cases during our study period, we developed and iteratively refined an algorithm using ICD-9 diagnostic and procedure codes, dates of service, and length of stay, which narrowed the cohort to 178 patients. After review of the medical and pharmacy claims databases by an oncologist (AW), 113 patients were identified as part of the final cohort who underwent autologous and/or allogeneic HSCT. Reasons for exclusion included not HL (34), not HSCT (8), and prevalent (i.e., "history of") HSCT only (23). We then summarized initial treatment, salvage treatment, and HSCT where data were available. Results: Among this commercially insured cohort of 113 patients who received HSCT, the median age was 39.0 years and 51.3% were female. Initial therapy data were identified for 65 of the 113 patients (58%); 58 (89.2%) received doxorubicin, bleomycin, vinblastine, and dacarbazine (ABVD). Of the 60 people for whom salvage therapy data could be discerned, 32 (53.3%) received ifosfamide, carboplatin, etoposide (ICE); 11 (18.3%) received gemcitabine, vinorelbine, liposomal doxorubicin (GVD); 11 (18.3%) received other chemotherapy; and 6 (10%) received brentuximab vedotin. Notably, 92 (81.4%) of all transplants were autologous, 10 (8.9%) were allogeneic, and 9 (8.0%) were autologous followed by allogeneic transplant. Of the 64 patients with initial therapy data, median time to HSCT after completion of initial treatment was 238.5 days (25th-75th percentile, 151.5-428.0). Additionally, 25 HSCT were performed during the year 2009, and 20 of these had unknown initial chemotherapy regimens; our dataset was limited to the years 2009-2013, and this missing chemotherapy information is most likely due to initial treatment prior to 2009. Conclusion: We successfully developed and refined an algorithm to help identify HSCT among patients with HL within a large statewide claims database. We characterized a cohort of patients with relapsed/refractory HL, including patterns of initial and salvage treatments in a sizeable subset of patients. Median time to HSCT demonstrates that the majority of patients undergo transplant for relapsed/refractory disease within a year of completing initial treatment. Future directions include determining reasons for incomplete information on initial and salvage therapy, such as insurance product or type, different sites of care within community and academic practices, and potential referral patterns into the state for HSCT care. As less than 5% of cancer patients are enrolled onto clinical trials, partnerships between clinical experts and data science are a powerful way to use large claims databases to study more representative patient populations. Disclosures: Rodday: Seattle Genetics: Research Funding. Kumar: Seattle Genetics: Research Funding. Parsons: Seattle Genetics: Research Funding.
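The cohort-identification algorithm described above is, in essence, a staged filter over claims rows. A simplified sketch of that idea; the column names, ICD-9 code lists, and length-of-stay threshold are hypothetical stand-ins, not the authors' actual criteria:

# Sketch: narrow a claims table to patients with both HL diagnosis codes and
# HSCT procedure codes, then keep plausibly incident (not historical) HSCT.
import pandas as pd

HL_DX = {"201.90", "201.91"}             # example ICD-9 Hodgkin lymphoma codes
HSCT_PROC = {"41.01", "41.04", "41.05"}  # example ICD-9 transplant codes

claims = pd.read_csv("claims.csv", dtype=str)  # one row per claim line

hl_ids = set(claims.loc[claims["dx_code"].isin(HL_DX), "patient_id"])
hsct = claims[claims["proc_code"].isin(HSCT_PROC)
              & claims["patient_id"].isin(hl_ids)]

# keep the first HSCT claim per patient and require an inpatient stay as a
# crude proxy for an incident transplant rather than a "history of" code
hsct = hsct.sort_values("service_date").groupby("patient_id").first()
cohort = hsct[hsct["length_of_stay"].astype(int) >= 7]
print(len(cohort), "candidate incident HSCT patients for manual review")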
APA, Harvard, Vancouver, ISO, and other styles
8

Mandias, Green Ferry, Green Arther Sandag, Susi Susanti, and Haryanto Reza Musak. "Penerapan Algoritma K-Means Untuk Analisis Prestasi Akademik Mahasiswa Fakultas Ilmu Komputer Universitas Klabat." CogITo Smart Journal 3, no. 2 (December 12, 2017): 230. http://dx.doi.org/10.31154/cogito.v3i2.72.230-239.

Full text
Abstract:
Universitas Klabat (UNKLAB) is a private university operated by the Seventh-day Adventist Church, located in Airmadidi, North Sulawesi. It is a well-known university in North Sulawesi, comprising one graduate program, six faculties, and one academy. This study examines the academic achievement of the 52 active fourth-year students of the Faculty of Computer Science using data mining methods. Based on the faculty's student data, the study determines how many students excel academically in databases, networking, and programming using a data mining algorithm. The K-Means algorithm is used to analyze the academic achievement of Faculty of Computer Science students at Universitas Klabat. The data were grouped before analysis so that they would be structured and the analysis would yield clearer results. Of the 52 students, 33% achieved their best results in databases, 42% in networking, and 25% in programming. Keywords: UNKLAB, K-Means algorithm, WEKA, Cluster, Data Mining.
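The clustering step maps directly onto a standard K-Means call. A minimal sketch with invented grade data; scikit-learn's KMeans stands in here for the WEKA implementation named in the keywords:

# Sketch: cluster students by average grades in databases, networking, and
# programming, then report cluster sizes. Grades are invented.
import numpy as np
from sklearn.cluster import KMeans

# rows: students; columns: [databases, networking, programming] averages
grades = np.array([
    [3.8, 2.9, 3.0],
    [2.7, 3.9, 2.8],
    [3.0, 3.1, 3.9],
    [3.6, 2.8, 2.9],
    [2.5, 3.7, 3.0],
    [2.9, 3.0, 3.8],
])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(grades)
for c in range(3):
    size = int((km.labels_ == c).sum())
    print(f"cluster {c}: {size} students, centroid {km.cluster_centers_[c].round(2)}")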
APA, Harvard, Vancouver, ISO, and other styles
9

Manabe, Takashi, Hideko Yamamoto, and Makoto Kawai. "Studies on the procedure for the construction of cellular protein databases employing micro two-dimensional electrophoresis: An HL-60 protein database." Electrophoresis 16, no. 1 (1995): 407–22. http://dx.doi.org/10.1002/elps.1150160168.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Selvi, T. Kalai, and S. Sasirakha. "Data Management Issues and Study on Heterogeneous Data Storage in the Internet of Things." Computer Science & Engineering: An International Journal 12, no. 6 (December 30, 2022): 27–34. http://dx.doi.org/10.5121/cseij.2022.12604.

Full text
Abstract:
The Internet of Things is a networking paradigm that connects various hardware, including digital, physical, and virtual things, which may communicate with one another and carry out user-requested tasks. Traditional database management methods cannot cope with the variety, large volume, and heterogeneity of the data these entities generate. The rapid growth of heterogeneous data can only be managed by distributed and parallel computer systems and databases. When it comes to handling vast amounts of diverse data, most relational databases have a variety of drawbacks because they were designed for a specific format. One of the main difficulties in data management is querying such heterogeneous data. Consequently, IoT data management system design has to follow some distinct principles, and these guiding concepts suggest various IoT data management strategies. The solution should provide a unified format for converting the heterogeneous data generated by the sensors. Some middleware or architecture-oriented solutions simplify the integration of the generated data, while other methods offer effective storage of the unified data. This paper surveys the challenges of IoT data management and the approaches to storing heterogeneous data, together with the types of data used.
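The "unified format" the abstract calls for amounts to normalizing heterogeneous sensor payloads into one schema before storage. A hedged sketch of such a conversion layer; the payload shapes and field names are invented:

# Sketch: convert heterogeneous IoT payloads (different field names and
# units) into one unified record before storage. Payload shapes are invented.
from datetime import datetime, timezone

def normalize(payload: dict) -> dict:
    """Map vendor-specific payloads onto a unified schema."""
    if "temp_f" in payload:               # vendor A reports Fahrenheit
        celsius = (payload["temp_f"] - 32) * 5 / 9
        device = payload["id"]
    elif "temperature" in payload:        # vendor B reports Celsius
        celsius = payload["temperature"]
        device = payload["device_id"]
    else:
        raise ValueError("unknown payload shape")
    return {
        "device_id": device,
        "metric": "temperature_c",
        "value": round(celsius, 2),
        "ts": datetime.now(timezone.utc).isoformat(),
    }

print(normalize({"id": "a-17", "temp_f": 71.6}))
print(normalize({"device_id": "b-03", "temperature": 21.5}))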
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "HL. Databases and database Networking"

1

Ulibarri, Desirea Duarte. "Volunteer system project: Regis University Networking Lab Practicum." [Denver, Colo.]: Regis University, 2006. http://165.236.235.140/lib/DUlibarriPartI2006.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sundaram, Prashanthi. "Student database access from the web." CSUSB ScholarWorks, 1998. https://scholarworks.lib.csusb.edu/etd-project/1785.

Full text
Abstract:
This project, Database Access through the Web (DAW), implements a database to store academic and general information about graduate students in the Department of Computer Science, CSUSB, and provides access to the database from the web. The motivation for the project comes from the needs of the Graduate Coordinator, professors, and department staff to access student information concurrently through the Internet.
APA, Harvard, Vancouver, ISO, and other styles
3

Kubler, Sylvain. "Premiers travaux relatifs au concept de matière communicante : Processus de dissémination des informations relatives au produit." Phd thesis, Université Henri Poincaré - Nancy I, 2012. http://tel.archives-ouvertes.fr/tel-00759600.

Full text
Abstract:
For many years, communities such as IMS (Intelligent Manufacturing Systems) and HMS (Holonic Manufacturing Systems) have advocated the use of intelligent products to make systems adaptable and adaptive, and have shown the benefits that can be achieved economically, in product traceability, in information sharing, and in the optimization of manufacturing processes. However, many questions remain open, such as how to collect product-related information, how to store it across the supply chain, and how to disseminate and manage this information throughout its life cycle. The contribution of this thesis is the definition of a framework for disseminating product-related information over the product's entire life cycle. This dissemination framework is associated with a new paradigm that radically changes the way the product and its material are viewed: the concept consists in giving the product the ability to be intrinsically and integrally communicating. The information dissemination framework allows the user to embed information that is sensitive to the usage context of the communicating product. Beyond defining the information dissemination process, this thesis gives an overview of the scientific and technological research directions yet to be investigated concerning the concept of communicating material.
APA, Harvard, Vancouver, ISO, and other styles
4

González Carreño, Gastón Patricio. "Análisis y evaluación de las bases de datos incorporadas en la biblioteca virtual de la Universidad Miguel de Cervantes." Thesis, 2014. http://eprints.rclis.org/23667/1/INDICE.pdf.

Full text
Abstract:
The research focused on analyzing and evaluating the databases included in the web page of the Universidad Miguel de Cervantes. In particular, twenty databases were evaluated, including the ProQuest databases, SIARE, and the open access version of DIALNET. The database evaluation model proposed by Alejo Febles, T., Serrano Manzano, P., and Bermejo Fresco, L. was applied. Subsequently, thirteen ProQuest databases were analyzed: ABI/INFORM Complete, Accounting & Tax, PRISMA, Banking Information Service, ProQuest Asian Business and Reference, ProQuest Criminal Justice, ProQuest Education Journals, ProQuest Newsstand, ProQuest Career and Technical Education, ProQuest Health Management, and ProQuest Military Collection. The analysis looked at six content indicators: thematic scope, language coverage, temporal coverage, authority of the producer/editor, material type, and geographic reach. The analysis results are presented separately for each database.
APA, Harvard, Vancouver, ISO, and other styles
5

Castelli, Marta C. "Requisitos Profesionales del Bibliotecario en Área de Salud : Análisis en Mar del Plata." Thesis, 2007. http://eprints.rclis.org/20645/3/Tesina_Castelli.pdf.

Full text
Abstract:
Scientific documentation, together with access to information and mastery of its tools, is fundamental to the training and development of the professionals who make scientific and technological progress possible. This is particularly true in the health sciences and medicine, where development is among the most remarkable and medical documentation is considered the most complex of all. The highly computerized equipment and diagnostic techniques applied in this discipline cause information to accumulate and advance at high speed, requiring information technology that supports this development. Moreover, physicians have neither the time to search, access, and read the sheer volume of circulating information, nor the resources and skills to track it, which makes documentalists and informationists the protagonists of direct access to and selection of information. For this reason, the demands placed on the librarian specialized in the health and medical area must be consistent with the requirements and demands of the discipline. The thesis examines the preparation documentalists have when entering a workforce specialized in medicine and the difficulties they face when searching, selecting, and retrieving this type of information in order to provide users with accurate documents that meet the professional requirements requested. Command of English, database management, medical vocabulary, knowledge of anatomy, and highly specific subject matter all require specialized preparation and academic training. Since new entrants to the labor market lack these tools, which are essential for good performance, higher education should largely provide an academic level commensurate with librarians' needs.
APA, Harvard, Vancouver, ISO, and other styles
6

Scheithauer, Walter. "Islamische Schriftdenkmäler in Österreich : Überlegungen zu ihrer Erfassung und Erschließung." Thesis, 2006. http://eprints.rclis.org/9496/1/islamische_Schriftdenkmale_in_oesterreich.pdf.

Full text
Abstract:
In Austria, there are many written documents in Arabic script. That they are not properly known is due to a lack of resources (personnel and funding), ignorance, and disinterest. The aim of this work is to find out how to locate these documents systematically and how to make them accessible. For this, one has to ask: "Who could have come into contact with people who wrote in Arabic script?" and "Where could the documents and items have ended up?" After archives and museums, one has to look especially at monastery libraries and libraries with a general collecting mandate (as opposed to public libraries). Flukes and single occurrences cannot be excluded. The best approach is to make the results accessible in a relational SQL database using the transcription system of the Deutsche Morgenländische Gesellschaft.
APA, Harvard, Vancouver, ISO, and other styles
7

Ortiz, Andrea Betina. "Colecciones hemerográficas y acceso a la información jurídica en la Hemeroteca de la Facultad de Ciencias Jurídicas y Sociales, de la ciudad de Santa Fe." Thesis, 2015. http://eprints.rclis.org/25233/2/Colecciones%20hemerograficas.pdf.

Full text
Abstract:
The investigation centers on the collection of periodicals and other resources, such as reference materials and databases, belonging to the Hemeroteca de la Facultad de Ciencias Jurídicas y Sociales de la Universidad Nacional del Litoral. These elements provide a legal reference service to which the community of users and researchers has free access. The work was grounded in a normative theoretical framework, international guidelines for academic and research libraries, the Conspectus model, circulation and consultation data, and expert opinions drawn from other works. At the local level, it draws on the Methodological Guide for Collection Evaluation produced by the Universidad Nacional de La Plata. All of this allows an analysis of this particular collection and, consequently, lays the foundations for future actions concerning it.
APA, Harvard, Vancouver, ISO, and other styles
8

De Robbio, Antonella. "La tutela giuridica delle banche nel diritto d'autore e nei diritti connessi." Thesis, 1999. http://eprints.rclis.org/4012/1/dbthesis.pdf.

Full text
Abstract:
Final thesis presented in the Course for Library Managers (January–June 1999), organized by the University of Padua Academic Library. All of the theses were collected in a book edited by Maria Antonia Romeo.
APA, Harvard, Vancouver, ISO, and other styles
9

Mazzieri, Marinella, Giovanni Michetti, and Gaetana Cognetti. "Definizione, recupero e struttura del protocollo clinico : analisi e riflessioni per una condivisione dell'informazione biomedica su Web." Thesis, 2003. http://eprints.rclis.org/5214/1/tesi_op_arch.zip.

Full text
Abstract:
The Internet has led to new systems of information retrieval and diffusion in all fields, including medicine. Health workers, patients, and citizens can freely access useful information resources on the Net, such as those concerning clinical protocols.
APA, Harvard, Vancouver, ISO, and other styles
10

Batı, Hacer. "Elektronik Bilgi Kaynaklarında Maliyet-Yarar Analizi: Orta Doğu Teknik Üniversitesi Kütüphanesi Üzerine Bir Değerlendirme." Thesis, 2006. http://eprints.rclis.org/7890/1/hacer_bati_tez.pdf.

Full text
Abstract:
In recent years there has been a rapid transition to subscription of electronic resources, and significant percentages of library budgets are allocated to them. Identifying and analyzing the benefits and costs of this new trend is therefore relevant. In this study we have considered the experiences of METU Library in utilizing electronic resources and provided a cost-benefit analysis of electronic resources based on the cost and usage statistics obtained from this library. The study examines the ScienceDirect, EbscoHost, and Web of Science databases available within the METU electronic resources collection. In addition to the subscription cost statistics, non-subscription cost information obtained through interviews and surveys has been used in our analysis. Usage statistics of electronic information sources have been collected in accordance with the COUNTER standards and analyzed using various methods. The high usage of electronic resources at METU reduces the unit cost of databases. According to the 2004 data, the cost per use for EbscoHost and Web of Science is $0.3 and $0.2, respectively. These figures place METU below the average unit cost per use of all Anatolian University Libraries Consortium (ANKOS) members. Yet due to its high subscription cost, the unit cost per use of ScienceDirect is relatively higher ($2.3), even though the database is used very heavily at METU; this figure is above the ANKOS average for the ScienceDirect database. Statistics show that a small number of "core" journals account for a significant share of use, while the majority of journals are used rather infrequently. The results obtained from this study show that electronic resources have cost METU a considerable amount of money over the years, and their usage has also increased gradually. In general, it can be concluded that electronic resources are heavily utilized at METU in terms of overall usage. In order to maximize the benefits of electronic resources, it is necessary to analyze cost and usage statistics in detail at both institutional and consortial levels, using various techniques. The results obtained from such studies can serve as guidelines for the development of electronic resource collections, consortial agreements, and user education programs.
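The unit-cost figures in this abstract follow from dividing total cost by COUNTER-reported usage. A tiny sketch of the arithmetic with invented numbers:

# Sketch: cost-per-use as annual cost divided by annual full-text downloads.
# Figures are invented for illustration.
resources = {
    "DatabaseA": (30_000, 150_000),   # (annual cost in USD, uses per year)
    "DatabaseB": (120_000, 400_000),
    "DatabaseC": (250_000, 110_000),
}

for name, (cost, uses) in resources.items():
    print(f"{name}: ${cost / uses:.2f} per use")
# a high-cost, moderately used resource ends up with the highest unit cost,
# mirroring the ScienceDirect result reported above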
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "HL. Databases and database Networking"

1

International Business Machines Corporation. International Technical Support Organization, ed. IBM j-type Data Center Networking Introduction. Poughkeepsie, NY: IBM, International Technical Support Organization, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Networking for big data. Boca Raton: CRC Press, Taylor & Francis Group, 2016.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Besch, David. MCSE training guide: SQL Server 7 database design: exam 70-029. Indianapolis, IN: New Riders Pub., 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wille, Christoph, ed. MCSE: SQL Server 7 Administration, exam 70-028. Indianapolis, IN: New Riders, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Terplan, Kornel, ed. Network design: Management and technical perspectives. Boca Raton: CRC Press, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Network design: Management and technical perspectives. 2nd ed. Boca Raton, Fla: Auerbach Publications, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Goggin, Terence, ed. IntraBuilder FrontRunner. Scottsdale, Ariz.: Coriolis Group, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Desai, Anil, ed. Fast track MCSE. Indianapolis, IN: New Riders, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Owen, Charles B. Computed synchronization for multimedia applications. Boston: Kluwer Academic Publishers, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Makedon, F., ed. Computed synchronization for multimedia applications. Boston: Kluwer Academic, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "HL. Databases and database Networking"

1

"Database Interoperability: From Federated Databases to a Mobile Federation." In Multi-Operating System Networking, 475–94. Auerbach Publications, 1999. http://dx.doi.org/10.1201/9780203997598-37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Horiuchi, Catherine. "E-Government Databases." In Encyclopedia of Database Technologies and Applications, 206–10. IGI Global, 2005. http://dx.doi.org/10.4018/978-1-59140-560-3.ch035.

Full text
Abstract:
The new face of government is electronic. Prior to the development of e-government, adoption of small-scale computing and networking in homes and businesses created the world of e-business, where computer technologies mediate transactions once performed face-to-face. Through the use of computers large and small, companies reduce costs, standardize performance, extend hours of service, and increase the range of products available to consumers. These same technological advances create opportunities for governments to improve their capacity to meet growing public service mandates. Tasks that formerly required a trip to city hall can be accomplished remotely. Government employees can post answers to frequently asked questions online, and citizens can submit complex questions through the same electronic mail (e-mail) systems already used at home and in businesses. This developing e-government increases the number and complexity of electronic databases that must be managed according to the roles information plays in government operations.
APA, Harvard, Vancouver, ISO, and other styles
3

Udoh, Emmanuel. "Open Source Database Technologies." In Encyclopedia of Multimedia Technology and Networking, Second Edition, 1106–11. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch150.

Full text
Abstract:
The free or open source software (OSS) movement, pioneered by Richard Stallman in 1983, is gaining mainstream acceptance and challenging the established order of the commercial software world. The movement is taking root in various aspects of software development, namely operating systems (Linux), Web servers (Apache), databases (MySQL), and scripting languages (PHP), to mention but a few. The basic tenet of the movement is that the underlying code of any open source software should be freely viewable, modifiable, or redistributable by any interested party, as enunciated under the copyleft concept (Stallman, 2002). This is in sharp contrast to proprietary (closed source) software, in which the code is controlled under copyright laws. In the contemporary software landscape, the open source movement can no longer be overlooked by any major player in the industry, as the movement portends a paradigm shift and is forcing a major rethinking of strategy in the software business. For instance, companies like Oracle, Microsoft, and IBM now offer lightweight versions of their proprietary flagship products to small-to-medium businesses at no cost for product trial (Samuelson, 2006). These developments are signs of the success of the OSS movement. Reasons abound for the success of OSS, viz. the collective effort of many volunteer programmers, a flexible and quick release rate, code availability, and security. On the other hand, one of the main disadvantages of OSS is limited technical support, as it may be difficult to find an expert to help an organization with system setup or maintenance. Due to the extensive nature of OSS, this article focuses only on the database aspects. A database is one of the critical components of the application stack for an organization or a business. Increasingly, open source databases (OSDBs) such as MySQL, PostgreSQL, MaxDB, Firebird, and Ingres are coming up against the big three commercial proprietary databases: Oracle, SQL Server, and IBM DB2 (McKendrick, 2006; Paulson, 2004; Shankland, 2004). Big companies like Yahoo and Dell are now embracing OSDBs for enterprise-wide applications. According to an Independent Oracle Users Group (IOUG) survey, 37% of enterprise database sites are running at least one of the major brands of open source databases (McKendrick, 2006). The survey further finds that the OSDBs are mostly used for single-function systems, followed by custom home-grown applications and Web sites. But critics maintain that these OSDBs are used for non-mission-critical purposes, because IT organizations still have concerns about support, security, and management tools (Harris, 2004; Zhao & Elbaum, 2003).
APA, Harvard, Vancouver, ISO, and other styles
4

Adhikari, Mainak, and Sukhendu Kar. "NoSQL Databases." In Handbook of Research on Securing Cloud-Based Databases with Biometric Applications, 109–52. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-6559-0.ch006.

Full text
Abstract:
NoSQL databases provide a mechanism for the storage and access of data across multiple storage clusters. They are finding significant and growing industry adoption to meet the huge data storage requirements of Big Data, real-time applications, and cloud computing, and they have many advantages over conventional RDBMS features. NoSQL systems are also referred to as "Not only SQL" to emphasize that they may in fact allow a structured language like SQL while additionally supporting semi-structured and unstructured data. A variety of NoSQL databases with different features, available as both open source and proprietary options and mostly promoted and used by social networking sites, exist to deal with exponentially growing data-intensive applications. This chapter discusses features and challenges of NoSQL databases and some of the popular NoSQL databases with their features in light of the CAP theorem.
APA, Harvard, Vancouver, ISO, and other styles
5

Indraratne, Harith, and Gábor Hosszú. "Fine-Grained Data Access for Networking Applications." In Encyclopedia of Multimedia Technology and Networking, Second Edition, 568–73. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch076.

Full text
Abstract:
Current-day network applications require much more secure data storage than anticipated before. With millions of anonymous users using the same networking applications, the security of the data behind the applications has become a major concern of database developers and security experts. In most security incidents, the databases attached to the applications are targeted and attacked. Most of these applications require allowing data manipulation at several granular levels for the users accessing the applications—not just the table and view level, but the tuple level. A database that supports fine-grained access control restricts the rows a user sees based on his or her credentials. Generally, this restriction is enforced by a query modification mechanism performed automatically at the database. This feature enables per-user data access within a single database, with the assurance of physical data separation. It is enabled by associating one or more security policies with tables, views, table columns, and table rows. Such a model is ideal for minimizing the complexity of the security enforcement in databases behind network applications. With fine-grained access controls, one can create fast, scalable, and secure network applications. Each application can be written to find the correct balance between performance and security, so that each data transaction is performed as quickly and safely as possible. Today, database systems like Oracle 10g and IBM DB2 provide commercial implementations of fine-grained access control methods, such as filtering rows, masking columns selectively based on the policy, and applying the policy only when certain columns are accessed. The behavior of the fine-grained access control model can also be extended through the use of multiple types of policies based on the nature of the application, making the feature applicable to multiple situations. Meanwhile, Microsoft SQL Server 2005 has also introduced features to control access to databases using fine-grained access controls. Fine-grained access control does not cover all the security issues related to Internet databases, but when implemented, it supports building secure databases rapidly and reduces the complexity of security management.
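The query-modification mechanism this abstract describes can be shown outside any particular DBMS: the database (or a thin layer in front of it) appends a predicate derived from the user's credentials to every query. A toy illustration in Python over SQLite; the policy function and schema are invented, and this is not Oracle's or SQL Server's actual API:

# Sketch: row-level (fine-grained) access control by automatic query
# modification: a credential-derived predicate is appended to each SELECT,
# so every user sees only their own rows. Schema and data are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, owner TEXT, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [(1, "alice", 10.0), (2, "bob", 25.0), (3, "alice", 7.5)])

def policy_predicate(user):
    """Return the row-level filter for this user (the 'security policy')."""
    return "owner = ?", (user,)

def secure_select(user, base_query, params=()):
    pred, pred_params = policy_predicate(user)
    glue = " AND " if " where " in base_query.lower() else " WHERE "
    return db.execute(base_query + glue + pred, params + pred_params).fetchall()

print(secure_select("alice", "SELECT id, amount FROM orders"))  # alice's rows
print(secure_select("bob", "SELECT id, amount FROM orders"))    # bob's rows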
APA, Harvard, Vancouver, ISO, and other styles
6

Indraratne, Harith, and Gábor Hosszú. "Fine-Grained Data Security in Virtual Organizations." In Database Technologies, 1663–69. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-058-5.ch101.

Full text
Abstract:
Controlling access to data based on user credentials is a fundamental part of database management systems. In most cases, the level at which information is controlled extends only to a certain granularity. In some scenarios, however, there is a requirement to control access in a more granular way, allowing users to see only the data they are supposed to see in a database table. Fine-grained access control (FGAC) provides row-level security capabilities to secure information stored in modern relational database management systems. When creating the virtual networking infrastructure of virtual organizations, the security of the data stored in database management systems is a very important issue. Several models have been proposed by the research community and database vendors for specifying and enforcing row-level access control at the database layer. This article reviews the most important aspects of some significant FGAC models and their current implementations in two commercial database management systems. We describe a novel concept for implementing FGAC in SQL Server 2005, which resembles the Oracle 10g database management system's FGAC solution, virtual private databases (VPD).
APA, Harvard, Vancouver, ISO, and other styles
7

Su, Chun-Rong, and Jiann-Jone Chen. "Peer-to-Peer Network-Based Image Retrieval." In Multimedia Networking and Coding, 377–99. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2660-7.ch013.

Full text
Abstract:
Performing content-based image retrieval (CBIR) in Internet-connected databases through a peer-to-peer (P2P) network (P2P-CBIR) helps to effectively explore large-scale image databases distributed over connected peers. A decentralized unstructured P2P framework is adopted in our system as a compromise with the structured one, while still reserving flexible routing control when peers join/leave or the network fails. The P2P-CBIR search engine is designed to provide multi-instance queries with multiple feature types to effectively reduce network traffic while maintaining high retrieval accuracy. In addition, the proposed P2P-CBIR system is designed to provide a scalable retrieval function, which can adaptively control the query scope and progressively refine the accuracy of retrieved results. Reconfiguring the system at regular intervals, so that it reflects the most up-to-date local database characteristics for P2P-CBIR users, effectively reduces trivial peer routing and retrieval operations due to imprecise configuration. Experiments demonstrated that the average recall rate of the proposed P2P-CBIR with reconfiguration is higher than that without it by about 20%, and the latter outperforms previous methods, i.e., the firework query model (FQM) and breadth-first search (BFS), by about 20% and 120%, respectively, under the same range of TTL values.
APA, Harvard, Vancouver, ISO, and other styles
8

Wei, Chia-Hung. "Content-Based Multimedia Retrieval." In Encyclopedia of Multimedia Technology and Networking, Second Edition, 260–66. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch036.

Full text
Abstract:
In the past decade, there has been rapid growth in the use of digital media, such as images, video, and audio. As the use of digital media increases, retrieval and management techniques become more important in order to facilitate the effective searching and browsing of large multimedia databases. Before the emergence of content-based retrieval, media was annotated with text, allowing the media to be accessed by text-based searching. Through textual description, media is managed and retrieved based on the classification of subject or semantics. This hierarchical structure, like yellow pages, allows users to easily navigate and browse, or search using standard Boolean queries. However, with the emergence of massive multimedia databases, the traditional text-based search suffers from the following limitations (Wei, Li, & Wilson, 2006). (1) Manual annotations require too much time and are expensive to implement. As the number of media items in a database grows, the difficulty in finding desired information increases, and it becomes infeasible to manually annotate all attributes of the media content; annotating a 60-minute video containing more than 100,000 images consumes a vast amount of time and expense. (2) Manual annotations fail to deal with the discrepancy of subjective perception. The phrase "an image says more than a thousand words" implies that textual description is insufficient for depicting subjective perception; capturing all concepts, thoughts, and feelings for the content of any media is almost impossible. (3) Some media content is difficult to describe concretely in words. For example, a melody without lyrics or an irregular organic shape cannot easily be expressed in textual form, yet people expect to search for media with similar content based on examples they provide. In an attempt to overcome these difficulties, content-based retrieval employs content information to automatically index data with minimal human intervention.
APA, Harvard, Vancouver, ISO, and other styles
9

Raghunathan, A., and K. Murugesan. "Performance-Enhanced Caching Scheme for Web Clusters for Dynamic Content." In Web-Based Multimedia Advancements in Data Communications and Networking Technologies, 185–206. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2026-1.ch010.

Full text
Abstract:
In order to improve the QoS of applications, clusters of web servers are increasingly used in web services. Caching helps improve performance in web servers, but is largely exploited only for static web content. With more web applications using backend databases today, caching of dynamic content has a crucial role in web performance. This paper presents a set of cache management schemes for handling dynamic data in web clusters by sharing cached contents. These schemes use either automatic or expiry-based cache validation, and work with any type of request distribution. The techniques improve response by utilizing the caches efficiently and reducing redundant database accesses by web servers while ensuring cache consistency. The authors present caching schemes for both horizontal and vertical cluster architectures. Simulations show an appreciable performance rise in response times of queries in clustered web servers.
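The expiry-based validation scheme mentioned in this abstract can be reduced to a small cache wrapper around database queries: each entry carries a time-to-live, and stale entries trigger a fresh database access. A minimal, hypothetical sketch; the fetch function stands in for a real database call:

# Sketch: expiry-based caching of dynamic content. Each cached result
# carries a TTL; stale entries are refetched, bounding staleness by the
# expiry interval. The fetch function is a stand-in for a real DB query.
import time

class ExpiryCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}                  # query -> (result, timestamp)

    def get(self, query, fetch_from_db):
        entry = self.store.get(query)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]              # fresh: serve from the cache
        result = fetch_from_db(query)    # stale or missing: hit the database
        self.store[query] = (result, time.time())
        return result

def fake_db_fetch(query):
    print(f"DB access: {query}")
    return [("row", 1)]

cache = ExpiryCache(ttl_seconds=2.0)
cache.get("SELECT * FROM news", fake_db_fetch)  # goes to the database
cache.get("SELECT * FROM news", fake_db_fetch)  # served from the cache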
APA, Harvard, Vancouver, ISO, and other styles
10

Cruz, Christophe. "Use of Semantics to Manage 3D Scenes in Web Platforms." In Encyclopedia of Multimedia Technology and Networking, Second Edition, 1487–92. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch200.

Full text
Abstract:
Computer graphics have spread widely into various computer applications. After the early wire-frame computer-generated images of the 1960s, spatial representation of objects improved in the 1970s with Boundary Representation (B-Rep) modeling, Constructive Solid Geometry (CSG) objects, and free-form surfaces. Realistic rendering in the 1990s, taking into account sophisticated dynamic interactions (between objects or between objects and human actors, physical interactions with light, and so on), now makes 3D scenes much more than simple 3D representations of the real world. Indeed, they are a way to conceive products (industrial products, art products, and so on) and to modify them over time, either interactively or by simulation of physical phenomena (Faux & Pratt, 1979; Foley, Van Dam, Feiner, & Hughes, 1990; Kim, Huang, & Kim, 2002). Large amounts of data can be generated from such a variety of 3D models. Because there is a wide range of models corresponding to various areas of application (metallurgy, chemistry, seismology, architecture, arts and media, and so on) (DIS 3D Databases, 2004; Pittarello & De Faveri, 2006; SketchUp from Google, 2006), data representations vary greatly. Archiving these large amounts of information most often remains a simple storage of representations of 3D scenes (3D images). To our knowledge, there is no efficient way to manipulate, archive, extract, and modify scenes together with their components. These components may include the geometric objects or primitives that compose scenes (3D geometry and material aspects), geometric transformations to compose primitive objects, or observation conditions (cameras, lights, and so on). Difficulties arise less in creating 3D scenes than in their interactive reuse, particularly via database queries, such as over the Internet. Managing 3D scenes (e.g., querying a database of architectural scenes by content, modifying given parameters on a large scale, or computing statistics) remains difficult. This implies that a DBMS should use the data structures of the 3D scene models. Unfortunately, such data structures often follow different or mutually exclusive standards. Indeed, many "standards" exist in computer graphics, often denoted by the extensions of data files. Let us mention, as examples, 3dmf (Apple's QuickDraw 3D), 3ds (Autodesk's 3D Studio), dxf (Autodesk's AutoCAD), flt (Multigen's ModelGen), iv (Silicon Graphics' Inventor), obj (Wavefront/Alias), and so on. Many standardization attempts strive to reduce this multiplicity of formats. In particular, there is the Standard for the Exchange of Product model data (STEP) (Fowler, 1995), an international standard for the computer representation and exchange of product data. Its goal is to describe data bound to a product throughout its evolution, independently of any particular computer system. It allows file exchanges, but also provides a basis for implementing and sharing product databases. Merging 3D information and textual information allows the definition of a project's mock-up: 3D information describes the CAD objects of the project, and added textual information gives semantic information about the geometries. The main issues are the sharing and exchange of the digital mock-up. The next section explains how we use a digital mock-up to create an information system with the help of the semantics included in geometric information. Information is exchanged and shared through a Web platform.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "HL. Databases and database Networking"

1

Artail, Hassan, Haidar Safa, Rana ELZinnar, and Hicham Hamze. "A Distributed Database Framework from Mobile Databases in MANETs." In Third IEEE International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob 2007). IEEE, 2007. http://dx.doi.org/10.1109/wimob.2007.4390848.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rao, Udai Pratap, G. J. Sahani, and Dhiren R. Patel. "Machine learning proposed approach for detecting database intrusions in RBAC enabled databases." In 2010 International Conference on Computing, Communication and Networking Technologies (ICCCNT'10). IEEE, 2010. http://dx.doi.org/10.1109/icccnt.2010.5591574.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chiseganegrila, Anamaria. "Impact of Web 3.0 on the Evolution of Learning." In eLSE 2016. Carol I National Defence University Publishing House, 2016. http://dx.doi.org/10.12753/2066-026x-16-008.

Full text
Abstract:
In a world dominated by abrupt change and information overload, the current version of the Internet, though successful at connecting people via social networking, is still beholden to its early and now obsolete system design. The new environment demands a change in the way the Internet functions and the way information is conveyed and stored, so that users have a more complete web-browsing experience. Currently, the Web provides limited access to data; its chaotic nature does not let the user obtain the needed information from a single search on a single browser. That is why the Web needs to change, become more decentralized, and transform into a huge database incorporating artificial intelligence able to "understand" and retrieve the information needed by the user. As data are now hidden in different databases, relevant information is hardly discovered by search engines, and the results are thus unsatisfactory. The miscellaneous structure of the Web and information overload have turned the Web into a swamp where data are duplicated and relevant information sometimes sinks to the bottom, becoming invisible even to multiple searches with different engines. In practice, end users are more likely to dig out what they are looking for if they already have prior knowledge of what they are going to find, using specific keywords. However, by obtaining relevant information from different fields, people are able to learn more about the fast-changing world around them and make sound decisions regarding their career prospects. Therefore, the Web has to transform in order to feed the thirst for knowledge that characterizes modern society and provide users with better learning experiences suited to their needs in both academic and nontraditional settings.
APA, Harvard, Vancouver, ISO, and other styles