
Journal articles on the topic "XML encoding"


Consult the top 50 journal articles for your research on the topic "XML encoding".


You can also download the full text of each publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore journal articles from a wide variety of disciplines and organize your bibliography correctly.

1

Yao, Quan Zhu, Bing Tian, and Wang Yun He. "XML Keyword Search Algorithm Based on Level-Traverse Encoding". Applied Mechanics and Materials 263-266 (December 2012): 1553–58. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.1553.

For XML documents, existing keyword retrieval methods encode each node with Dewey encoding, so Dewey labels must be compared part by part during LCA computation. When the XML document is deep, the many LCA computations degrade keyword search performance. In this paper we propose a novel labeling method called Level-TRaverse (LTR) encoding, combine it with a result-set definition based on the Exclusive Lowest Common Ancestor (ELCA), and design a query Bottom-Up Level Algorithm (BULA). The experiments demonstrate that this method improves the efficiency and accuracy of XML keyword retrieval.
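
As an illustrative aside (not code from the cited paper): the part-by-part Dewey comparison that LTR encoding aims to avoid can be sketched in a few lines of Python; the labels below are hypothetical.

```python
# Illustrative sketch: the LCA of two Dewey-labelled nodes is found by
# comparing the dot-separated components one by one.
def dewey_lca(label_a: str, label_b: str) -> str:
    """Return the Dewey label of the lowest common ancestor."""
    prefix = []
    for x, y in zip(label_a.split("."), label_b.split(".")):
        if x != y:
            break
        prefix.append(x)
    return ".".join(prefix)

print(dewey_lca("1.3.2.5", "1.3.4"))  # -> "1.3"
```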
2

Hardie, Andrew. "Modest XML for Corpora: Not a standard, but a suggestion". ICAME Journal 38, no. 1 (April 28, 2014): 73–103. http://dx.doi.org/10.2478/icame-2014-0004.

Abstract This paper argues for, and presents, a modest approach to XML encoding for use by the majority of contemporary linguists who need to engage in corpus construction. While extensive standards for corpus encoding exist - most notably, the Text Encoding Initiative’s Guidelines and the Corpus Encoding Standard based on them - these are rather heavyweight approaches, implicitly intended for major corpus-building projects, which are rather different from the increasingly common efforts in corpus construction undertaken by individual researchers in support of their personal research goals. Therefore, there is a clear benefit to be had from a set of recommendations (not a standard) that outlines general best practices in the use of XML in corpora without going into any of the more technical aspects of XML or the full weight of TEI encoding. This paper presents such a set of suggestions, dubbed Modest XML for Corpora, and posits that such a set of pointers to a limited level of XML knowledge could work as part of the normal, general training of corpus linguists. The Modest XML recommendations cover the following set of things, which, according to the foregoing argument, are sufficient knowledge about XML for most corpus linguists’ day-to-day needs: use of tags; adding attribute value pairs; recommended use of attributes; nesting of tags; encoding of special characters; XML well-formedness; a collection of de facto standard tags and attributes; going beyond the basic de facto standard tags; and text headers.
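
Purely as an illustration of the level of markup the article has in mind, here is a hypothetical "modest" corpus fragment (the element and attribute names are invented, not Hardie's de facto standard set), with a well-formedness check using Python's standard library.

```python
# Hypothetical lightweight corpus markup: tags, attribute-value pairs, an
# escaped special character, and a well-formedness check via parsing.
import xml.etree.ElementTree as ET

sample = """<text id="interview01" lang="en">
  <u who="A">We agreed on the terms &amp; conditions.</u>
  <u who="B">Fine.</u>
</text>"""

root = ET.fromstring(sample)        # parsing succeeds only if the XML is well formed
print(root.tag, root.attrib["id"])  # -> text interview01
print([u.get("who") for u in root.iter("u")])  # -> ['A', 'B']
```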
3

Jiang, Yi, Hong Bo Zhang, and Fan Lin. "A Continued Fraction Encoding and Labeling Scheme for Dynamic XML Data". Advanced Materials Research 204-210 (February 2011): 960–63. http://dx.doi.org/10.4028/www.scientific.net/amr.204-210.960.

We present a new efficient XML encoding and labeling scheme for dynamic XML document called CFE (Continued Fraction-based Encoding) which labels nodes with continued fractions in this paper. CFE has three important properties which form the foundations of this paper. The experimental results show that CFE provides fairly reasonable XML query processing performance while completely avoiding re-labeling for updates.
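
As a loose illustration of why fraction-valued labels avoid relabelling (this is not the CFE scheme itself, only the underlying idea): a new label can always be generated strictly between two existing labels, here using the mediant of two fractions.

```python
# The mediant (a+c)/(b+d) always lies strictly between a/b and c/d when
# a/b < c/d, so a new sibling can be labelled without touching existing labels.
from fractions import Fraction

def between(left: Fraction, right: Fraction) -> Fraction:
    assert left < right
    return Fraction(left.numerator + right.numerator,
                    left.denominator + right.denominator)

a, b = Fraction(1, 2), Fraction(2, 3)
new = between(a, b)
print(a < new < b, new)  # -> True 3/5
```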
4

Vinoth, P., and P. Sankar. "Encoding of coordination complexes with XML". Journal of Molecular Graphics and Modelling 76 (September 2017): 242–59. http://dx.doi.org/10.1016/j.jmgm.2017.07.009.

5

Gu, Dong Juan, and Li Yong Wan. "A XML Document Coding Schema Based on Binary". Applied Mechanics and Materials 496-500 (January 2014): 1877–80. http://dx.doi.org/10.4028/www.scientific.net/amm.496-500.1877.

In order to resolve the inefficiency of XML data queries and to support dynamic updates, this paper proposes an improved method to encode XML document nodes. On the basis of region encoding and prefix encoding, it introduces an XML document coding schema based on binary (CSBB). The CSBB code uses a binary encoding strategy and keeps inserted bit strings in order. The bit-string insertion algorithm generates ordered bit strings that reserve space for newly inserted nodes without affecting the others. Experiments show that the CSBB code effectively avoids re-encoding of nodes and supports dynamic node updates.
6

Lu, Yan, Fu Ning Ma, and Shan Zhong Chu. "A New Reachability Query Method for Graph-Structured XML Data". Applied Mechanics and Materials 235 (November 2012): 394–98. http://dx.doi.org/10.4028/www.scientific.net/amm.235.394.

Query processing of graph-structured XML data is a rising topic in the XML research field. This paper focuses on reachability query methods for graph-structured XML data. An encoding scheme called CDGX (Coding Directed Graph-structured XML data) is proposed, which not only effectively solves the cycle problem but also avoids a large amount of intermediate data and saves storage space. Based on the CDGX encoding scheme, a new reachability query method, RJDG (Reachability Judgment on Directed Graph), is put forward. In RJDG, adjacent nodes in the same graph-structured XML document are obtained and stored beforehand, and RJDG only needs to examine these adjacent nodes to decide the reachability relationship between XML nodes. Experiments illustrate that RJDG is an efficient reachability query method.
7

Rusu, Maria Smaranda. "Encoding youthful perspectives of the Anti-Communist Revolution". Studia Universitatis Babeș-Bolyai Digitalia 65, no. 2 (January 25, 2021): 49–56. http://dx.doi.org/10.24193/subbdigitalia.2020.2.04.

"Encoding youthful perspectives of the Anti-Communist Revolution” presents in a captivating manner two interviews dating back to the time in the history of Romania when the country was struggling with the Communist revolution which started in Timisoara. The perspective in which this information is described is the XML language. In order to simplify the data and to make it more accesible, there were used tags in a scheme. By using this method, the readers can have a better understanding of the text while having an over-all look upon the discussed historical issue. Keywords: XML, Text encoding, Anti-Communist Revolution, Testimonies, Oxygen XML Editor "
8

Deng, Zhi-Hong, Yong-Qing Xiang, and Ning Gao. "LAF: a new XML encoding and indexing strategy for keyword-based XML search". Concurrency and Computation: Practice and Experience 25, no. 11 (July 24, 2012): 1604–21. http://dx.doi.org/10.1002/cpe.2906.

9

Jang, Bumsuk, SeongHun Park, and Young-guk Ha. "A stream-based method to detect differences between XML documents". Journal of Information Science 43, no. 1 (July 10, 2016): 39–53. http://dx.doi.org/10.1177/0165551515602805.

Detecting differences between XML documents is one of the most important research topics for XML. Since XML documents are generally considered to be organized in a tree structure, most previous research has attempted to detect differences using tree-matching algorithms. However, most tree-matching algorithms have inadequate performance owing to limitations in execution time, optimality and scalability. This study proposes a stream-based difference detection method in which an XML binary encoding algorithm is used to provide improved performance relative to previous tree-matching algorithms. A tree-structured analysis of XML is not essential in order to detect differences. We use a D-Path algorithm that has optimal result quality for difference detection between two streams and has lower time complexity than tree-based methods. We then modify the existing XML binary encoding method to tokenize the stream, and modify the algorithm to support more operations than the D-Path algorithm does. The experimental results reveal greater efficiency for the proposed method relative to tree-based methods: the execution time is at least 4 times faster than state-of-the-art tree-based methods, and the scalability is much better.
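
A rough sketch of the stream-based idea under simplified assumptions (difflib stands in for the D-Path algorithm, and the naive tokenisation below is not the paper's binary encoding):

```python
# Sketch: flatten two XML documents into token streams and diff the streams,
# avoiding explicit tree matching.
import difflib
import xml.etree.ElementTree as ET

def tokens(elem):
    """Emit start tag, text, children and end tag in document order."""
    out = [f"<{elem.tag}>"]
    if elem.text and elem.text.strip():
        out.append(elem.text.strip())
    for child in elem:
        out.extend(tokens(child))
        if child.tail and child.tail.strip():
            out.append(child.tail.strip())
    out.append(f"</{elem.tag}>")
    return out

old = tokens(ET.fromstring("<a><b>x</b><c>y</c></a>"))
new = tokens(ET.fromstring("<a><b>x</b><c>z</c><d/></a>"))
for op in difflib.SequenceMatcher(None, old, new).get_opcodes():
    print(op)  # equal / replace / insert operations between the two streams
```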
10

Antoniou, Byron, and Lysandros Tsoulos. "The potential of XML encoding in geomatics converting raster images to XML and SVG". Computers & Geosciences 32, no. 2 (March 2006): 184–94. http://dx.doi.org/10.1016/j.cageo.2005.06.004.

11

Zhou, Yun Cheng. "Research on Method of CIM-Based Data Exchange for Electric Power Enterprise". Advanced Materials Research 986-987 (July 2014): 2151–57. http://dx.doi.org/10.4028/www.scientific.net/amr.986-987.2151.

A novel CIM-based approach is proposed to realize power enterprise data exchange in a heterogeneous IT environment. A CIM object encoding specification in XML is introduced in this paper: an object is expressed as an XML complex element, and the object's properties are encoded as simple elements embedded in the complex one. To solve several data interchange problems, a CIM/XSD schema for CIM data syntax and validation is established using XML Schema Definition (XSD) technology, and an attribute group "AssociationAttributeGroup" is designed to serialize complex relationships between CIM objects. The attribute group provides syntax support for marshalling object linkages in two ways: "embedding" and "referring". Two operators, serialization and deserialization, are added to each CIM class, so CIM objects can be converted quickly and bidirectionally between in-memory objects and CIM/XML documents. The algorithms for the two operators are designed in detail and can implement bidirectional conversion of complex object sets efficiently. The case study shows that the CIM object encoding specification, the CIM/XSD schema and the serialization algorithms can be applied to exchange and share CIM data in electric power enterprises.
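
A hypothetical sketch of the encoding style described above (the class, property and attribute names are invented and do not reproduce the paper's CIM/XSD schema): an object becomes a complex element, its properties become simple child elements, and an association is marshalled "by referring".

```python
# Hypothetical object-to-XML serialization in the spirit described above.
import xml.etree.ElementTree as ET

def serialize_breaker(obj_id: str, rated_current: float, substation_id: str) -> str:
    obj = ET.Element("Breaker", {"id": obj_id})                        # object -> complex element
    ET.SubElement(obj, "ratedCurrent").text = str(rated_current)       # property -> simple element
    ET.SubElement(obj, "MemberOf_Substation", {"ref": substation_id})  # association by "referring"
    return ET.tostring(obj, encoding="unicode")

print(serialize_breaker("BRK-12", 630.0, "SUB-3"))
```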
12

Mirabi. "Controlling Label Size Increment of Efficient XML Encoding and Labeling Scheme in Dynamic XML Update". Journal of Computer Science 6, no. 12 (December 1, 2010): 1535–40. http://dx.doi.org/10.3844/jcssp.2010.1535.1540.

13

Vogeler, Georg. "Towards a Standard of Encoding Medieval Charters with XML". Digital Scholarship in the Humanities 20, no. 3 (September 1, 2005): 269–80. http://dx.doi.org/10.1093/llc/fqi031.

14

Fiander, David J., and D. Grant Campbell. "An XML Definition for an ISBD-Based Encoding Scheme". Journal of Internet Cataloging 6, no. 4 (September 24, 2004): 29–58. http://dx.doi.org/10.1300/j141v06n04_04.

15

Min, Jun-Ki, Jihyun Lee, and Chin-Wan Chung. "An efficient XML encoding and labeling method for query processing and updating on dynamic XML data". Journal of Systems and Software 82, no. 3 (March 2009): 503–15. http://dx.doi.org/10.1016/j.jss.2008.08.014.

16

Jin, Meng, Yuqi Bai, Emmanuel Devys, and Liping Di. "Toward a Standardized Encoding of Remote Sensing Geo-Positioning Sensor Models". Remote Sensing 12, no. 9 (May 11, 2020): 1530. http://dx.doi.org/10.3390/rs12091530.

Geolocation information is an important feature of remote sensing image data that is captured through a variety of passive or active observation sensors, such as push-broom electro-optical sensor, synthetic aperture radar (SAR), light detection and ranging (LIDAR) and sound navigation and ranging (SONAR). As a fundamental processing step to locate an image, geo-positioning is used to determine the ground coordinates of an object from image coordinates. A variety of sensor models have been created to describe geo-positioning process. In particular, Open Geospatial Consortium (OGC) has defined the Sensor Model Language (SensorML) specification in its Sensor Web Enablement (SWE) initiative to describe sensors including the geo-positioning process. It has been realized using syntax from the extensible markup language (XML). Besides, two standards defined by the International Organization for Standardization (ISO), ISO 19130-1 and ISO 19130-2, introduced a physical sensor model, a true replacement model, and a correspondence model for the geo-positioning process. However, a standardized encoding for geo-positioning sensor models is still missing for the remote sensing community. Thus, the interoperability of remote sensing data between application systems cannot be ensured. In this paper, a standardized encoding of remote sensing geo-positioning sensor models is introduced. It is semantically based on ISO 19130-1 and ISO 19130-2, and syntactically based on OGC SensorML. It defines a cross mapping of the sensor models defined in ISO 19130-1 and ISO 19130-2 to the SensorML, and then proposes a detailed encoding method to finalize the XML schema (an XML schema here is the structure to define an XML document), which will become a profile of OGC SensorML. It seamlessly unifies the sensor models defined in ISO 19130-1, ISO 19130-2, and OGC SensorML. By enabling a standardized description of sensor models used to produce remote sensing data, this standard is very promising in promoting data interoperability, mobility, and integration in the remote sensing domain.
17

Tang, Hong Jie. "Study of XML Indexing Structure Based on XISS". Applied Mechanics and Materials 851 (August 2016): 611–14. http://dx.doi.org/10.4028/www.scientific.net/amm.851.611.

The study is based on XISS (XML Indexing and Storage System), which uses Dietz's numbering scheme to determine ancestor-descendant relationships. Building on this, the paper proposes an improved method of node encoding, realizes its indexing structure, and discusses its query paths. Finally, the paper analyzes the properties of the improved method.
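
For orientation, Dietz's numbering decides ancestor-descendant relationships from preorder and postorder ranks alone; a minimal sketch with a hypothetical four-node tree:

```python
# Dietz's observation: x is an ancestor of y iff pre(x) < pre(y) and post(x) > post(y).
def is_ancestor(x, y):
    return x["pre"] < y["pre"] and x["post"] > y["post"]

# hypothetical tree a -> (b -> d, c), numbered by preorder and postorder traversal
nodes = {
    "a": {"pre": 1, "post": 4},
    "b": {"pre": 2, "post": 2},
    "d": {"pre": 3, "post": 1},
    "c": {"pre": 4, "post": 3},
}
print(is_ancestor(nodes["a"], nodes["d"]))  # True
print(is_ancestor(nodes["b"], nodes["c"]))  # False
```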
18

Ahmed, Mohamed A. H. "XML Annotation of Hebrew Elements in Judeo-Arabic Texts". Journal of Jewish Languages 6, no. 2 (August 23, 2018): 221–42. http://dx.doi.org/10.1163/22134638-06021122.

Abstract The main aim of this study is to introduce a model of TEI (Text Encoding Initiative) annotation of Hebrew elements in Judeo-Arabic texts, i.e., code switching (CS), borrowing, and Hebrew quotations. This article will provide an introduction to using XML (Extensible Markup Language) to investigate sociolinguistic aspects in medieval Judeo-Arabic texts. Accordingly, it will suggest to what extent using XML is useful for investigating linguistic and sociolinguistic features in the Judeo-Arabic paradigm. To provide an example for how XML annotation could be applied to Judeo-Arabic texts, a corpus of 300 pages selected from three Judeo-Arabic books has been manually annotated using the TEI P5. The annotation covers all instances of CS, borrowing, and Hebrew quotations in that corpus.
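
A hypothetical fragment suggesting the flavour of such markup (the study's actual tagging scheme is not reproduced here; <foreign> with xml:lang is simply one common TEI option for embedded-language material):

```python
# Hypothetical TEI-style fragment: a Hebrew phrase embedded in a Judeo-Arabic
# sentence, marked as code-switching, parsed with the standard library.
import xml.etree.ElementTree as ET

fragment = '<p>... <foreign xml:lang="he" type="code-switching">שלום</foreign> ...</p>'
p = ET.fromstring(fragment)
for el in p.iter("foreign"):
    print(el.get("type"), el.get("{http://www.w3.org/XML/1998/namespace}lang"), el.text)
```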
19

Siew, Chengxi, and Pankaj Kumar. "CitySAC: A Query-Able CityGML Compression System". Smart Cities 2, no. 1 (March 19, 2019): 106–17. http://dx.doi.org/10.3390/smartcities2010008.

Spatial Data Infrastructures (SDIs) are frequently used to exchange 2D and 3D data in areas such as city planning, disaster management, urban navigation and many more. City Geography Markup Language (CityGML), an Open Geospatial Consortium (OGC) standard, has been developed for the storage and exchange of 3D city models. Because it is encoded in an XML-based format, data transfer efficiency is reduced, which leads to data storage issues, and the use of CityGML for analysis purposes is limited by its inefficiency in terms of file size and bandwidth consumption. This paper introduces an XML-based compression technique and elaborates how data efficiency can be achieved with a schema-aware encoder. We present CityGML Schema Aware Compressor (CitySAC), a compression approach for CityGML data transactions within the SDI framework. Our test results show that the encoding system produces smaller files than existing state-of-the-art compression methods, significantly reducing file size to 7–10% of the original data.
20

Brahmia, Zouhaier, Fabio Grandi, and Rafik Bouaziz. "Conversion of XML schema design styles with StyleVolution". International Journal of Web Information Systems 16, no. 1 (August 24, 2019): 23–64. http://dx.doi.org/10.1108/ijwis-05-2019-0022.

Purpose Any XML schema definition can be organized according to one of the following design styles: “Russian Doll”, “Salami Slice”, “Venetian Blind” and “Garden of Eden” (with the additional “Bologna” style actually representing absence of style). Conversion from a design style to another can facilitate the reuse and exchange of schema specifications encoded using the XML schema language. Without any computer-aided engineering support, style conversions must be performed very carefully as they are difficult and error-prone operations. The purpose of this paper is to efficiently deal with such XML schema design style conversions. Design/methodology/approach A general approach, named StyleVolution, for automatic management of XML schema design style conversions, is proposed. StyleVolution is equipped with a suite of seven procedures: four for converting a valid XML schema from any other design style to the “Garden of Eden” style, which has been chosen as a normalized XML schema format, and three for converting from the “Garden of Eden” style to any of the other desired design styles. Findings Procedures, algorithms and methods for XML schema design style conversions are presented. The feasibility of the approach has been shown through the encoding (using the XQuery language) and the testing (with the Altova XMLSpy 2019 tool) of a suite of seven ready-to-use procedures. Moreover, four test procedures are provided for checking the conformance of a given input XML schema to a schema design style. Originality/value The proposed approach implements a new technique for efficiently managing XML schema design style conversions, which can be used to make any given XML schema file to conform to a desired design style.
21

GUO, Huan, Xiao-Ping YE, Yong TANG, and Luo-Wu CHEN. "Temporal XML Index Based on Temporal Encoding and Linear Order Partition". Journal of Software 23, no. 8 (September 11, 2012): 2042–57. http://dx.doi.org/10.3724/sp.j.1001.2012.04161.

22

Park, Jun Pyo, Chang-Sup Park, and Yon Dohn Chung. "Lineage Encoding: An Efficient Wireless XML Streaming Supporting Twig Pattern Queries". IEEE Transactions on Knowledge and Data Engineering 25, no. 7 (July 2013): 1559–73. http://dx.doi.org/10.1109/tkde.2011.202.

23

Bergmann, Frank T., Jonathan Cooper, Nicolas Le Novère, David Nickerson, and Dagmar Waltemath. "Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2". Journal of Integrative Bioinformatics 12, no. 2 (June 1, 2015): 119–212. http://dx.doi.org/10.1515/jib-2015-262.

Summary The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
24

Rühlemann, Christoph, Andrej Bagoutdinov, and Matthew Brook O’Donnell. "Modest XPath and XQuery for corpora: Exploiting deep XML annotation". ICAME Journal 39, no. 1 (March 1, 2015): 47–84. http://dx.doi.org/10.1515/icame-2015-0003.

Abstract This paper outlines a modest approach to XPath and XQuery, tools allowing the navigation and exploitation of XML-encoded texts. The paper starts off from where Andrew Hardie’s paper “Modest XML for corpora: Not a standard, but a suggestion” (Hardie 2014) left the reader, namely wondering how one’s corpus can be usefully analyzed once its XML-encoding is finished, a question the paper did not address. Hardie argued persuasively that “there is a clear benefit to be had from a set of recommendations (not a standard) that outlines general best practices in the use of XML in corpora without going into any of the more technical aspects of XML or the full weight of TEI encoding” (Hardie 2014: 73). In a similar vein this paper argues that even a basic understanding of XPath and XQuery can bring great benefits to corpus linguists. To make this point, we present not only a modest introduction to basic structures underlying the XPath and XQuery syntax but demonstrate their analytical potential using Obama’s 2009 Inaugural Address as a test bed. The speech was encoded in XML, automatically PoS-tagged and manually annotated on additional layers that target two rhetorical figures, anaphora and isocola. We refer to this resource as the Inaugural Rhetorical Corpus (IRC). Further, we created a companion website hosting not only the Inaugural Rhetorical Corpus, but also the Inaugural Training Corpus (an abbreviated version of the IRC that allows manual checks of query results), as well as an extensive list of tried and tested queries for use with either corpus. All of the queries presented in this paper are at beginner to lower-intermediate levels of XPath/XQuery expertise. Nonetheless, they yield fruitful results: they show how Obama uses the inclusive pronouns we and our as a discursive strategy to advance his political strategy to re-focus American politics on economic and domestic matters. Further, they demonstrate how sentence length contributes to the build-up of climactic tension. Finally, they suggest that Obama’s signature rhetorical figure is the isocolon and that the overwhelming majority of isocola in the speech instantiate the crescens type, where the cola gradually increase in length over the sequence.
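
To give a flavour of the entry-level querying the authors describe, here is a hypothetical PoS-tagged fragment (the IRC's real element names and tagset are not reproduced) queried with a simple descendant path from Python's standard library:

```python
# Hypothetical PoS-tagged speech fragment; count occurrences of the inclusive
# pronouns "we" and "our" with a basic descendant path plus a filter.
import xml.etree.ElementTree as ET

speech = """<speech>
  <s><w pos="PPIS2">we</w> <w pos="VV0">choose</w> <w pos="APPGE">our</w> <w pos="NN1">path</w></s>
  <s><w pos="PPIS2">we</w> <w pos="VV0">act</w></s>
</speech>"""

root = ET.fromstring(speech)
inclusive = [w.text for w in root.findall(".//w") if w.text in ("we", "our")]
print(len(inclusive), inclusive)  # -> 3 ['we', 'our', 'we']
```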
25

Shaamood, Mohammed. "Encoding JSON by using Base64". Iraqi Journal for Electrical and Electronic Engineering 17, no. 1 (March 3, 2021): 1–9. http://dx.doi.org/10.37917/ijeee.17.1.4.

Transmitting binary data across a network should generally avoid sending raw binary data over the medium, for several reasons: the medium may be a textual one that does not accept or correctly handle a raw bitstream, and some protocols may misinterpret the meaning of the bits, causing problems or even loss of data. To make the data more readable and avoid misinterpretation by different systems and environments, this paper introduces encoding two of the most broadly used data interchange formats, XML and JSON, into Base64, an encoding scheme that converts binary data to an ASCII string format using a radix-64 representation. The results reflect that encoding data in Base64 before transmission presents many advantages, including readability and integrity; it also enables us to transmit binary data over textual mediums, 7-bit protocols such as SMTP, and different network hardware without risking misinterpretation.
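
For reference, the round trip the paper describes can be sketched with Python's standard library; the JSON payload below is made up:

```python
# Serialise a (made-up) JSON payload, Base64-encode it for transport over a
# text-only channel, then decode it back losslessly.
import base64
import json

payload = {"id": 17, "title": "Encoding JSON by using Base64"}
encoded = base64.b64encode(json.dumps(payload).encode("utf-8")).decode("ascii")
decoded = json.loads(base64.b64decode(encoded).decode("utf-8"))

print(encoded)             # ASCII-safe string, safe for 7-bit protocols such as SMTP
print(decoded == payload)  # True: the round trip is lossless
```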
26

Wei, Sun, Li Hua Dong, and Yao Hua Dong. "Query Processing of RFID Data with Object Trajectory in Manufacture and Logistics". Applied Mechanics and Materials 16-19 (October 2009): 1043–47. http://dx.doi.org/10.4028/www.scientific.net/amm.16-19.1043.

In the domain of manufacture and logistics, Radio Frequency Identification (RFID) holds the promise of real-time identifying, locating, tracking and monitoring physical objects without line of sight due to an enhanced efficiency, accuracy, and preciseness of object identification, and can be used for a wide range of pervasive computing applications. To achieve these goals, RFID data has to be collected, filtered, and transformed into semantic application data. However, the amount of RFID data is huge. Therefore, it requires much time to extract valuable information from RFID data for object tracing. This paper specifically explores options for modeling and utilizing RFID data set by XML-encoding for tracking queries and path oriented queries. We then propose a method which translates the queries to SQL queries. Based on the XML-encoding scheme, we devise a storage scheme to process tracking queries and path oriented queries efficiently. Finally, we realize the method by programming in a software system for manufacture and logistics laboratory. The system shows that our approach can process the tracing or path queries efficiently.
27

Li, Bei, Katsuya Kawaguchi, Tatsuo Tsuji, and Ken Higuchi. "A Labeling Scheme for Dynamic XML Trees Based on History-offset Encoding". IPSJ Online Transactions 3 (2010): 71–87. http://dx.doi.org/10.2197/ipsjtrans.3.71.

28

Guo, Lihong, and Haitao Wu. "An XML Privacy-Preserving Data Disclosure Decision Scheme". Security and Communication Networks 2022 (February 24, 2022): 1–16. http://dx.doi.org/10.1155/2022/9099722.

In order to protect the sensitive data represented as XML documents in a trusted collaborative system where sensitive data are not shared, an XML privacy-preserving data disclosure decision scheme was proposed under the assumption of a trusted server. This scheme is inspired by the idea of separating storage structure and content. Temporary access matrix is used to represent structure authorization and the vector represents the content authorization of leaf node. According to the conversion rules, access matrix not only represents access authorization of all nodes but also keeps the main structure of the XML document. With the combination of the vector and matrix, it can provide different access views for different group users with different purposes. In addition, start-end encoding is used to encode all the nodes for locating nodes and the content; privilege matrix solves the problem of privacy synchronization change for all users. At the same time, authentication polynomials are used to verify different users and improve the security level. The experimental results show that the scheme not only effectively protects XML sensitive data but also reduces the storage pressure on the server side; at the same time, from the response time, we know that it is beneficial for the rapid search and information positioning.
29

Yang, Jui-Pin. "Scalable Storage Management Architecture for Common Information Model/Web-Based Enterprise Management Environments". Advanced Science, Engineering and Medicine 12, no. 7 (July 1, 2020): 904–8. http://dx.doi.org/10.1166/asem.2020.2625.

Web-Based Enterprise Management (WBEM) is independent of platforms and managed resources, so it can be used to unify storage management. WBEM consists of three components. The Common Information Model (CIM) is the main component, which provides a common data format, language and methodology for collecting and describing storage resources. The xmlCIM encoding defines how CIM classes and instances are represented by XML elements. CIM Operations over HTTP carries out CIM operations in an open, standardized environment based on HTTP. In this paper, we propose a novel storage architecture, the Scalable Storage Management Architecture (SSMA), that enhances the efficiency of storage management in CIM/WBEM environments. SSMA is developed based on OpenPegasus. In addition, SSMA has better delay performance than a traditional proxy CIMOM.
30

Yang, Yang, and Hai Ge Li. "XML Query Based on Indexed Sequential Table". Advanced Materials Research 532-533 (June 2012): 1177–81. http://dx.doi.org/10.4028/www.scientific.net/amr.532-533.1177.

Current research on XML indexing and querying mostly focuses on encoding and structural relations, and region codings are widely used to improve XML queries. In this paper a postorder-traversal region coding is proposed: the region of a node consists of the postorder numbers of all its descendants, and the structural relation of any two nodes can be judged from this region alone. If the postorder of one node lies within the region of another, an ancestor/descendant relation holds. Consequently, postorder-traversal region coding can effectively judge structural relations and avoid traversing the XML document tree. Many structural join algorithms have been built on region coding; Stack-Tree-Desc is a well-known one, in which AList and DList each need to be scanned only once to judge the structural relation, yet some unnecessary nodes are still scanned. To solve this problem, an Indexed Sequential Table algorithm is introduced. The optimized algorithm uses an Indexed Sequential Table to avoid scanning unwanted nodes when the two lists are joined to locate the next node participating in the structural join. In this way, nodes of AList and DList that do not participate in structural joins can be skipped and query efficiency is enhanced; purely sequential scanning is avoided, and XML query time shortens accordingly. Experimental results demonstrate the effectiveness of the improved coding and algorithm.
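
For orientation, the containment test that region-code structural joins rely on can be sketched as follows; the (start, end) intervals are hypothetical, and this naive nested loop is not the Indexed Sequential Table optimisation proposed in the paper.

```python
# Each node carries a (start, end) region; an ancestor's region strictly
# contains the regions of its descendants. Naive join for illustration only.
def contains(anc, desc):
    return anc[0] < desc[0] and desc[1] < anc[1]

alist = [("section", (1, 20)), ("section", (21, 30))]                   # candidate ancestors
dlist = [("title", (2, 4)), ("title", (22, 24)), ("title", (35, 37))]   # candidate descendants

joins = [(a_reg, d_reg)
         for _, a_reg in alist
         for _, d_reg in dlist
         if contains(a_reg, d_reg)]
print(joins)  # -> [((1, 20), (2, 4)), ((21, 30), (22, 24))]
```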
31

KATO, H., S. HIDAKA, Z. HU, K. NAKANO, and Y. ISHIHARA. "Context-preserving XQuery fusion". Mathematical Structures in Computer Science 25, no. 4 (November 10, 2014): 916–41. http://dx.doi.org/10.1017/s096012951300008x.

This paper solves the known problem of elimination of unnecessary internal element construction as well as variable elimination in XML processing with (a subset of) XQuery without ignoring the issues of document order. The semantics of XQuery is context sensitive and requires preservation of document order. In this paper, we propose, as far as we are aware, the first XQuery fusion that can deal with both the document order and the context of XQuery expressions. More specifically, we carefully design a context representation of XQuery expressions based on the Dewey order encoding, develop a context-preserving XQuery fusion for ordered trees by static emulation of the XML store, and prove that our fusion is correct. Our XQuery fusion has been implemented, and all the examples in this paper have passed through the system.
32

Mirabi, Meghdad, Hamidah Ibrahim, Nur Izura Udzir, and Ali Mamat. "An encoding scheme based on fractional number for querying and updating XML data". Journal of Systems and Software 85, no. 8 (August 2012): 1831–51. http://dx.doi.org/10.1016/j.jss.2012.02.054.

33

Torres Silva, Ever Augusto, Sebastian Uribe, Jack Smith, Ivan Felipe Luna Gomez, and Jose Fernando Florez-Arango. "XML Data and Knowledge-Encoding Structure for a Web-Based and Mobile Antenatal Clinical Decision Support System: Development Study". JMIR Formative Research 4, no. 10 (October 16, 2020): e17512. http://dx.doi.org/10.2196/17512.

Background Displeasure with the functionality of clinical decision support systems (CDSSs) is considered the primary challenge in CDSS development. A major difficulty in CDSS design is matching the functionality to the desired and actual clinical workflow. Computer-interpretable guidelines (CIGs) are used to formalize medical knowledge in clinical practice guidelines (CPGs) in a computable language. However, existing CIG frameworks require a specific interpreter for each CIG language, hindering the ease of implementation and interoperability. Objective This paper aims to describe a different approach to the representation of clinical knowledge and data. We intended to change the clinician’s perception of a CDSS with sufficient expressivity of the representation while maintaining a small communication and software footprint for both a web application and a mobile app. This approach was originally intended to create a readable and minimal syntax for a web CDSS and future mobile app for antenatal care guidelines with improved human-computer interaction and enhanced usability by aligning the system behavior with clinical workflow. Methods We designed and implemented an architecture design for our CDSS, which uses the model-view-controller (MVC) architecture and a knowledge engine in the MVC architecture based on XML. The knowledge engine design also integrated the requirement of matching clinical care workflow that was desired in the CDSS. For this component of the design task, we used a work ontology analysis of the CPGs for antenatal care in our particular target clinical settings. Results In comparison to other common CIGs used for CDSSs, our XML approach can be used to take advantage of the flexible format of XML to facilitate the electronic sharing of structured data. More importantly, we can take advantage of its flexibility to standardize CIG structure design in a low-level specification language that is ubiquitous, universal, computationally efficient, integrable with web technologies, and human readable. Conclusions Our knowledge representation framework incorporates fundamental elements of other CIGs used in CDSSs in medicine and proved adequate to encode a number of antenatal health care CPGs and their associated clinical workflows. The framework appears general enough to be used with other CPGs in medicine. XML proved to be a language expressive enough to describe planning problems in a computable form and restrictive and expressive enough to implement in a clinical system. It can also be effective for mobile apps, where intermittent communication requires a small footprint and an autonomous app. This approach can be used to incorporate overlapping capabilities of more specialized CIGs in medicine.
34

Tryon, Julia Rachel. "The Rosarium Project". Digital Library Perspectives 32, no. 3 (August 8, 2016): 209–22. http://dx.doi.org/10.1108/dlp-01-2016-0001.

Purpose This paper aims to describe the Rosarium Project, a digital humanities project being undertaken at the Phillips Memorial Library + Commons of Providence College in Providence, Rhode Island. The project focuses on a collection of English language non-fiction writings about the genus Rosa. The collection will comprise books, pamphlets, catalogs and articles from popular magazines, scholarly journals and newspapers written on the rose published before 1923. The source material is being encoded using the Text Encoding Initiative (TEI) Consortium’s P5 guidelines and the extensible markup language (XML) editor software <oXygen/>. Design/methodology/approach This paper outlines the Rosarium Project and describes its workflow. This paper demonstrates how to create TEI-encoded files for digital curation using the XML editing software <oXygen/> and the TEI Archiving Publishing and Access Service (TAPAS) Project. The paper provides information on the purpose, scope, audience and phases of the project. It also identifies the resources – hardware, software and membership – needed for undertaking such a project. Findings This paper shows how straightforward it is to encode transcriptions of primary sources using the TEI and XML editing software and to make the resulting digital resources available on the Web. Originality/value This paper presents a case study of how a research project transitioned from traditional printed bibliography to a web-accessible resource by capitalizing on the tools in the TEI toolkit using specialized XML editing software. The details of the project can be a guide for librarians and researchers contemplating digitally curating primary resources and making them available on the Web.
35

Dube, Etienne, Thierry Badard, and Yvan Bedard. "XML Encoding and Web Services for Spatial OLAP Data Cube Exchange: an SOA Approach". Journal of Computing and Information Technology 17, no. 4 (2009): 347. http://dx.doi.org/10.2498/cit.1001354.

36

Chevillard, Jean-Luc. "How Tamil was described once again: towards an XML-encoding of the Grammatici Tamulici". Histoire Epistémologie Langage 39, no. 2 (2017): 103–27. http://dx.doi.org/10.1051/hel/2017390206.

37

Girardot, Marc, and Neel Sundaresan. "Millau: an encoding format for efficient representation and exchange of XML over the Web". Computer Networks 33, no. 1-6 (June 2000): 747–65. http://dx.doi.org/10.1016/s1389-1286(00)00051-7.

38

Durairaj, Vijayasarathi, and Sankar Punnaivanam. "Encoding of Fundamental Chemical Entities of Organic Reactivity Interest using chemical ontology and XML". Journal of Molecular Graphics and Modelling 61 (September 2015): 30–43. http://dx.doi.org/10.1016/j.jmgm.2015.06.001.

39

Rankin, Sharon, and Casey Lees. "The McGill library chapbook project: a case study in TEI encoding". OCLC Systems & Services: International digital library perspectives 31, no. 3 (August 10, 2015): 134–43. http://dx.doi.org/10.1108/oclc-07-2014-0030.

Purpose – The purpose of this case study is to describe a multi-year text encoding initiative (TEI) project that took place in the McGill University Library, Rare Books and Special Collections. Design/methodology/approach – Early nineteenth century English language chapbooks from the collection were digitized, and the proofed text files were encoded in TEI, following Best Practices for TEI in Libraries (2011). Findings – The project coordinator describes the TEI file structure and customizations for the project to support a distinct subject classification of the chapbooks and the encoding of the woodcut illustrations using the Iconclass classification. Research limitations/implications – The authors focus on procedures, use of TEI data elements and encoding challenges. Practical implications – This paper documents the project workflow and provides a possible model for future digital humanities projects. Social implications – The graduate students who participated in the TEI encoding learned a new suite of skills involving extensible markup language (XML) file structure and the application of a markup language that requires interpretation. Originality/value – The McGill Library Chapbook Project Web site, launched in 2013 now provides access to 933 full-text works.
40

Haw, Su-Cheng, and Chien-Sing Lee. "Extending path summary and region encoding for efficient structural query processing in native XML databases". Journal of Systems and Software 82, no. 6 (June 2009): 1025–35. http://dx.doi.org/10.1016/j.jss.2009.01.007.

41

Bodard, Gabriel, and Polina Yordanova. "Publication, Testing and Visualization with EFES: A tool for all stages of the EpiDoc XML editing process". Studia Universitatis Babeș-Bolyai Digitalia 65, no. 1 (December 8, 2020): 17–35. http://dx.doi.org/10.24193/subbdigitalia.2020.1.02.

"EpiDoc is a set of recommendations, schema and other tools for the encoding of ancient texts, especially inscriptions and papyri, in TEI XML, that is now used by upwards of a hundred projects around the world, and large numbers of scholars seek training in EpiDoc encoding every year. The EpiDoc Front-End Services tool (EFES) was designed to fill the important need for a publication solution for researchers and editors who have produced EpiDoc encoded texts but do not have access to digital humanities support or a well-funded IT service to produce a publication for them. This paper will discuss the use of EFES not only for final publication, but as a tool in the editing and publication workflow, by editors of inscriptions, papyri and similar texts including those on coins and seals. The edition visualisations, indexes and search interface produced by EFES are able to serve as part of the validation, correction and research apparatus for the author of an epigraphic corpus, iteratively improving the editions long before final publication. As we will argue, this research process is a key component of epigraphic and papyrological editing practice, and studying these needs will help us to further enhance the effectiveness of EFES as a tool. To this end we also plan to add three major functionalities to the EFES toolbox: (1) date visualisation and filter—building on the existing “date slider,” and inspired by partner projects such as Pelagios and Godot; (2) geographic visualization features, again building on Pelagios code, allowing the display of locations within a corpus or from a specific set of search results in a map; (3) export of information and metadata from the corpus as Linked Open Data, following the recommendations of projects such as the Linked Places format, SNAP, Chronontology and Epigraphy.info, to enable the semantic sharing of data within and beyond the field of classical and historical editions. Finally, we will discuss the kinds of collaboration that will be required to bring about desired enhancements to the EFES toolset, especially in this age of research-focussed, short-term funding. Embedding essential infrastructure work of this kind in research applications for specific research and publication projects will almost certainly need to be part of the solution. Keywords: Text Encoding, Ancient Texts, Epigraphy, Papyrology, Digital Publication, Linked Open Data, Extensible Stylesheet Language Transformations"
42

Opaliński, Krzysztof, and Patrycja Potoniec. "KORPUS POLSZCZYZNY XVI WIEKU". Poradnik Językowy, no. 8/2020(777) (October 28, 2020): 17–31. http://dx.doi.org/10.33896/porj.2020.8.2.

The original purpose of creating the corpus of the 16th-century Polish language was to preserve the material basis of Słownik polszczyzny XVI wieku (Dictionary of the 16th-Century Polish Language) (SPXVI) comprising 272 texts transliterated in accordance with standardised principles, which is of great value. The project described here consists in creating an online base of the resources and using a part of it as a germ of a language corpus with texts designated with morphosyntactic markers. The works adopted XML encoding in the TEI (Text Encoding Initiative) formalism, version P5, adjusted to a 16th-century text. Typographical elements as well as grammatical categories and forms of words were designated in the texts. The germ of the corpus of the 16th-century Polish language comprises 135 thousand segments and it will be expanded by another 100 thousand in the future to provide material for an automated form designation tool. Ultimately, integration with the Diachronic Corpus of Polish is planned. Keywords: lexicography – history of Polish – diachronic corpus of Polish
43

Ebeling, Signe O., and Alois Heuboeck. "Encoding document information in a corpus of student writing: the British Academic Written English corpus". Corpora 2, no. 2 (November 2007): 241–56. http://dx.doi.org/10.3366/cor.2007.2.2.241.

The information contained in a document is only partly represented by the wording of the text; in addition, features of formatting and layout can be combined to lend specific functionality to chunks of text (e.g., section headings, highlighting, enumeration through list formatting, etc.). Such functional features, although based on the ‘objective’ typographical surface of the document, are often inconsistently realised and encoded only implicitly, i.e., they depend on deciphering by a competent reader. They are characteristic of documents produced with standard text-processing tools. We discuss the representation of such information with reference to the British Academic Written English (BAWE) corpus of student writing, currently under construction at the universities of Warwick, Reading and Oxford Brookes. Assignments are usually submitted to the corpus as Microsoft Word documents and make heavy use of surface-based functional features. As the documents are to be transformed into XML-encoded corpus files, this information can only be preserved through explicit annotation, based on interpretation. We present a discussion of the choices made in the BAWE corpus and the practical requirements for a tagging interface.
44

Iqbal, Taufiq, and Syarifuddin Syarifuddin. "Pengembangan Repository berbasis Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) pada Standar Metadata Encoding and Transmission Standard (METS) dan MPEG-21 Digital Item Declaration Language (DIDL)". Jurnal JTIK (Jurnal Teknologi Informasi dan Komunikasi) 4, no. 2 (December 6, 2020): 7. http://dx.doi.org/10.35870/jtik.v5i1.161.

The purpose of this research is to build a repository model featuring the Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) together with the Metadata Encoding and Transmission Standard (METS) and MPEG-21 Digital Item Declaration Language (DIDL). The research model used is qualitative, and the application development method used is Fourth Generation Techniques (4GT). The OAI-PMH module, together with METS and MPEG-21 DIDL, was applied to the repository application that was built. Tests of the OAI-PMH URL with the OVAL validator tool found no problems in validating and verifying data for the Identify, ListMetadataFormats, ListSets, ListIdentifiers, ListRecords, and XML validation commands. The test results also show the success rate of crawling each item of metadata in the web repository: the average success rate of metadata crawling by Google Scholar is 90%, with a 10% error rate because some documents do not have complete metadata such as bibliography and uploaded documents.
45

Semenov, Vitaly Adolfovich, Semen Vasilyevich Arishin, and Georgii Vitalyevich Semenov. "Formal Rules to Produce Object Notation for EXPRESS Schema-Driven Data". Proceedings of the Institute for System Programming of the RAS 33, no. 5 (2021): 7–24. http://dx.doi.org/10.15514/ispras-2021-33(5)-1.

Recently, product data management (PDM) systems have become widely used to conduct complex multidisciplinary projects in various industrial domains. PDM systems enable teams of designers, engineers, and managers to communicate remotely over a network and to exchange and share common product information. To integrate CAD/CAM/CAE applications with PDM systems and ensure their interoperability, a dedicated family of standards, STEP (ISO 10303), has been developed and employed. STEP defines an object-oriented language, EXPRESS, to formally specify information schemas, as well as file formats to store and transfer product data driven by these schemas: the clear-text encoding format SPF and STEP-XML. Nowadays, with the development and widespread adoption of Web technologies, the JSON language is becoming increasingly popular because it suits the tasks of object-oriented data exchange and storage and has a simple, easy-to-parse syntax. The paper explores the suitability of the JSON language for the unambiguous representation, storage and interpretation of product data. Under the assumption that product data can be described by arbitrary information schemas in EXPRESS, formal rules for producing JSON notation are proposed and presented. Explanatory examples are provided to illustrate the proposed rules. The results of the computational experiments conducted confirm the advantages of the JSON format compared to SPF and STEP-XML, and motivate its widespread adoption when integrating software applications.
46

Woolf, A., B. Lawrence, R. Lowry, K. Kleese van Dam, R. Cramer, M. Gutierrez, S. Kondapalli et al. "Data integration with the Climate Science Modelling Language". Advances in Geosciences 8 (June 6, 2006): 83–90. http://dx.doi.org/10.5194/adgeo-8-83-2006.

Abstract. The Climate Science Modelling Language (CSML) has been developed by the NERC DataGrid (NDG) project as a standards-based data model and XML markup for describing and constructing climate science datasets. It uses conceptual models from emerging standards in GIS to define a number of feature types, and adopts schemas of the Geography Markup Language (GML) where possible for encoding. A prototype deployment of CSML is being trialled across the curated archives of the British Atmospheric and Oceanographic Data Centres. These data include a wide range of data types – both observational and model – and heterogeneous file-based storage systems. CSML provides a semantic abstraction layer for data files, and is exposed through higher level data delivery services. In NDG these will include file instantiation services (for formats of choice) and the web services of the Open Geospatial Consortium (OGC).
47

Kalabikhina, Irina E., Herman A. Klimenko, Evgeny P. Banin, Ekaterina K. Vorobyeva, and Anna D. Lameeva. "Database of digital media publications on maternal (family) capital in Russia in 2006–2019". Population and Economics 5, no. 4 (December 8, 2021): 21–29. http://dx.doi.org/10.3897/popecon.5.e78723.

The database contains data from publications of digital Russian-language media registered in the Russian Federation on the topic of maternity capital published in the period from May 10, 2006 to June 30, 2019. The database includes general data on publications on maternity capital in .csv formats (UTF-8 encoding). Full texts of publications are presented in .xml format. A specialized request was generated for the aggregator of publications of Russian-language digital mass media public.ru. In total, the database consists of 457,888 publications of 7,665 publishing houses from 1,251 settlements located in 85 regions of Russia. The database includes information about the date and type of publication, publisher, place of publication (municipality), texts about maternity capital, and numbers of unique positive, negative, and neutral words and phrases according to the RuSentiLex2017 dictionary, as well as full texts of publications.
48

Love, Robbie, Claire Dembry, Andrew Hardie, Vaclav Brezina, and Tony McEnery. "The Spoken BNC2014". International Journal of Corpus Linguistics 22, no. 3 (November 23, 2017): 319–44. http://dx.doi.org/10.1075/ijcl.22.3.02lov.

Abstract This paper introduces the Spoken British National Corpus 2014, an 11.5-million-word corpus of orthographically transcribed conversations among L1 speakers of British English from across the UK, recorded in the years 2012–2016. After showing that a survey of the recent history of corpora of spoken British English justifies the compilation of this new corpus, we describe the main stages of the Spoken BNC2014’s creation: design, data and metadata collection, transcription, XML encoding, and annotation. In doing so we aim to (i) encourage users of the corpus to approach the data with sensitivity to the many methodological issues we identified and attempted to overcome while compiling the Spoken BNC2014, and (ii) inform (future) compilers of spoken corpora of the innovations we implemented to attempt to make the construction of corpora representing spontaneous speech in informal contexts more tractable, both logistically and practically, than in the past.
49

Salgado, Ana, and Rute Costa. "O projeto 'Edição Digital dos Vocabulários da Academia das Ciências': o VOLP-1940". Revista da Associação Portuguesa de Linguística, no. 7 (November 30, 2020): 275–94. http://dx.doi.org/10.26334/2183-9077/rapln7ano2020a17.

This paper presents the Digital Edition of the Vocabularies of the Academy of Sciences project, which aims to digitise the spelling vocabularies of the Lisbon Academy of Sciences (ACL) in order to create a digital lexicographic corpus bringing together the printed versions of all these lexicographical reference works – the 1940, 1947, 1970, and finally the 2012 editions. The first stage started with the Vocabulário Ortográfico da Língua Portuguesa [Orthographic Vocabulary of the Portuguese Language] (VOLP-1940), our case study. After digitising this vocabulary, the work described here focuses on the linguistic annotation of VOLP-1940 using eXtensible Markup Language (XML), an annotation metalanguage, and following the annotation directives of the Text Encoding Initiative (TEI), more specifically the application of TEI Lex-0, a new TEI sub-format. We aim to highlight the need for rigorous linguistic data processing in the creation of new lexical resources to increase the quality of their description and applicability.
50

Calarco, Gabriel Alejandro, Pamela Gionco, Rocío Méndez, David Merino Recalde, Gabriela Striker, and Cristian Suárez-Giraldo. "Digital Publishing with Minimal Computing (UMD-USAL, 2020)". Publicaciones de la Asociación Argentina de Humanidades Digitales 2 (December 15, 2021): e022. http://dx.doi.org/10.24215/27187470e022.

In this paper we share our experience as students in the course Digital Publishing with Minimal Computing/Ediciones digitales con minimal computing, taught online by Raffaele Viglianti (University of Maryland), Gimena del Rio Riande, Romina De León and Nidia Hernández (Consejo Nacional de Investigaciones Científicas y Técnicas) between September and December 2020. We discuss the opportunities and difficulties we perceived in working with the XML markup language and the standard developed by the Text Encoding Initiative to produce our first digital edition of a text, in particular the "Descripción de Buenos Aires" contained in Acarete du Biscay's Relación de un viaje al Río de la Plata (17th century), published in both English (1698) and Spanish (1867), using open technologies such as GitLab and Jekyll for publication on a website. The collaborative methodology employed certainly encourages us to undertake further digital humanities tasks and projects.