Academic literature on the topic "Parse data"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Parse data".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Parse data"

1

Marimon, Montserrat, Núria Bel, and Lluís Padró. "Automatic Selection of HPSG-Parsed Sentences for Treebank Construction". Computational Linguistics 40, no. 3 (September 2014): 523–31. http://dx.doi.org/10.1162/coli_a_00190.

Full text
Abstract
This article presents an ensemble parse approach to detecting and selecting high-quality linguistic analyses output by a hand-crafted HPSG grammar of Spanish implemented in the LKB system. The approach uses full agreement (i.e., exact syntactic match) along with a MaxEnt parse selection model and a statistical dependency parser trained on the same data. The ultimate goal is to develop a hybrid corpus annotation methodology that combines fully automatic annotation and manual parse selection, in order to make the annotation task more efficient while maintaining high accuracy and the high degree of consistency necessary for any foreseen uses of a treebank.
APA, Harvard, Vancouver, ISO, and other styles
2

Kallmeyer, Laura, and Wolfgang Maier. "Data-Driven Parsing using Probabilistic Linear Context-Free Rewriting Systems". Computational Linguistics 39, no. 1 (March 2013): 87–119. http://dx.doi.org/10.1162/coli_a_00136.

Full text
Abstract
This paper presents the first efficient implementation of a weighted deductive CYK parser for Probabilistic Linear Context-Free Rewriting Systems (PLCFRSs). LCFRS, an extension of CFG, can describe discontinuities in a straightforward way and is therefore a natural candidate to be used for data-driven parsing. To speed up parsing, we use different context-summary estimates of parse items, some of them allowing for A* parsing. We evaluate our parser with grammars extracted from the German NeGra treebank. Our experiments show that data-driven LCFRS parsing is feasible and yields output of competitive quality.
APA, Harvard, Vancouver, ISO, and other styles
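Aside: the weighted deductive CYK parsing described above generalizes from CFG to LCFRS. The minimal Python sketch below shows only the CFG special case, a Viterbi-style probabilistic CYK over a toy grammar in Chomsky normal form; the grammar, probabilities, and function names are invented for illustration and are not taken from the paper.

import math
from itertools import product

# Toy PCFG in Chomsky normal form. BINARY maps a pair of child labels to
# possible parents; LEXICAL maps a word to possible preterminals. Log-probs throughout.
BINARY = {("NP", "VP"): [("S", math.log(1.0))],
          ("Det", "N"): [("NP", math.log(0.7))]}
LEXICAL = {"the": [("Det", math.log(1.0))],
           "dog": [("N", math.log(0.5)), ("NP", math.log(0.3))],
           "barks": [("VP", math.log(1.0))]}

def cyk(words):
    n = len(words)
    # chart[i][j] holds the best log-probability per nonterminal for span (i, j).
    chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for label, lp in LEXICAL.get(w, []):
            chart[i][i + 1][label] = max(chart[i][i + 1].get(label, -math.inf), lp)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (b, lb), (c, lc) in product(chart[i][k].items(), chart[k][j].items()):
                    for a, lp in BINARY.get((b, c), []):
                        score = lp + lb + lc
                        if score > chart[i][j].get(a, -math.inf):
                            chart[i][j][a] = score
    return chart[0][n].get("S")  # log-probability of the best parse, or None

print(cyk("the dog barks".split()))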
3

Dehbi, Y., C. Staat, L. Mandtler, and L. Plümer. "Incremental Refinement of Façade Models with Attribute Grammar from 3D Point Clouds". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 6, 2016): 311–16. http://dx.doi.org/10.5194/isprs-annals-iii-3-311-2016.

Full text
Abstract
Data acquisition using unmanned aerial vehicles (UAVs) has gotten more and more attention over the last years. Especially in the field of building reconstruction the incremental interpretation of such data is a demanding task. In this context formal grammars play an important role for the top-down identification and reconstruction of building objects. Up to now, the available approaches expect offline data in order to parse an a-priori known grammar. For mapping on demand an on the fly reconstruction based on UAV data is required. An incremental interpretation of the data stream is inevitable. This paper presents an incremental parser of grammar rules for an automatic 3D building reconstruction. The parser enables a model refinement based on new observations with respect to a weighted attribute context-free grammar (WACFG). The falsification or rejection of hypotheses is supported as well. The parser can deal with and adapt available parse trees acquired from previous interpretations or predictions. Parse trees derived so far are updated in an iterative way using transformation rules. A diagnostic step searches for mismatches between current and new nodes. Prior knowledge on façades is incorporated. It is given by probability densities as well as architectural patterns. Since we cannot always assume normal distributions, the derivation of location and shape parameters of building objects is based on a kernel density estimation (KDE). While the level of detail is continuously improved, the geometrical, semantic and topological consistency is ensured.
APA, Harvard, Vancouver, ISO, and other styles
4

Toutanova, Kristina, Aria Haghighi, and Christopher D. Manning. "A Global Joint Model for Semantic Role Labeling". Computational Linguistics 34, no. 2 (June 2008): 161–91. http://dx.doi.org/10.1162/coli.2008.34.2.161.

Full text
Abstract
We present a model for semantic role labeling that effectively captures the linguistic intuition that a semantic argument frame is a joint structure, with strong dependencies among the arguments. We show how to incorporate these strong dependencies in a statistical joint model with a rich set of features over multiple argument phrases. The proposed model substantially outperforms a similar state-of-the-art local model that does not include dependencies among different arguments. We evaluate the gains from incorporating this joint information on the Propbank corpus, when using correct syntactic parse trees as input, and when using automatically derived parse trees. The gains amount to 24.1% error reduction on all arguments and 36.8% on core arguments for gold-standard parse trees on Propbank. For automatic parse trees, the error reductions are 8.3% and 10.3% on all and core arguments, respectively. We also present results on the CoNLL 2005 shared task data set. Additionally, we explore considering multiple syntactic analyses to cope with parser noise and uncertainty.
APA, Harvard, Vancouver, ISO, and other styles
5

Homayounfar, Hooman, and Fangju Wang. "Sibling‐First Data Organization for Parse‐Free XML Data Processing". International Journal of Web Information Systems 2, no. 3/4 (September 27, 2007): 176–86. http://dx.doi.org/10.1108/17440080780000298.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Clark, Stephen, and James R. Curran. "Wide-Coverage Efficient Statistical Parsing with CCG and Log-Linear Models". Computational Linguistics 33, no. 4 (December 2007): 493–552. http://dx.doi.org/10.1162/coli.2007.33.4.493.

Full text
Abstract
This article describes a number of log-linear parsing models for an automatically extracted lexicalized grammar. The models are “full” parsing models in the sense that probabilities are defined for complete parses, rather than for independent events derived by decomposing the parse tree. Discriminative training is used to estimate the models, which requires incorrect parses for each sentence in the training data as well as the correct parse. The lexicalized grammar formalism used is Combinatory Categorial Grammar (CCG), and the grammar is automatically extracted from CCGbank, a CCG version of the Penn Treebank. The combination of discriminative training and an automatically extracted grammar leads to a significant memory requirement (up to 25 GB), which is satisfied using a parallel implementation of the BFGS optimization algorithm running on a Beowulf cluster. Dynamic programming over a packed chart, in combination with the parallel implementation, allows us to solve one of the largest-scale estimation problems in the statistical parsing literature in under three hours. A key component of the parsing system, for both training and testing, is a Maximum Entropy supertagger which assigns CCG lexical categories to words in a sentence. The supertagger makes the discriminative training feasible, and also leads to a highly efficient parser. Surprisingly, given CCG's “spurious ambiguity,” the parsing speeds are significantly higher than those reported for comparable parsers in the literature. We also extend the existing parsing techniques for CCG by developing a new model and efficient parsing algorithm which exploits all derivations, including CCG's nonstandard derivations. This model and parsing algorithm, when combined with normal-form constraints, give state-of-the-art accuracy for the recovery of predicate-argument dependencies from CCGbank. The parser is also evaluated on DepBank and compared against the RASP parser, outperforming RASP overall and on the majority of relation types. The evaluation on DepBank raises a number of issues regarding parser evaluation. This article provides a comprehensive blueprint for building a wide-coverage CCG parser. We demonstrate that both accurate and highly efficient parsing is possible with CCG.
APA, Harvard, Vancouver, ISO, and other styles
7

Ammar, Waleed, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A. Smith. "Many Languages, One Parser". Transactions of the Association for Computational Linguistics 4 (December 2016): 431–44. http://dx.doi.org/10.1162/tacl_a_00109.

Full text
Abstract
We train one multilingual model for dependency parsing and use it to parse sentences in several languages. The parsing model uses (i) multilingual word clusters and embeddings; (ii) token-level language information; and (iii) language-specific features (fine-grained POS tags). This input representation enables the parser not only to parse effectively in multiple languages, but also to generalize across languages based on linguistic universals and typological similarities, making it more effective to learn from limited annotations. Our parser’s performance compares favorably to strong baselines in a range of data scenarios, including when the target language has a large treebank, a small treebank, or no treebank for training.
APA, Harvard, Vancouver, ISO, and other styles
8

Rioth, Matthew J., Ramya Thota, David B. Staggs, Douglas B. Johnson, and Jeremy L. Warner. "Pragmatic precision oncology: the secondary uses of clinical tumor molecular profiling". Journal of the American Medical Informatics Association 23, no. 4 (March 28, 2016): 773–76. http://dx.doi.org/10.1093/jamia/ocw002.

Full text
Abstract
Background Precision oncology increasingly utilizes molecular profiling of tumors to determine treatment decisions with targeted therapeutics. The molecular profiling data is valuable in the treatment of individual patients as well as for multiple secondary uses. Objective To automatically parse, categorize, and aggregate clinical molecular profile data generated during cancer care as well as use this data to address multiple secondary use cases. Methods A system to parse, categorize and aggregate molecular profile data was created. A naïve Bayesian classifier categorized results according to clinical groups. The accuracy of these systems was validated against a published expertly-curated subset of molecular profiling data. Results Following one year of operation, 819 samples have been accurately parsed and categorized to generate a data repository of 10,620 genetic variants. The database has been used for operational, clinical trial, and discovery science research. Conclusions A real-time database of molecular profiling data is a pragmatic solution to several knowledge management problems in the practice and science of precision oncology.
APA, Harvard, Vancouver, ISO, and other styles
9

Zou, Feng, Xingshu Chen, Yonggang Luo, Tiemai Huang, Zhihong Liao, and Keer Song. "Spray: Streaming Log Parser for Real-Time Analysis". Security and Communication Networks 2022 (September 6, 2022): 1–11. http://dx.doi.org/10.1155/2022/1559270.

Full text
Abstract
Logs are an important source of data in the field of security analysis. Log messages characterized by unstructured text, however, pose extreme challenges to security analysis. To this end, the first issue to be addressed is how to efficiently parse logs into structured data in real-time. The existing log parsers mostly parse raw log files by batch processing and are not applicable to real-time security analysis. It is also difficult to parse large historical log sets with such parsers. Some streaming log parsers also have some demerits in accuracy and parsing performance. To realize automatic, accurate, and efficient real-time log parsing, we propose Spray, a streaming log parser for real-time analysis. Spray can automatically identify the template of a real-time incoming log and accurately match the log and its template for parsing based on the law of contrapositive. We also improve Spray’s parsing performance based on key partitioning and search tree strategies. We conducted extensive experiments from such aspects as accuracy and performance. Experimental results show that Spray is much more accurate in parsing a variety of public log sets and has higher performance for parsing large log sets.
APA, Harvard, Vancouver, ISO, and other styles
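For orientation, the minimal Python sketch below illustrates the general idea behind streaming log parsers of this family: each incoming line is matched against the templates learned so far, and token positions that differ are generalized to a wildcard. This is not Spray's algorithm; the similarity threshold and the "<*>" wildcard convention are assumptions made for the example.

import re

templates = []  # learned templates: lists of tokens in which "<*>" matches anything

def merge(template, tokens):
    # Generalize positions where the template and the new line disagree.
    return [t if t == tok else "<*>" for t, tok in zip(template, tokens)]

def parse_line(line, similarity=0.6):
    tokens = re.split(r"\s+", line.strip())
    for i, tpl in enumerate(templates):
        if len(tpl) == len(tokens):
            same = sum(a == b or a == "<*>" for a, b in zip(tpl, tokens))
            if same / len(tokens) >= similarity:
                templates[i] = merge(tpl, tokens)
                return templates[i]
    templates.append(tokens)  # first line of a new kind becomes its own template
    return tokens

for line in ["conn from 10.0.0.1 failed", "conn from 10.0.0.9 failed"]:
    print(parse_line(line))
# second call prints ['conn', 'from', '<*>', 'failed']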

Theses on the topic "Parse data"

1

Mansfield, Martin F. "Design of a generic parse tree for imperative languages". Virtual Press, 1992. http://liblink.bsu.edu/uhtbin/catkey/834617.

Full text
Abstract
Since programs are written in many languages and design documents are not maintained (if they ever existed), there is a need to extract the design and other information that the programs represent. To do this without writing a separate program for each language, a common representation of the symbol table and parse tree would be required. The purpose of the parse tree and symbol table will not be to generate object code but to provide a platform for analysis tools. In this way the tool designer develops only one version instead of separate versions for each language. The generic symbol table and generic parse tree may not be as detailed as those same structures in a specific compiler but the parse tree must include all structures for imperative languages.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
2

Andrén, August, and Patrik Hagernäs. "Data-parallel Acceleration of PARSEC Black-Scholes Benchmark". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-128607.

Full text
Abstract
The way programmers have been relying on processor improvements to gain speedup in their applications is no longer applicable in the same fashion. Programmers usually have to parallelize their code to utilize the CPU cores in the system to gain a significant speedup. To accelerate parallel applications further, there are a couple of techniques available. One technique is to vectorize some of the parallel code. Another technique is to move parts of the parallel code to the GPGPU and utilize this very good multithreading unit of the system. The main focus of this report is to accelerate the data-parallel workload Black-Scholes of PARSEC benchmark suite. We are going to compare three accelerations of this workload, using vector instructions in the CPU, using the GPGPU and using a combination of them both. The two fundamental aspects are to look at the speedup and determine which technique requires more or less programming effort. To accelerate with vectorization in the CPU we use SSE & AVX techniques and to accelerate the workload in the GPGPU we use OpenACC.
APA, Harvard, Vancouver, ISO, and other styles
3

Alvestad, Gaute Odin, Ole Martin Gausnes, and Ole-Jakob Kråkenes. "Development of a Demand Driven Dom Parser". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9311.

Full text
Abstract
XML is a tremendously popular markup language in internet applications as well as a storage format. XML document access is often done through an API, and perhaps the most important of these is the W3C DOM. The recommendation from W3C defines a number of interfaces for a developer to access and manipulate XML documents. The recommendation does not define implementation specific approaches used behind the interfaces. A problem with the W3C DOM approach, however, is that documents often are loaded into memory as a node tree of objects, representing the structure of the XML document. This tree is memory consuming and can take up to 4-10 times the document size. Lazy processing has been proposed, building the node tree as it accesses new parts of the document. But when the whole document has been accessed, the overhead compared to traditional parsers, both in terms of memory usage and performance, is high. In this thesis a new approach is introduced. With the use of well known indexing schemes for XML, basic techniques for reducing memory consumption, and principles for memory handling in operating systems, a new and alternative approach is introduced. By using a memory cache repository for DOM nodes and simultaneously utilizing principles for lazy processing, the proposed implementation has full control over memory consumption. The proposed prototype is called Demand Driven Dom Parser, D3P. The proposed approach removes least recently used nodes from the memory when the cache has exceeded its memory limit. This makes the D3P able to process the document with low memory requirements. An advantage with this approach is that the parser is able to process documents that exceed the size of the main memory, which is impossible with traditional approaches. The implementation is evaluated and compared with other implementations, both lazy and traditional parsers that build everything in memory on load. The proposed implementation performs well when the bottleneck is memory usage, because the user can set the desired amount of memory to be used by the XML node tree. On the other hand, as the coverage of the document increases, time spent processing the node tree grows beyond what is used by traditional approaches.
APA, Harvard, Vancouver, ISO, and other styles
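The core idea of the D3P, a bounded in-memory cache of DOM nodes with least-recently-used eviction, can be sketched in a few lines of Python; the class and callback names below are invented for illustration, and real node loading through an XML index is only simulated.

from collections import OrderedDict

class LazyNodeCache:
    """Keep at most `capacity` DOM nodes in memory, evicting the least recently used."""
    def __init__(self, capacity, load_node):
        self.capacity = capacity
        self.load_node = load_node  # callback that (re)builds a node from the document
        self.cache = OrderedDict()

    def get(self, node_id):
        if node_id in self.cache:
            self.cache.move_to_end(node_id)  # mark as most recently used
        else:
            self.cache[node_id] = self.load_node(node_id)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict the least recently used node
        return self.cache[node_id]

# Simulated loader; a real parser would seek into the XML file via an index.
cache = LazyNodeCache(capacity=2, load_node=lambda nid: {"id": nid, "children": []})
for nid in ["a", "b", "a", "c"]:  # "b" is evicted when "c" arrives
    cache.get(nid)
print(list(cache.cache))  # ['a', 'c']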
4

Seppecher, Manon. "Mining call detail records to reconstruct global urban mobility patterns for large scale emissions calculation". Electronic Thesis or Diss., Lyon, 2022. http://www.theses.fr/2022LYSET002.

Full text
Abstract
Road traffic contributes significantly to atmospheric emissions in urban areas, a major issue in the fight against climate change. Therefore, joint monitoring of road traffic and related emissions is essential for urban public decision-making. And beyond this kind of procedure, public authorities need methods for evaluating transport policies according to environmental criteria.Coupling traffic models with traffic-related emission models is a suitable response to this need. However, integrating this solution into decision support tools requires a refined and dynamic char-acterization of urban mobility. Cell phone data, particularly Call Detail Records, are an interesting alternative to traditional data to estimate this mobility. They are rich, massive, and available worldwide. However, their use in literature for systematic traffic characterization has remained limited. It is due to low spatial resolution and temporal sampling rates sensitive to communication behaviors.This Ph.D. thesis investigates the estimation of traffic variables necessary for calculating air emis-sions (total distances traveled and average traffic speeds) from such data, despite their biases. The first significant contribution is to articulate methods of classification of individuals with two distinct approaches of mobility reconstruction. A second contribution is developing a method for estimating traffic speeds based on the fusion of large amounts of travel data. Finally, we present a complete methodological process of modeling and data processing. It relates the methods proposed in this thesis coherently
APA, Harvard, Vancouver, ISO, and other styles
5

Shah, Meelap (Meelap Vijay). "PARTE: automatic program partitioning for efficient computation over encrypted data". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/79239.

Full text
Abstract
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 45-47).
Many modern applications outsource their data storage and computation needs to third parties. Although this lifts many infrastructure burdens from the application developer, he must deal with an increased risk of data leakage (i.e. there are more distributed copies of the data, the third party may be insecure and/or untrustworthy). Oftentimes, the most practical option is to tolerate this risk. This is far from ideal and in case of highly sensitive data (e.g. medical records, location history) it is unacceptable. We present PARTE, a tool to aid application developers in lowering the risk of data leakage. PARTE statically analyzes a program's source, annotated to indicate types which will hold sensitive data (i.e. data that should not be leaked), and outputs a partitioned version of the source. One partition will operate only on encrypted copies of sensitive data to lower the risk of data leakage and can safely be run by a third party or otherwise untrusted environment. The second partition must have plaintext access to sensitive data and therefore should be run in a trusted environment. Program execution will flow between the partitions, leveraging third party resources when data leakage risk is low. Further, we identify operations which, if efficiently supported by some encryption scheme, would improve the performance of partitioned execution. To demonstrate the feasibility of these ideas, we implement PARTE in Haskell and run it on a web application, hpaste, which allows users to upload and share text snippets. The partitioned hpaste serves web requests 1.2-2.5x slower than the original hpaste. We find this overhead to be moderately high. Moreover, the partitioning does not allow much code to run on encrypted data. We discuss why we feel our techniques did not produce an attractive partitioning and offer insight on new research directions which could yield better results.
by Meelap Shah.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
6

Bucciarelli, Stefano. "Un compilatore per un linguaggio per smart contract intrinsecamente tipato". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19573/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Dall, Rasmus. "Statistical parametric speech synthesis using conversational data and phenomena". Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/29016.

Full text
Abstract
Statistical parametric text-to-speech synthesis currently relies on predefined and highly controlled prompts read in a “neutral” voice. This thesis presents work on utilising recordings of free conversation for the purpose of filled pause synthesis and as an inspiration for improved general modelling of speech for text-to-speech synthesis purposes. A corpus of both standard prompts and free conversation is presented and the potential usefulness of conversational speech as the basis for text-to-speech voices is validated. Additionally, through psycholinguistic experimentation it is shown that filled pauses can have potential subconscious benefits to the listener but that current text-to-speech voices cannot replicate these effects. A method for pronunciation variant forced alignment is presented in order to obtain a more accurate automatic speech segmentation, something which is particularly bad for spontaneously produced speech. This pronunciation variant alignment is utilised not only to create a more accurate underlying acoustic model, but also as the driving force behind creating more natural pronunciation prediction at synthesis time. While this improves both the standard and spontaneous voices, the naturalness of spontaneous speech based voices still lags behind the quality of voices based on standard read prompts. Thus, the synthesis of filled pauses is investigated in relation to specific phonetic modelling of filled pauses and through techniques for the mixing of standard prompts with spontaneous utterances in order to retain the higher quality of standard speech based voices while still utilising the spontaneous speech for filled pause modelling. A method for predicting where to insert filled pauses in the speech stream is also developed and presented, relying on an analysis of human filled pause usage and a mix of language modelling methods. The method achieves an insertion accuracy in close agreement with human usage. The various approaches are evaluated and their improvements documented throughout the thesis, however, at the end the resulting filled pause quality is assessed through a repetition of the psycholinguistic experiments and an evaluation of the compilation of all developed methods.
APA, Harvard, Vancouver, ISO, and other styles
8

Erozel, Guzen. "Natural Language Interface On A Video Data Model". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606251/index.pdf.

Full text
Abstract
The video databases and retrieval of data from these databases have become popular in various business areas of work with the improvements in technology. As a kind of video database, video archive systems need user-friendly interfaces to retrieve video frames. In this thesis, an NLP based user interface to a video database system is developed using a content-based spatio-temporal video data model. The data model is focused on the semantic content which includes objects, activities, and spatial properties of objects. Spatio-temporal relationships between video objects and also trajectories of moving objects can be queried with this data model. In this video database system, NL interface enables flexible querying. The queries, which are given as English sentences, are parsed using Link Parser. Not only exact matches but similar objects and activities are also returned from the database with the help of the conceptual ontology module to return all related frames to the user. This module is implemented using a distance-based method of semantic similarity search on the semantic domain-independent ontology, WordNet. The semantic representations of the given queries are extracted from their syntactic structures using information extraction techniques. The extracted semantic representations are used to call the related parts of the underlying spatio-temporal video data model to calculate the results of the queries.
APA, Harvard, Vancouver, ISO, and other styles
9

Abel, Donald Randall. "The Parser Converter Loader: An Implementation of the Computational Chemistry Output Language (CCOL)". PDXScholar, 1995. https://pdxscholar.library.pdx.edu/open_access_etds/4926.

Full text
Abstract
A necessity of managing scientific data is the ability to maintain experimental legacy information without continually modifying the applications that create and use that information. By facilitating the management of scientific data we hope to give scientists the ability to effectively use additional modeling applications and experimental data. We have demonstrated that an extensible interpreter, using a series of stored directives, allows the loading of data from computational chemistry applications into a generic database. Extending the interpreter to support a new application involves supplying a list of directives for each piece of information to be loaded. This research confirms that an extensible interpreter can be used to load computational chemistry experimental data into a generic database. This procedure may be applicable to the loading and retrieving of other types of experimental data without requiring modifications of the loading and retrieving applications.
APA, Harvard, Vancouver, ISO, and other styles
10

Sodhi, Bir Apaar Singh. "DATA MINING: TRACKING SUSPICIOUS LOGGING ACTIVITY USING HADOOP". CSUSB ScholarWorks, 2016. https://scholarworks.lib.csusb.edu/etd/271.

Full text
Abstract
In this modern rather interconnected era, an organization’s top priority is to protect itself from major security breaches occurring frequently within a communicational environment. But, it seems, as if they quite fail in doing so. Every week there are new headlines relating to information being forged, funds being stolen and corrupt usage of credit card and so on. Personal computers are turned into “zombie machines” by hackers to steal confidential and financial information from sources without disclosing hacker’s true identity. These identity thieves rob private data and ruin the very purpose of privacy. The purpose of this project is to identify suspicious user activity by analyzing a log file which then later can help an investigation agency like FBI to track and monitor anonymous user(s) who seek for weaknesses to attack vulnerable parts of a system to have access of it. The project also emphasizes the potential damage that a malicious activity could have on the system. This project uses Hadoop framework to search and store log files for logging activities and then performs a ‘Map Reduce’ programming code to finally compute and analyze the results.
APA, Harvard, Vancouver, ISO, and other styles
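To make the "Map Reduce" step concrete, here is a minimal pure-Python imitation of the two phases, counting failed-login attempts per IP address. The log format and field positions are invented for the example; the thesis itself runs this kind of job on Hadoop rather than in plain Python.

from collections import defaultdict

def map_phase(lines):
    # Emit (ip, 1) for every failed-login record in the (invented) log format.
    for line in lines:
        parts = line.split()
        if "FAILED_LOGIN" in parts:
            yield parts[0], 1

def reduce_phase(pairs):
    # Sum the counts emitted for each key, as a Hadoop reducer would.
    counts = defaultdict(int)
    for ip, n in pairs:
        counts[ip] += n
    return counts

log = ["10.0.0.5 FAILED_LOGIN root",
       "10.0.0.5 FAILED_LOGIN admin",
       "10.0.0.7 LOGIN alice"]
print(dict(reduce_phase(map_phase(log))))  # {'10.0.0.5': 2}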

Books on the topic "Parse data"

1

Müller-Landmann, Sonja. Corpus-based parse pruning: Applying empirical data to symbolic knowledge. Saarbrücken: DFKI, 2000.

Search for full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bateson, Teresa M. Report on parsing and construction of prototype that will accept freeform text data from publishers' sites and parse this data for automatic entry to book database. [s.l: The Author], 2001.

Search for full text
APA, Harvard, Vancouver, ISO, and other styles
3

SpringerLink (Online service), ed. Crittografia nel Paese delle Meraviglie. Milano: Springer Milan, 2012.

Search for full text
APA, Harvard, Vancouver, ISO, and other styles
4

Miller, David. Commodore 128 data file programming. Blue Ridge Summit, USA: TAB Books, 1987.

Search for full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ramsay, Stephen. Reading machines: Toward an algorithmic criticism. Urbana, Ill: University of Illinois Press, 2011.

Search for full text
APA, Harvard, Vancouver, ISO, and other styles
6

Blum, Caroline, and David Reisman. Driven to abstraction: Caroline Blum, Lori Ellison, Dana James, Melanie Parke, Lizzie Scott. New York, NY: New Bohemia Press, 2017.

Search for full text
APA, Harvard, Vancouver, ISO, and other styles
7

Karlsson, Fred, ed. Constraint grammar: A language-independent system for parsing unrestricted text. Berlin: Mouton de Gruyter, 1995.

Search for full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cohen, Rachel, and Edith Ackermann, eds. Quand l'ordinateur parle--: Utilisation de la synthèse vocale dans l'apprentissage et le perfectionnement de la langue écrite. Paris: Presses universitaires de France, 1992.

Search for full text
APA, Harvard, Vancouver, ISO, and other styles
9

Preston Opera House (Ont.), ed. Operetta, John Gilpin to date, and farce, Ici on parle français (French spoken here): Under auspices Y.M.R. & R.R., Preston Opera House, May 7th : Miss Dora Amelin, accompanist, Mr. H.L. Read, director .. [Cambridge, Ont.?]: s.n., 1987.

Search for full text
APA, Harvard, Vancouver, ISO, and other styles
10

Caldelli, Maria Letizia, Mireille Cébeillac-Gervasoni, Nicolas Laubry, Ilaria Manzini, Raffaella Marchesini, Filippo Marini Recchia, and Fausto Zevi. Epigrafia ostiense dopo il CIL. Venice: Edizioni Ca' Foscari, 2018. http://dx.doi.org/10.30687/978-88-6969-229-1.

Full text
Abstract
After Rome, Ostia is the city of the Empire that has yielded the largest number of Latin inscriptions. After the publication of volume XIV of the Corpus Inscriptionum Latinarum (1887) and of the Supplementum Ostiense (1930), much more material came to light, largely during the 'Grandi Scavi' of 1938-42, but it has remained for the most part unpublished. Some years ago an Italian-French study group launched a systematic publication project. The volume comprises about 2,000 funerary inscriptions found at Ostia and kept at Ostia (plus 168 that are lost), with scholarly entries and high-quality photographs produced under the supervision of the competent Soprintendenza, which make a very considerable contribution to our knowledge of the onomastics and the families of the city, as well as to the Iura Sepulchrorum. The catalogue is followed by indices arranged in the traditional thematic divisions. The volume closes with an abbreviated bibliography, the indices of the epigraphic sources, and the inventory concordances.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Parse data"

1

Kulkarni, Adithya, Nasim Sabetpour, Alexey Markin, Oliver Eulenstein, and Qi Li. "CPTAM: Constituency Parse Tree Aggregation Method". In Proceedings of the 2022 SIAM International Conference on Data Mining (SDM), 630–38. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2022. http://dx.doi.org/10.1137/1.9781611977172.71.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mardan, Azat. "Getting Data from Backend Using jQuery and Parse". In Full Stack JavaScript, 67–126. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-3718-2_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Patchala, Jagadeesh, Raj Bhatnagar, and Sridharan Gopalakrishnan. "Author Attribution of Email Messages Using Parse-Tree Features". In Machine Learning and Data Mining in Pattern Recognition, 313–27. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-21024-7_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chang, Fei, Li Zhu, Jin Liu, Jin Yuan, and Xiaoxia Deng. "A Universal Heterogeneous Data Integration Standard and Parse Algorithm in Real-Time Database". In Lecture Notes in Electrical Engineering, 709–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34522-7_76.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jin, Zhijing, and Rada Mihalcea. "Natural Language Processing for Policymaking". In Handbook of Computational Social Science for Policy, 141–62. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16624-2_7.

Full text
Abstract
Language is the medium for many political activities, from campaigns to news reports. Natural language processing (NLP) uses computational tools to parse text into key information that is needed for policymaking. In this chapter, we introduce common methods of NLP, including text classification, topic modelling, event extraction, and text scaling. We then overview how these methods can be used for policymaking through four major applications including data collection for evidence-based policymaking, interpretation of political decisions, policy communication, and investigation of policy effects. Finally, we highlight some potential limitations and ethical concerns when using NLP for policymaking.
APA, Harvard, Vancouver, ISO, and other styles
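As a taste of one method the chapter surveys, the short scikit-learn sketch below fits a two-topic model over three invented policy snippets. It is a generic illustration of topic modelling, not code from the chapter, and it assumes scikit-learn 1.0 or later.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["tax reform bill passed by the senate",
        "new health policy funds rural hospitals",
        "senate debates budget and tax cuts"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)  # document-term count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:]]  # three highest-weight terms
    print(f"topic {k}:", top)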
6

Nambiar, Aparna, B. Premjith, J. P. Sanjanasri, and K. P. Soman. "BERT-Based Dependency Parser for Hindi". In Advances in Data Science and Computing Technologies, 421–27. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-3656-4_43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Skopik, Florian, Markus Wurzenberger, and Max Landauer. "A Concept for a Tree-Based Log Parser Generator". In Smart Log Data Analytics, 131–49. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74450-2_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Papadopoulos, Constantinos V. "On the parallelism of data". In PARLE'94 Parallel Architectures and Languages Europe, 414–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994. http://dx.doi.org/10.1007/3-540-58184-7_119.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hurson, A. R., B. Lee, and B. Shirazi. "Hybrid structure: A scheme for handling data structures in a data flow environment". In PARLE '89 Parallel Architectures and Languages Europe, 323–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/3540512845_48.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Glück-Hiltrop, Elvira, Matthias Ramlow, and Ute Schürfeld. "The Stollmann data flow machine". In PARLE '89 Parallel Architectures and Languages Europe, 433–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/3540512845_55.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Parse data"

1

Yoshida, Satoshi, and Takuya Kida. "An Efficient Algorithm for Almost Instantaneous VF Code Using Multiplexed Parse Tree". In 2010 Data Compression Conference. IEEE, 2010. http://dx.doi.org/10.1109/dcc.2010.27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Henderson, James, and Ivan Titov. "Data-defined kernels for parse reranking derived from probabilistic models". In the 43rd Annual Meeting. Morristown, NJ, USA: Association for Computational Linguistics, 2005. http://dx.doi.org/10.3115/1219840.1219863.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Deva Priyaa, B., and M. Indra Devi. "Fragmented query parse tree based SQL injection detection system for web applications". In 2016 International Conference on Computing Technologies and Intelligent Data Engineering (ICCTIDE). IEEE, 2016. http://dx.doi.org/10.1109/icctide.2016.7725367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Goraya, Yassar, Ali Al Felasi, Nicolas Daynac, Stuart Walley, Marc-Antoine Dupont, Hiroyuki Inoue, Ahmed Mubarak Al Khamiri, and Alia Hasan Hindi. "Delineating Karsts, Small-Scale Faults, and Fractures by Using a Global Stratigraphic Framework to Integrate Conventional Seismic Attributes with Diffraction Imaging in a Giant Offshore Field, Abu Dhabi". In ADIPEC. SPE, 2022. http://dx.doi.org/10.2118/211628-ms.

Full text
Abstract
Poor resolution and signal-to-noise ratio have often been key factors impacting an interpreter’s ability to directly delineate subsurface features from seismic data. Improvements in full azimuth, high-density acquisition and in diffraction imaging have the potential to reveal greater subsurface detail. To date, the application of diffraction imaging has in part been limited by the methods available to parse and analyze that same data. For instance, visualization of diffraction images on time and/or depth slices may show patterns, but as these slices can cut through successive seismic reflectors, they tend not to be geologically meaningful. An approach is described that uses machine automation to rapidly incorporate diffraction images into a full-volume 3D seismic interpretation. Delineation of key stratigraphic surfaces is driven by stacking patterns and stratigraphic terminations and performed in both structural and Wheeler domains. Stratal slicing is a highly flexible and rapid method of generating chronostratigraphic surfaces. These chronostratigraphic surfaces can be extracted at sub-sample resolution and therefore accurately matched to well log responses which typically fall well below the resolution of a seismic dataset. The diffraction image is then projected onto a series of chronostratigraphic surfaces, allowing the interpreter to parse through and compare diffraction data directly with conventional seismic attributes at the same chronostratigraphic layer. The novel approach described has been used to demonstrate both the value of diffraction imaging and the importance of using a global full volume 3D seismic interpretation when identifying features such as karsts and small-scale faults and fractures. When applied to a recently processed high-density wide azimuth seismic survey, the workflow was able to seamlessly integrate diffraction images to provide improved confidence in the delineation of karsts and other collapse features that can pose a drilling hazard within the Giant Field, offshore United Arab Emirates.
APA, Harvard, Vancouver, ISO, and other styles
5

Yu, Jinhui, Xinyu Luan, and Yu Sun. "An Automated Analytics Engine for College Program Selection using Machine Learning and Big Data Analysis". In 2nd International Conference on Machine Learning Techniques and NLP (MLNLP 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.111417.

Full text
Abstract
Because of the differences in the structure and content of each website, it is often difficult for international applicants to obtain the application information of each school in time. They need to spend a lot of time manually collecting and sorting information. Especially when the information of the school may be constantly updated, the information may become very inaccurate for international applicants. We designed a tool including three main steps to solve the problem: crawling links, processing web pages, and building my pages. As the implementation language, we mainly use Python and store the crawled data in JSON format [4]. In the process of crawling links, we mainly used Beautiful Soup to parse HTML and designed the crawler. In this paper, we use Python language to design a system. First, we use the crawler method to fetch all the links related to the admission information on the school's official website. Then we traverse these links, and use the noise_remove [5] method to process their corresponding page contents, so as to further narrow the scope of effective information and save these processed contents in the JSON files. Finally, we use the Flask framework to integrate these contents into my front-end page conveniently and efficiently, so that it has the complete function of integrating and displaying information.
APA, Harvard, Vancouver, ISO, and other styles
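The crawl-and-store step described in the abstract can be approximated with a short Beautiful Soup sketch like the one below. The start URL and the "admission" keyword filter are placeholders, and error handling, politeness delays, and the paper's noise_remove step are omitted.

import json
import urllib.request
from urllib.parse import urljoin
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def crawl_admission_links(start_url, keyword="admission"):
    # Fetch the page and collect absolute links whose URL or text mentions the keyword.
    html = urllib.request.urlopen(start_url).read()
    soup = BeautifulSoup(html, "html.parser")
    links = []
    for a in soup.find_all("a", href=True):
        url = urljoin(start_url, a["href"])
        if keyword in url.lower() or keyword in a.get_text().lower():
            links.append({"text": a.get_text(strip=True), "url": url})
    return links

# Placeholder URL; a real run would point at a school's official site.
links = crawl_admission_links("https://www.example.edu/")
with open("links.json", "w") as f:
    json.dump(links, f, indent=2)  # crawled data stored as JSON, as in the paper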
6

Afzalan, Milad, Farrokh Jazizadeh, and Mehdi Ahmadian. "Toward Railway Automated Defect Detection From Onboard Data Using Deep Learning". In 2020 Joint Rail Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/jrc2020-8031.

Full text
Abstract
Regular monitoring of railway systems is imperative for improving safety and ride quality. To this end, data collection is carried out regularly in the rail industry to document performance and maintenance. The use of machine learning methods in recent years has provided opportunities for improved data processing and defect detection and monitoring. Such methods rely on installing instrumentation wayside or collecting data from onboard rolling stock. Using the former approach, only specific locations can be monitored, which could hinder covering a large territory. The latter, however, enables monitoring large sections of track, hence providing far more spatial efficiency. In this paper, we have investigated the feasibility of rail defect detection using deep learning from onboard data. The source of data is acceleration and track geometry collected from onboard railcars. Such an approach allows collecting a large set of data on a regular basis. A long short-term memory (LSTM) architecture is proposed to examine the measured time-series to flag potential track defects. The proposed architecture investigates the characteristics of time-series signatures during a short time (∼1 s) and classifies the associated track segment to normal/defect states. Furthermore, a novel automated labeling method is proposed to parse the exception report data (recorded by the maintenance team) and label defects for associated time-series signatures during the training phase. In a pilot study, field data from a revenue service Class I railroad has been used to evaluate the proposed deep learning method. The results show that it is possible to efficiently analyze the data (collected onboard a railcar operated in revenue service) for automated defect detection, with relatively higher accuracy for FRA type I defects.
APA, Harvard, Vancouver, ISO, and other styles
7

Ismail, Mohamed. "An Excel Add-in for Accreditation Data Collection and Auto Grading Sheets (AGS): A Canadian Experience". In ASME 2018 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/imece2018-88096.

Full text
Abstract
In this paper, an Excel Add-in or Xl-App for automating grade recording and graduate attributes assessment at the course level is presented. The Xl-App is one of the three major constituents of the OBACIS System. At the course level, the purpose of the Xl-App is twofold: 1) cutting down the grade compilation and accreditation reporting time and effort by an order of magnitude using the built-in OBACIS accreditation and grading sheets (AGS) module 2) introducing an advanced tool for data-driven continuous improvement (DCI) for enhancing the teaching and learning experience at both course and program levels. The app has a third module used to collect additional information related to accreditation reporting. This information is required by the course information sheets (CIS) mandated by CEAB accreditation questionnaire. The Win-app has a dedicated module that serves that purpose called OBACIS Catalogs. The Xl-App is capable of emitting the data collected in both XLSX and XML formats. The data collected can be easily exported to learning management systems (LMS), grade books, and web marking systems. The OBACIS Win-App can easily parse the data collected from different faculty members using the Xl-App in their raw excel format and integrate them together to generate unified program and faculty-level assessment reports that can be utilized in generating top-down continuous improvement action plans. The Xl-App has been in implementation since early 2015. It had remarkable impact on enhancing the teaching and learning experience of a handful of courses taught by the author. The App improved the robustness of course grading and saved a tremendous amount of time needed for grade and accreditation reporting.
APA, Harvard, Vancouver, ISO, and other styles
8

Sponsler, Jeffrey L., and Charles Parker. "Lab Parser: A Parser for Medical Lab data". In Modelling, Simulation and Identification. Calgary, AB, Canada: ACTAPRESS, 2018. http://dx.doi.org/10.2316/p.2018.858-002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ahmad, Isaar, Sanjog Patil, and Smruti R. Sarangi. "HPXA: A highly parallel XML parser". In 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2018. http://dx.doi.org/10.23919/date.2018.8342012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yi, Michael, Pradeepkumar Ashok, Dawson Ramos, Spencer Bohlander, Taylor Thetford, Mojtaba Shahri, Mickey Noworyta, Trey Peroyea, and Michael Behounek. "Automated Merging of Time Series and Textual Operations Data to Extract Technical Limiter Re-Design Recommendations". In IADC/SPE International Drilling Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/208745-ms.

Full text
Abstract
During the construction planning phase of any new well, drilling engineers often look at offset well data to identify information that could be used to drill the new well more efficiently. This is generally a time-consuming process. The objective was to develop a recommender system that would automate the process of identifying potential hazards and current technical limiters. The developed methodology consisted of three parts. First, a system is developed that can parse textual information found in daily reports to identify key events that occurred in offset wells. Second, time series data from these same offset wells is processed to identify events directly from the patterns in the data, and a reconciliation is done between the time-series data and the contextual data wherever there is a conflict between the two datasets. Finally, KPIs are computed that enable the comparison of various drilling choices and their consequences across the set of offset wells, and recommendations are automatically generated for improvements in the construction of a new well. The system was developed on a set of 7 recently drilled wells chosen from a specific North American land operation. The recommendations were compared to recommendations made through manual processing of the data for validation of the approach. The recommender identified invisible lost time (ILT) and potential non-productive time (NPT) scenarios, optimal depth-based drilling parameters for the new well, and recommendations on BHA and flat time improvement areas. Open-source natural language processing libraries were used in this project and were very effective in extracting events from textual data. An automated system was built to guide the drilling engineers in the planning phase of a well construction activity. Given a set of offset wells, the system combined both the time series data and textual data to arrive at these recommendations. The approach upon further refinement is expected to save 30 to 40 hours of the engineer's time per well and shorten the learning curve.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Parse data"

1

Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 13. Neutral Data Manipulation Language (NDML) Precompiler Parse NDML Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250453.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 12. Neutral Data Manipulation Language (NDML) Precompiler Parse Procedure Division Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada250452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Apicella, M. L., J. Slaton, and B. Levi. Integrated Information Support System (IISS). Volume 5. Common Data Model Subsystem. Part 11. Neutral Data Manipulation Language (NDML) Precompiler Parse Application Procedure Division Product Specification. Fort Belvoir, VA: Defense Technical Information Center, September 1990. http://dx.doi.org/10.21236/ada252452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Taylor, Shawna, Jake Carlson, Joel Herndon, Alicia Hofelich Mohr, Wendy Kozlowski, Jennifer Moore, Jonathan Petters, and Cynthia Hudson Vitale. Public Access Data Management and Sharing Activities for Academic Administration and Researchers. Association of Research Libraries, November 2022. http://dx.doi.org/10.29242/report.rads2022.

Full text
Abstract
The Realities of Academic Data Sharing (RADS) Initiative’s public-access data management and sharing (DMS) activities are the result of categorizing services and support across the institution that are likely needed to make public access to research data available. The RADS project team categorized these activities by life-cycle phases for public access to research data, and used the activities in RADS surveys of publicly funded campus researchers and institutional administrators whose departments likely provide support in these areas. The result of categorizing and defining these activities not only delineated questions for RADS’s retrospective studies, but, consequently, may also help researchers, administrators, and librarians prepare for upcoming federal and institutional policies requiring access to publicly funded research data. This report presents version 1 of the RADS public access DMS activities. Additional versions are expected to be released as more institutions engage in implementing new federal policies in the coming months. Community engagement and feedback on the RADS DMS activities is critical to (1) validate the activities and (2) parse out the activities, as sharing and refining them will benefit stakeholders interested in meeting new federal open-access and sharing policies.
APA, Harvard, Vancouver, ISO, and other styles
5

Chen, Yongzhou, Ammar Tahir, and Radhika Mittal. Controlling Congestion via In-Network Content Adaptation. Illinois Center for Transportation, September 2022. http://dx.doi.org/10.36501/0197-9191/22-018.

Full text
Abstract
Realizing that it is inherently difficult to match precisely the sending rates at the endhost with the available capacity on dynamic cellular links, we built a system, Octopus, that sends real-time data streams over cellular networks using an imprecise controller (that errs on the side of overestimating network capacity) and then drops appropriate packets in the cellular-network buffers to match the actual capacity. We designed parameterized primitives for implementing the packet-dropping logic, which the applications at the endhost can configure differently to express various content-adaptation policies. Octopus transport encodes the app-specified parameters in packet header fields, which the routers can parse to execute the desired dropping behavior. Our evaluation shows how real-time applications involving standard and volumetric videos can be designed to exploit Octopus for various requirements and achieve a performance that is 1.5 to 18 times better than state-of-the-art schemes.
APA, Harvard, Vancouver, ISO, and other styles
6

Botero Bolívar, Sara, Víctor De La Espriella Palmett, José Carlos Sánchez Vega, Juan Felipe Soto Restrepo, and Jairo Gándara. Perlas clínicas: Guías de la Sociedad Europea de Cardiología (ESC) 2020 para el manejo del síndrome coronario agudo sin elevación del segmento ST. Parte 1/2. Facultad de Medicina Universidad de Antioquia, June 2023. http://dx.doi.org/10.59473/medudea.pc.2023.10.

Full text
Abstract
A 67-year-old female patient with a history of hypertension and long-standing type 2 diabetes mellitus presents to the emergency department with oppressive retrosternal pain of one hour's duration, radiating to the left shoulder, beginning at rest, rated 8/10 on the analog pain scale, and not relieved by acetaminophen 1 g.
APA, Harvard, Vancouver, ISO, and other styles
7

Calijuri, Mónica, Gustavo A. García, Juan José Bravo, and José Elías Feres de Almeida. Documentos tributarios electrónicos y big data económica para el control tributario y aduanero: utilización y codificación de los estados financieros electrónicos para control fiscal y datos económico en América Latina y el Caribe: Tomo 3. Banco Interamericano de Desarrollo, July 2023. http://dx.doi.org/10.18235/0005000.

Full text
Abstract
The third volume of the series Documentos tributarios electrónicos y big data económica para el control tributario y aduanero analyzes the implementation of electronic financial statements (EF-e) in the context of the International Financial Reporting Standards (IFRS) and their impact on tax control by tax administrations (TAs). It also presents frameworks to guide initiatives and plans for implementing electronic accounting information by TAs and governments. The implementation of IFRS has standardized accounting records, which has facilitated their digitization and has had a positive effect in terms of cost reduction and improved tax transparency. The volume proposes using the eXtensible Business Reporting Language (XBRL) to standardize companies' financial data. This not only enables integration by TAs, since this electronic language allows financial statements to be prepared, extracted, and published, but also facilitates the exchange of that information among TAs. In addition, implementing EF-e under IFRS makes it possible to produce up-to-date, detailed statistics and indicators of economic activity, which can contribute to the timely design of public policies for the benefit of citizens.
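Since the abstract turns on XBRL making financial statements machine-extractable, a small sketch may help. It parses a deliberately simplified instance document with the Python standard library alone; real filings also carry taxonomy references, contexts, and unit definitions, and the `ifrs` namespace URL below is an invented placeholder.

```python
import xml.etree.ElementTree as ET

# A deliberately minimal XBRL-style instance; real filings also carry
# taxonomy references, contexts, and unit definitions, omitted here.
INSTANCE = """
<xbrl xmlns="http://www.xbrl.org/2003/instance"
      xmlns:ifrs="http://example.org/ifrs-taxonomy">
  <ifrs:Revenue contextRef="FY2022" unitRef="USD" decimals="0">1500000</ifrs:Revenue>
  <ifrs:Assets contextRef="FY2022" unitRef="USD" decimals="0">9800000</ifrs:Assets>
</xbrl>
"""

IFRS_NS = "{http://example.org/ifrs-taxonomy}"

def extract_facts(instance_xml: str) -> dict:
    """Pull tagged financial facts out of an instance document: the kind
    of bulk extraction a tax administration could run over e-filings."""
    facts = {}
    for elem in ET.fromstring(instance_xml):
        if elem.tag.startswith(IFRS_NS):
            concept = elem.tag[len(IFRS_NS):]  # e.g. "Revenue"
            facts[concept] = {
                "value": float(elem.text),
                "context": elem.get("contextRef"),
                "unit": elem.get("unitRef"),
            }
    return facts

print(extract_facts(INSTANCE))  # {'Revenue': {...}, 'Assets': {...}}
```

Because every fact is tagged with a taxonomy concept, a context, and a unit, the same loop works across companies and periods, which is what makes cross-firm integration and exchange between administrations feasible.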
APA, Harvard, Vancouver, ISO, and other styles
8

Fulponi, Juan Ignacio, and Cristian Moleres. Metodología para el estudio de la movilidad con datos de Facebook: generación de matrices origen-destino en ciudades de América Latina y análisis para Buenos Aires. Banco Interamericano de Desarrollo, December 2022. http://dx.doi.org/10.18235/0004591.

Full text
Abstract
Knowledge of the characteristics and travel patterns of people in cities is a key input for defining urban mobility policies, especially in large metropolitan areas. Unfortunately, up-to-date, reliable data are scarce in many cities of Latin America and the Caribbean (LAC), and generating them through traditional study methods demands major collection and processing efforts from local governments. For this reason, investigating new information sources that leverage the massive data generated by mobile-device applications becomes a key objective for addressing this problem. The Observatorio de Movilidad Urbana de América Latina (OMU) seeks to meet this need for solid, reliable, up-to-date information on transport and urban mobility in the region. In this study, databases from the Facebook application, obtained through the Data for Good program in collaboration with the Development Data Partnership and the Transport Department of the Facultad de Ingeniería de la Universidad de Buenos Aires, were used to develop a methodology for generating origin-destination matrices for the large LAC metropolitan areas that belong to the OMU. These matrices represent the trips the population makes between their residences and their main activities (work, study, etc.) during the morning hours of April-May 2022. In addition, a detailed analysis of the results for the Región Metropolitana de Buenos Aires (RMBA) was carried out: the relationship between travel distances and the geographic and socioeconomic variables of the metropolis was studied, and the results were compared against other available data and studies.
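The core of the methodology, aggregating individual movement records into an origin-destination matrix, can be sketched in a few lines. The column names and values below are assumptions for illustration; the actual Data for Good datasets have their own schema.

```python
import pandas as pd

# Toy movement records in the spirit of the Data for Good datasets; the
# real schema differs, so these column names are assumptions.
records = pd.DataFrame({
    "home_zone":     ["A", "A", "B", "C", "B", "A"],
    "activity_zone": ["B", "B", "A", "A", "C", "C"],
    "n_people":      [120, 80, 60, 45, 30, 25],
})

# Aggregate trips into an origin-destination matrix:
# rows = residence zone, columns = main-activity zone.
od_matrix = records.pivot_table(
    index="home_zone",
    columns="activity_zone",
    values="n_people",
    aggfunc="sum",
    fill_value=0,
)
print(od_matrix)
```

Each cell then counts the people moving from a residence zone to an activity zone in the morning window, which is exactly the structure a transport planner would otherwise obtain from a household travel survey.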
APA, Harvard, Vancouver, ISO, and other styles
9

Meneses-González, María Fernanda, and Diego Fernando Cuesta-Mora. Informe especial de estabilidad financiera: liquidez de mercado - Primer semestre de 2021. Banco de la República de Colombia, June 2021. http://dx.doi.org/10.32468/liqu-mer.sem1-2021.

Full text
Abstract
Today, a significant part of financial institutions' liquidity management takes place through money-market operations, both collateralized and uncollateralized. The collateralized market includes repo, simultánea (sell/buy-back), temporary securities transfer (TTV), and other operations, carried out through trading systems or over the counter (OTC), while the uncollateralized market comprises only operations in the interbank market. Given the importance of the relationships established in these markets, both for the transmission of monetary policy and for the efficient distribution of liquidity, this report studies the structure of the market and the characteristics of its interconnections through network analysis, in order to monitor its recent dynamics and characterize the links among participants.
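The network analysis the report describes, treating institutions as nodes and money-market operations as weighted directed links, can be sketched as follows. The exposures are invented toy values, since the underlying transaction-level repo and interbank data are confidential.

```python
import networkx as nx

# Invented bilateral exposures (lender, borrower, amount); the report's
# actual input is confidential transaction-level repo/interbank data.
exposures = [
    ("BankA", "BankB", 150.0),
    ("BankA", "BankC", 90.0),
    ("BankB", "BankC", 60.0),
    ("BankD", "BankA", 200.0),
]

G = nx.DiGraph()
G.add_weighted_edges_from(exposures)

# Simple structural indicators of the kind used to monitor how liquidity
# is distributed: network density, funding received, and intermediation.
print("density:", nx.density(G))
print("in-strength:", dict(G.in_degree(weight="weight")))
print("betweenness:", nx.betweenness_centrality(G, weight="weight"))
```

Indicators like these make it possible to spot institutions that concentrate funding or intermediate a large share of flows, which is precisely what matters for monetary-policy transmission and liquidity distribution.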
APA, Harvard, Vancouver, ISO, and other styles
10

Meneses-González, María Fernanda, and Mariana Escobar. Informe especial de estabilidad financiera: liquidez de mercado - Segundo semestre de 2022. Banco de la República de Colombia, December 2022. http://dx.doi.org/10.32468/liqu-mer.sem2-2022.

Full text
Abstract
Today, a significant part of financial institutions' liquidity management takes place through money-market operations, both collateralized and uncollateralized. The collateralized market includes repo, simultánea (sell/buy-back), temporary securities transfer (TTV), and other operations, carried out through trading systems or over the counter (OTC), while the uncollateralized market comprises only operations in the interbank market. Given the importance of the relationships established in these markets, both for the transmission of monetary policy and for the efficient distribution of liquidity, this report studies the structure of the market and the characteristics of its interconnections through network analysis, in order to monitor its recent dynamics and characterize the links among participants. Beginning in 2023, this special report will no longer be published; readers are referred instead to the Banco de la República's Reporte de Infraestructura Financiera.
APA, Harvard, Vancouver, ISO, and other styles