Follow this link to see other types of publications on the topic: Computational language documentation.

Journal articles on the topic "Computational language documentation"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic "Computational language documentation".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in whichever citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

A, Vinnarasu, and Deepa V. Jose. "Speech to text conversion and summarization for effective understanding and documentation". International Journal of Electrical and Computer Engineering (IJECE) 9, no. 5 (October 1, 2019): 3642. http://dx.doi.org/10.11591/ijece.v9i5.pp3642-3648.

<p class="western" style="margin-top: 0.21cm; margin-bottom: 0cm;" align="justify"><span>Speech, is the most powerful way of communication with which human beings express their thoughts and feelings through different languages. The features of speech differs with each language. However, even while communicating in the same language, the pace and the dialect varies with each person. This creates difficulty in understanding the conveyed message for some people. Sometimes lengthy speeches are also quite difficult to follow due to reasons such as different pronunciation, pace and so on. Speech recognition which is an inter disciplinary field of computational linguistics aids in developing technologies that empowers the recognition and translation of speech into text. Text summarization extracts the utmost important information from a source which is a text and provides the adequate summary of the same. The research work presented in this paper describes an easy and effective method for speech recognition. The speech is converted to the corresponding text and produces summarized text. This has various applications like lecture notes creation, summarizing catalogues for lengthy documents and so on. Extensive experimentation is performed to validate the efficiency of the proposed method</span></p>
2

Feldman, Jerome A. "Advances in Embodied Construction Grammar". Constructions and Frames 12, no. 1 (July 29, 2020): 149–69. http://dx.doi.org/10.1075/cf.00038.fel.

Abstract This paper describes the continuing goals and present status of the ICSI/UC Berkeley efforts on Embodied Construction Grammar (ECG). ECG is a semantics-based formalism grounded in cognitive linguistics. ECG is the most explicitly interdisciplinary of the construction grammars, with deep links to computation, neuroscience, and cognitive science. Work continues on core cognitive, computational, and linguistic issues, including aspects of the mind/body problem. Much of the recent emphasis has been on applications and on tools to facilitate new applications. Extensive documentation plus downloadable systems and grammars can be found at the ECG Homepage.
3

Feraru, Silvia Monica, Horia-Nicolai Teodorescu, and Marius Dan Zbancioc. "SRoL - Web-based Resources for Languages and Language Technology e-Learning". International Journal of Computers Communications & Control 5, no. 3 (September 1, 2010): 301. http://dx.doi.org/10.15837/ijccc.2010.3.2483.

The SRoL Web-based spoken language repository and tool collection includes thousands of voice recordings grouped in sections such as "Basic sounds of the Romanian language", "Emotional voices", "Specific language processes", "Pathological voices", "Comparison of natural and synthetic speech", and "Gnathophonics and gnathosonics". The recordings are annotated and documented according to a proprietary methodology and protocols. Moreover, we have included on the site extended documentation on the Romanian language, on speech technology, and on tools, produced by the SRoL team, for voice analysis. The resources are part of the CLARIN European Network for Language Resources. The resources and tools are useful in virtual learning for the phonetics of the Romanian language, speech technology, and medical subjects related to voice. We report on several applications in language learning and voice technology classes. Here, we emphasize the use of the SRoL resources in education for medicine and speech rehabilitation.
4

Madlazim, M., and Bagus Jaya Santosa. "Computational physics Using Python: Implementing Maxwell Equation for Circle Polarization". Jurnal Penelitian Fisika dan Aplikasinya (JPFA) 1, no. 1 (June 14, 2011): 1. http://dx.doi.org/10.26740/jpfa.v1n1.p1-7.

Python is a relatively new computing language, created by Guido van Rossum [A.S. Tanenbaum, R. van Renesse, H. van Staveren, G.J. Sharp, S.J. Mullender, A.J. Jansen, G. van Rossum, Experiences with the Amoeba distributed operating system, Communications of the ACM 33 (1990) 46–63; also online at http://www.cs.vu.nl/pub/amoeba/], which is particularly suitable for teaching a course in computational physics. There are two questions to be considered: (i) For whom is the course intended? (ii) What are the criteria for a suitable language, and why choose Python? The criteria include the nature of the application. High-performance computing requires a compiled language, e.g., FORTRAN. For some applications a computer algebra system, e.g., Maple, is appropriate. For teaching, and for program development, an interpreted language has considerable advantages: Python appears particularly suitable. Python's attractions include (i) its system of modules, which makes it easy to extend, (ii) its excellent graphics (VPython module), (iii) its excellent online documentation, and (iv) the fact that it is free and can be downloaded from the web. Python and VPython are described briefly, and some programs are demonstrated for numerical computation and animation of physical phenomena. In this article, we give a solution for circular polarization by solving Maxwell's equations.
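
As a worked example of the kind of program the paper advocates (our illustration, not the authors' listing), the circularly polarized plane wave, a standard vacuum solution of Maxwell's equations, can be evaluated and checked numerically:

```python
# Illustrative sketch: evaluate a circularly polarized plane wave,
# E = E0 (x_hat cos(kz - wt) + y_hat sin(kz - wt)), a vacuum solution of
# Maxwell's equations, and verify that |E| is constant (circular polarization).
import numpy as np

E0 = 1.0                   # field amplitude (arbitrary units)
c = 3.0e8                  # speed of light (m/s)
wavelength = 500e-9        # 500 nm
k = 2 * np.pi / wavelength
omega = c * k

t = np.linspace(0, 2 * np.pi / omega, 200)  # one optical period
z = 0.0                                      # observe at a fixed plane
Ex = E0 * np.cos(k * z - omega * t)
Ey = E0 * np.sin(k * z - omega * t)

# The field vector traces a circle: its magnitude never changes.
assert np.allclose(np.hypot(Ex, Ey), E0)
print("tip of E rotates with constant magnitude", E0)
```
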
5

Kulkarni, Naveen N., et al. "Tailoring effective requirement's specification for ingenuity in Software Development Life Cycle." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 11, 2021): 3338–44. http://dx.doi.org/10.17762/turcomat.v12i3.1590.

The Software Requirements Engineering (SRE) process defines software manuscripts with supporting Software Requirement Specification (SRS) documents and their activities. SRE comprises many tasks: requirements analysis, elicitation, documentation, conciliation, and validation. Natural language is the most popular and commonly used medium for the SRS document. However, natural language has its own limitations with respect to SRS quality. The constraints include incompleteness, incorrectness, ambiguity, and inconsistency. In software engineering, most applications are object-oriented, so requirements for unlike problem domains need to be developed. Software documentation is therefore written in such a way that all authorized users, such as clients, analysts, managers, and developers, can understand it. These documents are the basis for the success of any planned project. Most of the work still depends on intensive human (domain expert) effort, and project success still depends on timeliness and the avoidance of errors. The fundamental quality intended for each activity is specified during the software development process. This paper concludes with a critical review of best practices in writing SRS. This approach helps to mitigate SRS limitations to some extent. An initial review highlights promising results for the proposed practices.
6

Rougny, Adrien. "sbgntikz—a TikZ library to draw SBGN maps". Bioinformatics 35, no. 21 (May 9, 2019): 4499–500. http://dx.doi.org/10.1093/bioinformatics/btz287.

Abstract Summary The systems biology graphical notation (SBGN) has emerged as the main standard for representing biological maps graphically. It comprises three complementary languages: Process Description, for detailed biomolecular processes; Activity Flow, for influences of biological activities; and Entity Relationship, for independent relations shared among biological entities. On the other hand, TikZ is one of the most commonly used packages to ‘program’ graphics within TeX/LaTeX. Here, we present sbgntikz, a TikZ library that allows drawing and customizing SBGN maps directly in TeX/LaTeX documents, using the TikZ language. sbgntikz supports all glyphs of the three SBGN languages and offers options that facilitate the drawing of complex glyph assemblies within TikZ. Furthermore, sbgntikz is provided together with a converter that transforms any SBGN map stored in the SBGN Markup Language into a TikZ picture, or renders it directly into a PDF file. Availability and implementation sbgntikz, the SBGN-ML to sbgntikz converter, and complete documentation can be freely downloaded from https://github.com/Adrienrougny/sbgntikz/. The library and the converter are compatible with all recent operating systems, including Windows, macOS, and all common Linux distributions. Supplementary information Supplementary material is available at Bioinformatics online.
7

Zulkower, Valentin, and Susan Rosser. "DNA Features Viewer: a sequence annotation formatting and plotting library for Python". Bioinformatics 36, no. 15 (July 8, 2020): 4350–52. http://dx.doi.org/10.1093/bioinformatics/btaa213.

Abstract Motivation Although the Python programming language counts many bioinformatics and computational biology libraries, none offers customizable sequence annotation visualizations with layout optimization. Results DNA Features Viewer is a sequence annotation plotting library which optimizes plot readability while letting users tailor other visual aspects (colors, labels, highlights, etc.) to their particular use case. Availability and implementation Open-source code and documentation are available on GitHub under the MIT license (https://github.com/Edinburgh-Genome-Foundry/DnaFeaturesViewer). Supplementary information Supplementary data are available at Bioinformatics online.
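
A minimal usage sketch, based on our reading of the package's GitHub README; names should be verified against the current documentation:

```python
# Usage sketch of DNA Features Viewer (our reading of the README, not
# guaranteed to match the latest API): draw two annotated features.
from dna_features_viewer import GraphicFeature, GraphicRecord

features = [
    GraphicFeature(start=20, end=500, strand=+1,
                   color="#ffd700", label="gene A"),
    GraphicFeature(start=540, end=900, strand=-1,
                   color="#cffccc", label="gene B"),
]
record = GraphicRecord(sequence_length=1000, features=features)
ax, _ = record.plot(figure_width=6)       # layout is optimized for readability
ax.figure.savefig("construct_map.png", dpi=150)
```
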
8

Vo, Hoang Nhat Khang, Duc Dong Le, Tran Minh Dat Phan, Tan Sang Nguyen, Quoc Nguyen Pham, Ngoc Oanh Tran, Quang Duc Nguyen, Tran Minh Hieu Vo, and Tho Quan. "Revitalizing Bahnaric Language through Neural Machine Translation: Challenges, Strategies, and Promising Outcomes". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 21 (March 24, 2024): 23360–68. http://dx.doi.org/10.1609/aaai.v38i21.30385.

The Bahnar, a minority ethnic group in Vietnam with ancient roots, hold a language of deep cultural and historical significance. The government is prioritizing the preservation and dissemination of Bahnar language through online availability and cross-generational communication. Recent AI advances, including Neural Machine Translation (NMT), have transformed translation with improved accuracy and fluency, fostering language revitalization through learning, communication, and documentation. In particular, NMT enhances accessibility for Bahnar language speakers, making information and content more available. However, translating Vietnamese to Bahnar language faces practical hurdles due to resource limitations, particularly in the case of Bahnar language as an extremely low-resource language. These challenges encompass data scarcity, vocabulary constraints, and a lack of fine-tuning data. To address these, we propose transfer learning from selected pre-trained models to optimize translation quality and computational efficiency, capitalizing on linguistic similarities between Vietnamese and Bahnar language. Concurrently, we apply tailored augmentation strategies to adapt machine translation for the Vietnamese-Bahnar language context. Our approach is validated through superior results on bilingual Vietnamese-Bahnar language datasets when compared to baseline models. By tackling translation challenges, we help revitalize Bahnar language, ensuring information flows freely and the language thrives.
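
A hedged sketch of the transfer-learning step in Python with the Hugging Face transformers library; the checkpoint name is a stand-in for whatever related pretrained model the authors selected, and the two parallel pairs are placeholders:

```python
# Hedged sketch (not the authors' code): fine-tune a related pretrained
# seq2seq checkpoint on a tiny Vietnamese-Bahnar parallel corpus.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "Helsinki-NLP/opus-mt-vi-en"   # stand-in source of transferable weights
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

pairs = [  # placeholder Vietnamese -> Bahnar sentence pairs
    ("ngôi làng ở gần núi", "<bahnar translation 1>"),
    ("trời hôm nay nắng", "<bahnar translation 2>"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for src, tgt in pairs:                       # one tiny epoch, batch size 1
    batch = tokenizer(src, text_target=tgt, return_tensors="pt")
    loss = model(**batch).loss               # standard seq2seq cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
print("final loss:", float(loss))
```

In practice the paper also relies on data augmentation tailored to the language pair; that step is omitted here.
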
9

Jones, Joshua P., Kurama Okubo, Tim Clements, and Marine A. Denolle. "SeisIO: A Fast, Efficient Geophysical Data Architecture for the Julia Language". Seismological Research Letters 91, no. 4 (April 29, 2020): 2368–77. http://dx.doi.org/10.1785/0220190295.

Abstract SeisIO for the Julia language is a new geophysical data framework that combines the intuitive syntax of a high-level language with performance comparable to FORTRAN or C. Benchmark comparisons against recent versions of popular programs for seismic data download and analysis demonstrate significant improvements in file read speed and orders-of-magnitude improvements in memory overhead. Because the Julia language natively supports parallel computing with an intuitive syntax, we benchmark test parallel download and processing of multiweek segments of contiguous data from two sets of 10 broadband seismic stations, and find that SeisIO outperforms two popular Python-based tools for data downloads. The current capabilities of SeisIO include file read support for several geophysical data formats, online data access using a variety of services, and optimized versions of several common data processing operations. Tutorial notebooks and extensive documentation are available to improve the user experience. As an accessible example of performant scientific computing for the next generation of researchers, SeisIO offers ease of use and rapid learning without sacrificing computational efficiency.
10

Shinde, Swapnil, Vishnu Suryawanshi, Varsha Jadhav, Nakul Sharma, and Mandar Diwakar. "Graph-Based Keyphrase Extraction for Software Traceability in Source Code and Documentation Mapping". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 30, 2023): 832–36. http://dx.doi.org/10.17762/ijritcc.v11i9.8973.

Natural Language Processing (NLP) forms the basis of several computational tasks. However, when applied to software systems, NLP yields several irrelevant features, and noise gets mixed in during feature extraction. As the scale of software systems increases, different metrics are needed to assess them. Diagrammatic and visual representation of SE project code forms an essential component of Source Code Analysis (SCA). These SE projects can be analyzed neither by traditional source code analysis methods nor by traditional diagrammatic representation. Hence, there is a need to modify the traditional approaches in light of changing environments, to reduce the learning gap for developers and traceability engineers. The traditional approaches fall short in addressing specific metrics in terms of document similarity and graph dependency approaches. In terms of source code analysis, the dependency graph can be used to find relevant key terms and keyphrases as they occur not just intra-document but also inter-document. In this work, a context-based similarity measure is proposed which can be employed to find traceability links between source code metrics and the API documents present in a package. A probabilistic graph-based keyphrase extraction approach is used for searching across the different project files.
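
The flavor of graph-based keyphrase ranking can be illustrated with a TextRank-style sketch; the paper's probabilistic model and traceability scoring are more elaborate, so treat this only as the underlying idea:

```python
# Illustrative TextRank-style sketch (not the authors' exact model):
# rank candidate terms by PageRank over a word co-occurrence graph.
import networkx as nx

def keyphrases(tokens, window=4, top_k=5):
    graph = nx.Graph()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1:i + window]:
            if w != v:
                graph.add_edge(w, v)      # co-occurrence within the window
    scores = nx.pagerank(graph)           # stationary importance of each term
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

tokens = ("parse the source tree then map each source identifier "
          "to the matching api documentation identifier").split()
print(keyphrases(tokens))
```
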
11

Gray, Bethany, and Douglas Biber. "Lexical frames in academic prose and conversation". Current issues in phraseology 18, no. 1 (May 13, 2013): 109–36. http://dx.doi.org/10.1075/ijcl.18.1.08gra.

While lexical bundles research identifies continuous sequences (e.g. the end of the, I don’t know if), researchers have also been interested in discontinuous sequences in which words form a ‘frame’ surrounding a variable slot (e.g. I don’t * to, it is * to). To date, most research has focused on a few intuitively-selected frames, or has begun with frequent continuous sequences and then analyzed those to identify associated frames. Few previous studies have attempted to directly identify the full set of discontinuous sequences in a corpus. In the present study, we work towards that goal, using a corpus-driven approach to identify the set of recurrent four-word continuous and discontinuous patterns in corpora of conversation and academic writing. This direct computational analysis of the corpora reveals a more complete set of frames than alternative approaches, resulting in the documentation of highly frequent frames that have not been identified in previous research.
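
The corpus-driven identification of frames can be illustrated with a toy sketch that counts four-word sequences whose third slot is left variable; in practice one would scan every slot position and a real corpus rather than this single sentence:

```python
# Toy sketch: count 4-word frames with a variable third slot
# (e.g. "it is * to"), in the spirit of the frames the authors identify.
from collections import Counter

def frames(tokens):
    counts = Counter()
    for i in range(len(tokens) - 3):
        w1, w2, _, w4 = tokens[i:i + 4]
        counts[(w1, w2, "*", w4)] += 1
    return counts

tokens = "it is possible to see that it is hard to say".split()
for frame, n in frames(tokens).most_common(3):
    print(" ".join(frame), n)   # "it is * to" occurs twice here
```
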
12

Więcławska, Edyta. "Discrete units as markers of English/Polish contrasts in company registration discourse". Linguodidactica 24 (2020): 309–27. http://dx.doi.org/10.15290/lingdid.2020.24.22.

The paper addresses the issue of the complexity of legal communication from an interlingual perspective. The analysis fits in the paradigmatic approach to contrastive studies, where the distribution of discrete units is presented in quantitative terms. The cross-linguistic, computational account of the distribution of selected discrete units across company registration documentation shows systemic distinctions. It discloses recurrent, symmetrical and asymmetrical patterns which result from language system-inherent distinctions and/or the operation of translation universals. The strong point of the research lies in addressing legal communication within the realm of secondary genres, which, for practical reasons, are underrepresented in jurilinguistic studies. The study is based on a custom-designed parallel corpus comprised of authentic materials.
13

Greener, Joe G., Joel Selvaraj, and Ben J. Ward. "BioStructures.jl: read, write and manipulate macromolecular structures in Julia". Bioinformatics 36, no. 14 (May 14, 2020): 4206–7. http://dx.doi.org/10.1093/bioinformatics/btaa502.

Abstract Summary Robust, flexible and fast software to read, write and manipulate macromolecular structures is a prerequisite for productively doing structural bioinformatics. We present BioStructures.jl, the first dedicated package in the Julia programming language for dealing with macromolecular structures and the Protein Data Bank. BioStructures.jl builds on the lessons learned with similar packages to provide a large feature set, a flexible object representation and high performance. Availability and implementation BioStructures.jl is freely available under the MIT license. Source code and documentation are available at https://github.com/BioJulia/BioStructures.jl. BioStructures.jl is compatible with Julia versions 0.6 and later and is system-independent. Contact j.greener@ucl.ac.uk
14

Kramer, Nicole E., Eric S. Davis, Craig D. Wenger, Erika M. Deoudes, Sarah M. Parker, Michael I. Love, and Douglas H. Phanstiel. "Plotgardener: cultivating precise multi-panel figures in R". Bioinformatics 38, no. 7 (February 4, 2022): 2042–45. http://dx.doi.org/10.1093/bioinformatics/btac057.

Abstract Motivation The R programming language is one of the most widely used programming languages for transforming raw genomic datasets into meaningful biological conclusions through analysis and visualization, which has been largely facilitated by infrastructure and tools developed by the Bioconductor project. However, existing plotting packages rely on relative positioning and sizing of plots, which is often sufficient for exploratory analysis but is poorly suited for the creation of publication-quality multi-panel images inherent to scientific manuscript preparation. Results We present plotgardener, a coordinate-based genomic data visualization package that offers a new paradigm for multi-plot figure generation in R. Plotgardener allows precise, programmatic control over the placement, esthetics and arrangements of plots while maximizing user experience through fast and memory-efficient data access, support for a wide variety of data and file types, and tight integration with the Bioconductor environment. Plotgardener also allows precise placement and sizing of ggplot2 plots, making it an invaluable tool for R users and data scientists from virtually any discipline. Availability and implementation Package: https://bioconductor.org/packages/plotgardener, Code: https://github.com/PhanstielLab/plotgardener, Documentation: https://phanstiellab.github.io/plotgardener/. Supplementary information Supplementary data are available at Bioinformatics online.
15

Taylor, R. Andrew, Aidan Gilson, Wade Schulz, Kevin Lopez, Patrick Young, Sameer Pandya, Andreas Coppi, David Chartash, David Fiellin, and Gail D’Onofrio. "Computational phenotypes for patients with opioid-related disorders presenting to the emergency department". PLOS ONE 18, no. 9 (September 15, 2023): e0291572. http://dx.doi.org/10.1371/journal.pone.0291572.

Objective We aimed to discover computationally derived phenotypes of opioid-related patient presentations to the ED via clinical notes and structured electronic health record (EHR) data. Methods This was a retrospective study of ED visits from 2013–2020 across ten sites within a regional healthcare network. We derived phenotypes from visits for patients ≥18 years of age with at least one prior or current documentation of an opioid-related diagnosis. Natural language processing was used to extract clinical entities from notes, which were combined with structured data within the EHR to create a set of features. We performed latent Dirichlet allocation to identify topics within these features. Groups of patient presentations with similar attributes were identified by cluster analysis. Results In total, 82,577 ED visits met inclusion criteria. Thirty topics were discovered, ranging from those related to substance use disorder, chronic conditions, mental health, and medical management. Clustering on these topics identified nine unique cohorts with one-year survival ranging from 84.2–96.8%, rates of one-year ED returns from 9–34%, rates of one-year opioid events from 10–17%, rates of medications for opioid use disorder from 17–43%, and a median Charlson comorbidity index of 2–8. Two cohorts of phenotypes were identified relating to chronic substance use disorder or acute overdose. Conclusions Our results indicate distinct phenotypic clusters with varying patient-oriented outcomes, which provide future targets for better allocation of resources and therapeutics. This highlights the heterogeneity of the overall population and the need to develop targeted interventions for each population.
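
Schematically, the topic-then-cluster pipeline looks like the following sklearn sketch; the feature construction from notes and EHR fields is elided, and the toy parameter values are illustrative rather than the study's (which used 30 topics and found nine cohorts):

```python
# Schematic sketch of the analysis pipeline: topic modelling with LDA,
# then clustering of patients in topic space. Data below are toy examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import KMeans

notes = ["opioid overdose naloxone given", "chronic pain oxycodone refill",
         "anxiety depression follow up", "withdrawal symptoms buprenorphine"]
X = CountVectorizer().fit_transform(notes)              # bag-of-words features
topics = LatentDirichletAllocation(n_components=3,      # study used 30 topics
                                   random_state=0).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10,                # study found 9 cohorts
                random_state=0).fit_predict(topics)
print(labels)
```
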
16

Tleugabulov, Daniyar T., Azamat T. Dukombaiev, and Tatyana V. Brynza. "Сохранение сырцовой архитектуры Тенгиз-Коргалжынской впадины с использованием 3D-технологий" [Preserving the mudbrick architecture of the Tengiz-Korgalzhyn Depression using 3D technologies]. Oriental Studies 15, no. 5 (December 26, 2022): 1094–109. http://dx.doi.org/10.22162/2619-0990-2022-63-5-1094-1109.

Introduction. The article describes a 3D documentation and visualization technique. Goals. The study seeks to preserve and reconstruct key forms and types of Kazakh memorial architecture with the aid of visual archeology tools. Materials and methods. The work started with determination of the morphological characteristics inherent to the objects under study, mudbrick mausoleums. In accordance with these, a three-dimensional visualization technique was selected. The photography scenarios were developed following the recommendations of the software developers. The paper provides detailed insights into all stages of creating three-dimensional models, including data collection, a feature description of the equipment used, pre-shooting computational analysis, the shooting proper, and data post-processing. Special attention is paid to the most important and crucial moment of the survey, the shooting, which was performed from different angles, each yielding a distinct set of photographs. It is essential to take a sufficient number of high-quality photographs from different angles. The number of photographs for each angle should be as high as possible, from 30–40 to several hundred or thousand. Results. The work notes that 6–8 sets were made for each mausoleum. Documentation of a single object (taking into account additional and spare photographs) includes a total of 500–600 photographs. Extensive efforts were made to process the obtained data, the latter work being implemented in a specific order to produce three-dimensional models and visualization patterns of the examined mausoleums. Preservation of mudbrick steppe monuments in digital format is an urgent need, because they are a vanishing type of late medieval memorial architecture dating back to ancient times. Digital 3D models of collapsing mudbrick mausoleums will always be invaluable to science as historical sources and part of the Kazakh national cultural heritage.
17

Azad, Sasha, Jennifer Wellnitz, Luis Garcia, and Chris Martens. "Anthology: A Social Simulation Framework". Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 18, no. 1 (October 11, 2022): 224–31. http://dx.doi.org/10.1609/aiide.v18i1.21967.

Social simulation research seeks to understand the dynamics of complex human behavior by simulating populations of individual decision-makers as multi-agent systems. However, prior work in games and entertainment fails to account for interactions between social behavior, geography, and relationships in a manner that allows researchers to easily reuse the frameworks and model social characters. We present Anthology, an extensible software framework for modeling human social systems, within the context of an ongoing research agenda to integrate AI techniques from social simulation games and computational social science so that researchers can model and reason about the complex dynamics of human social behavior. Anthology comprises a motive-based agent decision-making algorithm; a knowledge representation system for relationships; a flexible specification language for precondition-effect-style actions; a user interface to inspect and interact with the simulation as it runs in real time; and extensive user documentation and a reference manual. We describe the participatory research design process used for developing Anthology, the state of the current system, its limitations, and our future development directions.
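
A hypothetical miniature of motive-based, precondition-effect action selection, in the spirit of (but not taken from) Anthology; the action names, motive scale, and utility rule below are all our own illustrative assumptions:

```python
# Hypothetical sketch of motive-based action selection over
# precondition-effect actions (not Anthology's actual API).
actions = {
    "eat":  {"pre": {"has_food": True}, "effects": {"hunger": -5}},
    "work": {"pre": {"at_work": True},  "effects": {"hunger": +2, "money": +10}},
}

def choose(state, motives):
    """Pick the feasible action that leaves the urgent motives lowest."""
    def utility(action):
        spec = actions[action]
        if any(state.get(k) != v for k, v in spec["pre"].items()):
            return float("-inf")                 # precondition failed
        # Negative post-action urgency, summed over affected motives.
        return -sum(motives.get(k, 0) + delta
                    for k, delta in spec["effects"].items() if k in motives)
    return max(actions, key=utility)

state = {"has_food": True, "at_work": True}
motives = {"hunger": 6}                          # higher = more urgent
print(choose(state, motives))                    # -> "eat"
```
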
18

Nanthaamornphong, Aziz, Jeffrey Carver, Karla Morris, and Salvatore Filippone. "Extracting UML Class Diagrams from Object-Oriented Fortran: ForUML". Scientific Programming 2015 (2015): 1–15. http://dx.doi.org/10.1155/2015/421816.

Many scientists who implement computational science and engineering software have adopted the object-oriented (OO) Fortran paradigm. One of the challenges faced by OO Fortran developers is the inability to obtain high-level software design descriptions of existing applications. Knowledge of the overall software design is not only valuable in the absence of documentation; it can also assist developers with accomplishing different tasks during the software development process, especially maintenance and refactoring. The software engineering community commonly uses reverse engineering techniques to deal with this challenge. A number of reverse-engineering-based tools have been proposed, but few of them can be applied to OO Fortran applications. In this paper, we propose a software tool to extract unified modeling language (UML) class diagrams from Fortran code. The UML class diagram facilitates the developers' ability to examine the entities and their relationships in the software system. The extracted diagrams enhance software maintenance and evolution. The experiments carried out to evaluate the proposed tool demonstrate its accuracy and a few of its limitations.
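
The core reverse-engineering idea can be conveyed with a toy sketch that scans OO Fortran for derived types and `extends` clauses; ForUML itself handles far more of the language, so this is only the flavor of the technique:

```python
# Toy sketch of the reverse-engineering idea (not ForUML itself): recover
# class names and inheritance from OO Fortran derived-type declarations.
import re

source = """
type :: shape
end type shape
type, extends(shape) :: circle
end type circle
"""

pattern = re.compile(r"type(?:,\s*extends\((\w+)\))?\s*::\s*(\w+)", re.IGNORECASE)
for parent, name in pattern.findall(source):
    # Emit a PlantUML-style relation for each recovered class.
    print(f"class {name}" + (f" --|> {parent}" if parent else ""))
```
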
19

Astagneau, Paul C., Guillaume Thirel, Olivier Delaigue, Joseph H. A. Guillaume, Juraj Parajka, Claudia C. Brauer, Alberto Viglione, Wouter Buytaert, and Keith J. Beven. "Technical note: Hydrology modelling R packages – a unified analysis of models and practicalities from a user perspective". Hydrology and Earth System Sciences 25, no. 7 (July 8, 2021): 3937–73. http://dx.doi.org/10.5194/hess-25-3937-2021.

Abstract. Following the rise of R as a scientific programming language, the increasing requirement for more transferable research and the growth of data availability in hydrology, R packages containing hydrological models are becoming more and more available as an open-source resource to hydrologists. Corresponding to the core of the hydrological studies workflow, their value is increasingly meaningful regarding the reliability of methods and results. Despite package and model distinctiveness, no study has ever provided a comparison of R packages for conceptual rainfall–runoff modelling from a user perspective by contrasting their philosophy, model characteristics and ease of use. We have selected eight packages based on our ability to consistently run their models on simple hydrology modelling examples. We have uniformly analysed the exact structure of seven of the hydrological models integrated into these R packages in terms of conceptual storages and fluxes, spatial discretisation, data requirements and output provided. The analysis showed that very different modelling choices are associated with these packages, which emphasises various hydrological concepts. These specificities are not always sufficiently well explained by the package documentation. Therefore a synthesis of the package functionalities was performed from a user perspective. This synthesis helps to inform the selection of which packages could/should be used depending on the problem at hand. In this regard, the technical features, documentation, R implementations and computational times were investigated. Moreover, by providing a framework for package comparison, this study is a step forward towards supporting more transferable and reusable methods and results for hydrological modelling in R.
20

Khramtsov, Andrey I., Ruslan A. Nasyrov, and Galina F. Khramtsova. "Application of digital technology in the work of a pathologist: guidelines for learning how to use speech recognition systems". Pediatrician (St. Petersburg) 12, no. 3 (October 13, 2021): 63–68. http://dx.doi.org/10.17816/ped12363-68.

Natural language processing is one of the branches of computational linguistics. It is a branch of computer science that includes algorithmic processing of speech and natural language scripts. The algorithms facilitate the development of human-to-machine translation and automatic speech recognition systems (ASRS). ASRS use is twofold: accurately converting an operator's speech into coherent and meaningful text, and using natural language for interaction with a computer. Currently, these systems are widely used in medical practice, including anatomic pathology. Successful ASRS implementation pivots on the creation of standardized templated descriptions for organic inclusion in diagnosis dictation, and likewise on physician training in using such systems in practice. In the past decade, there have been several attempts by physicians worldwide to standardize surgical pathology reports and create templates. After studying the domestic and foreign literature, we created a list of the essential elements that must be included in the templates for the macro- and microscopic descriptions comprising the final diagnosis. These templates will help in decision-making and accurate diagnosis, as they contain all the imperative elements in order of importance. This approach will significantly reduce the need for re-examination of both fixed macroscopic material and the preparation of additional histological sections. The templates built into ASRS reduce the time spent on documentation and significantly reduce the workload of pathologists. For the successful use of ASRS, we have developed an educational course, Digital Speech Recognition in an Anatomical Pathology Practice, for the postgraduate education of both domestic and foreign doctors. A brief description of the course is presented in this article, and the course itself is available on the Internet.
21

Husáková, Martina, and Vladimír Bureš. "Formal Ontologies in Information Systems Development: A Systematic Review". Information 11, no. 2 (January 27, 2020): 66. http://dx.doi.org/10.3390/info11020066.

Computational ontologies are machine-processable structures which represent particular domains of interest. They integrate knowledge which can be used by humans or machines for decision making and problem solving. The main aim of this systematic review is to investigate the role of formal ontologies in information systems development, i.e., how these graph-based structures can be beneficial during the analysis and design of information systems. Specific online databases were used to identify studies focused on the interconnections between ontologies and systems engineering. One hundred eighty-seven studies were found during the first phase of the investigation. Twenty-seven studies were examined after the elimination of duplicate and irrelevant documents. Mind mapping was substantially helpful in organising the basic ideas and in identifying five thematic groups that show the main roles of formal ontologies in information systems development. Formal ontologies are mainly used in the interoperability of information systems, human resource management, domain knowledge representation, the involvement of semantics in unified modelling language (UML)-based modelling, and the management of programming code and documentation. We explain the main ideas in the reviewed studies and suggest possible extensions to this research.
22

Oliveira, Lucas Lopes, Xiaorui Jiang, Aryalakshmi Nellippillipathil Babu, Poonam Karajagi, and Alireza Daneshkhah. "Effective Natural Language Processing Algorithms for Early Alerts of Gout Flares from Chief Complaints". Forecasting 6, no. 1 (March 10, 2024): 224–38. http://dx.doi.org/10.3390/forecast6010013.

Early identification of acute gout is crucial, enabling healthcare professionals to implement targeted interventions for rapid pain relief and preventing disease progression, ensuring improved long-term joint function. In this study, we comprehensively explored the potential early detection of gout flares (GFs) based on nurses' chief complaint notes in the Emergency Department (ED). Addressing the challenge of identifying GFs prospectively during an ED visit, where documentation is typically minimal, our research focused on employing alternative Natural Language Processing (NLP) techniques to enhance detection accuracy. We investigated GF detection algorithms using both sparse representations from traditional NLP methods and dense encodings from medical domain-specific Large Language Models (LLMs), distinguishing between generative and discriminative models. Three methods were used to alleviate the issue of severe data imbalance: oversampling, class weights, and focal loss. Extensive empirical studies were performed on the Gout Emergency Department Chief Complaint Corpora. Sparse text representations like tf-idf proved to produce strong performance, achieving F1 scores higher than 0.75. The best deep learning models were RoBERTa-large-PM-M3-Voc and BioGPT, which had the best F1 scores for each dataset: 0.8 on the 2019 dataset and 0.85 on the 2020 dataset, respectively. We concluded that although discriminative LLMs performed better for this classification task than generative LLMs, a combination of using generative models as feature extractors and employing a support vector machine for classification yielded promising results comparable to those obtained with discriminative models.
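
The strongest sparse baseline reported above (tf-idf features with imbalance handling) is easy to reproduce in outline; the snippet below is a schematic sketch with toy data and a class-weighted linear classifier, not the study's exact configuration:

```python
# Schematic sketch: tf-idf chief-complaint features plus a class-weighted
# linear classifier to counter label imbalance (toy data, not the corpus).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

complaints = ["left toe pain swelling redness", "chest pain shortness of breath",
              "right knee pain acute swelling", "headache and fever"]
is_gout_flare = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(class_weight="balanced"))
clf.fit(complaints, is_gout_flare)
print(clf.predict(["toe swelling and pain"]))
```
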
23

Friesner, Isabel D., Somya Mohindra, Lauren Boreta, William Cheng Chen, Steve E. Braunstein, Michael W. Rabow, and Julian C. Hong. "Natural language processing identification of documented mental health symptoms associated with risk of mental health disorders in patients with cancer." Journal of Clinical Oncology 41, no. 16_suppl (June 1, 2023): 1561. http://dx.doi.org/10.1200/jco.2023.41.16_suppl.1561.

Background: Delayed diagnosis and care of mental health disorders (MHD) is a significant challenge in the care of patients with cancer. The objective of this study was to use natural language processing (NLP) to identify words related to mental health documented in clinical notes around the time of cancer diagnosis and assess their ability to predict future, new MHD. Methods: This single-institution cohort study consisted of patients diagnosed with cancer between January 2012 and November 2022. Cancer and MHD were identified based on ICD-10 codes obtained from deidentified electronic health record data. MHD included psychotic disorders (F20-29), mood disorders (F30-39), and anxiety disorders (F40-48). The clinical Text Analysis Knowledge Extraction System was applied to deidentified clinical notes, and symptoms mapped to SNOMED concepts relevant to mental health were identified. These mental health symptoms were aggregated in the 15 days preceding and 15 days following a first cancer diagnosis and analyzed across MHD status. Patient characteristics including sex, age, race, cancer, and insurance were also analyzed. Results: This cohort consisted of 64,010 patients with cancer who had no documented MHD prior to cancer diagnosis, with a majority being 40-64 years old (45.8%) or 65+ (43.7%) and identifying as male (53.0%) or white (60.2%). Most patients had prostate (12.5%), hematologic (10.8%), or breast (10.3%) cancer and private insurance (46.2%). 9,825 (15.3%) patients developed a newly documented MHD, at a median time of 139 days (IQR: 40-466) from cancer diagnosis. The five most commonly documented mental health symptoms across all patients were normal mood (23.3%), mental state finding (17.9%), worried (10.2%), feeling content (9.9%), and cognitive function finding (6.6%). Those who had a future MHD had higher documented rates of all mental health symptoms. A multivariate Cox proportional hazards model identified age 18-39, female sex, white race, and Medicaid or Medicare insurance as independent factors associated with an increased risk of a future, new MHD. Prostate cancer was associated with a lower risk of a future MHD. Panic (OR 2.1 [95% CI 1.8-2.4]), feeling nervous (1.9 [1.5-2.4]), feeling guilt (1.9 [1.4-2.5]), mild anxiety (1.8 [1.4-2.4]), and feeling frustrated (1.4 [1.2-1.6]) were identified as the symptoms most strongly associated with an increased risk of a future MHD. Conclusions: NLP-extracted mental health symptoms documented in clinical notes correlated with an increased risk of documented MHD. Computational approaches may be tools for improving the timely diagnosis of MHD and referral to specialty services. Further work is needed to investigate potential disparities in the documentation and management of care for patients with cancer who develop MHD, including delays between documentation and eventual diagnosis.
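
The survival-modelling step can be sketched with the lifelines package (our choice for illustration; the study's covariates and cohort are heavily simplified, and the toy data below are invented):

```python
# Schematic sketch of the Cox proportional hazards step with toy data;
# in the study, symptom flags come from NLP over clinical notes.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "days_to_mhd": [139, 400, 90, 250, 300, 60],  # time to new MHD or censoring
    "event":       [1,   0,   1,  0,   1,   1],   # 1 = documented MHD occurred
    "panic":       [1,   0,   1,  1,   0,   1],   # NLP-extracted symptom flag
    "age_18_39":   [1,   0,   0,  1,   1,   0],
})
cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_mhd", event_col="event")
cph.print_summary()   # hazard ratios per covariate
```
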
24

Righelli, Dario, and Claudia Angelini. "Easyreporting simplifies the implementation of Reproducible Research layers in R software". PLOS ONE 16, no. 5 (May 10, 2021): e0244122. http://dx.doi.org/10.1371/journal.pone.0244122.

In recent years, "irreproducibility" has become a general problem in omics data analysis, due to the use of sophisticated and poorly described computational procedures. To avoid misleading results, it is necessary to inspect and reproduce the entire data analysis as a unified product. Reproducible Research (RR) provides general guidelines for public access to the analytic data and related analysis code, combined with natural language documentation, allowing third parties to reproduce the findings. We developed easyreporting, a novel R/Bioconductor package, to facilitate the implementation of an RR layer inside reports/tools. We describe the main functionalities and illustrate the organization of an analysis report using a typical case study concerning the analysis of RNA-seq data. Then, we show how to use easyreporting in other projects to trace R functions automatically. This latter feature helps developers to implement procedures that automatically keep track of the analysis steps. Easyreporting can be useful in supporting the reproducibility of any data analysis project and shows great advantages for the implementation of R packages and GUIs. It turns out to be very helpful in bioinformatics, where the complexity of the analyses makes it extremely difficult to trace all the steps and parameters used in a study.
25

Pitchanathan, Arjun, Christian Ulmann, Michel Weber, Torsten Hoefler, and Tobias Grosser. "FPL: fast Presburger arithmetic through transprecision". Proceedings of the ACM on Programming Languages 5, OOPSLA (October 20, 2021): 1–26. http://dx.doi.org/10.1145/3485539.

Presburger arithmetic provides the mathematical core for the polyhedral compilation techniques that drive analytical cache models, loop optimization for ML and HPC, formal verification, and even hardware design. Polyhedral compilation is widely regarded as being slow due to the potentially high computational cost of the underlying Presburger libraries. Researchers typically use these libraries as powerful black-box tools, but the perceived internal complexity of these libraries, caused by the use of C as the implementation language and a focus on end-user-facing documentation, holds back broader performance-optimization efforts. With FPL, we introduce a new library for Presburger arithmetic built from the ground up in modern C++. We carefully document its internal algorithmic foundations, use lightweight C++ data structures to minimize memory management costs, and deploy transprecision computing across the entire library to effectively exploit machine integers and vector instructions. On a newly-developed comprehensive benchmark suite for Presburger arithmetic, we show a 5.4x speedup in total runtime over the state-of-the-art library isl in its default configuration and 3.6x over a variant of isl optimized with element-wise transprecision computing. We expect that the availability of a well-documented and fast Presburger library will accelerate the adoption of polyhedral compilation techniques in production compilers.
26

Slater, Louise J., Guillaume Thirel, Shaun Harrigan, Olivier Delaigue, Alexander Hurley, Abdou Khouakhi, Ilaria Prosdocimi, Claudia Vitolo, and Katie Smith. "Using R in hydrology: a review of recent developments and future directions". Hydrology and Earth System Sciences 23, no. 7 (July 12, 2019): 2939–63. http://dx.doi.org/10.5194/hess-23-2939-2019.

Abstract. The open-source programming language R has gained a central place in the hydrological sciences over the last decade, driven by the availability of diverse hydro-meteorological data archives and the development of open-source computational tools. The growth of R's usage in hydrology is reflected in the number of newly published hydrological packages, the strengthening of online user communities, and the popularity of training courses and events. In this paper, we explore the benefits and advantages of R's usage in hydrology, such as the democratization of data science and numerical literacy, the enhancement of reproducible research and open science, the access to statistical tools, the ease of connecting R to and from other languages, and the support provided by a growing community. This paper provides an overview of a typical hydrological workflow based on reproducible principles and packages for retrieval of hydro-meteorological data, spatial analysis, hydrological modelling, statistics, and the design of static and dynamic visualizations and documents. We discuss some of the challenges that arise when using R in hydrology and useful tools to overcome them, including the use of hydrological libraries, documentation, and vignettes (long-form guides that illustrate how to use packages); the role of integrated development environments (IDEs); and the challenges of big data and parallel computing in hydrology. Lastly, this paper provides a roadmap for R's future within hydrology, with R packages as a driver of progress in the hydrological sciences, application programming interfaces (APIs) providing new avenues for data acquisition and provision, enhanced teaching of hydrology in R, and the continued growth of the community via short courses and events.
27

Cattani, G., M. C. Coperchio, F. L. Navarria, and T. Rovelli. "Diffusion Phenomena and Other WWW Applications for An Introductory Physics Course". International Journal of Modern Physics C 08, no. 06 (December 1997): 1177–92. http://dx.doi.org/10.1142/s0129183197001053.

The World Wide Web originated within the high-energy physics community from the need to exchange documentation in an efficient way. It can easily be used to produce and maintain didactic material for teaching physics. The material can be made accessible via the network in hypertext form, comprising text, pictures, animations, and audio files. For didactic applications in physics, the capability of an interactive link, beyond the use of simple electronic forms, is necessary. This was not foreseen in the original WWW protocol, and it has been developed in an application presented here to simulate a series of measurements of a diffusion process in solutions. The recent introduction of the Java language offers a natural way to create new, powerful interactive Internet applications. We are currently developing and testing Java-powered didactic applications.
28

Reyes, Brandon C., Irene Otero-Muras, Michael T. Shuen, Alexandre M. Tartakovsky, and Vladislav A. Petyuk. "CRNT4SBML: a Python package for the detection of bistability in biochemical reaction networks". Bioinformatics 36, no. 12 (May 2, 2020): 3922–24. http://dx.doi.org/10.1093/bioinformatics/btaa241.

Abstract Motivation Signaling pathways capable of switching between two states are ubiquitous within living organisms. They provide the cells with the means to produce reversible or irreversible decisions. Switch-like behavior of biological systems is realized through biochemical reaction networks capable of having two or more distinct steady states, which are dependent on initial conditions. Investigation of whether a certain signaling pathway can confer bistability involves a substantial amount of hypothesis testing. The cost of direct experimental testing can be prohibitive. Therefore, constraining the hypothesis space is highly beneficial. One such methodology is based on chemical reaction network theory (CRNT), which uses computational techniques to rule out pathways that are not capable of bistability regardless of kinetic constant values and molecule concentrations. Although useful, these methods are complicated from both pure and computational mathematics perspectives. Thus, their adoption is very limited amongst biologists. Results We brought CRNT approaches closer to experimental biologists by automating all the necessary steps in CRNT4SBML. The input is based on systems biology markup language (SBML) format, which is the community standard for biological pathway communication. The tool parses SBML and derives C-graph representations of the biological pathway with mass action kinetics. Next steps involve an efficient search for potential saddle-node bifurcation points using an optimization technique. This type of bifurcation is important as it has the potential of acting as a switching point between two steady states. Finally, if any bifurcation points are present, continuation analysis with respect to a user-defined parameter extends the steady state branches and generates a bifurcation diagram. Presence of an S-shaped bifurcation diagram indicates that the pathway acts as a bistable switch for the given optimization parameters. Availability and implementation CRNT4SBML is available via the Python Package Index. The documentation can be found at https://crnt4sbml.readthedocs.io. CRNT4SBML is licensed under the Apache Software License 2.0.
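
A usage sketch reconstructed from our reading of the package's quickstart at https://crnt4sbml.readthedocs.io; the method names, the species/parameter identifiers, and the input file are assumptions to be verified against the current documentation:

```python
# Hedged usage sketch of CRNT4SBML (names follow our reading of the docs).
import crnt4sbml

network = crnt4sbml.CRNT("pathway.xml")   # SBML model, mass action kinetics
network.basic_report()                    # C-graph / deficiency summary

approach = network.get_mass_conservation_approach()
bounds, conc_bounds = approach.get_optimization_bounds()
# Search for candidate saddle-node bifurcation points.
params, obj_vals = approach.run_optimization(bounds=bounds,
                                             concentration_bounds=conc_bounds)
# Continuation around the candidates; an S-shaped branch indicates bistability.
approach.run_greedy_continuity_analysis(
    species="s1", parameters=params,
    auto_parameters={"PrincipalContinuationParameter": "C1"})
approach.generate_report()
```
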
29

JR, Vicno Triwira Dhika, and Teuku Thifly Fakhrur Rizal. "Various Word Shorts in the Podcast of Political Figures Sandiaga Uno: Corpus-Based Analysis". Educaniora: Journal of Education and Humanities 1, no. 2 (July 1, 2023): 167–77. http://dx.doi.org/10.59687/educaniora.v1i2.50.

This study aims to examine the use of abbreviation in political figure Sandiaga Uno's podcast. The research also takes up the use of corpus linguistics, which is very effective in linguistic research, especially on abbreviation and the shortening of words. This is descriptive qualitative research, and its main approach is the corpus-linguistic method. Corpus linguistics can be understood as research that relies on linguistic data analysis using digital technology. The data in this study were collected using documentation techniques and the KORTARA (Korpus Nusantara) application, drawing on the Wawancara Tokoh Indonesia corpus. The research data were likewise analyzed using the KORTARA application. Analysis with the KORTARA application can be described as computational linguistics, because it makes use of corpus-linguistics applications. The working steps of this computational linguistic analysis run from identifying data, through classifying and interpreting data, to drawing conclusions. The findings reveal that the abbreviations used in the video podcast of political figure Sandiaga Uno take the form of abbreviations proper, acronyms, and fragments, with fragments the most dominant type. The data analysis also revealed no instances of letter symbols or contractions, which is attributable to the theme of the video, which focused not on economic issues but on politics. The abbreviation data further suggest that abbreviated or shortened language will continue to be used by political figures in order to build better and more familiar communication with their interlocutors.
30

ANSARI, SALIM G., PAOLO GIOMMI, and ALBERTO MICOL. "ESIS ON THE WORLD WIDE WEB". International Journal of Modern Physics C 05, no. 05 (October 1994): 805–9. http://dx.doi.org/10.1142/s0129183194000921.

On 3 November 1993, ESIS announced its homepage on the World Wide Web (WWW) to the user community. Ever since then, ESIS has steadily increased its Web support to the astronomical community to include a bibliographic service, the ESIS catalogue documentation, and the ESIS Data Browser. More functionality will be added in the near future. All these services share a common ESIS structure that is used by other ESIS user paradigms such as the ESIS Graphical User Interface (Giommi and Ansari, 1993) and the ESIS Command Line Interface. Following a forms-based paradigm, each ESIS Web application interfaces to the hypertext transfer protocol (http), translating queries from/to the hypertext markup language (html) format understood by the NCSA Mosaic interface. In this paper, we discuss the ESIS system and show how each ESIS service works on the World Wide Web client.
31

Panteleev, E. R., and A. A. Mukuchyan. "Logical model of stepwise contextual help for CAD user". Vestnik IGEU, no. 3 (June 30, 2023): 68–78. http://dx.doi.org/10.17588/2072-2672.2023.3.068-078.

Automatic stepwise contextual help for users of CAD systems reduces the time needed to solve an application task, since it saves the user from searching the system documentation for the right prompt. Petri nets (PN) can be used to bind the available actions of the user to the state of the application data (the context). Applying the Petri net inversion method, which uses limited enumeration to construct chains of recommended actions, is preferable to using the standard reachability analysis procedure based on exhaustive enumeration. However, the absence in the known implementations of an explicit separation between the axioms of inversion (knowledge) and the mechanism of their processing (inference) deprives the stepwise contextual help system of the flexibility needed when the axioms change to accommodate the assumptions associated with a particular model. Thus, the aim of this research is to provide the necessary flexibility of the contextual help system by separating the knowledge representation model from the inference engine. A colored PN is used as the model of user action scenarios. The inversion axioms are implemented in the PROLOG language, and the standard PROLOG inference engine is used to construct the chains of recommended actions. The authors propose an axiomatic model of PN inversion and a method for constructing stepwise contextual help with the standard inference engine of the PROLOG language. The method differs in its explicit separation of knowledge (the inversion axioms) from the inference engine (stepwise recommendations), which reduces the computational cost of adapting the contextual help system when the inversion axioms change. The proposed method reduces the time spent on adapting the contextual help system, since the scope of the changes is limited to the declarations of the inversion axioms. The reliability of the results is confirmed by the use of the proposed contextual help method in CSoft's "Model and Archive" CAD system. The results obtained allow contextual help services to be created for existing applications with minimal changes to their code base.
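
The chain-construction idea, building a recommended action sequence backwards from a goal over precondition/effect actions, can be conveyed in Python; this illustrates the general technique with hypothetical CAD actions, not the authors' PROLOG axioms:

```python
# Illustrative sketch (not the authors' axioms): depth-limited backward
# chaining from the goal to the current state over precondition/effect actions.
def plan(actions, state, goal, max_depth=5):
    def search(needed, depth):
        if needed <= state:                        # everything needed already holds
            return []
        if depth == 0:
            return None
        for name, (pre, add) in actions.items():
            if add & needed:                       # action produces something needed
                rest = search((needed - add) | pre, depth - 1)
                if rest is not None:
                    return rest + [name]
        return None
    return search(goal, max_depth)

actions = {  # hypothetical CAD actions: (preconditions, effects)
    "place_part":  (frozenset({"sketch_done"}), frozenset({"part_placed"})),
    "draw_sketch": (frozenset(),                frozenset({"sketch_done"})),
}
print(plan(actions, state=frozenset(), goal=frozenset({"part_placed"})))
# -> ['draw_sketch', 'place_part']
```
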
32

Wellmann, J. Florian, Sam T. Thiele, Mark D. Lindsay, and Mark W. Jessell. "pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling". Geoscientific Model Development 9, no. 3 (March 10, 2016): 1019–35. http://dx.doi.org/10.5194/gmd-9-1019-2016.

Abstract. We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilize the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.
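
A hedged usage sketch following the pynoddy tutorials; the class and method names are our recollection of the documented API, and the history file name is a placeholder:

```python
# Hedged sketch of a scripted kinematic experiment with pynoddy
# (verify names against the repository documentation and tutorials).
import pynoddy
import pynoddy.history
import pynoddy.output

history = pynoddy.history.NoddyHistory("two_faults.his")  # kinematic event history
history.events[2].properties["Slip"] = 150.0              # perturb one fault event
history.write_history("two_faults_mod.his")

pynoddy.compute_model("two_faults_mod.his", "noddy_out")  # run the Noddy kernel
output = pynoddy.output.NoddyOutput("noddy_out")
output.plot_section(direction="y")                        # 2-D section through the model
```
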
33

Wellmann, J. F., S. T. Thiele, M. D. Lindsay, and M. W. Jessell. "pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling". Geoscientific Model Development Discussions 8, no. 11 (November 13, 2015): 10011–51. http://dx.doi.org/10.5194/gmdd-8-10011-2015.

Abstract
Abstract. We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilise the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.
34

Muravev, A. V., A. Yu. Bundel, D. B. Kiktev, and A. V. Smirnov. "Verification of radar precipitation nowcasting of significant areas using the generalized Pareto distribution. Part 1: Elements of theory and methods for estimating parameters". Hydrometeorological Research and Forecasting 3 (September 28, 2022): 6–41. http://dx.doi.org/10.37162/2618-9631-2022-3-6-41.

Abstract
The assessments of nowcasting of large precipitation areas accumulated over the last few years at the Hydrometeorological Research Center of the Russian Federation are presented in two parts, the first complemented by a discussion of methodological problems and the second by application problems. The division is largely due to the sharp distinction between the theoretical modeling of extremes, with its relatively free choice of assumptions, and the statistical analysis of distribution "tails" in rapidly shrinking samples. The contrast between the parts is sharpened by the responsibility we attach to statistical inference about extreme and, as a rule, dangerous events. The first part describes two classical models of extreme value theory: one for independent one-dimensional random variables ("block maxima") and one for threshold exceedances in stationary time series ("peaks over threshold"). The article explores the problems that arise when the conditions of the theoretical results are violated and briefly reviews methods of addressing such problems when extremes are modeled on real data, including meteorological data. Special attention is given to distributions with "heavy" tails. Methods and formulas for estimating the important characteristics, including the parameters of the limiting distributions, are discussed; they are drawn from the references in the documentation of the computational mathematical packages of the R language repository. Keywords: precipitation nowcasting, extreme value theory, statistical modeling of extremes, heavy distribution tails, mathematical packages for fitting extreme value distributions
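For orientation, the two classical limit families the abstract refers to can be written in standard notation (textbook results of extreme value theory, not formulas quoted from the paper): the generalized extreme value (GEV) distribution for block maxima,

G(x) = \exp\left\{-\left[1 + \xi\,\frac{x-\mu}{\sigma}\right]^{-1/\xi}\right\}, \qquad 1 + \xi\,\frac{x-\mu}{\sigma} > 0,

and the generalized Pareto distribution (GPD) for exceedances y over a high threshold,

H(y) = 1 - \left(1 + \frac{\xi\,y}{\tilde{\sigma}}\right)^{-1/\xi}, \qquad y > 0,\; 1 + \xi\,y/\tilde{\sigma} > 0.

A shape parameter \xi > 0 corresponds to the "heavy" (Pareto-type) tails that the article emphasizes.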
35

Abner, Natasha, Grégoire Clarté, Carlo Geraci, Robin J. Ryder, Justine Mertz, Anah Salgat, and Shi Yu. "Computational phylogenetics reveal histories of sign languages". Science 383, no. 6682 (February 2, 2024): 519–23. http://dx.doi.org/10.1126/science.add7766.

Abstract
Sign languages are naturally occurring languages. As such, their emergence and spread reflect the histories of their communities. However, limitations in historical recordkeeping and linguistic documentation have hindered the diachronic analysis of sign languages. In this work, we used computational phylogenetic methods to study family structure among 19 sign languages from deaf communities worldwide. We used phonologically coded lexical data from contemporary languages to infer relatedness and suggest that these methods can help study regular form changes in sign languages. The inferred trees are consistent in key respects with known historical information but challenge certain assumed groupings and surpass analyses made available by traditional methods. Moreover, the phylogenetic inferences are not reducible to geographic distribution but do affirm the importance of geopolitical forces in the histories of human languages.
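The study infers trees with Bayesian phylogenetic methods; as a toy illustration of the kind of input such methods consume, the sketch below turns binary phonological codings into pairwise normalized Hamming distances. The codings and their values are invented for the example (only the language labels are real).

# Toy illustration only: the study uses Bayesian phylogenetic inference;
# here we merely build a distance matrix from binary phonological codings,
# the kind of lexical input such methods consume.
CODINGS = {  # hypothetical 6-bit phonological codings of one concept
    "ASL": [1, 0, 1, 0, 0, 0],
    "BSL": [0, 1, 0, 0, 1, 1],
    "LSF": [1, 0, 1, 1, 0, 0],
}

def hamming(a, b):
    """Fraction of coded features on which two sign forms disagree."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

langs = sorted(CODINGS)
for i, a in enumerate(langs):
    for b in langs[i + 1:]:
        print(a, b, round(hamming(CODINGS[a], CODINGS[b]), 3))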
36

Peterson, Kelly S., Julia Lewis, Olga V. Patterson, Alec B. Chapman, Daniel W. Denhalter, Patricia A. Lye, Vanessa W. Stevens et al. "Automated Travel History Extraction From Clinical Notes for Informing the Detection of Emergent Infectious Disease Events: Algorithm Development and Validation". JMIR Public Health and Surveillance 7, no. 3 (March 24, 2021): e26719. http://dx.doi.org/10.2196/26719.

Abstract
Background: Patient travel history can be crucial in evaluating evolving infectious disease events. Such information can be challenging to acquire in electronic health records, as it is often available only in unstructured text. Objective: This study aims to assess the feasibility of annotating and automatically extracting travel history mentions from unstructured clinical documents in the Department of Veterans Affairs across disparate health care facilities and among millions of patients. Information about travel exposure augments existing surveillance applications for increased preparedness in responding quickly to public health threats. Methods: Clinical documents related to arboviral disease were annotated following selection using a semiautomated bootstrapping process. Using annotated instances as training data, models were developed to extract from unstructured clinical text any mention of affirmed travel locations outside of the continental United States. Automated text processing models were evaluated, involving machine learning and neural language models for extraction accuracy. Results: Among 4584 annotated instances, 2659 (58%) contained an affirmed mention of travel history, while 347 (7.6%) were negated. Interannotator agreement resulted in a document-level Cohen kappa of 0.776. Automated text processing accuracy (F1 85.6, 95% CI 82.5-87.9) and computational burden were acceptable such that the system can provide a rapid screen for public health events. Conclusions: Automated extraction of patient travel history from clinical documents is feasible for enhanced passive surveillance public health systems. Without such a system, it would usually be necessary to manually review charts to identify recent travel or lack of travel, use an electronic health record that enforces travel history documentation, or ignore this potential source of information altogether. The development of this tool was initially motivated by emergent arboviral diseases. More recently, this system was used in the early phases of response to COVID-19 in the United States, although its utility was limited to a relatively brief window due to the rapid domestic spread of the virus. Such systems may aid future efforts to prevent and contain the spread of infectious diseases.
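For orientation, the two reported measures have their standard definitions (not study-specific):

\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad F_1 = \frac{2PR}{P + R},

where p_o is the observed and p_e the chance agreement between annotators, and P and R are the extractor's precision and recall.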
37

Wijaya, Yuliandre. "蕊希《只能陪你走一程》小说修辞手法分析 GAYA BAHASA DALAM NOVEL I WILL BE HERE WITH YOU KARYA RUI XI". Journal of Language, Literature, and Teaching 4, no. 2 (November 20, 2022): 145–61. http://dx.doi.org/10.35529/jllte.v4i2.145-161.

Abstract
This research aims to explain the types of figures of speech used in the novel I Will Be Here with You and the functions they serve for the author. It uses a qualitative descriptive method; the data source is the novel itself, and the data were collected through documentation. The study found 12 types of figures of speech in use: 比喻 (bǐyù) makes abstract things seem concrete and visualizes general things; 比拟 (bǐnǐ) gives a clear impression and helps readers understand the feelings conveyed; 夸张 (kuāzhāng) evokes strong resonance and imagination and accentuates elements and characteristics in the novel; 婉曲 (wǎnqū) makes what is written easier to accept; 对偶 (duì'ǒu) gives a strong sense of rhythm and focused meaning and helps readers draw conclusions effectively; 排比 (páibǐ) increases the momentum of the language and the effect of expression and emphasizes meanings; 层递 (céngdì) deepens understanding and impression; 顶真 (dǐngzhēn) gives fluent expression and a fresh or unique writing style; 对比 (duìbǐ) accentuates thoughts; 反复 (fǎnfù) emphasizes main thoughts in an orderly way with a strong sense of rhythm; 设问 (shèwèn) captures attention and guides the reader's thinking; and 反问 (fǎnwèn) evokes feelings. Keywords: Novel, Figures of Speech
38

Binti Muchsini, Binti Muchsini, Siswandari Siswandari, Gunarhadi Gunarhadi, and Wiranto Wiranto. "Exploring college students' computational thinking in accounting spreadsheets design activities". World Journal on Educational Technology: Current Issues 14, no. 6 (November 28, 2022): 1752–64. http://dx.doi.org/10.18844/wjet.v14i6.7715.

Abstract
This study aims to investigate the extent to which computational thinking can be developed through constructionism-based accounting spreadsheet activities. The study used a mixed-method design combining a participatory qualitative approach with a quantitative descriptive approach. Data were collected through documentation (college students' artefacts) and classroom observations. The results showed that constructionism-based accounting spreadsheet design can build and facilitate the development of computational thinking. The students' emotional and social engagement while executing a design plan fosters curiosity and high enthusiasm for completing the design together. This engagement can reduce the cognitive load students feel in understanding programming languages when using Visual Basic for Applications in Excel. The study offers suggestions to learning practitioners on improving students' abilities so that they can compete in the digital era, and it can serve as a basis for further research that empirically investigates the impact of developing computational thinking. Keywords: Computational thinking, cognitive load, emotional engagement, accounting education
39

Loncar-Turukalo, Tatjana, Eftim Zdravevski, José Machado da Silva, Ioanna Chouvarda, and Vladimir Trajkovik. "Literature on Wearable Technology for Connected Health: Scoping Review of Research Trends, Advances, and Barriers". Journal of Medical Internet Research 21, no. 9 (September 5, 2019): e14017. http://dx.doi.org/10.2196/14017.

Abstract
Background: Wearable sensing and information and communication technologies are key enablers driving the transformation of health care delivery toward a new model of connected health (CH) care. The advances in wearable technologies in the last decade are evidenced in a plethora of original articles, patent documentation, and focused systematic reviews. Although technological innovations continuously respond to emerging challenges and technology availability further supports the evolution of CH solutions, the widespread adoption of wearables remains hindered. Objective: This study aimed to scope the scientific literature in the field of pervasive wearable health monitoring in the time interval from January 2010 to February 2019 with respect to four important pillars: technology, safety and security, prescriptive insight, and user-related concerns. The purpose of this study was multifold: identification of (1) trends and milestones that have driven research in wearable technology in the last decade, (2) concerns and barriers from the technology and user perspective, and (3) trends in the research literature addressing these issues. Methods: This study followed the scoping review methodology to identify and process the available literature. As the scope surpasses the possibilities of manual search, we relied on the natural language processing tool kit to ensure an efficient and exhaustive search of the literature corpus in three large digital libraries: Institute of Electrical and Electronics Engineers, PubMed, and Springer. The search was based on the keywords and properties to be found in articles using the search engines of the digital libraries. Results: The annual number of publications in all segments of research on wearable technology shows an increasing trend from 2010 to February 2019. The technology-related topics dominated in the number of contributions, followed by research on information delivery, safety, and security, whereas user-related concerns were the topic least addressed. The literature corpus evidences milestones in sensor technology (miniaturization and placement), communication architectures and fifth generation (5G) cellular network technology, data analytics, and evolution of cloud and edge computing architectures. The research lag in battery technology makes energy efficiency a relevant consideration in the design of both sensors and network architectures with computational offloading. The most addressed user-related concerns were (technology) acceptance and privacy, whereas research gaps indicate that more efforts should be invested into formalizing clear use cases with timely and valuable feedback and prescriptive recommendations. Conclusions: This study confirms that applications of wearable technology in the CH domain are becoming mature and established as a scientific domain. The current research should bring progress to sustainable delivery of valuable recommendations, enforcement of privacy by design, energy-efficient pervasive sensing, seamless monitoring, and low-latency 5G communications. To complement technology achievements, future work involving all stakeholders providing research evidence on improved care pathways and cost-effectiveness of the CH model is needed.
40

Kezai, Mourad, and Abdallah Khababa. "Generating Maude Specifications from M-UML Statechart Diagrams". Journal of Advanced Computational Intelligence and Intelligent Informatics 26, no. 1 (January 20, 2022): 8–16. http://dx.doi.org/10.20965/jaciii.2022.p0008.

Abstract
The unified modeling language (UML) is used for the specification, visualization, and documentation of object-oriented software systems. Mobile UML (M-UML) is an extension of UML that considers mobility aspects, and a mobile statechart is an extension of the standard UML diagram that deals with the requirements for modeling, specifying, and visualizing mobile agent-based systems. However, mobile statecharts inherit UML’s lack of formal notation for analysis and verification purposes. The rewriting logic language Maude is a formal method that deals with mobile computations. In this paper, we propose a formalization of M-UML statechart diagrams using Maude to provide formal semantics for such diagrams. The generated Maude specifications are then used to analyze and check the systems using Maude analytical tools. This approach is illustrated through an example.
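As a rough illustration of what "generating Maude specifications" from a statechart transition can look like, the sketch below renders one transition of a mobile agent as a Maude rewrite rule using Python string templates. The object syntax follows Maude's object-oriented notation, but the mapping is a naive stand-in for the paper's formalization, and all names are invented.

# Illustrative only: a naive rendering of one statechart transition as a
# Maude rewrite rule; the paper's translation covers far more structure
# (hierarchy, mobility operations, concurrency).
def transition_to_maude(agent, src, event, dst):
    label = f"{src}-to-{dst}"
    lhs = f"< {agent} : Agent | state : {src} > {event}"
    rhs = f"< {agent} : Agent | state : {dst} >"
    return f"rl [{label}] : {lhs} => {rhs} ."

print(transition_to_maude("A", "idle", "move(home)", "migrating"))
# rl [idle-to-migrating] : < A : Agent | state : idle > move(home)
#   => < A : Agent | state : migrating > .

The generated rules can then be executed and model-checked with Maude's own analysis tools, which is the point of the translation.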
41

Jalal, Hawre, Petros Pechlivanoglou, Eline Krijkamp, Fernando Alarid-Escudero, Eva Enns, and M. G. Myriam Hunink. "An Overview of R in Health Decision Sciences". Medical Decision Making 37, no. 7 (January 6, 2017): 735–46. http://dx.doi.org/10.1177/0272989x16686559.

Abstract
As the complexity of health decision science applications increases, high-level programming languages are increasingly adopted for statistical analyses and numerical computations. These programming languages facilitate sophisticated modeling, model documentation, and analysis reproducibility. Among the high-level programming languages, the statistical programming framework R is gaining increased recognition. R is freely available, cross-platform compatible, and open source. A large community of users who have generated an extensive collection of well-documented packages and functions supports it. These functions facilitate applications of health decision science methodology as well as the visualization and communication of results. Although R’s popularity is increasing among health decision scientists, methodological extensions of R in the field of decision analysis remain isolated. The purpose of this article is to provide an overview of existing R functionality that is applicable to the various stages of decision analysis, including model design, input parameter estimation, and analysis of model outputs.
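As a flavor of the model-design stage the overview covers, here is a toy three-state Markov cohort model of the kind used in health decision analysis. The article's ecosystem is R; this Python sketch with invented numbers is only a language-neutral illustration of the computation.

import numpy as np

# Toy 3-state Markov cohort model (Healthy / Sick / Dead); all numbers
# are invented for illustration. The paper's examples are in R.
P = np.array([[0.85, 0.10, 0.05],    # rows: from-state, cols: to-state
              [0.00, 0.70, 0.30],
              [0.00, 0.00, 1.00]])
state = np.array([1.0, 0.0, 0.0])     # whole cohort starts Healthy
utility = np.array([1.0, 0.6, 0.0])   # per-cycle QALY weights

qalys = 0.0
for cycle in range(40):
    qalys += state @ utility          # accumulate quality-adjusted time
    state = state @ P                 # advance the cohort one cycle
print(f"Expected QALYs over 40 cycles: {qalys:.2f}")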
42

Nikulchev, Evgeny, Dmitry Ilin, Pavel Kolyasnikov, Victoria Ismatullina, and Ilya Zakharov. "Development of a Common Format of Questionnaire Tests for a Web-based Platform of Population and Experimental Psychological Research". ITM Web of Conferences 18 (2018): 04004. http://dx.doi.org/10.1051/itmconf/20181804004.

Abstract
A web platform for psychological research needs a single format for questionnaire tests to ensure interaction between its components. The study proposes a general test structure, question variants, variations in response types, and an embedded domain-specific language for computations. The use of JSON to store the hierarchical structure of the questionnaire test is proposed and justified, and JSON Schema is identified as a technology suitable for formalizing the standard. Among the validation instruments considered for checking compliance with the standard described in JSON Schema, ajv was found to be the most applicable to the task. For building the documentation, Doca is relevant, but this tool needs to be modified to meet the requirements of the task.
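The platform validates documents with ajv, a JavaScript validator; the same idea can be sketched in Python with the jsonschema package. The miniature schema below is an assumption for illustration, not the format defined in the paper.

import jsonschema  # pip install jsonschema; the paper itself used ajv (JavaScript)

# A deliberately small schema in the spirit of the proposed test format;
# the real standard's fields are defined in the paper, not here.
SCHEMA = {
    "type": "object",
    "required": ["title", "questions"],
    "properties": {
        "title": {"type": "string"},
        "questions": {
            "type": "array",
            "items": {
                "type": "object",
                "required": ["id", "text", "responseType"],
                "properties": {
                    "id": {"type": "string"},
                    "text": {"type": "string"},
                    "responseType": {"enum": ["likert", "choice", "free-text"]},
                },
            },
        },
    },
}

test = {
    "title": "Demo questionnaire",
    "questions": [{"id": "q1", "text": "I enjoy reading.", "responseType": "likert"}],
}
jsonschema.validate(instance=test, schema=SCHEMA)  # raises ValidationError on bad input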
43

Tidwell, Jacqueline. "From a Smoking Gun to Spent Fuel: Principled Subsampling Methods for Building Big Language Data Corpora from Monitor Corpora". Data 4, no. 2 (April 2, 2019): 48. http://dx.doi.org/10.3390/data4020048.

Abstract
With the influence of Big Data culture on qualitative data collection, acquisition, and processing, it is becoming increasingly important that social scientists understand the complexity underlying data collection and the resulting models and analyses. Systematic approaches for creating computationally tractable models are needed in order to create representative, specialized reference corpora subsampled from Big Language Data sources. Even more importantly, any such method must be tested and vetted for its reproducibility and consistency in generating a representative model of the population in question. This article considers and tests one such method for downsampling digitally accessible Big Language Data to determine both how to operationalize this form of corpus model creation and whether the method is reproducible. Using the U.S. Nuclear Regulatory Commission's public documentation database as a test source, the sampling procedure was evaluated for variation in the rate at which documents were deemed fit for inclusion in or exclusion from the corpus across four iterations. After multiple sampling iterations, the approach pioneered by the creators of the Tobacco Documents Corpus was deemed reproducible and valid using a two-proportion z-test at the 99% confidence level at each stage of the evaluation process, leading to a final mean rejection ratio of 23.5875 and variance of 0.891 for the documents sampled and evaluated for inclusion in the final text-based model. The findings indicate that such a principled sampling method is viable, underscoring the need for approaches to creating language-based models that account for extralinguistic factors and the linguistic characteristics of documents.
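The pooled two-proportion z-test used to compare rejection rates across iterations can be written out as follows; the counts in the example are hypothetical, not the study's data.

from math import sqrt, erfc

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    p_value = erfc(abs(z) / sqrt(2))               # equals 2 * (1 - Phi(|z|))
    return z, p_value

# Hypothetical rejection counts from two sampling iterations:
z, p = two_proportion_z(118, 500, 124, 500)
print(f"z = {z:.3f}, p = {p:.3f}")  # reject equality at the 1% level only if p < 0.01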
44

COVINGTON, MICHAEL A., ROBERTO BAGNARA, RICHARD A. O'KEEFE, JAN WIELEMAKER, and SIMON PRICE. "Coding guidelines for Prolog". Theory and Practice of Logic Programming 12, no. 6 (June 30, 2011): 889–927. http://dx.doi.org/10.1017/s1471068411000391.

Abstract
Abstract. Coding standards and good practices are fundamental to a disciplined approach to software projects, irrespective of the programming languages employed. Prolog programming can benefit from such an approach, perhaps more than programming in other languages. Despite this, no widely accepted standards and practices have emerged until now. The present paper is a first step toward filling this void: it provides immediate guidelines for code layout, naming conventions, documentation, proper use of Prolog features, program development, debugging, and testing. Presented with each guideline is its rationale and, where sensible options exist, illustrations of the relative pros and cons of each alternative. A coding standard should always be selected on a per-project basis, based on a host of issues pertinent to any given programming project; for this reason the paper goes beyond the mere provision of normative guidelines by discussing key factors and important criteria that should be taken into account when deciding on a full-fledged coding standard for the project.
45

Anditiasari, Nungki. "ANALISIS KESULITAN BELAJAR ABK (TUNA RUNGU) DALAM MENYELESAIKAN SOAL CERITA MATEMATIKA". Mathline: Jurnal Matematika dan Pendidikan Matematika 5, no. 2 (December 29, 2020): 183–94. http://dx.doi.org/10.31943/mathline.v5i2.162.

Abstract
This study aims to determine the learning difficulties of students with hearing impairments in solving mathematics story problems using the role-playing method and problem-solving learning according to Polya, namely: understanding the problem, planning a solution strategy, carrying out the strategy, and checking the answer obtained. Understanding of story problems is examined from four aspects: understanding the problem, making a mathematical model, computation, and drawing conclusions. The subjects of this study were two special-needs (ABK) students. The data were collected by means of interviews, tests, and documentation, and the analysis concluded by drawing conclusions. Based on the results, students can understand story problems when the role-playing and problem-solving methods are applied, because learning becomes very enjoyable and students can solve the story problems easily. In addition, students experience some learning difficulties, namely difficulty in understanding the questions, difficulty with basic mathematical concepts, and difficulty in understanding the language in which the problems are conveyed.
46

Mohd, Haslina, Fauziah Baharom, Norida Muhd Darus, Shafinah Farvin Packeer Mohamed, Zaharin Marzuki, and Muhammad Afdhal Muhammad Robie. "Functional Requirements Specification of E-Tendering Using Natural Language Approach: Towards Innovative Business Transformation". Journal of Computational and Theoretical Nanoscience 16, no. 12 (December 1, 2019): 5003–7. http://dx.doi.org/10.1166/jctn.2019.8555.

Abstract
Business transformation toward the use of Information and Communication Technology (ICT) has recently become a necessity for rapidly evolving industries, and the paradigm has shifted toward sustaining business competitiveness. A holistic electronic approach is one such business innovation, especially for handling large volumes of tender documentation and processes in an electronic environment, known as e-Tendering. Unfortunately, existing transformations of the tender process to an electronic approach do not properly follow a standard and guideline, especially in establishing good e-Tendering functional requirements specifications that would ensure organizations are best served. This is important because a good e-Tendering system can only be developed from good functional requirements specifications. Requirements specification is the process of documenting user and system requirements. Such requirements should be clear, unambiguous, easy to understand, complete, and consistent; in practice, this is difficult to achieve because stakeholders interpret the requirements in different ways, which often introduces inherent conflicts and inconsistencies. The implementation of existing e-Tendering also remains uncertain, especially in the definition of the functional requirements of the e-Tendering system. Therefore, this study aims to construct an e-Tendering functional requirements model using a requirements template in a natural language representation approach; developing such a model may also bring consistency to the representation of requirements. The study uses the UN/CEFACT Business Standard for the e-Tendering business. The identified functional requirements are designed using a requirements template to ensure their reliability and understandability, and the proposed functional requirements are constructed by adapting natural language and verified through expert review. As a result, this study proposes a functional requirements specification of e-Tendering containing detailed descriptions that software practitioners can refer to when developing a secure e-Tendering system effectively.
47

Sbordone, Luca, Piercarlo Bonifacio, and Fiorella Castelli. "ATLAS 9 and ATLAS 12 under GNU-Linux". Proceedings of the International Astronomical Union 2, S239 (August 2006): 71–73. http://dx.doi.org/10.1017/s1743921307000142.

Abstract
Abstract. We successfully ported the suite of codes developed by R. L. Kurucz for stellar atmosphere modelling, abundance determination, and synthetic spectra calculation to run under GNU-Linux. The ported codes include ATLAS 9 and ATLAS 12 for 1-D plane-parallel atmosphere model calculation, DFSYNTHE, which calculates the Opacity Distribution Functions (ODF) to be used with ATLAS 9, WIDTH, which derives chemical abundances from measured line Equivalent Widths (EW), and SYNTHE, which calculates synthetic spectra. The codes' input and output files remain fully compatible with the VMS versions, while the computation speed has been greatly increased thanks to the high efficiency of modern PC CPUs. As an example, ATLAS 9 model calculations and the computation of large (e.g., 10 nm) synthetic spectra can be executed in a matter of minutes on any mainstream laptop computer. Arbitrary chemical compositions can be used in calculations (by using ATLAS 12 through opacity sampling or by calculating ad hoc ODFs for ATLAS 9). The large set of scripting languages available under Linux (shell, perl, python, ...) and the availability of low-cost multiprocessor Linux architectures (such as Beowulf) make the port highly effective for building model farms to produce large quantities of atmosphere models or synthetic spectra (e.g., for the production of integrated-light synthetic spectra). The port is hosted on a dedicated website including a download section for source codes, precompiled binaries, needed data (opacities, line lists, and so on), sample launch scripts, and documentation.
48

Sulthon, Sulthon. "MEMBANGUN PEMAHAMAN KONSEP DASAR MATEMATIKA PADA ANAK BERKESULITAN BELAJAR MATEMATIKA DI MI". Primary: Jurnal Keilmuan dan Kependidikan Dasar 12, no. 1 (June 30, 2020): 27. http://dx.doi.org/10.32678/primary.v12i01.2457.

Abstract
This study aims to: 1) identify the types of mathematics learning difficulties experienced by MI students, 2) identify the factors that cause those difficulties, and 3) describe how to build understanding of basic mathematical concepts to overcome mathematics learning difficulties among students of Islamic elementary schools (MI). The research method used is qualitative. The subjects were two mathematics teachers and six students showing indications of mathematics learning difficulties, and data were collected through observation, interviews, and documentation. Data analysis was carried out descriptively, with data reduction, data presentation, and conclusion drawing. The results show that: 1) the types of mathematics learning difficulties experienced by MI/SD students include (1) low basic mathematics skills, relating to errors in reading and understanding the problem, in transformation, and in the process skills of writing the answer; (2) conceptual errors, including errors in determining the theorem/formula and failure to write the theorem/formula; (3) procedural errors, that is, the inability to manipulate the working steps of mathematics and failure to use reasoning to draw conclusions correctly; and (4) computational errors, consisting of errors in manipulating operations and not rechecking the results of calculations; 2) the factors causing MI students' mathematics learning difficulties are (1) internal factors, namely low interest and motivation to learn, low intellectual ability, wrong perceptions of mathematics, and lack of mastery of basic mathematical concepts; and (2) external factors, namely the teacher (weak mastery of the material, poor understanding of students' learning characteristics, limited ability to use active learning techniques), insufficient provision of student books, an unsupportive school environment, and the community environment; 3) the efforts to overcome these difficulties are (1) building basic mathematical concepts and proper understanding by teaching concepts and principles in language that is easy for students and linking them to students' daily experience; (2) re-teaching mathematical concepts with the theories or formulas that have been learned; (3) developing students' intuitive thinking; (4) rebuilding mathematical procedures by revisiting problems while paying attention to the facts, concepts, and principles already learned; and (5) providing remedial mathematics learning guidance. Keywords: learning difficulties, mathematics, Islamic elementary school (MI)
49

Rücker, Carsten. "Open-source Python library for modeling coupled thermo-hydro-mechanical (THM) processes". Safety of Nuclear Waste Disposal 2 (September 6, 2023): 127–28. http://dx.doi.org/10.5194/sand-2-127-2023.

Abstract
Abstract. The interactions between temperature, fluids, and mechanical properties in a repository system are essentially described scientifically by coupled thermo-hydro-mechanical (THM) processes. THM modeling, i.e., the prediction of the behavior of materials under different conditions, is the fundamental numerical tool here. The confident handling and deep understanding of numerical computation methods is thus the prerequisite for performing and evaluating preliminary safety analyses in the site selection process. At the Federal Office for the Safety of Nuclear Waste Management (BASE), a software library for the simulation of coupled THM processes is currently being developed. The main goal of the in-house development is an open-source toolbox in the scripting language Python and is motivated by several long-term sub-goals: targeted development of expertise within BASE regarding numerical modeling of safety-relevant aspects in the long-term safety analyses; diversification of the testing capabilities regarding the preliminary safety investigations by means of an in-house, independent modeling software; foundation of a library of known benchmarks and evaluation examples for the comparison of different software tools; documentation and processing of basic THM scenarios for internal or, if necessary, public technical training. The focus of this development is on creating a toolbox that is easy to use and at the same time highly flexible. The main methodical aspects are as follows: building a new library based on the pyGIMLi pre- and postprocessing framework (Rücker et al., 2017); creating a finite-element reference implementation in the Python scripting language for maximal transparency; creating an easy-to-use interface to the solution of the weak formulation for the finite-element theory with expressions of a symbolic manner allowing maximal flexibility; defining an interface to allow for the integration of alternative, third-party high-performance libraries; creating a collection of Jupyter notebooks of well-documented test cases and benchmarks. Choosing the open-source approach ensures the best possible transparency and, in the long term, also allows the provision of appropriately quality-assured and documented simulation tools to the public. The presented poster shows the current development status of the software library and the currently implemented quality assurance concepts and gives an outline of the potential applications of the library.
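As an example of the "weak formulation" such a symbolic interface expresses, the transient heat-conduction part of a THM model reads in weak form (standard finite-element material, not BASE's specific formulation): find T such that, for all test functions v,

\int_\Omega \rho c\,\frac{\partial T}{\partial t}\,v\,\mathrm{d}\Omega + \int_\Omega \lambda\,\nabla T \cdot \nabla v\,\mathrm{d}\Omega = \int_\Omega q\,v\,\mathrm{d}\Omega + \int_{\Gamma_N} \bar{q}\,v\,\mathrm{d}\Gamma,

where \rho c is the volumetric heat capacity, \lambda the thermal conductivity, q a heat source, and \bar{q} the prescribed flux on the Neumann boundary \Gamma_N.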
50

Hirsch, Jamie S., Jessica S. Tanenbaum, Sharon Lipsky Gorman, Connie Liu, Eric Schmitz, Dritan Hashorva, Artem Ervits, David Vawdrey, Marc Sturm, and Noémie Elhadad. "HARVEST, a longitudinal patient record summarizer". Journal of the American Medical Informatics Association 22, no. 2 (October 28, 2014): 263–74. http://dx.doi.org/10.1136/amiajnl-2014-002945.

Abstract
Abstract. Objective: To describe HARVEST, a novel point-of-care patient summarization and visualization tool, and to conduct a formative evaluation study to assess its effectiveness and gather feedback for iterative improvements. Materials and methods: HARVEST is a problem-based, interactive, temporal visualization of longitudinal patient records. Using scalable, distributed natural language processing and problem salience computation, the system extracts content from the patient notes and aggregates and presents information from multiple care settings. Clinical usability was assessed with physician participants using a timed, task-based chart review and questionnaire, with performance differences recorded between conditions (standard data review system and HARVEST). Results: HARVEST displays patient information longitudinally using a timeline, a problem cloud as extracted from notes, and focused access to clinical documentation. Despite lack of familiarity with HARVEST, when using a task-based evaluation, performance and time-to-task completion was maintained in patient review scenarios using HARVEST alone or the standard clinical information system at our institution. Subjects reported very high satisfaction with HARVEST and interest in using the system in their daily practice. Discussion: HARVEST is available for wide deployment at our institution. Evaluation provided informative feedback and directions for future improvements. Conclusions: HARVEST was designed to address the unmet need for clinicians at the point of care, facilitating review of essential patient information. The deployment of HARVEST in our institution allows us to study patient record summarization as an informatics intervention in a real-world setting. It also provides an opportunity to learn how clinicians use the summarizer, enabling informed interface and content iteration and optimization to improve patient care.
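HARVEST's salience model is its own contribution; as a crude stand-in that conveys the idea of ranking problems for the "problem cloud", the sketch below weights problem mentions by frequency with an exponential recency decay. The function, data, and decay rule are invented for illustration.

from collections import Counter

# Crude stand-in for HARVEST's salience model: rank problem mentions
# extracted from notes by how often and how recently they occur.
def problem_cloud(note_problems, half_life=5):
    """note_problems: one list of extracted problems per note, oldest first."""
    weights = Counter()
    for age, problems in enumerate(reversed(note_problems)):
        decay = 0.5 ** (age / half_life)      # recent notes count more
        for p in problems:
            weights[p] += decay
    return weights.most_common()

notes = [["diabetes"], ["diabetes", "neuropathy"], ["neuropathy"], ["diabetes"]]
print(problem_cloud(notes))  # most salient problem first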