Journal articles on the topic 'Scripts generated'

Consult the top 50 journal articles for your research on the topic 'Scripts generated.'

1

Usui, Toshinori, Yuto Otsuki, Tomonori Ikuse, Yuhei Kawakoya, Makoto Iwamura, Jun Miyoshi, and Kanta Matsuura. "Automatic Reverse Engineering of Script Engine Binaries for Building Script API Tracers." Digital Threats: Research and Practice 2, no. 1 (March 2021): 1–31. http://dx.doi.org/10.1145/3416126.

Abstract:
Scripting languages are designed to be easy to use and to require little learning effort. These features give attackers their pick of languages for developing malicious scripts, and this diversity of choice on the attacker side imposes a significant cost on the preparation of analysis tools on the defense side: we have to prepare for multiple scripting languages in order to analyze the malicious scripts written in them. We call this unbalanced cost across scripting languages the asymmetry problem. To solve this problem, we propose a method for automatically detecting the hook and tap points in a script engine binary that are essential for building a script Application Programming Interface (API) tracer. Our method reduces the cost of reverse engineering a script engine binary, which is the largest portion of developing a script API tracer, and lets us build a script API tracer for a scripting language with minimal manual intervention; this advantage resolves the asymmetry problem. The experimental results showed that our method generated script API tracers for the three scripting languages popular among attackers: Visual Basic for Applications (VBA), Microsoft Visual Basic Scripting Edition (VBScript), and PowerShell. The results also demonstrated that these script API tracers successfully analyzed real-world malicious scripts.
2

Liu, Shuang Mei, Yong Po Liu, and Ji Wu. "Building a Distributed Testing Execution System Based on TTCN-3." Applied Mechanics and Materials 556-562 (May 2014): 2772–78. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.2772.

Abstract:
In this paper, a distributed testing execution system is designed that provides mechanisms for node communication, test script deployment, test scheduling, executor driving, and test result collection in a distributed environment. A workload model is established by which testers can describe performance testing requirements. A performance testing framework is given that simulates user behavior in a real environment with virtual users so as to generate workload on the system under test (SUT); it controls the execution of virtual users through the TTCN-3 standard interface. After performance testing is executed, a test report is generated by extracting the logs. A method of generating performance test cases by reusing functional test scripts is also studied. By executing performance tests on an online bookstore, this paper demonstrates the usability of the method of reusing TTCN-3 functional test scripts and the capability of the distributed performance testing system that was established.
3

Liu, Shuang Mei, Xue Mei Liu, and Yong Po Liu. "Realization of an Execution System of Distributed Performance Testing Based on TTCN-3." Applied Mechanics and Materials 713-715 (January 2015): 486–90. http://dx.doi.org/10.4028/www.scientific.net/amm.713-715.486.

Abstract:
An execution system for distributed performance testing is designed in this paper, providing mechanisms for node communication, test script deployment, test scheduling, execution driving, and test result collection in a distributed environment. A workload model is established by which testers can describe performance testing requirements. A performance testing framework is given that simulates user behavior in a real environment with virtual users so as to generate workload on the system under test (SUT); it controls the execution of virtual users through the TTCN-3 standard interface. After performance testing is executed, a test report is generated by extracting the logs. A method of generating performance test cases by reusing functional test scripts is also studied. By executing performance tests on an online bookstore, this paper demonstrates the usability of the method of reusing TTCN-3 functional test scripts and the capability of the established distributed performance testing system.
4

Wenzel, Amy. "Schema Content for a Threatening Situation: Responses to Expected and Unexpected Events." Journal of Cognitive Psychotherapy 23, no. 2 (May 2009): 136–46. http://dx.doi.org/10.1891/0889-8391.23.2.136.

Abstract:
Although previous research has identified the components of event-based schemas, or scripts, for threatening situations in anxious individuals, no studies have examined how scripts change when anxious individuals are faced with a deviation in the expected sequence of events. In the present study, blood fearful (n = 49) and nonfearful (n = 48) participants assigned subjective units of discomfort (SUD) ratings to the events comprising the script for getting a bleeding cut on the arm. Subsequently, they listed a series of 10 events that would occur following 1 of 2 unexpected events that interrupted the script. Results indicated that blood fearful participants assigned higher SUD ratings to scripted events than nonfearful participants. Participants in the two groups generated largely similar sequences of events that would occur after the unexpected events. However, relative to nonfearful participants, blood fearful participants listed more events characterized by negative affect. These results suggest that blood fearful individuals are able to recover from deviations from the standard script for a common but threatening situation, although their associated emotional experiences are more distressing than those of nonfearful individuals.
5

Friedland, Gerald, Luke Gottlieb, and Adam Janin. "Narrative theme navigation for sitcoms supported by fan-generated scripts." Multimedia Tools and Applications 63, no. 2 (September 20, 2011): 387–406. http://dx.doi.org/10.1007/s11042-011-0877-z.

6

Briggs, Pamela, and Ken Goryo. "Biscriptal Interference: A Study of English and Japanese." Quarterly Journal of Experimental Psychology Section A 40, no. 3 (August 1988): 515–31. http://dx.doi.org/10.1080/02724988843000050.

Abstract:
It was once assumed that alphabetic, syllabic, and logographic scripts could be clearly differentiated in terms of their respective processing demands, but recent evidence suggests that, as visual stimuli, they all draw upon common “configurational” processing resources. Two experiments are reported which address this issue. Both employ cross-lingual interference paradigms, with the rationale that competition for limited processing resources will be reflected in the degree of interference generated when two scripts are presented simultaneously. The experiments differ in terms of task requirements, the first being a word-naming task, biased towards reliance upon the more rule-based decoding skills; whereas the second is a colour naming task, with a more configurational bias. In the first study, the locus of the interference effect was clearly pre-lexical, and interference was only generated by those scripts that could feasibly draw upon grapheme-phoneme correspondence rules. No interference was generated by logographs that could be accessed “directly” without recourse to any prelexical phonological code. In the second study, the locus of interference was twofold: early in processing, as a result of competition for configurational processes, and later, phonological output competition prior to articulation. These results clearly demonstrate major differences in the ways in which logographic, syllabic, and alphabetic scripts are processed.
7

Wilson, Christine, Dave Smith, Adrian Burden, and Paul Holmes. "Participant-generated imagery scripts produce greater EMG activity and imagery ability." European Journal of Sport Science 10, no. 6 (November 2010): 417–25. http://dx.doi.org/10.1080/17461391003770491.

8

Oz, Mert, Caner Kaya, Erdi Olmezogullari, and Mehmet S. Aktas. "On the Use of Generative Deep Learning Approaches for Generating Hidden Test Scripts." International Journal of Software Engineering and Knowledge Engineering 31, no. 10 (October 2021): 1447–68. http://dx.doi.org/10.1142/s0218194021500480.

Abstract:
With the advent of Web 2.0, web application architectures have evolved and their complexity has grown enormously. Due to this complexity, testing web applications is becoming a time-consuming and intensive process. In today's web applications, users can achieve the same goal by performing different actions, so to ensure that the entire system is safe and robust, developers try to test all possible user action sequences in the testing phase. Since the space of all possibilities is enormous, covering every user action sequence can be impossible. To automate the test script generation task and reduce the space of possible user action sequences, we propose a novel method based on a long short-term memory (LSTM) network for generating test scripts from user clickstream data. The experimental results clearly show that the generated hidden test sequences are user-like, and that generating test scripts with the proposed model is less time-consuming than writing them manually.
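The pipeline this abstract compresses — learn a next-action model from clickstreams, then sample it to emit user-like test sequences — can be illustrated in a few lines. This is a hedged sketch only: the paper's actual architecture, features, and action vocabulary are not given here, so the action names, window length, and Keras layers below are assumptions.

```python
# Sketch: next-action LSTM over clickstream windows (toy data, assumed names).
import numpy as np
import tensorflow as tf

ACTIONS = ["login", "search", "view_item", "add_to_cart", "checkout"]  # hypothetical
VOCAB, SEQ_LEN = len(ACTIONS), 8

# Toy training windows of action ids and the action following each window.
X = np.random.randint(0, VOCAB, size=(256, SEQ_LEN))
y = np.random.randint(0, VOCAB, size=(256,))

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 16),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(VOCAB, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=2, verbose=0)

# Sample a user-like continuation; each step feeds the last SEQ_LEN actions back in.
seq = list(np.random.randint(0, VOCAB, size=SEQ_LEN))
for _ in range(5):
    probs = model.predict(np.array([seq[-SEQ_LEN:]]), verbose=0)[0].astype("float64")
    probs /= probs.sum()
    seq.append(int(np.random.choice(VOCAB, p=probs)))
print([ACTIONS[i] for i in seq[SEQ_LEN:]])  # e.g. ['search', 'view_item', ...]
```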
9

Souza, Fernando, and Adolfo Maia Jr. "A Mathematical, Graphical and Visual Approach to Granular Synthesis Composition." Revista Vórtex 9, no. 2 (December 10, 2021): 1–27. http://dx.doi.org/10.33871/23179937.2021.9.2.4.

Abstract:
We present a method for granular synthesis composition based on mathematical modeling of musical gestures. Each gesture is drawn as a curve generated from a particular mathematical model (or function) and coded as a MATLAB script. Gestures can be defined deterministically through mathematical time functions, drawn freehand, or even generated randomly. The parametric information of the gestures is interpreted, via OSC messages, by a granular synthesizer (Granular Streamer). The musical composition is then realized with the models (scripts) written in MATLAB and exported to a graphical score (Granular Score). The method lends itself to statistical analysis of the granular sound streams and of the final composition. We also offer a way to create granular streams based on correlated pairs of grain parameters.
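As an illustration of the gesture-to-grain mapping described here — in Python rather than the authors' MATLAB, with made-up parameter ranges and the OSC transport replaced by printing:

```python
# Sketch: a deterministic gesture curve drives per-grain synthesis parameters.
import numpy as np

def gesture(t):
    """An assumed gesture model: a decaying sine over normalized time."""
    return np.exp(-2.0 * t) * np.sin(2 * np.pi * 3 * t)

n_grains = 200
t = np.linspace(0.0, 1.0, n_grains)
g = gesture(t)

# Map the curve onto grain parameters (ranges are illustrative).
pitch = 220.0 * 2 ** (2 * g)                      # Hz, within two octaves of 220 Hz
amplitude = (g - g.min()) / (np.ptp(g) + 1e-12)   # normalized 0..1
duration = 0.02 + 0.08 * np.abs(g)                # grain length in seconds

for onset, f, a, d in zip(t, pitch, amplitude, duration):
    # In the paper, parameters like these are sent as OSC messages to the
    # Granular Streamer synthesizer; here each grain is simply printed.
    print(f"grain onset={onset:.3f}s freq={f:7.1f}Hz amp={a:.2f} dur={d:.3f}s")
```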
10

Pino, Rodney, Renier Mendoza, and Rachelle Sambayan. "A Baybayin word recognition system." PeerJ Computer Science 7 (June 16, 2021): e596. http://dx.doi.org/10.7717/peerj-cs.596.

Abstract:
Baybayin is a pre-Hispanic Philippine writing system used on the island of Luzon. With the effort to reintroduce the script, in 2018 the Committee on Basic Education and Culture of the Philippine Congress approved House Bill 1022, the "National Writing System Act," which declares the Baybayin script the Philippines' national writing system. Since then, Baybayin OCR has become a field of research interest. Numerous works have proposed different techniques for recognizing Baybayin scripts, but all of those studies focused on classification and recognition at the character level. In this work, we propose an algorithm that provides the Latin transliteration of a Baybayin word in an image. The proposed system relies on a Baybayin character classifier generated using a Support Vector Machine (SVM). The method involves isolating each Baybayin character, classifying each character according to its equivalent syllable in Latin script, and finally concatenating the results to form the transliterated word. The system was tested using a novel dataset of Baybayin word images and achieved a competitive 97.9% recognition accuracy. Based on our review of the literature, this is the first work that recognizes Baybayin scripts at the word level. The proposed system can be used for automated transliteration of Baybayin texts transcribed in old books, tattoos, signage, graphic designs, and documents, among others.
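The word-level pipeline (isolate characters, classify each with an SVM, concatenate syllables) is straightforward to sketch; the code below uses synthetic stand-in data and an invented five-syllable class set, not the authors' dataset or features:

```python
# Sketch: character-level SVM predictions concatenated into a transliterated word.
import numpy as np
from sklearn.svm import SVC

SYLLABLES = ["ba", "ka", "da", "ga", "ha"]   # illustrative subset of classes

# Toy training set: flattened character images (e.g. 16x16 = 256 pixels).
X_train = np.random.rand(100, 256)
y_train = np.random.randint(0, len(SYLLABLES), size=100)
clf = SVC(kernel="rbf").fit(X_train, y_train)

def transliterate(char_images):
    """Classify each isolated character image and join the predicted syllables."""
    labels = clf.predict(np.vstack(char_images))
    return "".join(SYLLABLES[i] for i in labels)

print(transliterate([np.random.rand(1, 256) for _ in range(3)]))  # e.g. 'bakada'
```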
11

Gao, Yi Chao, Feng Jin, Yan Jie Xu, and Jian Yang. "Development of Pre-Processor for OpenSees Based on Patran Command Language." Applied Mechanics and Materials 409-410 (September 2013): 1615–19. http://dx.doi.org/10.4028/www.scientific.net/amm.409-410.1615.

Abstract:
OpenSees is an object-oriented, open-source finite element software platform suitable for advanced simulation of structural and geotechnical systems. Because it lacks a powerful pre-processor, preparing input scripts for OpenSees is time-consuming and error-prone for large and complex models. Patran is an excellent commercial pre-processing package for finite element analysis and offers the powerful Patran Command Language (PCL) for user customization. Taking advantage of Patran's strengths in finite element pre-processing, the finite element model is built in Patran and the input script for OpenSees is generated by PCL. This is an alternative approach to efficient pre-processing for OpenSees.
12

Bostik, Ondrej, Karel Horak, and Jan Klecka. "Usability Evaluation of Randomly Generated Fonts for Bubble Captcha." MENDEL 24, no. 1 (June 1, 2018): 143–50. http://dx.doi.org/10.13164/mendel.2018.1.143.

Abstract:
A Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is the widespread concept behind systems that secure web services against automated spam scripts. The most common CAPTCHA systems benefit from imperfections in Optical Character Recognition algorithms. This paper presents our ongoing work on the development of a new CAPTCHA scheme based on human perception. The goal of this work is to evaluate the usability of the randomly generated fonts used in the Bubble Captcha scheme with both humans and OCR classifiers.
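For concreteness, here is a minimal sketch of rendering a text challenge in a randomly picked font with Pillow; note that the scheme above evaluates randomly *generated* fonts, which this sketch does not attempt, and the font file names are assumptions:

```python
# Sketch: render a challenge string in a randomly chosen font.
import random
from PIL import Image, ImageDraw, ImageFont

FONTS = ["DejaVuSans.ttf", "DejaVuSerif.ttf"]   # assumed to be installed

def make_captcha(text, size=(200, 60)):
    img = Image.new("L", size, color=255)       # white grayscale canvas
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(random.choice(FONTS), 36)
    draw.text((10, 10), text, font=font, fill=0)
    return img

make_captcha("X7RG2").save("captcha.png")
```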
13

Plattner, Alain M. "GPRPy: Open-source ground-penetrating radar processing and visualization software." Leading Edge 39, no. 5 (May 2020): 332–37. http://dx.doi.org/10.1190/tle39050332.1.

Abstract:
GPRPy is an open-source ground-penetrating radar software compatible with a range of ground-penetrating radar systems. Data processing and plotting can be performed by using graphical user interfaces or scripts that are generated automatically from the graphical user interfaces. This makes learning the software easy, and it enables researchers to share their scripts as part of a publication to ensure reproducible research. GPRPy enables profile data processing and visualization, velocity analysis, interpolation of 3D data cubes from profile data, and 3D interpolation for interfaces visible in multiple profiles. The software is written in Python and runs on all major operating systems.
14

Banday, M. Tariq, and Shafiya Afzal Sheikh. "Design of Secure Multilingual CAPTCHA Challenge." International Journal of Web Portals 7, no. 1 (January 2015): 1–27. http://dx.doi.org/10.4018/ijwp.2015010101.

Abstract:
The growing demand for native languages in web applications has made multilingual implementation of web user interfaces and dialogs essential. However, the use of insecure foreign-language text CAPTCHA challenges to prove human interaction on native-language pages has rendered CAPTCHA-protected services unusable, insecure, and inaccessible. This paper analyses the CAPTCHA and multilingual functionality of 410 multilingual websites (240 government and 170 non-government) and discusses their accessibility and usability. It enumerates deficiencies of CAPTCHA scripts and services currently in use (open and closed source). It discusses the design, algorithm, pseudocode, and working of a secure multilingual text CAPTCHA script with the desired security, accessibility, and usability features. The designed script offers a localized on-screen keyboard, random patterns, fonts, and audio alternatives to improve usability and security. The results of experiments, security tests, and user studies with CAPTCHA tests generated through the proposed technique have validated its design, security, usability, and accessibility.
15

Alfiandi, Tedi, T. M. Diansyah, and Risko Liza. "ANALISIS PERBANDINGAN MANAJEMEN KONFIGURASI MENGGUNAKAN ANSIBLE DAN SHELL SCRIPT PADA CLOUD SERVER DEPLOYMENT AWS." JiTEKH 8, no. 2 (September 30, 2020): 78–84. http://dx.doi.org/10.35447/jitekh.v8i2.308.

Abstract:
The use of cloud computing technology in website development has grown significantly, offering disk storage, memory, and CPUs running in the cloud at low cost compared with physical servers. When creating a website, deployment steps such as creating a database and installing the packages the website needs are carried out manually, which takes a lot of time. An automation process is needed to solve this problem, using Ansible and shell scripts in the website deployment process. This final project compares Ansible and shell scripts as configuration management for Drupal deployments to an Amazon Web Services EC2 server by analyzing deployment time, CPU and memory usage on the server, throughput, and packet loss. Tests showed that shell scripts outperformed Ansible in deployment time, with a difference of 3 minutes; the throughput generated in the Ansible tests was better, averaging 60,164 Kb/s versus 22,009 Kb/s for shell scripts; and Ansible's CPU usage was much better because it does not overload the server.
16

Ghosh, Rajib, and Prabhat Kumar. "SVM and HMM Classifier Combination Based Approach for Online Handwritten Indic Character Recognition." Recent Advances in Computer Science and Communications 13, no. 2 (June 3, 2020): 200–214. http://dx.doi.org/10.2174/2213275912666181127124711.

Abstract:
Background: The growing use of smart hand-held devices in daily life drives the need for online handwritten text recognition, which refers to identifying handwritten text at the very moment it is written on a digitizing tablet using a pen-like stylus. Several techniques are available for online handwritten text recognition in the English, Arabic, Latin, Chinese, Japanese, and Korean scripts; however, limited research is available for Indic scripts. Objective: This article presents a novel approach for online handwritten numeral and character (simple and compound) recognition in three popular Indic scripts - Devanagari, Bengali and Tamil. Methods: The proposed work employs the Zone-wise Slopes of Dominant Points (ZSDP) method for feature extraction from individual characters. Support Vector Machine (SVM) and Hidden Markov Model (HMM) classifiers are used for the recognition process, and recognition efficiency is improved by combining the probabilistic outcomes of the SVM and HMM classifiers using Dempster-Shafer theory. The system is trained using separate as well as combined datasets of numerals, simple characters, and compound characters. Results: The performance of the present system is evaluated using large self-generated datasets as well as public datasets. The results demonstrate that the proposed system outperforms existing works in this regard. Conclusion: This work will be helpful for research on online recognition of handwritten characters in other Indic scripts, as well as on recognition of isolated words in various Indic scripts, including those used in the present work.
17

Abdoler, Emily, Bridget O’Brien, Brian Schwartz, and Brian Schwartz. "1946. An Exploratory Study of the Therapeutic Reasoning Underlying Antimicrobial Selection." Open Forum Infectious Diseases 6, Supplement_2 (October 2019): S56—S57. http://dx.doi.org/10.1093/ofid/ofz359.123.

Abstract:
Background: Clinical reasoning research has helped illuminate how clinicians make diagnoses but offers less insight into management decisions. The need to understand therapeutic choices is particularly salient within infectious diseases (ID), where antimicrobial prescribing has broad implications given increasing rates of resistance. Researchers have examined general factors underlying antibiotic prescribing. Our study advances this work by exploring the factors and processes underlying physician choice of specific antimicrobials. Methods: We conducted individual interviews with a purposeful sample of Hospitalists and ID attendings. Our semi-structured interview explored the reasoning underlying antimicrobial choice through clinical vignettes. We identified steps and factors after 12 interviews, then conducted 4 more to confirm and refine our findings. We generated a codebook through an iterative, inductive process and used Dedoose to code the interviews and facilitate analysis. Results: We identified three antibiotic reasoning steps (Naming the Syndrome, Delineating Pathogens, Antimicrobial Selection) and four factors involved in the reasoning process (Host Features, Case Features, Provider and Healthcare System Factors, Treatment Principles) (Table 1). Participants considered host and case features when determining likely pathogens and antimicrobial options; the other two factors influenced only antimicrobial selection. From these data, we developed an antimicrobial reasoning framework (Figure 1). We also determined that participants seemed to have a "script" with specific content for each antimicrobial they considered, functioning much like the illness scripts common to diagnostic reasoning (Table 2). Conclusion: Our antimicrobial reasoning framework details the cognitive processes underlying antimicrobial choice. Our results build on general therapeutic reasoning frameworks while elaborating factors specific to ID. We also provide evidence of the existence of "therapy scripts" that mirror diagnostic reasoning's "illness scripts." Our framework has implications for medical education and antimicrobial stewardship. Disclosures: All authors: no reported disclosures.
18

Xu, Pin, Masato Edahiro, and Kondo Masaki. "Code Generation from Simulink Models with Task and Data Parallelism." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 21 (April 14, 2021): 1–13. http://dx.doi.org/10.24297/ijct.v21i.9004.

Abstract:
In this paper, we propose a method to automatically generate parallelized code from Simulink models while exploiting both task and data parallelism. Building on previous research, we propose a model-based parallelizer (MBP) that exploits task parallelism and assigns tasks to CPU cores using a hierarchical clustering method. We also propose a method in which data-parallel SYCL code is generated from Simulink models; computations with data parallelism are expressed in the form of S-Function Builder blocks and are executed in a heterogeneous computing environment. Most parts of the procedure can be automated with scripts, and the two methods can be applied together. In the evaluation, the data-parallel programs generated using our proposed method achieved a maximum speedup of approximately 547 times compared to sequential programs, without observable differences in the computed results. In addition, the programs generated while exploiting both task and data parallelism were confirmed to achieve better performance than those exploiting either one of the two.
19

Yao, Jian, Qiwang Huang, and Weiping Wang. "Adaptive CGFs Based on Grammatical Evolution." Mathematical Problems in Engineering 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/197306.

Abstract:
Computer generated forces (CGFs) play blue or red units in military simulations for personnel training and weapon systems evaluation. Traditionally, CGFs are controlled through rule-based scripts, making their doctrine-driven behavior rigid and predictable. Furthermore, CGFs are often tricked by trainees or fail to adapt to new situations (e.g., changes in the battlefield or updates to weapon systems), and in most cases subject matter experts (SMEs) must review and redesign a large number of CGF scripts for new scenarios or training tasks, which is both challenging and time-consuming. In an effort to overcome these limitations and move toward more true-to-life scenarios, a study using grammatical evolution (GE) to generate adaptive CGFs for air combat simulations has been conducted. Expert knowledge is encoded with modular behavior trees (BTs) for compatibility with the operators of the genetic algorithm (GA). GE maps CGFs, represented as BTs, to binary strings and uses the GA to evolve CGFs with performance feedback from the simulation. Beyond-visual-range air combat experiments between adaptive CGFs and nonadaptive baseline CGFs have been conducted to observe and study this evolutionary process. The experimental results show that GE is an efficient framework for generating CGFs in the BT formalism and evolving them via the GA.
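The genotype-to-phenotype mapping at the heart of GE can be sketched in a few lines: integer codons select productions from a grammar until only terminals remain. The miniature air-combat grammar below is invented for illustration and is not the paper's:

```python
# Toy grammatical-evolution mapping: codons (integers) pick grammar productions.
GRAMMAR = {
    "<bt>":     [["seq", "<node>", "<node>"], ["sel", "<node>", "<node>"]],
    "<node>":   [["<cond>"], ["<action>"], ["<bt>"]],
    "<cond>":   [["enemy_in_range?"], ["missile_warning?"]],
    "<action>": [["fire"], ["evade"], ["climb"]],
}

def ge_map(genotype, symbol="<bt>", pos=0, depth=0):
    """Expand `symbol`, consuming one codon per choice; returns (tree, pos)."""
    rules = GRAMMAR.get(symbol)
    if rules is None:                      # terminal: a BT leaf
        return symbol, pos
    options = rules
    if depth >= 4:                         # crude guard against endless wrapping
        options = [r for r in rules if "<bt>" not in r] or rules
    rule = options[genotype[pos % len(genotype)] % len(options)]
    out, pos = [], pos + 1
    for sym in rule:
        sub, pos = ge_map(genotype, sym, pos, depth + 1)
        out.append(sub)
    return (out if len(out) > 1 else out[0]), pos

tree, _ = ge_map([7, 2, 5, 1, 9, 4, 3, 8])
print(tree)  # a nested seq/sel behavior tree, e.g. ['sel', 'fire', ...]
```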
20

Hou, Vincent D. H. "Automatic Page-Layout Scripts for Gatan Digital Micrograph®." Microscopy and Microanalysis 7, S2 (August 2001): 976–77. http://dx.doi.org/10.1017/s1431927600030956.

Abstract:
The software DigitalMicrograph (DM) by Gatan, Inc., is a popular software platform for digital imaging in microscopy. In a service-oriented microscopy laboratory, a large number of images from many different samples are generated each day, and it is critical that each image is properly labeled with sample identification and a description before printing. DM provides a script language with which various analyses can be designed or customized and repetitive tasks can be automated. This paper presents the procedures and DM scripts needed to perform these tasks. Because of the major software architecture change between version 2.5x and version 3.5x, each is discussed separately. DM Version 2.5.8 (on Macintosh®): A "Data Bar" mechanism is provided in this version of DM. The "Edit→Data Bar→Define and Add Data Bar..." menu command specifies the data bar items (e.g., scale bar, microscope operator) to be included in the image. In addition, other annotations (text, line, rectangle, and oval) can be included as part of the data bar; this is done by first selecting the desired annotation on the image and then using the "Edit→Data Bar→Use As Default Data Bar..." menu command. After the data bar items are defined, executing the menu command adds them to the image.
21

Bialke, Martin, Henriette Rau, Thea Schwaneberg, Rene Walk, Thomas Bahls, and Wolfgang Hoffmann. "mosaicQA - A General Approach to Facilitate Basic Data Quality Assurance for Epidemiological Research." Methods of Information in Medicine 56, S 01 (January 2017): e67-e73. http://dx.doi.org/10.3414/me16-01-0123.

Abstract:
Background: Epidemiological studies are based on a considerable amount of personal, medical and socio-economic data. To answer research questions with reliable results, epidemiological research projects face the challenge of providing high-quality data. Consequently, gathered data has to be reviewed continuously during the data collection period. Objectives: This article describes the development of the mosaicQA library for non-statistical experts, consisting of a set of reusable R functions that support basic data quality assurance for a wide range of application scenarios in epidemiological research. Methods: To generate valid quality reports for various scenarios and data sets, a general and flexible development approach was needed. As a first step, a set of quality-related questions targeting quality aspects on a more general level was identified. The next step included the design of specific R scripts to produce proper reports for metric and categorical data. For more flexibility, the third development step focussed on the generalization of the developed R scripts, e.g. extracting characteristics and parameters. As a last step, the generic characteristics of the developed R functionalities and generated reports were evaluated using different metric and categorical datasets. Results: The developed mosaicQA library generates basic data quality reports for multivariate input data. If needed, more detailed results for single-variable data, including definitions of units, variables, descriptions, code lists and categories of qualified missings, can easily be produced. Conclusions: The mosaicQA library enables researchers to generate reports for various kinds of metric and categorical data without the need for computational or scripting knowledge. At the moment, the library focusses on data structure quality and supports the assessment of several quality indicators, including frequency, distribution and plausibility of research variables, as well as the occurrence of missing and extreme values. To simplify the installation process, mosaicQA has been released as an official R package.
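mosaicQA itself is distributed as an R package; purely to illustrate the kind of indicators it reports (frequencies, distributions, missing and extreme values), here is a rough Python/pandas analog on a toy dataset:

```python
# Sketch: basic data-quality indicators per column (missingness, distribution,
# extreme values for numeric data, frequencies for categorical data).
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age":    [34, 51, np.nan, 46, 230],   # 230 is an implausible extreme
    "smoker": ["yes", "no", "no", None, "yes"],
})

for col in df.columns:
    s = df[col]
    print(f"--- {col} ---")
    print(f"missing: {s.isna().sum()} of {len(s)}")
    if pd.api.types.is_numeric_dtype(s):
        q1, q3 = s.quantile([0.25, 0.75])
        iqr = q3 - q1
        outliers = s[(s < q1 - 1.5 * iqr) | (s > q3 + 1.5 * iqr)]
        print(f"distribution: mean={s.mean():.1f}, sd={s.std():.1f}")
        print(f"extreme values: {list(outliers.dropna())}")
    else:
        print("frequencies:", s.value_counts(dropna=True).to_dict())
```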
22

Deeptha, R., and Rajeswari Mukesh. "The Framework for Testing of Web Services through Actions in Addition to Scripts." Applied Mechanics and Materials 490-491 (January 2014): 1617–23. http://dx.doi.org/10.4028/www.scientific.net/amm.490-491.1617.

Abstract:
As Web services link modules within and across enterprises, dynamically and aggressively testing Web services has become crucial. Comprehensive functional, performance, interoperability and vulnerability testing form the pillars of Web services testing; only by adopting a comprehensive testing discipline can enterprises ensure that their Web services are robust, scalable, interoperable, and secure. The overall functionality of a Web service is easy to test only if we methodically establish trust in the application's components (services) before combining them to complete the application. Current Web service technology includes various testing tools for manipulating and generating test cases, but these tools and approaches compromise security and execution time and consume more resources. Existing methodologies generate test cases only for low-end Web services and a limited number of requests; because of these constraints, we built a new testing framework. In this paper we introduce a new basis for testing the actions, scripts and links of Web services through test cases, using SOAP Web services with SOA. Test case generation and testing reports give accurate testing results and test cases; the test cases are generated using the Java JUnit testing tool. We implemented our approach on a Java-based platform in an efficient and secure manner.
23

Oh, Hyo-Jung, Dong-Hyun Won, Chonghyuck Kim, Sung-Hee Park, and Yong Kim. "Design and implementation of crawling algorithm to collect deep web information for web archiving." Data Technologies and Applications 52, no. 2 (April 3, 2018): 266–77. http://dx.doi.org/10.1108/dta-07-2017-0053.

Abstract:
Purpose: The purpose of this paper is to describe the development of an algorithm for realizing web crawlers that automatically collect dynamically generated webpages from the deep web. Design/methodology/approach: This study proposes and develops an algorithm to collect web information as if the web crawler gathers static webpages, by managing script commands as links. The proposed web crawler actually experiments with the algorithm by collecting deep webpages. Findings: Among the findings of this study is that if the actual crawling process provides search results as script pages, the outcome only collects the first page. However, the proposed algorithm can collect deep webpages in this case. Research limitations/implications: To use a script as a link, a human must first analyze the web document. This study uses the web browser object provided by Microsoft Visual Studio as a script launcher, so it cannot collect deep webpages if the web browser object cannot launch the script, or if the web document contains script errors. Practical implications: The research results show deep webs are estimated to have 450 to 550 times more information than surface webpages, and it is difficult to collect web documents. However, this algorithm helps to enable deep web collection through script runs. Originality/value: This study presents a new method to be utilized with script links instead of adopting previous keywords. The proposed algorithm is available as an ordinary URL. From the conducted experiment, analysis of scripts on individual websites is needed to employ them as links.
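The core idea — managing script commands as links — can be sketched as below. The HTML snippet and the goPage pagination function are hypothetical, and the actual system drives a web browser object to execute the scripts rather than parsing them with regular expressions:

```python
# Sketch: treat script calls found in a page as "links" alongside ordinary
# hrefs, so JavaScript-generated result pages can be queued for crawling.
import re

html = """
<a href="/doc/1">doc 1</a>
<a href="javascript:goPage(2)">next</a>
<span onclick="goPage(3)">3</span>
"""

STATIC = re.compile(r'href="(/[^"]+)"')    # ordinary static links
SCRIPT = re.compile(r'goPage\((\d+)\)')    # hypothetical pagination function

frontier = [("static", m) for m in STATIC.findall(html)]
frontier += [("script", f"goPage({n})") for n in SCRIPT.findall(html)]
print(frontier)
# [('static', '/doc/1'), ('script', 'goPage(2)'), ('script', 'goPage(3)')]
```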
24

Wang, Guang-Dong, Yong Wang, Zhen Zeng, Jun-Ming Mao, Qin-Liu He, Qin Yao, and Ke-Ping Chen. "Simulation of Chordate Intron Evolution Using Randomly Generated and Mutated Base Sequences." Evolutionary Bioinformatics 16 (January 2020): 117693432090310. http://dx.doi.org/10.1177/1176934320903108.

Abstract:
Introns are well known for their high variation not only in length but also in base sequence. The evolution of intron sequences has aroused broad interest in recent decades, yet very little is known about their evolutionary pattern owing to the lack of efficient analytical methods. In this study, we designed two evolutionary models, mutation-and-deletion (MD) and mutation-and-insertion (MI), to simulate intron evolution using randomly generated and mutated bases, with reference to the phylogenetic tree constructed from 14 chordate introns of the TF4 (transcription factor-like protein 4) gene. A comparison of attributes between the model-generated sequences and the chordate introns showed that the MD model with proper parameter settings could generate sequences whose attributes match the chordate introns, whereas the MI model failed to do so under any parameter settings. These data suggest that the surveyed chordate introns evolved from a long ancestral sequence through gradual reduction in length. The established methodology provides an effective means to study the evolutionary pattern of intron sequences from organisms of various taxonomic groups. (C++ scripts of the MD and MI models are available upon request.)
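The authors' MD and MI models are implemented in C++; a toy Python version conveys the contrast between the two (mutation rates and indel lengths below are invented):

```python
# Sketch: mutation-and-deletion (MD) shrinks a long random ancestor sequence,
# while mutation-and-insertion (MI) grows a short one.
import random

BASES = "ACGT"

def mutate(seq, rate=0.01):
    """Apply point mutations: each base is replaced with probability `rate`."""
    return "".join(random.choice(BASES) if random.random() < rate else b for b in seq)

def md_step(seq, del_len=5):
    """One MD generation: point mutations, then delete a random segment."""
    seq = mutate(seq)
    if len(seq) > del_len:
        i = random.randrange(len(seq) - del_len)
        seq = seq[:i] + seq[i + del_len:]
    return seq

def mi_step(seq, ins_len=5):
    """One MI generation: point mutations, then insert random bases."""
    seq = mutate(seq)
    i = random.randrange(len(seq) + 1)
    return seq[:i] + "".join(random.choice(BASES) for _ in range(ins_len)) + seq[i:]

ancestor = "".join(random.choice(BASES) for _ in range(2000))
md = mi = ancestor
for _ in range(100):
    md, mi = md_step(md), mi_step(mi)
print(len(md), len(mi))   # MD shrinks toward ~1500 bases, MI grows toward ~2500
```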
25

Suvanajata, Rapit. "Movement Navigator: A Relational Syntax Study on Movement and Space at King’s Cross and Piccadilly Circus Underground Stations, London, UK." Journal of Architectural/Planning Research and Studies (JARS) 3 (December 30, 2005): 85–114. http://dx.doi.org/10.56261/jars.v3.169044.

Abstract:
The article explores the design and analytical method Relational Syntax [1], using both a syntactical approach as well as on-site observations. The architecture of King’s Cross and Piccadilly Circus Underground stations in London is used as the laboratory in which a theoretical discussion on movement and dimensional relations in architectural space is conducted. The two stations are among the most well-known and used in the London Underground system. Observations were made at these highly-used stations in order to establish an overall understanding of the spatial mechanism through social and natural movement. Considering the case studies as both texts and experiences, the article shows that spatial analysis and bodily movement can be explained, compared and put into sets of relations that can be pre-established or scripted during architects’ design activities. It is argued that the concept of ‘script’ [2] can be used bi-directionally in both the design and analysis of architecture. The research and arguments presented in this paper are being developed to form the basis of an application tool. Based on the theory of Relational Syntax, this application tool can be used to process building requirements generated from the design or analysis of a piece of architecture into scripts in order to systematically and aesthetically describe and generate spatial relations in buildings.
26

Akhand, M. A. H., Mahtab Ahmed, M. M. Hafizur Rahman, and Md Monirul Islam. "Convolutional Neural Network Training incorporating Rotation-Based Generated Patterns and Handwritten Numeral Recognition of Major Indian Scripts." IETE Journal of Research 64, no. 2 (July 25, 2017): 176–94. http://dx.doi.org/10.1080/03772063.2017.1351322.

27

Tyerman, Jane, Marian Luctkar-Flude, Lillian Chumbley, Michelle Lalonde, Laurie Peachey, Tammie McParland, and Deborah Tregunno. "Developing virtual simulation games for presimulation preparation: A user-friendly approach for nurse educators." Journal of Nursing Education and Practice 11, no. 7 (March 12, 2021): 10. http://dx.doi.org/10.5430/jnep.v11n7p10.

Abstract:
Objective: Engaging presimulation activities are needed to better prepare undergraduate nursing students to participate in clinical simulations.Methods: Design: We created a series of virtual simulation games (VSGs) to enhance presimulation preparation. This involved creating learning outcomes, assessment rubrics, decision point maps with rationale, and filming scripts. Setting: This was a multi-site project involving four universities across Ontario, Canada. Participants: Games were to be embedded within undergraduate nursing courses and used as presimulation preparation before participating in a traditional live simulation. Four existing bilingual peer-reviewed simulation scenarios were transformed into VSGs to be used for presimulation preparation. The team selected critical decision-points from each scenario to form the basis of each VSG, created filming scripts, and filmed and assembled video clips.Results: Our project generated four bilingual presimulation preparation VSGs with a user-friendly, low-cost VSG design process.Conclusions: We have demonstrated that nurse educators can easily create contextually relevant VSGs addressing program gaps.
28

ALAEI, ALIREZA, UMAPADA PAL, and P. NAGABHUSHAN. "DATASET AND GROUND TRUTH FOR HANDWRITTEN TEXT IN FOUR DIFFERENT SCRIPTS." International Journal of Pattern Recognition and Artificial Intelligence 26, no. 04 (June 2012): 1253001. http://dx.doi.org/10.1142/s0218001412530011.

Abstract:
In document image analysis (DIA), and especially in handwritten document recognition, standard databases play a significant role in evaluating the performance of algorithms and comparing the results obtained by different groups of researchers. The field of DIA for Indo-Persian documents is still in its infancy compared with Latin script-based documents, and standard datasets are not yet available in the literature. This paper is an effort towards alleviating this gap. In this paper, an unconstrained handwritten dataset containing documents in Persian, Bangla, Oriya and Kannada (PBOK) is introduced. PBOK contains 707 text-pages written in four different languages (Persian, Bangla, Oriya and Kannada) by 436 individuals. The total numbers of text-lines, words/subwords and characters are 12,565, 104,541 and 423,980, respectively. Most documents in the PBOK dataset contain overlapping or touching text-lines, and the average number of text-lines per page is 18. Two types of ground truth, based on pixel information and content information, are generated for the dataset. Because of such ground truths, the PBOK dataset can be utilized in many areas of document image processing, e.g. text-line segmentation, word segmentation and word recognition. To provide an insight for other researchers, recent text-line segmentation results on this dataset are also reported.
29

Cho-Hi Joh and 문지현. "The Effects of Role-Play Activities Based upon Student-Generated Scripts Using Label-Sheets on Primary English Learners." English21 23, no. 3 (September 2010): 209–33. http://dx.doi.org/10.35771/engdoi.2010.23.3.010.

30

Bajcsi, Anna, Barbara Botos, Péter Bajkó, and Zalán Bodó. "Can You Guess the Title? Generating Emoji Sequences for Movies." Studia Universitatis Babeș-Bolyai Informatica 67, no. 1 (July 3, 2022): 5–20. http://dx.doi.org/10.24193/subbi.2022.1.01.

Abstract:
"In the culture of the present emojis play an important role in written/typed communication, having a primary role of supplementing the words with emotional cues. While in different cultures emojis can be interpreted and thus used differently, a small set of emojis have clear meaning and strong sentiment polarity. In this work we study how to map natural language texts to emoji sequences, more precisely, we automatically assign emojis to movie subtitles/scripts. The pipeline of the proposed method is as follows: first the most relevant words are extracted from the movie subtitle, and then these are mapped to emojis. In order to perform the mapping, three methods are proposed: a lexical matching-based, a word embedding-based and a combined approach. To demonstrate the viability of the approach, we list some of the generated emojis for a randomly selected movie subset, showing also the deficiencies of the method in generating guessable sequences. Evaluation is performed via quizzes completed by human participants. Keywords and phrases: natural language processing, emoji, keyword extraction, movie scripts, lexical matching, word embedding. "
31

Bansal, V. K., and Mahesh Pal. "Quantity Takeoffs and Detailed Buildings Cost Estimation Using Geographic Information Systems." International Journal of Information Technology Project Management 4, no. 3 (July 2013): 66–80. http://dx.doi.org/10.4018/jitpm.2013070105.

Abstract:
This paper presents a Geographic Information System (GIS)-based cost estimation methodology that can increase the productivity of the quantity estimator by reducing the manual work in quantity takeoffs. The proposed methodology also eliminates the omission or duplication of items of work by visualizing each component corresponding to those items in three dimensions (3D). Several scripts developed within ArcView, a desktop GIS-based mapping system, are used to extract the necessary dimensions from the design drawings (prepared in the GIS environment) and to perform the calculations for quantity takeoffs. An accurate Bill of Quantities (BOQ) can be generated on the basis of the dimensions of the various data themes. The methodology stores construction resource data (materials, workers, and equipment) in tabular form within the GIS environment, with separate tables used for each project to generate the BOQ, Bill of Materials (BOM), and labor requirements.
32

Flantua, S. G. A., M. Blaauw, and H. Hooghiemstra. "Geochronological database and classification system for age uncertainties in Neotropical pollen records." Climate of the Past 12, no. 2 (February 23, 2016): 387–414. http://dx.doi.org/10.5194/cp-12-387-2016.

Abstract:
The newly updated inventory of palaeoecological research in Latin America offers an important overview of sites available for multi-proxy and multi-site purposes. From the collected literature supporting this inventory, we collected all available age model metadata to create a chronological database of 5116 control points (e.g. 14C, tephra, fission track, OSL, 210Pb) from 1097 pollen records. Based on this literature review, we present a summary of chronological dating and reporting in the Neotropics. Difficulties and recommendations for chronology reporting are discussed. Furthermore, for 234 pollen records in northwest South America, a classification system for age uncertainties is implemented based on chronologies generated with updated calibration curves. With these outcomes age models are produced for those sites without an existing chronology, alternative age models are provided for researchers interested in comparing the effects of different calibration curves and age–depth modelling software, and the importance of uncertainty assessments of chronologies is highlighted. Sample resolution and temporal uncertainty of ages are discussed for different time windows, focusing on events relevant for research on centennial- to millennial-scale climate variability. All age models and developed R scripts are publicly available through figshare, including a manual to use the scripts.
33

Jones, Jonathan P. "“And So We Write”: Reflective Practice in Ethnotheatre and Devised Theatre Projects." LEARNing Landscapes 15, no. 1 (June 23, 2022): 173–85. http://dx.doi.org/10.36510/learnland.v15i1.1061.

Abstract:
This paper follows the author’s trajectory as he collaboratively experimented with ethnodrama (theatre scripts generated from interviews, media artifacts, and written media) and devised theatre performance (theatre collaboratively created with a group), culminating in the analysis of a performance with high school students combining elements of these forms. The author defines the forms, illuminates how he engaged with them over time, and how he adapted elements of them for work with his high school students. The author proposes a framework deduced from these experiences as a provocation for future performance projects and as a demonstration of an educator’s reflective practice.
34

Penev, Petar I., Holly M. McCann, Caeden D. Meade, Claudia Alvarez-Carreño, Aparna Maddala, Chad R. Bernier, Vasanta L. Chivukula, et al. "ProteoVision: web server for advanced visualization of ribosomal proteins." Nucleic Acids Research 49, W1 (May 17, 2021): W578—W588. http://dx.doi.org/10.1093/nar/gkab351.

Abstract:
ProteoVision is a web server designed to explore protein structure and evolution through simultaneous visualization of multiple sequence alignments, topology diagrams and 3D structures. Starting with a multiple sequence alignment, ProteoVision computes conservation scores and a variety of physicochemical properties and simultaneously maps and visualizes alignments and other data on multiple levels of representation. The web server calculates and displays frequencies of amino acids. ProteoVision is optimized for ribosomal proteins but is applicable to the analysis of any protein. It handles internally generated and user-uploaded alignments and connects them with a selected structure, found in the PDB or uploaded by the user, and it can generate de novo topology diagrams from three-dimensional structures. All displayed data is interactive and can be saved in various formats: publication-quality images, external datasets, or PyMOL scripts. ProteoVision enables detailed study of protein fragments defined by the Evolutionary Classification of protein Domains (ECOD) classification. ProteoVision is available at http://proteovision.chemistry.gatech.edu/.
35

GOLOB, Nina. "Foreword." Acta Linguistica Asiatica 9, no. 1 (January 30, 2019): 5–6. http://dx.doi.org/10.4312/ala.9.1.5-6.

Abstract:
In the midst of cold northern winds and a landscape covered with snow, we are pleased to announce the first ALA issue of the year 2019, which contains six research articles. Warm congratulations go to all the authors, and words of appreciation to the editorial team and the recently enlarged proofreading team, who have been working very hard to offer state-of-the-art contemporary linguistic research in this journal. The present issue is opened by Mayuri J. DILIP and Rajesh KUMAR, who present a unified account of the licensing conditions of Negative Polarity Items (NPIs) in Telugu. In their work "Negative Polarity Items in Telugu" they analyze the distribution of NPIs in complex sentences with embedded clauses, and conclude that licensing by c-commanding negation is established at the base-generated position of the NPI. Kun SUN, with his article "The Integration Functions of Topic Chains in Chinese Discourse", thoroughly presents the long and extensive Chinese research tradition on topic chains, and re-examines their core characteristics with the help of the so-called "integration functions". The following paper, "Tracing the Identity and Ascertaining the Nature of Brahmi-derived Devanagari Script" by Krishna Kumar PANDEY and Smita JHA, explores the orthographic design of Brahmi-derived scripts. The authors argue that such scripts should not be described with the existing linguistic properties of alphabetic and syllabic scripts but should instead gain their own categorization with a unique descriptor. Chikako SHIGEMORI BUČAR contributed the article "Image of Japan among Slovenes", in which she presents the process and mechanism of borrowing from Japanese into Slovene. The conclusions briefly touch on the image of Japan seen through the borrowing process and consolidated loanwords, and predict possible developments of borrowing in the near future. Another interesting paper, "Understanding Sarcastic Metaphorical Expression in Hindi through Conceptual Integration Theory", was authored by Sandeep Kumar SHARMA and Sweta SINHA. Based on a corpus of five thousand sentences, the authors examine the abstract notion of sarcasm within the framework of conceptual integration theory, with special reference to the Hindi language. The findings aim to provide a theoretical understanding of how Hindi sarcasm is perceived among native speakers. And last but not least, Điệp Thi Nhu NGUYỄN, An-Vinh LƯƠNG, and Điền ĐINH observe the research backlog in the area of Vietnamese text readability and write their paper "Affection of the part of speech elements in Vietnamese text readability" to encourage researchers to further explore the field and put Vietnamese findings on the world's map. The editors and Editorial Board wish the regular and new readers of the ALA journal a pleasant read, full of inspiration.
36

Rehman, Mohammed Suhail, Silu Huang, and Aaron J. Elmore. "A demonstration of RELIC." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 2795–98. http://dx.doi.org/10.14778/3476311.3476347.

Abstract:
The ad-hoc, heterogeneous process of modern data science typically involves loading, cleaning, and mutating dataset(s) into multiple versions recorded as artifacts by various tools within a single data science workflow. Lineage information, including the source datasets, data transformation programs or scripts, or manual annotations, is rarely captured, making it difficult to infer the relationships between artifacts in a given workflow retrospectively. We demonstrate Relic, a tool to retrospectively infer the lineage of data artifacts generated as a result of typical data science workflows, with an interactive demonstration that allows users to input artifact files and visualize the inferred lineage in a web-based setting.
37

Binder, Anna Ronja Dorothea, Andrej-Nikolai Spiess, and Michael W. Pfaffl. "Modelling and Differential Quantification of Electric Cell-Substrate Impedance Sensing Growth Curves." Sensors 21, no. 16 (August 5, 2021): 5286. http://dx.doi.org/10.3390/s21165286.

Abstract:
Measurement of cell surface coverage has become a common technique for the assessment of growth behavior of cells. As an indirect measurement method, this can be accomplished by monitoring changes in electrode impedance, which constitutes the basis of electric cell-substrate impedance sensing (ECIS). ECIS typically yields growth curves where impedance is plotted against time, and changes in single cell growth behavior or cell proliferation can be displayed without significantly impacting cell physiology. To provide better comparability of ECIS curves in different experimental settings, we developed a large toolset of R scripts for their transformation and quantification. They allow importing growth curves generated by ECIS systems, edit, transform, graph and analyze them while delivering quantitative data extracted from reference points on the curve. Quantification is implemented through three different curve fit algorithms (smoothing spline, logistic model, segmented regression). From the obtained models, curve reference points such as the first derivative maximum, segmentation knots and area under the curve are then extracted. The scripts were tested for general applicability in real-life cell culture experiments on partly anonymized cell lines, a calibration setup with a cell dilution series of impedance versus seeded cell number and finally IPEC-J2 cells treated with 1% and 5% ethanol.
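The toolset itself is written in R; as an illustration of one of the three fits named above (the logistic model) and of the reference points extracted from it, here is a Python/SciPy sketch on synthetic data:

```python
# Sketch: fit a four-parameter logistic to an impedance-vs-time curve and
# extract reference points (inflection, maximal growth rate, AUC).
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, lower, upper, t_mid, slope):
    """Four-parameter logistic growth curve."""
    return lower + (upper - lower) / (1.0 + np.exp(-slope * (t - t_mid)))

t = np.linspace(0, 48, 97)                                    # hours
rng = np.random.default_rng(1)
impedance = logistic(t, 200, 1200, 20, 0.3) + rng.normal(0, 15, t.size)

popt, _ = curve_fit(logistic, t, impedance, p0=[150, 1000, 15, 0.1])
lower, upper, t_mid, slope = popt

# For a logistic curve the first-derivative maximum sits at t_mid, with
# growth rate slope*(upper-lower)/4; AUC via the trapezoid rule.
fit = logistic(t, *popt)
auc = float(np.sum((fit[1:] + fit[:-1]) / 2 * np.diff(t)))
print(f"inflection t={t_mid:.1f} h, max rate={slope*(upper-lower)/4:.1f}, AUC={auc:.0f}")
```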
38

Yekkala, Indu, and Sunanda Dixit. "Prediction of Heart Disease Using Random Forest and Rough Set Based Feature Selection." International Journal of Big Data and Analytics in Healthcare 3, no. 1 (January 2018): 1–12. http://dx.doi.org/10.4018/ijbdah.2018010101.

Abstract:
A great deal of data is generated by the medical industry, and this data is often complex in nature—electronic records, handwritten scripts, etc.—since it comes from multiple sources. The complexity and sheer volume of this data necessitate techniques that can extract insight from it quickly and efficiently. These insights can not only help diagnose disease but also predict and prevent it. One such application is to cardiovascular disease: heart disease, or coronary artery disease (CAD), is one of the major causes of death all over the world. Comprehensive research using single data mining techniques has not resulted in acceptable accuracy, so further research is being carried out on the effectiveness of hybridizing more than one technique to increase accuracy in the diagnosis of heart disease. In this article, the authors work on the heart Statlog dataset collected from the UCI repository, using the Random Forest algorithm and feature selection based on rough sets to accurately predict the occurrence of heart disease.
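As a hedged sketch of the pipeline named here — on synthetic stand-in data rather than the UCI Statlog heart data, and with the rough-set feature selection approximated by the forest's own importance ranking (a substitution, not the paper's method):

```python
# Sketch: random forest classification with a simple feature-selection step.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(270, 13))                   # Statlog-like shape: 270 x 13
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 270) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[-6:]   # keep the 6 "selected" features

rf_sel = RandomForestClassifier(n_estimators=200, random_state=0)
rf_sel.fit(X_tr[:, top], y_tr)
print("accuracy:", accuracy_score(y_te, rf_sel.predict(X_te[:, top])))
```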
39

Mattos, Sérgio Henrique Vannucchi Leme de, Luiz Eduardo Vicente, Andrea Koga Vicente, Cláudio Bielenki Júnior, and José Roberto Castilho Piqueira. "Metrics based on information entropy applied to evaluate complexity of landscape patterns." PLOS ONE 17, no. 1 (January 20, 2022): e0262680. http://dx.doi.org/10.1371/journal.pone.0262680.

Abstract:
Landscape is an ecological category represented by a complex system formed by interactions between society and nature. Spatial patterns of different land uses present in a landscape reveal past and present processes responsible for its dynamics and organisation. Measuring the complexity of these patterns (in the sense of their spatial heterogeneity) allows us to evaluate the integrity and resilience of these complex environmental systems. Here, we show how landscape metrics based on information entropy can be applied to evaluate the complexity (in the sense of spatial heterogeneity) of patches patterns, as well as their transition zones, present in a Cerrado conservation area and its surroundings, located in south-eastern Brazil. The analysis in this study aimed to elucidate how changes in land use and the consequent fragmentation affect the complexity of the landscape. The scripts CompPlex HeROI and CompPlex Janus were created to allow calculation of information entropy (He), variability (He/Hmax), and López-Ruiz, Mancini, and Calbet (LMC) and Shiner, Davison, and Landsberg (SDL) measures. CompPlex HeROI enabled the calculation of these measures for different regions of interest (ROIs) selected in a satellite image of the study area, followed by comparison of the complexity of their patterns, in addition to enabling the generation of complexity signatures for each ROI. CompPlex Janus made it possible to spatialise the results for these four measures in landscape complexity maps. As expected, both for the complexity patterns evaluated by CompPlex HeROI and the complexity maps generated by CompPlex Janus, the areas with vegetation located in a region of intermediate spatial heterogeneity had lower values for the He and He/Hmax measures and higher values for the LMC and SDL measurements. So, these landscape metrics were able to capture the behaviour of the patterns of different types of land use present in the study area, bringing together uses linked to vegetation with increased canopy coverage and differentiating them from urban areas and transition areas that mix different uses. Thus, the algorithms implemented in these scripts were demonstrated to be robust and capable of measuring the variability in information levels from the landscape, not only in terms of spatial datasets but also spectrally. The automation of measurement calculations, owing to informational entropy provided by these scripts, allows a quick assessment of the complexity of patterns present in a landscape, and thus, generates indicators of landscape integrity and resilience.
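The four measures can be written down directly. The definitions below follow the standard forms (LMC as normalized entropy times disequilibrium; SDL as the product of H/Hmax and its complement); whether the CompPlex scripts normalize in exactly this way is an assumption:

```python
# Sketch: He, He/Hmax, LMC and SDL for a histogram of land-use classes.
import numpy as np

def complexity_measures(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    n = p.size
    h = -np.sum(p * np.log2(p))             # information entropy He
    h_norm = h / np.log2(n) if n > 1 else 0.0
    d = np.sum((p - 1.0 / n) ** 2)          # disequilibrium: distance from uniformity
    return {"He": h, "He/Hmax": h_norm,
            "LMC": h_norm * d, "SDL": h_norm * (1 - h_norm)}

# Pixel counts per land-use class inside a region of interest (toy numbers).
print(complexity_measures([500, 300, 150, 50]))
```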
APA, Harvard, Vancouver, ISO, and other styles
40

Medvedev, V. V. "The contents and laws of soil anthropogenous evolution." Fundamental and Applied Soil Science 15, no. 1-2 (January 15, 2014): 17–32. http://dx.doi.org/10.15421/041402.

Full text
Abstract:
Soils that have long been ploughed are typical polygenetic formations: alongside natural factors, anthropogenic factors play a significant role in their formation. Under mechanical, chemical, reclamative and other kinds of influence, natural soils lose their inherent structure, properties and regimes. Anisotropy, spatial heterogeneity and preferential descending and ascending moisture flows are amplified; new types of horizontal and vertical soil structure are formed; equilibrium bulk density, consolidation and the quantity of false aggregates grow; the structure of the pore space changes; aggregation processes are markedly inhibited; the convertibility of properties and regimes—the basic condition for counteracting degradation processes—is lost; and the rhythm of soil formation characteristic of natural soils is broken by the activation of relaxation processes. Significant changes occur in the finely dispersed mineral and organic fractions: total humus decreases while its lability increases, claying is observed, and local acidification is noted owing to the increased depth of wetting and the lowering of the carbonate level. It is therefore established that, under unbalanced and poor-quality land tenure, even simple reproduction of soil fertility is impossible, and the notion of an equilibrium (stable) condition of soil properties and regimes was most likely formed incorrectly on the basis of insufficiently long-term research. As a result of anthropogenic evolution over a rather short historical interval, a new body—anthropogenically transformed soils—has been generated. This fact demands reflection in soil classification and corrections in the study, management and use of soil fertility. Possible scripts (scenarios) of further anthropogenic soil evolution are discussed: degradation, seeming balance, and "reasonable" precision agriculture. Degradation is the most probable scenario if modern unbalanced and poor-quality agriculture is preserved; under these conditions, degradation can gradually become the factor that forms an agrisoil. A seeming balance (seeming equilibrium) is the least probable scenario: it is characteristic only of the short term, since under a long-term deficit balance of elements and excessive mechanical loading, soil evolution cannot be in equilibrium. Steady development is the scenario to which one should aspire ("reasonable", or intelligent, agriculture). The scenario for the immediate future is precision agriculture in place of generalized zonal technologies, taking into account spatial diversity, the history of a field and the stage of its anthropogenic evolution. Realizing the favourable scenario of anthropogenic soil evolution requires organized research using in situ and on-line regimes, landscape soil-ecological transects, complex stationary experiments with experimental-design methods, effective methods of forecasting soil processes and, as a whole, an exemplary system of scientific monitoring. Uncontrolled soil use in the country must not be allowed.
APA, Harvard, Vancouver, ISO, and other styles
41

Banyard, Victoria L., Katie M. Edwards, Elizabeth A. Moschella, and Katherine M. Seavey. "“Everybody’s Really Close-Knit”: Disconnections Between Helping Victims of Intimate Partner Violence and More General Helping in Rural Communities." Violence Against Women 25, no. 3 (June 11, 2018): 337–58. http://dx.doi.org/10.1177/1077801218768714.

Full text
Abstract:
Social support is key to well-being for victims of intimate partner violence (IPV), and bystanders have an important role to play in preventing IPV by taking action when there is risk for violence. The current study used qualitative interviews to explore young adults’ perspectives on helping in situations of IPV, and on more general helping, in the rural communities in which they resided. Participants were 74 individuals between the ages of 18 and 24 years from 16 rural counties across the eastern United States. Participants generally described their communities as close-knit and helpful, especially around daily hassles (e.g., a broken-down car) and unusual circumstances (e.g., a house fire). Although participants generated ways in which community members help IPV victims, these mostly focused on providing support or taking action in the aftermath of IPV as opposed to more preventive actions. A lack of financial resources was uniquely cited as a barrier to more general helping, whereas concerns about privacy and perceived lack of deservingness of help were barriers to both general helping and helping in IPV situations, although these were more pronounced in IPV situations. Taken together, these results suggest that although people generally see their communities as helpful and close-knit, these perceptions and scripts did not necessarily translate into helping in situations of IPV. Bystander intervention programs are needed that provide more specific helping scripts for IPV.
APA, Harvard, Vancouver, ISO, and other styles
42

Clouse, Ronald M., Benjamin L. de Bivort, and Gonzalo Giribet. "A phylogenetic analysis for the South-east Asian mite harvestman family Stylocellidae (Opiliones:Cyphophthalmi) – a combined analysis using morphometric and molecular data." Invertebrate Systematics 23, no. 6 (2009): 515. http://dx.doi.org/10.1071/is09044.

Full text
Abstract:
In an effort to place type specimens lacking molecular data into a phylogenetic framework ahead of a taxonomic revision, we used morphometric data, both alone and in combination with a molecular dataset, to generate phylogenetic hypotheses under the parsimony criterion for 107 members of the South-east Asian mite harvestman family Stylocellidae (Arachnida: Opiliones: Cyphophthalmi). For the morphometric analyses, we used undiscretised characters, analysed for independence and collapsed by principal components analysis (PCA) when dependent. Two challenges not previously encountered in the use of this method were (a) handling terminals with missing data, necessitated by the inclusion of old and damaged type specimens, and (b) controlling for extreme variation in size. Custom scripts for independence analysis were modified to accommodate missing data whereby placeholder numbers were used during PCA for missing measurements. Size was controlled in four ways: choosing characters that avoided misleading size information and were easily scaled; using only locally scaled measurements; adjusting ratios by y-intercepts; and collapsing dependent characters into one. These steps removed enough size information that miniaturised and large species, suspected from molecular and discrete morphological studies to be closely related, were closely placed using morphometric data alone. Both morphometric and combined analyses generated relationships that positioned type specimens in agreement with taxonomic expectations and our knowledge of the family from prior studies. The hypotheses generated here provide new direction in linking molecular analyses with established taxonomy in this large group of South-east Asian arachnids.
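The placeholder strategy described here—substituting values for missing measurements so that damaged type specimens can still enter a PCA—might look roughly like the following sketch, which uses per-character means as placeholders; the authors' custom scripts are not reproduced.

```python
# Sketch: mean-value placeholders for missing morphometric measurements
# before PCA, so damaged type specimens can still be included.
import numpy as np

def pca_with_placeholders(X: np.ndarray, n_components: int = 2):
    """X: specimens x measurements, with np.nan marking missing values."""
    col_means = np.nanmean(X, axis=0)
    filled = np.where(np.isnan(X), col_means, X)   # placeholder numbers
    centred = filled - filled.mean(axis=0)
    # PCA via SVD of the centred data matrix
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T           # specimen scores

X = np.array([[1.2, 3.4, 0.9],
              [1.1, np.nan, 1.0],                  # damaged specimen
              [1.5, 3.9, np.nan]])
print(pca_with_placeholders(X))
```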
APA, Harvard, Vancouver, ISO, and other styles
43

Tu, Chengcheng, and Emma K. T. Benn. "RRApp, a robust randomization app, for clinical and translational research." Journal of Clinical and Translational Science 1, no. 6 (December 2017): 323–27. http://dx.doi.org/10.1017/cts.2017.310.

Full text
Abstract:
While junior clinical researchers at academic medical institutions across the US often desire to be actively engaged in randomized clinical trials, they frequently lack adequate resources and research capacity to design and implement them. This insufficiency hinders their ability to generate a rigorous randomization scheme that minimizes selection bias and yields comparable groups. Moreover, user-friendly online randomization tools are scarce. Thus, we developed a free, robust randomization app (RRApp). RRApp incorporates six major randomization techniques: simple randomization, stratified randomization, block randomization, permuted block randomization, stratified block randomization, and stratified permuted block randomization. The design phase has been completed, including robust server scripts and a straightforward user interface built with the “shiny” package in R. Randomization schemes generated in RRApp can be input directly into the Research Electronic Data Capture (REDCap) system. RRApp has been evaluated by biostatisticians and junior clinical faculty at the Icahn School of Medicine at Mount Sinai. Constructive feedback regarding the quality and functionality of RRApp was also provided by attendees of the 2016 Association for Clinical and Translational Statisticians Annual Meeting. RRApp aims to educate early-stage clinical trialists about the importance of randomization while simultaneously assisting them, in a user-friendly fashion, to generate reproducible randomization schemes.
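Of the six schemes RRApp supports, permuted block randomization is representative. A minimal sketch is given below (illustrative only; RRApp itself is an R/shiny application, and the block sizes and arm labels here are arbitrary).

```python
# Sketch: permuted block randomization for two arms.
# Illustrative only; RRApp itself is an R/shiny application.
import random

def permuted_block_randomization(n_subjects, arms=("A", "B"),
                                 block_sizes=(2, 4), seed=42):
    rng = random.Random(seed)          # fixed seed => reproducible schedule
    schedule = []
    while len(schedule) < n_subjects:
        size = rng.choice(block_sizes)
        size -= size % len(arms)                  # keep each block balanced
        block = list(arms) * (size // len(arms))
        rng.shuffle(block)                        # permute within the block
        schedule.extend(block)
    return schedule[:n_subjects]

print(permuted_block_randomization(10))  # e.g. ['B', 'A', 'A', 'B', ...]
```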
APA, Harvard, Vancouver, ISO, and other styles
44

Álvarez Veinguer, Aurora, Rocío García Soto, and Dario Ranocchiari. "“Ya no estás sola”: tramas, personajes y guiones. Experimentaciones con la ficción radiofónica desde la etnografía colaborativa." Empiria. Revista de metodología de ciencias sociales, no. 57 (January 9, 2023): 123–44. http://dx.doi.org/10.5944/empiria.57.2023.36432.

Full text
Abstract:
The purpose of this paper is to reflect on an experience of fictional ethnographic writing undertaken in the course of collaborative research. Through an exercise of memory, imagination, and investigation, we created a radio fiction series together with a social movement that fights for the right to decent housing: Stop Desahucios Granada 15M. Starting from the materiality of one of the scripts that form part of our series, a radio soap opera, we first show how we jointly wrote the six episodes of the series by constructing the plots and characters and then the dialogues. Secondly, we reflect on the performative aspects of the production process, in which we transgressed the scripts. Thirdly, we address two “mutations” that the collaborative framework of our fiction-based research generated in our way of doing and communicating ethnography. Finally, we explain why we consider the writing of the scripts a central element of our collaborative ethnography.
APA, Harvard, Vancouver, ISO, and other styles
45

Permar, Justin, and Brian Magerko. "A Conceptual Blending Approach to the Generation of Cognitive Scripts for Interactive Narrative." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 9, no. 4 (June 30, 2021): 44–50. http://dx.doi.org/10.1609/aiide.v9i4.12621.

Full text
Abstract:
This paper presents a computational approach to the generation of cognitive scripts employed in freeform activities such as pretend play. Pretend play activities involve a high degree of improvisational narrative construction using cognitive scripts acquired from everyday experience, cultural experiences, and previous play experiences. Our computational model of cognitive script generation, based upon conceptual integration theory, applies operations to familiar scripts to generate new blended scripts.
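As a toy illustration of the idea (not the authors' model), blending two event-sequence scripts can be pictured as projecting one script's roles onto the other's familiar event structure:

```python
# Toy sketch of blending two cognitive scripts: keep one script's event
# structure but import counterpart roles from the other. Not the authors' model.
RESTAURANT = [("enter", "diner"), ("order", "diner"),
              ("serve", "waiter"), ("pay", "diner"), ("exit", "diner")]
SPACESHIP = [("enter", "astronaut"), ("launch", "pilot"),
             ("serve", "robot"), ("land", "pilot"), ("exit", "astronaut")]

def blend(base, contributor):
    """Project the contributor's roles onto the base script's event structure."""
    role_map = {}
    for (_, b_role), (_, c_role) in zip(base, contributor):
        role_map.setdefault(b_role, c_role)        # cross-space role mapping
    return [(act, role_map.get(role, role)) for act, role in base]

# 'Restaurant in space': diners become astronauts, waiters become robots.
print(blend(RESTAURANT, SPACESHIP))
```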
APA, Harvard, Vancouver, ISO, and other styles
46

Nash, Alexander J., and Boris Lenhard. "A novel measure of non-coding genome conservation identifies genomic regulatory blocks within primates." Bioinformatics 35, no. 14 (December 7, 2018): 2354–61. http://dx.doi.org/10.1093/bioinformatics/bty1014.

Full text
Abstract:
Motivation: Clusters of extremely conserved non-coding elements (CNEs) mark genomic regions devoted to cis-regulation of key developmental genes in Metazoa. We have recently shown that their span coincides with that of topologically associating domains (TADs), making them useful for estimating conserved TAD boundaries in the absence of Hi-C data. The standard approach—detecting CNEs in genome alignments and then establishing the boundaries of their clusters—requires tuning of several parameters and breaks down when comparing closely related genomes. Results: We present a novel, kurtosis-based measure of pairwise non-coding conservation that requires no pre-set thresholds for conservation level and length of CNEs. We show that it performs robustly across a large span of evolutionary distances, including across the closely related genomes of primates, for which standard approaches fail. The method is straightforward to implement and enables detection and comparison of clusters of CNEs and estimation of underlying TADs across a vastly increased range of Metazoan genomes. Availability and implementation: The data generated for this study, and the scripts used to generate the data, can be found at https://github.com/alexander-nash/kurtosis_conservation. Supplementary information: Supplementary data are available at Bioinformatics online.
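The core of a kurtosis-based conservation scan can be sketched simply: slide a window along a per-base identity track and record each window's excess kurtosis, since short, highly conserved islands in otherwise neutral sequence produce heavy-tailed windows. The sketch below uses synthetic data; the published scripts are at the GitHub link above.

```python
# Sketch: windowed excess kurtosis of a per-base conservation (identity) track.
# Synthetic data; the published scripts live in the repository cited above.
import numpy as np
from scipy.stats import kurtosis

def windowed_kurtosis(track, window=1000, step=500):
    starts = range(0, len(track) - window + 1, step)
    return np.array([kurtosis(track[s:s + window]) for s in starts])

rng = np.random.default_rng(1)
track = rng.normal(0.5, 0.05, 100_000)       # neutral background identity
track[40_000:40_050] = 1.0                   # a short, CNE-like conserved island
scores = windowed_kurtosis(track)
peak = scores.argmax()
print(f"peak window starts at {peak * 500}, excess kurtosis {scores[peak]:.1f}")
```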
APA, Harvard, Vancouver, ISO, and other styles
47

Zhai, Junhao, and Rui Wang. "Research on Application Testing Method Based on Test Script." Academic Journal of Science and Technology 3, no. 2 (October 28, 2022): 87–91. http://dx.doi.org/10.54097/ajst.v3i2.2098.

Full text
Abstract:
This project presents research on an application testing method based on test scripts, relating in particular to the technical field of big data. The method proceeds as follows: according to a preset interface document, obtain the environment information, field information, and the field standard information corresponding to the field information of the tested application; fill the environment information, field information, and corresponding field standard information into a preset test script template to obtain a test script; and run the test script to perform the application test. This approach can automatically generate test scripts from the current interface documents, thus improving the speed and accuracy of test script generation and, in turn, the efficiency of the overall application test.
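The template-filling step can be pictured with a small sketch: take environment and field information parsed from an interface document and substitute it into a script template. All field names, the template shape, and the endpoint below are hypothetical.

```python
# Sketch: generating a test script by filling a template from an
# interface document. All field names and the endpoint are hypothetical.
from string import Template

TEMPLATE = Template('''\
import requests

def test_${name}():
    resp = requests.post("${base_url}${path}", json=${payload})
    assert resp.status_code == ${expected_status}
''')

interface_doc = {                 # parsed from a hypothetical interface document
    "name": "create_order",
    "base_url": "http://test-env.example.com",   # environment information
    "path": "/api/orders",
    "payload": {"item_id": 42, "quantity": 1},   # field standard information
    "expected_status": 200,
}

script = TEMPLATE.substitute({k: repr(v) if k == "payload" else v
                              for k, v in interface_doc.items()})
print(script)                     # write to a file and run with pytest
```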
APA, Harvard, Vancouver, ISO, and other styles
48

Guo, Chao, and Zhengran Lu. "A 3D FEM Mesoscale Numerical Analysis of Concrete Tensile Strength Behaviour." Advances in Materials Science and Engineering 2021 (July 12, 2021): 1–14. http://dx.doi.org/10.1155/2021/5538477.

Full text
Abstract:
A three-dimensional (3D) finite element method (FEM) based on an inserted-cohesive-element numerical analysis procedure was developed for concrete mesoscale systems on the ABAQUS platform with Python scripts. Aggregates were generated by algorithms that subdivide existing geometrical elements to randomize arbitrary spheres. Randomization of the maximum aggregate size and uniform distribution of aggregate particles were also considered. An FEM for the mortar phase in concrete mesoscale systems was generated along with the interfacial transition zone (ITZ) by inserting cohesive elements. Numerical parameter analyses were performed for nine different concrete systems by varying the coarse aggregate volume fraction (α) and the ITZ tension strength (ITZ-S). The mechanical performance of the concrete systems under the coupled effects of α and ITZ-S was evaluated under simulated tensile loading. The results of the numerical simulations for mechanical properties, such as the simulated tensile strengths and tension damage behaviour of the concrete systems, were verified against experimental results. The proposed aggregate and ITZ generation approach and numerical simulation procedure can be used by researchers to better understand how aggregate volume fraction and ITZ strength affect the tensile behaviour of concrete mesoscale systems.
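The aggregate-placement stage can be sketched independently of ABAQUS: sample sphere radii from a grading range and accept candidate centres only if they do not overlap previously placed spheres (a simple take-and-place scheme; the subsequent meshing and cohesive-element insertion through ABAQUS's Python scripting interface is not shown).

```python
# Sketch: random non-overlapping spherical aggregates for a cubic specimen.
# Take-and-place scheme only; meshing/ITZ insertion via ABAQUS is not shown.
import math
import random

def generate_aggregates(box=100.0, r_range=(5.0, 15.0),
                        target_fraction=0.3, max_tries=100_000, seed=7):
    rng = random.Random(seed)
    spheres, volume = [], 0.0
    goal = target_fraction * box ** 3
    for _ in range(max_tries):
        if volume >= goal:
            break
        r = rng.uniform(*r_range)
        c = [rng.uniform(r, box - r) for _ in range(3)]   # keep inside the box
        if all(math.dist(c, s[0]) >= r + s[1] for s in spheres):  # no overlap
            spheres.append((c, r))
            volume += 4.0 / 3.0 * math.pi * r ** 3
    return spheres, volume / box ** 3

spheres, fraction = generate_aggregates()
print(f"{len(spheres)} aggregates placed, volume fraction {fraction:.2f}")
```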
APA, Harvard, Vancouver, ISO, and other styles
49

Peugh, James L., and Ronald H. Heck. "Conducting Three-Level Longitudinal Analyses." Journal of Early Adolescence 37, no. 1 (July 27, 2016): 7–58. http://dx.doi.org/10.1177/0272431616642329.

Full text
Abstract:
Researchers in the field of early adolescence interested in quantifying the environmental influences on a response variable of interest over time would use cluster sampling (i.e., obtaining repeated measures from students nested within classrooms and/or schools) to obtain the needed sample size. The resulting longitudinal data would be nested at three levels (e.g., repeated measures [Level 1], collected across participants [Level 2], and nested within different schools [Level 3]). A previous publication addressed statistical analysis issues specific to cross-sectional three-level data analytic designs. This article expands upon that cross-sectional three-level publication to address topics specific to longitudinal three-level data analysis efforts. Although all analysis examples are demonstrated using SAS, the equivalent SPSS and Mplus syntax scripts, as well as the generated example data and additional supplemental materials, are available online.
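Although the article's examples use SAS (with SPSS and Mplus equivalents online), the basic three-level growth model can be sketched in other environments as well. Below is one plausible formulation using statsmodels' MixedLM on simulated long-format data; all variable names are hypothetical.

```python
# Sketch: a three-level growth model (time within student within school)
# using statsmodels MixedLM. Variable names are hypothetical; the article's
# own syntax is in SAS, with SPSS and Mplus scripts available online.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for school in range(20):
    u_school = rng.normal(0, 1)                  # level-3 random intercept
    for student in range(15):
        u_student = rng.normal(0, 1)             # level-2 random intercept
        for time in range(4):                    # level-1 repeated measures
            y = 10 + 0.5 * time + u_school + u_student + rng.normal(0, 1)
            rows.append((school, f"{school}-{student}", time, y))
df = pd.DataFrame(rows, columns=["school", "student", "time", "y"])

# Random intercept for school (groups) plus a variance component for students.
model = smf.mixedlm("y ~ time", df, groups="school",
                    re_formula="1", vc_formula={"student": "0 + C(student)"})
print(model.fit().summary())
```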
APA, Harvard, Vancouver, ISO, and other styles
50

Siti Kholifah. "PKM SISTEM INFORMASI AKUNTANSI PEMBUATAN LAPORAN KEUANGAN BERBASIS WEB." JURNAL PENGABDIAN MASYARAKAT INDONESIA 1, no. 1 (February 16, 2022): 1–9. http://dx.doi.org/10.55606/jpmi.v1i1.75.

Full text
Abstract:
The problem at Adi Jaya Grosir is that the administrative process and the preparation of profit-and-loss financial statements by the administration officer must go through a manual, non-integrated process. Difficulties arise in calculating the income earned by each salesperson, and it is hard to know how many stores actively order products, because there is no accounting information system and the process is still manual. In this study, the software used to build the system comprises Balsamiq Mockups for designing the system, PHP programming scripts, and MySQL as the database. The result is a recapitulation of financial data. The system provides a report menu that integrates sales transaction data with financial statement data, so reports can be obtained by date. The system also generates income reports for each salesperson. With this system, the operations manager and administration staff can see the number of stores/customers actively ordering products from Adi Jaya Grosir. Because the process is integrated, the system produces accurate financial reports. Reports generated by this web-based accounting information system for financial statements can be printed and exported as .pdf files.
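The per-salesperson income report such a system generates amounts to an aggregation over integrated transaction records. A minimal sketch of the underlying query follows, with an entirely hypothetical schema (the actual system is built with PHP and MySQL).

```python
# Sketch: per-salesperson income report over integrated sales transactions.
# Hypothetical schema; the actual system is built in PHP with MySQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (id INTEGER PRIMARY KEY, salesperson TEXT,
                    store TEXT, amount REAL, sale_date TEXT);
INSERT INTO sales (salesperson, store, amount, sale_date) VALUES
  ('Andi', 'Toko A', 150000, '2022-01-10'),
  ('Andi', 'Toko B',  90000, '2022-01-12'),
  ('Budi', 'Toko C', 200000, '2022-01-11');
""")

report = conn.execute("""
    SELECT salesperson, COUNT(DISTINCT store) AS active_stores,
           SUM(amount) AS income
    FROM sales
    WHERE sale_date BETWEEN ? AND ?     -- date-based reporting
    GROUP BY salesperson
""", ("2022-01-01", "2022-01-31")).fetchall()
print(report)   # e.g. [('Andi', 2, 240000.0), ('Budi', 1, 200000.0)]
```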
APA, Harvard, Vancouver, ISO, and other styles