Journal articles on the topic 'Cluster OpenMP implementations'

Consult the top 36 journal articles for your research on the topic 'Cluster OpenMP implementations.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Saeed, Firas Mahmood, Salwa M. Ali, and Mohammed W. Al-Neama. "A parallel time series algorithm for searching similar sub-sequences." Indonesian Journal of Electrical Engineering and Computer Science 25, no. 3 (March 1, 2022): 1652. http://dx.doi.org/10.11591/ijeecs.v25.i3.pp1652-1661.

Full text
Abstract:
Dynamic time warping (DTW) is an important metric for measuring similarity in most time series applications. DTW computations are expensive, especially on gigantic sequence databases, which leads to an urgent need for accelerating them. The multi-core cluster systems available now, with their scalability and performance/cost ratio, meet this need for more powerful and efficient performance. This paper proposes a highly efficient parallel vectorized algorithm with high performance for computing DTW, addressed to multi-core clusters using the Intel quad-core Xeon co-processors, and deduces an efficient architecture. The implementation employs the potential of both the message passing interface (MPI) and OpenMP libraries; it is based on the OpenMP parallel programming technology and an offload execution mode, in which sub-sequences prepared on the processor side are uploaded to the co-processor for the DTW computations. The results of experiments confirm the effectiveness of the algorithm.
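To illustrate the kind of loop-level OpenMP parallelism such an approach relies on, here is a minimal sketch that scores independent candidate sub-sequences against a query with classic dynamic-programming DTW; the function names and data layout are our own placeholders, and the MPI distribution and co-processor offload described in the abstract are omitted.

#include <vector>
#include <algorithm>
#include <cmath>
#include <limits>

// Classic O(n*m) dynamic-programming DTW between two sequences.
static double dtw_distance(const std::vector<double>& a, const std::vector<double>& b) {
    const std::size_t n = a.size(), m = b.size();
    const double inf = std::numeric_limits<double>::infinity();
    std::vector<double> prev(m + 1, inf), curr(m + 1, inf);
    prev[0] = 0.0;
    for (std::size_t i = 1; i <= n; ++i) {
        curr[0] = inf;
        for (std::size_t j = 1; j <= m; ++j) {
            const double cost = std::abs(a[i - 1] - b[j - 1]);
            curr[j] = cost + std::min({prev[j], curr[j - 1], prev[j - 1]});
        }
        std::swap(prev, curr);
    }
    return prev[m];
}

// Each candidate sub-sequence is scored independently, so the outer loop
// can be shared among OpenMP threads (one possible parallelization).
std::vector<double> score_all(const std::vector<double>& query,
                              const std::vector<std::vector<double>>& candidates) {
    std::vector<double> scores(candidates.size());
    #pragma omp parallel for schedule(dynamic)
    for (std::ptrdiff_t i = 0; i < static_cast<std::ptrdiff_t>(candidates.size()); ++i)
        scores[i] = dtw_distance(query, candidates[i]);
    return scores;
}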
APA, Harvard, Vancouver, ISO, and other styles
2

Al-Neama, Mohammed W., Naglaa M. Reda, and Fayed F. M. Ghaleb. "An Improved Distance Matrix Computation Algorithm for Multicore Clusters." BioMed Research International 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/406178.

Full text
Abstract:
Distance matrix has diverse usage in different research areas. Its computation is typically an essential task in most bioinformatics applications, especially in multiple sequence alignment. The gigantic explosion of biological sequence databases leads to an urgent need for accelerating these computations. The DistVect algorithm was introduced in the paper of Al-Neama et al. (in press) to present a recent approach for vectorizing distance matrix computing. It showed an efficient performance in both sequential and parallel computing. However, the multicore cluster systems, which are available now, with their scalability and performance/cost ratio, meet the need for more powerful and efficient performance. This paper proposes DistVect1 as a highly efficient parallel vectorized algorithm with high performance for computing the distance matrix, addressed to multicore clusters. It reformulates the DistVect1 vectorized algorithm in terms of cluster primitives. It deduces an efficient approach of partitioning and scheduling computations, convenient to this type of architecture. The implementations employ the potential of both the MPI and OpenMP libraries. Experimental results show that the proposed method achieves an improvement of around 3-fold speedup over SSE2. Furthermore, it also achieves speedups of more than 9 orders of magnitude compared to the publicly available parallel implementation utilized in ClustalW-MPI.
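The row-wise partitioning over MPI processes combined with OpenMP threading inside each process, as described above, can be pictured roughly as follows. This is a generic hybrid MPI/OpenMP sketch under our own assumptions (the toy pairwise_distance function and round-robin row distribution are illustrative placeholders), not the DistVect1 code.

#include <mpi.h>
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

// Placeholder pairwise score; a real tool would use an alignment-based distance.
static double pairwise_distance(const std::string& a, const std::string& b) {
    std::size_t m = std::min(a.size(), b.size()), diff = 0;
    for (std::size_t k = 0; k < m; ++k) diff += (a[k] != b[k]);
    return static_cast<double>(diff + (a.size() > b.size() ? a.size() - b.size() : b.size() - a.size()));
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, nprocs = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    std::vector<std::string> seqs = {"ACGT", "ACGA", "TTGT", "ACGG"};  // toy input
    const int n = static_cast<int>(seqs.size());

    // Distribute the rows of the distance matrix over MPI ranks (round-robin),
    // and let OpenMP threads share the column loop inside each local row.
    std::vector<double> local(static_cast<std::size_t>(n) * n, 0.0);
    for (int i = rank; i < n; i += nprocs) {
        #pragma omp parallel for
        for (int j = 0; j < n; ++j)
            local[static_cast<std::size_t>(i) * n + j] = pairwise_distance(seqs[i], seqs[j]);
    }

    // Combine the partial rows on rank 0 (each element is written by exactly one rank).
    std::vector<double> full(static_cast<std::size_t>(n) * n, 0.0);
    MPI_Reduce(local.data(), full.data(), n * n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) std::printf("D[0][1] = %.1f\n", full[1]);

    MPI_Finalize();
    return 0;
}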
APA, Harvard, Vancouver, ISO, and other styles
3

Thomas, Nathan, Steven Saunders, Tim Smith, Gabriel Tanase, and Lawrence Rauchwerger. "ARMI: A High Level Communication Library for STAPL." Parallel Processing Letters 16, no. 02 (June 2006): 261–80. http://dx.doi.org/10.1142/s0129626406002617.

Full text
Abstract:
ARMI is a communication library that provides a framework for expressing fine-grain parallelism and mapping it to a particular machine using shared-memory and message passing library calls. The library is an advanced implementation of the RMI protocol and handles low-level details such as scheduling incoming communication and aggregating outgoing communication to coarsen parallelism. These details can be tuned for different platforms to allow user codes to achieve the highest performance possible without manual modification. ARMI is used by STAPL, our generic parallel library, to provide a portable, user-transparent communication layer. We present the basic design as well as the mechanisms used in the current Pthreads/OpenMP and MPI implementations, and in combinations thereof. Performance comparisons between ARMI and explicit use of Pthreads or MPI are given on a variety of machines, including an HP-V2200, Origin 3800, IBM Regatta, and IBM RS/6000 SP cluster.
APA, Harvard, Vancouver, ISO, and other styles
4

SCHUBERT, GERALD, HOLGER FEHSKE, GEORG HAGER, and GERHARD WELLEIN. "HYBRID-PARALLEL SPARSE MATRIX-VECTOR MULTIPLICATION WITH EXPLICIT COMMUNICATION OVERLAP ON CURRENT MULTICORE-BASED SYSTEMS." Parallel Processing Letters 21, no. 03 (September 2011): 339–58. http://dx.doi.org/10.1142/s0129626411000254.

Full text
Abstract:
We evaluate optimized parallel sparse matrix-vector operations for several representative application areas on widespread multicore-based cluster configurations. First the single-socket baseline performance is analyzed and modeled with respect to basic architectural properties of standard multicore chips. Beyond the single node, the performance of parallel sparse matrix-vector operations is often limited by communication overhead. Starting from the observation that nonblocking MPI is not able to hide communication cost using standard MPI implementations, we demonstrate that explicit overlap of communication and computation can be achieved by using a dedicated communication thread, which may run on a virtual core. Moreover we identify performance benefits of hybrid MPI/OpenMP programming due to improved load balancing even without explicit communication overlap. We compare performance results for pure MPI, the widely used "vector-like" hybrid programming strategies, and explicit overlap on a modern multicore-based cluster and a Cray XE6 system.
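The dedicated-communication-thread idea mentioned in the abstract can be sketched schematically: one OpenMP thread drives the halo exchange while the remaining threads update the part of the domain that needs no remote data, and the boundary part is updated once the halo has arrived. The kernels below are empty stubs of our own; this is not the code evaluated in the paper. Because only the OpenMP master thread calls MPI, requesting MPI_THREAD_FUNNELED support is sufficient here.

#include <mpi.h>
#include <omp.h>

// Application-specific kernels are stubbed out; in a real sparse matrix-vector
// code they would update the communication-independent and boundary parts of y.
static void compute_inner_part()            { /* y_inner = A_inner * x */ }
static void exchange_halo_with_neighbours() { /* blocking MPI_Sendrecv of boundary x */ }
static void compute_boundary_part()         { /* y_boundary = A_boundary * x_halo */ }

static void timestep() {
    #pragma omp parallel
    {
        if (omp_get_thread_num() == 0) {
            // Thread 0 acts as the dedicated communication thread: it drives the
            // halo exchange while all other threads keep computing.
            exchange_halo_with_neighbours();
        } else {
            // Remaining threads update the part that needs no remote data.
            compute_inner_part();
        }
        #pragma omp barrier
        #pragma omp single
        compute_boundary_part();   // uses the freshly received halo values
    }
}

int main(int argc, char** argv) {
    int provided = 0;
    // Only the OpenMP master thread calls MPI, so FUNNELED support is enough.
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    timestep();
    MPI_Finalize();
    return 0;
}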
APA, Harvard, Vancouver, ISO, and other styles
5

Речкалов, Т. В., and М. Л. Цымблер. "A parallel data clustering algorithm for Intel MIC accelerators." Numerical Methods and Programming (Vychislitel'nye Metody i Programmirovanie), no. 2 (March 28, 2019): 104–15. http://dx.doi.org/10.26089/nummet.v20r211.

Full text
Abstract:
PAM (Partitioning Around Medoids) is a partitioning clustering algorithm where each cluster is represented by an object from the input dataset (called a medoid). Medoid-based clustering is used in a wide range of applications: the segmentation of medical and satellite images, the analysis of DNA microarrays and texts, etc. Currently, there are parallel implementations of PAM for GPU and FPGA systems, but not for Intel Many Integrated Core (MIC) accelerators. In this paper, we propose a novel parallel PhiPAM clustering algorithm for Intel MIC systems. Computations are parallelized with the OpenMP technology. The algorithm exploits a sophisticated memory data layout and loop tiling technique, which allows one to efficiently vectorize computations on Intel MIC. Experiments performed on real data sets show a good scalability of the algorithm.
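The distance-dominated assignment step that such OpenMP parallelizations target can be sketched generically as below; the data layout, tiling, and MIC-specific vectorization of PhiPAM are not reproduced, and all names are our own placeholders.

#include <vector>
#include <limits>

// Assign every point to its nearest medoid; this is the distance-dominated step
// of PAM-style clustering and parallelizes naturally over points with OpenMP.
std::vector<int> assign_to_medoids(const std::vector<std::vector<double>>& points,
                                   const std::vector<std::vector<double>>& medoids) {
    std::vector<int> label(points.size(), -1);
    #pragma omp parallel for
    for (std::ptrdiff_t i = 0; i < static_cast<std::ptrdiff_t>(points.size()); ++i) {
        double best = std::numeric_limits<double>::infinity();
        for (std::size_t c = 0; c < medoids.size(); ++c) {
            double d = 0.0;
            for (std::size_t k = 0; k < points[i].size(); ++k) {
                const double diff = points[i][k] - medoids[c][k];
                d += diff * diff;                      // squared Euclidean distance
            }
            if (d < best) { best = d; label[i] = static_cast<int>(c); }
        }
    }
    return label;
}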
APA, Harvard, Vancouver, ISO, and other styles
6

Osthoff, Carla, Francieli Zanon Boito, Rodrigo Virote Kassick, Laércio Lima Pilla, Philippe O. A. Navaux, Claudio Schepke, Jairo Panetta, et al. "Atmospheric models hybrid OpenMP/MPI implementation multicore cluster evaluation." International Journal of Information Technology, Communications and Convergence 2, no. 3 (2012): 212. http://dx.doi.org/10.1504/ijitcc.2012.050411.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mahinthakumar, G., and F. Saied. "A Hybrid Mpi-Openmp Implementation of an Implicit Finite-Element Code on Parallel Architectures." International Journal of High Performance Computing Applications 16, no. 4 (November 2002): 371–93. http://dx.doi.org/10.1177/109434200201600402.

Full text
Abstract:
The hybrid MPI-OpenMP model is a natural parallel programming paradigm for emerging parallel architectures that are based on symmetric multiprocessor (SMP) clusters. This paper presents a hybrid implementation adapted for an implicit finite-element code developed for groundwater transport simulations. The original code was parallelized for distributed memory architectures using MPI (Message Passing Interface) with a domain decomposition strategy. OpenMP directives were then added to the code (a straightforward loop-level implementation) to use multiple threads within each MPI process. To improve the OpenMP performance, several loop modifications were adopted. The parallel performance results are compared for four modern parallel architectures. The results show that for most of the cases tested, the pure MPI approach outperforms the hybrid model. The exceptions to this observation were mainly due to a limitation in the MPI library implementation on one of the architectures. A general conclusion is that while the hybrid model is a promising approach for SMP cluster architectures, at the time of this writing, the payoff may not be justified for converting all existing MPI codes to hybrid codes. However, improvements in OpenMP compilers combined with potential MPI limitations in SMP nodes may make the hybrid approach more attractive for a broader set of applications in the future.
APA, Harvard, Vancouver, ISO, and other styles
8

Smith, Lorna, and Mark Bull. "Development of Mixed Mode MPI / OpenMP Applications." Scientific Programming 9, no. 2-3 (2001): 83–98. http://dx.doi.org/10.1155/2001/450503.

Full text
Abstract:
MPI / OpenMP mixed mode codes could potentially offer the most effective parallelisation strategy for an SMP cluster, as well as allowing the different characteristics of both paradigms to be exploited to give the best performance on a single SMP. This paper discusses the implementation, development and performance of mixed mode MPI / OpenMP applications. The results demonstrate that this style of programming will not always be the most effective mechanism on SMP systems and cannot be regarded as the ideal programming model for all codes. In some situations, however, significant benefit may be obtained from a mixed mode implementation. For example, benefit may be obtained if the parallel (MPI) code suffers from: poor scaling with MPI processes due to load imbalance or too fine a grain problem size, memory limitations due to the use of a replicated data strategy, or a restriction on the number of MPI processes combinations. In addition, if the system has a poorly optimised or limited scaling MPI implementation then a mixed mode code may increase the code performance.
APA, Harvard, Vancouver, ISO, and other styles
9

Huang, Lei, Barbara Chapman, and Zhenying Liu. "Towards a more efficient implementation of OpenMP for clusters via translation to global arrays." Parallel Computing 31, no. 10-12 (October 2005): 1114–39. http://dx.doi.org/10.1016/j.parco.2005.03.015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Hua Zhong, Yong Sheng Liang, Tao He, and Yi Li. "AOI Multi-Core Parallel System for TFT-LCD Defect Detection." Advanced Materials Research 472-475 (February 2012): 2325–31. http://dx.doi.org/10.4028/www.scientific.net/amr.472-475.2325.

Full text
Abstract:
The present Automatic Optical Inspection (AOI) technology can hardly satisfy the online inspection requirements of large-scale, high-speed, high-precision, and high-sensitivity TFT-LCD. First, through studying the working principle of the TFT-LCD Defect AOI System, a system architecture based on a mixed-parallel multi-core computer cluster is proposed to satisfy the design requirements. Second, the study focuses on the software framework of the AOI system and related key software technology. Finally, a fusion programming model for parallel image processing and its implementation strategy are proposed based on OpenMP, MPI, OpenCV, and Intel Integrated Performance Primitives (IPP).
APA, Harvard, Vancouver, ISO, and other styles
11

Schive, Hsi-Yu, Ui-Han Zhang, and Tzihong Chiueh. "Directionally unsplit hydrodynamic schemes with hybrid MPI/OpenMP/GPU parallelization in AMR." International Journal of High Performance Computing Applications 26, no. 4 (November 17, 2011): 367–77. http://dx.doi.org/10.1177/1094342011428146.

Full text
Abstract:
We present the implementation and performance of a class of directionally unsplit Riemann-solver-based hydrodynamic schemes on graphics processing units (GPUs). These schemes, including the MUSCL-Hancock method, a variant of the MUSCL-Hancock method, and the corner-transport-upwind method, are embedded into the adaptive-mesh-refinement (AMR) code GAMER. Furthermore, a hybrid MPI/OpenMP model is investigated, which enables the full exploitation of the computing power in a heterogeneous CPU/GPU cluster and significantly improves the overall performance. Performance benchmarks are conducted on the Dirac GPU cluster at NERSC/LBNL using up to 32 Tesla C2050 GPUs. A single GPU achieves speed-ups of 101 (25) and 84 (22) for uniform-mesh and AMR simulations, respectively, as compared with the performance using one (four) CPU core(s), and the excellent performance persists in multi-GPU tests. In addition, we make a direct comparison between GAMER and the widely adopted CPU code Athena in adiabatic hydrodynamic tests and demonstrate that, with the same accuracy, GAMER is able to achieve two orders of magnitude performance speed-up.
APA, Harvard, Vancouver, ISO, and other styles
12

Guo, Han, Jun Hu, and Zaiping Nie. "An MPI-OpenMP Hybrid Parallel H-LU Direct Solver for Electromagnetic Integral Equations." International Journal of Antennas and Propagation 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/615743.

Full text
Abstract:
In this paper we propose a high performance parallel strategy/technique to implement the fast direct solver based on the hierarchical matrices method. Our goal is to directly solve electromagnetic integral equations involving electrically large and geometrically complex targets, which are traditionally difficult to solve by iterative methods. The parallel method of our direct solver features both OpenMP shared memory programming and MPI message passing for running on a computer cluster. With modifications to the core direct-solving algorithm of hierarchical LU factorization, the new fast solver is scalable for parallelized implementation despite its sequential nature. The numerical experiments demonstrate the accuracy and efficiency of the proposed parallel direct solver for analyzing electromagnetic scattering problems of complex 3D objects with nearly 4 million unknowns.
APA, Harvard, Vancouver, ISO, and other styles
13

Mavriplis, Dimitri J. "Parallel Performance Investigations of an Unstructured Mesh Navier-Stokes Solver." International Journal of High Performance Computing Applications 16, no. 4 (November 2002): 395–407. http://dx.doi.org/10.1177/109434200201600403.

Full text
Abstract:
The implementation and performance of a hybrid OpenMP/MPI parallel communication strategy for an unstructured mesh computational fluid dynamics code are described. The solver is cache efficient and fully vectorizable, and is parallelized using a two-level hybrid MPI-OpenMP implementation suitable for shared and/or distributed memory architectures, as well as clusters of shared memory machines. Parallelism is obtained through domain decomposition for both communication models. Single processor computational rates as well as scalability curves are given on various architectures. For the architectures studied in this work, the OpenMP or hybrid OpenMP/MPI communication strategies achieved no appreciable performance benefit over an exclusive MPI communication strategy.
APA, Harvard, Vancouver, ISO, and other styles
14

Briguglio, Sergio, Beniamino Di Martino, and Gregorio Vlad. "A Performance-Prediction Model for PIC Applications on Clusters of Symmetric MultiProcessors: Validation with Hierarchical HPF+OpenMP Implementation." Scientific Programming 11, no. 2 (2003): 159–76. http://dx.doi.org/10.1155/2003/691573.

Full text
Abstract:
A performance-prediction model is presented, which describes different hierarchical workload decomposition strategies for particle in cell (PIC) codes on Clusters of Symmetric MultiProcessors. The devised workload decomposition is hierarchically structured: a higher-level decomposition among the computational nodes, and a lower-level one among the processors of each computational node. Several decomposition strategies are evaluated by means of the prediction model, with respect to the memory occupancy, the parallelization efficiency and the required programming effort. Such strategies have been implemented by integrating the high-level languages High Performance Fortran (at the inter-node stage) and OpenMP (at the intra-node one). The details of these implementations are presented, and the experimental values of parallelization efficiency are compared with the predicted results.
APA, Harvard, Vancouver, ISO, and other styles
15

Wu, X., and V. Taylor. "Performance Characteristics of Hybrid MPI/OpenMP Implementations of NAS Parallel Benchmarks SP and BT on Large-Scale Multicore Clusters." Computer Journal 55, no. 2 (July 18, 2011): 154–67. http://dx.doi.org/10.1093/comjnl/bxr063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Couder-Castañeda, Carlos, Carlos Ortiz-Alemán, Mauricio Gabriel Orozco-del-Castillo, and Mauricio Nava-Flores. "TESLA GPUs versus MPI with OpenMP for the Forward Modeling of Gravity and Gravity Gradient of Large Prisms Ensemble." Journal of Applied Mathematics 2013 (2013): 1–15. http://dx.doi.org/10.1155/2013/437357.

Full text
Abstract:
An implementation with the CUDA technology on a single and on several graphics processing units (GPUs) is presented for the calculation of the forward modeling of gravitational fields from a tridimensional volumetric ensemble composed of unitary prisms of constant density. We compared the performance results obtained with the GPUs against a previous version coded in OpenMP with MPI, and we analyzed the results on both platforms. Today, the use of GPUs represents a breakthrough in parallel computing, which has led to the development of applications in many different fields. Nevertheless, in some applications the decomposition of the tasks is not trivial, as can be appreciated in this paper. Unlike a trivial decomposition of the domain, we proposed to decompose the problem by sets of prisms and use different memory spaces per processing CUDA core, avoiding the performance decay that would result from the constant calls to kernel functions needed in a parallelization by observation points. The design and implementation created are the main contributions of this work, because the parallelization scheme implemented is not trivial. The performance results obtained are comparable to those of a small processing cluster.
APA, Harvard, Vancouver, ISO, and other styles
17

Chikumbo, Oliver, and Vincent Granville. "Optimal Clustering and Cluster Identity in Understanding High-Dimensional Data Spaces with Tightly Distributed Points." Machine Learning and Knowledge Extraction 1, no. 2 (June 5, 2019): 715–44. http://dx.doi.org/10.3390/make1020042.

Full text
Abstract:
The sensitivity of the elbow rule in determining an optimal number of clusters in high-dimensional spaces that are characterized by tightly distributed data points is demonstrated. The high-dimensional data samples are not artificially generated, but they are taken from a real-world evolutionary many-objective optimization. They comprise Pareto fronts from the last 10 generations of an evolutionary optimization computation with 14 objective functions. The choice for analyzing Pareto fronts is strategic, as it is squarely intended to benefit the user who only needs one solution to implement from the Pareto set, and therefore a systematic means of reducing the cardinality of solutions is imperative. As such, clustering the data and identifying the cluster from which to pick the desired solution is covered in this manuscript, highlighting the implementation of the elbow rule and the use of hyper-radial distances for cluster identity. The Calinski-Harabasz statistic was favored for determining the criteria used in the elbow rule because of its robustness. The statistic takes into account the variance within clusters and also the variance between the clusters. This exercise also opened an opportunity to revisit the justification of using the highest Calinski-Harabasz criterion for determining the optimal number of clusters for multivariate data. The elbow rule predicted the maximum end of the optimal number of clusters, and the highest Calinski-Harabasz criterion method favored the number of clusters at the lower end. Both results are used in a unique way for understanding high-dimensional data, despite being inconclusive regarding which of the two methods determines the true optimal number of clusters.
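For reference, a standard textbook formulation of the Calinski-Harabasz criterion mentioned above (not a formula quoted from the article) is

CH(k) = \frac{\operatorname{tr}(B_k)\,/\,(k-1)}{\operatorname{tr}(W_k)\,/\,(n-k)},

where n is the number of points, k the number of clusters, B_k the between-cluster dispersion matrix, and W_k the within-cluster dispersion matrix; higher values indicate more compact, better-separated clusters.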
APA, Harvard, Vancouver, ISO, and other styles
18

Yang, Fan, Tong Nian Shi, Han Chu, and Kun Wang. "The Design and Implementation of Parallel Algorithm Accelerator Based on CPU-GPU Collaborative Computing Environment." Advanced Materials Research 529 (June 2012): 408–12. http://dx.doi.org/10.4028/www.scientific.net/amr.529.408.

Full text
Abstract:
With the rapid development of GPUs in recent years, CPU-GPU collaborative computing has become an important technique in scientific research. In this paper, we introduce a cluster system design that is based on a CPU-GPU collaborative computing environment. The system is based on the Intel Embedded Star Platform, and we expand it with a Computing-Node connected over a high-speed network. Through mixed OpenMP and MPI programming, we integrate different algorithms for scientific and application computing using a Master/Worker model and a software system based on RIA (Rich Internet Applications). In order to achieve high performance, we used a combination of software and hardware technology. The performance results show that the programs built with the hybrid programming model have good performance and scalability.
APA, Harvard, Vancouver, ISO, and other styles
19

Mikhaylenko, C. I., and V. S. Kuleshov. "The formation of a turbulent wake behind a body." Proceedings of the Mavlyutov Institute of Mechanics 10 (2014): 78–81. http://dx.doi.org/10.21662/uim2014.1.014.

Full text
Abstract:
The paper presents the results of direct numerical simulation of the development of turbulence in a fluid flow around a body, using the formation of the Karman vortex street as an example. A two-dimensional computational domain describes an open channel with an obstacle. For direct turbulence simulation, the number of nodes in the computational domain exceeds 1.5 × 10^6, which inevitably entails a significant decrease in the time step needed to maintain the stability of the numerical scheme. The program code is implemented using OpenMP and MPI parallelization technologies for a computational cluster.
APA, Harvard, Vancouver, ISO, and other styles
20

Shegay, Maksim V., Dmitry A. Suplatov, Nina N. Popova, Vytas K. Švedas, and Vladimir V. Voevodin. "parMATT: parallel multiple alignment of protein 3D-structures with translations and twists for distributed-memory systems." Bioinformatics 35, no. 21 (March 27, 2019): 4456–58. http://dx.doi.org/10.1093/bioinformatics/btz224.

Full text
Abstract:
Motivation: Accurate structural alignment of proteins is crucial for studying structure-function relationships in evolutionarily distant homologues. Various software tools have been proposed to align multiple protein 3D-structures utilizing one CPU and are thus of limited productivity for large-scale analysis of protein families/superfamilies. Results: The parMATT is a hybrid MPI/pthreads/OpenMP parallel re-implementation of the MATT algorithm to align multiple protein 3D-structures by allowing translations and twists. The parMATT can be faster than MATT on a single multi-core CPU, and provides a much greater speedup when executed on distributed-memory systems, i.e. computing clusters and supercomputers hosting memory-independent computing nodes. The most computationally demanding steps of the MATT algorithm—the initial construction of pairwise alignments between all input structures and further iterative progression of the multiple alignment—were parallelized using MPI and pthreads, and the concluding refinement step was optimized by introducing OpenMP support. The parMATT can significantly accelerate the time-consuming process of building a multiple structural alignment from a large set of 3D-records of homologous proteins. Availability and implementation: The source code is available at https://biokinet.belozersky.msu.ru/parMATT. Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
21

Ayres, Daniel L., Michael P. Cummings, Guy Baele, Aaron E. Darling, Paul O. Lewis, David L. Swofford, John P. Huelsenbeck, Philippe Lemey, Andrew Rambaut, and Marc A. Suchard. "BEAGLE 3: Improved Performance, Scaling, and Usability for a High-Performance Computing Library for Statistical Phylogenetics." Systematic Biology 68, no. 6 (April 23, 2019): 1052–61. http://dx.doi.org/10.1093/sysbio/syz020.

Full text
Abstract:
BEAGLE is a high-performance likelihood-calculation library for phylogenetic inference. The BEAGLE library defines a simple, but flexible, application programming interface (API), and includes a collection of efficient implementations for calculation under a variety of evolutionary models on different hardware devices. The library has been integrated into recent versions of popular phylogenetics software packages including BEAST and MrBayes and has been widely used across a diverse range of evolutionary studies. Here, we present BEAGLE 3 with new parallel implementations, increased performance for challenging data sets, improved scalability, and better usability. We have added new OpenCL and central processing unit-threaded implementations to the library, allowing the effective utilization of a wider range of modern hardware. Further, we have extended the API and library to support concurrent computation of independent partial likelihood arrays, for increased performance of nucleotide-model analyses with greater flexibility of data partitioning. For better scalability and usability, we have improved how phylogenetic software packages use BEAGLE in multi-GPU (graphics processing unit) and cluster environments, and introduced an automated method to select the fastest device given the data set, evolutionary model, and hardware. For application developers who wish to integrate the library, we also have developed an online tutorial. To evaluate the effect of the improvements, we ran a variety of benchmarks on state-of-the-art hardware. For a partitioned exemplar analysis, we observe run-time performance improvements as high as 5.9-fold over our previous GPU implementation. BEAGLE 3 is free, open-source software licensed under the Lesser GPL and available at https://beagle-dev.github.io.
APA, Harvard, Vancouver, ISO, and other styles
22

Casal, Uxía, Jorge González-Domínguez, and María J. Martín. "Parallelization of ARACNe, an Algorithm for the Reconstruction of Gene Regulatory Networks." Proceedings 21, no. 1 (July 31, 2019): 25. http://dx.doi.org/10.3390/proceedings2019021025.

Full text
Abstract:
Gene regulatory networks are graphical representations of molecular regulators that interact with each other and with other substances in the cell to govern gene expression. There are different computational approaches for the reverse engineering of these networks. Most of them require all gene-gene evaluations using different mathematical methods such as Pearson/Spearman correlation, Mutual Information or topology patterns, among others. The Algorithm for the Reconstruction of Accurate Cellular Networks (ARACNe) is one of the most effective and widely used tools to reconstruct gene regulatory networks. However, the high computational cost of ARACNe prevents its use over large biologic datasets. In this work, we present a hybrid MPI/OpenMP parallel implementation of ARACNe to accelerate its execution on multi-core clusters, obtaining a speedup of 430.46 on 32 nodes (each of them with 24 cores), using as input a dataset with 41,100 genes and 108 samples.
APA, Harvard, Vancouver, ISO, and other styles
23

Arkhipov, E. E. "Creation of railway transport clusters within the framework of the federal project “Professionalitet”." Transport Technician: Education and Practice 3, no. 3 (November 2, 2020): 246–49. http://dx.doi.org/10.46684/2687-1033.2022.3.246-249.

Full text
Abstract:
In 2021, the implementation of the federal project “Professionalitet” began. The project was developed as part of the Strategy for the Socio-Economic Development of the Russian Federation until 2030 and is aimed at a comprehensive reboot of the system of secondary vocational education by reducing the duration of training and modernizing the infrastructure of colleges and technical schools. Railway transport has become one of the eight sectors of the economy that participate in the implementation of the federal project “Professionalitet”. Industry-specific educational and production centers (clusters) in the 2022–2023 academic year are planned to be opened in nine railway educational organizations of secondary vocational education and will accept about 770 students for training. Approbation of a new model of personnel training for railway transport will be carried out in four specialties, and training programs involve a more intensive educational process and work experience, which will reduce the training time and provide students with in-demand working professions. The effective interaction of Russian Railways with railway clusters will ensure the main goal of the federal project “Professionalitet” — ensuring a comprehensive restructuring of the secondary vocational education system through experimental programs with reduced training periods and modernization of the material and technical base of educational institutions.
APA, Harvard, Vancouver, ISO, and other styles
24

Kaupužs, J., J. Rimšāns, and R. V. N. Melnik. "Critical Phenomena and Phase Transitions in Large Lattices within Monte-Carlo Based Non-perturbative Approaches." Ukrainian Journal of Physics 56, no. 8 (February 9, 2022): 845. http://dx.doi.org/10.15407/ujpe56.8.845.

Full text
Abstract:
Critical phenomena and Goldstone mode effects in spin models with the O(n) rotational symmetry are considered. Starting with Goldstone mode singularities in the XY and O(4) models, we briefly review various theoretical concepts, as well as state-of-the-art Monte Carlo simulation results. They support recent results of the GFD (grouping of Feynman diagrams) theory, stating that these singularities are described by certain nontrivial exponents, which differ from those predicted earlier by perturbative treatments. Furthermore, we present the recent Monte Carlo simulation results of the three-dimensional Ising model for lattices with linear sizes up to L = 1536, which are very large as compared to L ≤ 128 usually used in the finite-size scaling analysis. These results are obtained, using a parallel OpenMP implementation of the Wolff single-cluster algorithm. The finite-size scaling analysis of the critical exponent η, assuming the usually accepted correction-to-scaling exponent ω ≈ 0.8, shows that η is likely to be somewhat larger than the value 0.0335 ± 0.0025 of the perturbative renormalization group (RG) theory. Moreover, we have found that the actual data can be well described by different critical exponents: η = ω =1/8 and ν = 2/3, found within the GFD theory.
APA, Harvard, Vancouver, ISO, and other styles
25

Migallón, Héctor, Violeta Migallón, and José Penadés. "Non-Stationary Acceleration Strategies for PageRank Computing." Mathematics 7, no. 10 (October 1, 2019): 911. http://dx.doi.org/10.3390/math7100911.

Full text
Abstract:
In this work, a non-stationary technique based on the Power method for accelerating the parallel computation of the PageRank vector is proposed and its theoretical convergence analyzed. This iterative non-stationary model, which uses the eigenvector formulation of the PageRank problem, reduces the needed computations for obtaining the PageRank vector by eliminating synchronization points among processes, in such a way that, at each iteration of the Power method, the block of iterate vector assigned to each process can be locally updated more than once, before performing a global synchronization. The parallel implementation of several strategies combining this novel non-stationary approach and the extrapolation methods has been developed using hybrid MPI/OpenMP programming. The experiments have been carried out on a cluster made up of 12 nodes, each one equipped with two Intel Xeon hexacore processors. The behaviour of the proposed parallel algorithms has been studied with realistic datasets, highlighting their performance compared with other parallel techniques for solving the PageRank problem. Concretely, the experimental results show a time reduction of up to 58.4 % in relation to the parallel Power method, when a small number of local updates is performed before each global synchronization, outperforming both the two-stage algorithms and the extrapolation algorithms, more sharply as the number of processes increases.
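For orientation, the baseline Power iteration that the non-stationary scheme builds on (by allowing several local block updates between global synchronizations) looks as follows in a plain sequential form; the dense toy matrix and variable names are our own, and this is not the authors' hybrid MPI/OpenMP implementation.

#include <vector>
#include <cmath>
#include <cstdio>

// One Power-method solve of x = alpha * P^T x + (1 - alpha) * v on a dense toy matrix.
// P[i][j] is the probability of moving from page i to page j (rows sum to 1).
std::vector<double> pagerank(const std::vector<std::vector<double>>& P,
                             double alpha = 0.85, double tol = 1e-10) {
    const std::size_t n = P.size();
    std::vector<double> x(n, 1.0 / n), next(n, 0.0);
    for (int it = 0; it < 1000; ++it) {
        for (std::size_t j = 0; j < n; ++j) {
            double s = 0.0;
            for (std::size_t i = 0; i < n; ++i) s += P[i][j] * x[i];
            next[j] = alpha * s + (1.0 - alpha) / n;   // uniform personalization vector
        }
        double diff = 0.0;
        for (std::size_t j = 0; j < n; ++j) diff += std::fabs(next[j] - x[j]);
        x = next;
        if (diff < tol) break;                          // global convergence test
    }
    return x;
}

int main() {
    // Tiny 3-page web: page 0 links to 1 and 2, page 1 to 2, page 2 to 0.
    std::vector<std::vector<double>> P = {{0.0, 0.5, 0.5}, {0.0, 0.0, 1.0}, {1.0, 0.0, 0.0}};
    for (double xi : pagerank(P)) std::printf("%.4f\n", xi);
    return 0;
}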
APA, Harvard, Vancouver, ISO, and other styles
26

Daoudi, Sara, Chakib Mustapha Anouar Zouaoui, Miloud Chikr El-Mezouar, and Nasreddine Taleb. "Parallelization of the K-Means++ Clustering Algorithm." Ingénierie des systèmes d information 26, no. 1 (February 28, 2021): 59–66. http://dx.doi.org/10.18280/isi.260106.

Full text
Abstract:
K-means++ is a clustering algorithm created to improve the process of choosing the initial clusters in the K-means algorithm. The k-means++ algorithm selects the initial k centroids randomly, with a probability that is proportional to each data point's distance to the existing centroids. The most noteworthy problem of this algorithm is that it runs in sequential mode, which reduces the speed of clustering. In this paper, we develop a new parallel k-means++ algorithm using graphics processing units (GPU), where the Open Computing Language (OpenCL) platform is used as the programming environment to perform the data assignment phase in parallel, while the Streaming SIMD Extensions (SSE) technology is used to perform the initialization step of selecting the initial centroids in parallel on the CPU. The focus is on optimizations directly targeted at this architecture to make the most of the available computing capabilities. Our objective is to minimize runtime while keeping the quality of the serial implementation. Our results demonstrate that the implementation targeting hybrid parallel architectures (CPU and GPU) is the most appropriate for large data. We have been able to achieve a throughput 152 times higher than that of the sequential implementation of k-means++.
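The k-means++ seeding rule summarized above, drawing each new centroid with probability proportional to a point's distance to the nearest already-chosen centroid (commonly the squared distance, as used below), can be written sequentially as follows; this is a plain CPU sketch with our own names, not the OpenCL/SSE implementation of the paper.

#include <vector>
#include <random>
#include <limits>

// Choose k initial centroid indices from 'data' using the k-means++ rule.
std::vector<std::size_t> kmeanspp_seed(const std::vector<std::vector<double>>& data,
                                       std::size_t k, std::mt19937& rng) {
    std::vector<std::size_t> centers;
    std::uniform_int_distribution<std::size_t> uni(0, data.size() - 1);
    centers.push_back(uni(rng));                        // first centroid: uniform at random

    std::vector<double> d2(data.size(), std::numeric_limits<double>::infinity());
    while (centers.size() < k) {
        // Update each point's squared distance to its nearest chosen centroid.
        const auto& c = data[centers.back()];
        for (std::size_t i = 0; i < data.size(); ++i) {
            double d = 0.0;
            for (std::size_t j = 0; j < c.size(); ++j) {
                const double diff = data[i][j] - c[j];
                d += diff * diff;
            }
            if (d < d2[i]) d2[i] = d;
        }
        // Draw the next centroid with probability proportional to d2 (D^2 weighting).
        std::discrete_distribution<std::size_t> pick(d2.begin(), d2.end());
        centers.push_back(pick(rng));
    }
    return centers;
}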
APA, Harvard, Vancouver, ISO, and other styles
27

Ketelhöhn, Niels, Roberto Artavia, Ronald Arce, and Victor Umaña. "The Central American Competitiveness Initiative." Competitiveness Review 25, no. 5 (October 19, 2015): 555–70. http://dx.doi.org/10.1108/cr-07-2015-0065.

Full text
Abstract:
Purpose – This paper is a historical account of the process by which Michael Porter and INCAE Business School put together a regional competitiveness strategy for Central America that was officially adopted by the governments of five participating countries, and implemented through a series of Presidential Summits that occurred between 1995 and 1999. The paper provides a unique case study on the adoption of the concepts put forth by Porter in his book “The Competitive Advantage of Nations” (1990) at the highest level of government. The study arrives at a series of practical implications for policy makers that are particularly relevant for the implementation of supra-national regional strategies. Design/methodology/approach – The authors conduct an extensive literature review of 190 policy papers produced by INCAE Business School, that are used to recreate the historical evolution of the regional competitiveness strategy. The effect of Porter’s intervention is also assessed by comparing the main economic indicators of each participating country with those of 2005-2010. One of the authors was the main protagonist in the successful implementation of the strategy, and the paper relies partially on his accounts of events. Findings – This study describes how economic policy in Central America was profoundly influenced by Michael Porter’s thinking in the second half of the 1990s. These policy changes promoted international competition of Central American clusters and firms, and opened the region for international investment and tourism. The region experienced important increases in its economic integration, its international trade, foreign direct investment and tourist arrivals. Gross domestic product growth was accelerated in Honduras and Nicaragua. Research limitations/implications – Like all case studies, this study has limits related to the generalizability of its conclusions. Additionally, it is not possible to determine the precise nature of the relation between the implementation of the regional economic strategy, and the impact on economic growth, integration, FDI attraction and exports. Practical implications – The paper has several practical implications that relate to the design of regional economic strategies. First, it identifies policy areas that are more effective as part of regional strategies, and distinguishes them from those that should be resolved at the national level. Second, it suggests a process that can facilitate execution. Finally, it provides an example of the coordinating role that can be assumed by an academic institution such as INCAE. Originality/value – The Central American Competitiveness Initiative provides a unique setting to study the implementation of competitiveness policy for several reasons. First, in all countries in Central America, Michael Porter’s diamond framework (1990) and cluster theory were officially adopted at the highest level of government. Second, in addition to their individual competitiveness strategies, all countries adopted a regional strategy for cooperation and economic integration. Finally, the Central American Competitiveness Initiative was founded on one of the first competitiveness think tanks of the world.
APA, Harvard, Vancouver, ISO, and other styles
28

Mokodompis, Emir Zulkarnain, Joni Widjayanto, Helda Risman, and Wayan Nuriada. "Defense Strategies for Large Island and Cluster of Small Island in Preparation for Modern Warfare." International Journal of Research and Innovation in Social Science 06, no. 05 (2022): 740–44. http://dx.doi.org/10.47772/ijriss.2022.6541.

Full text
Abstract:
The war between Russia and Ukraine opened the eyes of the entire international community to the potential for modern wars and a third world war, especially for Indonesia, which has implemented a defense strategy centered on its major islands. The strategy is oriented towards building defense forces on the large islands, so that they can protect themselves from enemy attacks and provide protective assistance to small islands located nearby. But the concept of defense that focuses on large islands needs to be reviewed for its suitability to face the threat of modern warfare, especially proxy wars, which instead focus on small and outer islands that are not the main focus of defense force development. Based on this, this research was carried out with the aim of examining the suitability of the defense strategies of large islands to deal with modern warfare, as well as the development of strategies that can effectively face modern warfare. The research method used for this study was a literature study, relying on secondary data collection, and the data were analyzed by content analysis. The results state that the defense strategy of the big islands has loopholes, making it less effective against the threat of modern warfare in the form of proxy wars. One possible strategy development is to establish the radar defense strategy as one of the orientations in the defense strategy of the big islands. Thus, the construction of defenses on large islands is oriented not only inward, but also to the surrounding small islands. Practically speaking, the defense concept is part of the defense of large islands that prioritizes the implementation of joint TNI Trimatra (tri-service) operations, both conventional and non-conventional, which have a wider scope to reach all areas of the small islands around the large islands.
APA, Harvard, Vancouver, ISO, and other styles
29

Korableva, Galina, and Elena Kucherova. "A functionally-oriented approach to digitalization of the activities of competence centers in the field of agricultural cooperation." SHS Web of Conferences 141 (2022): 01008. http://dx.doi.org/10.1051/shsconf/202214101008.

Full text
Abstract:
The publication summarizes the experience of digitalizing the activities of the competence center in the field of agricultural cooperation. A concept for the digitalization of the competence center in the field of agricultural cooperation has been formulated and practically implemented, including the use of heterogeneous software products, both the author’s own and those of third-party developers. Software products and their functions that can automate the main activities of the competence center in the field of agricultural cooperation are considered. The author’s software product, developed for informational and analytical support of the activities of the competence center opened at the Moscow State University of Technology and Management named after K. G. Razumovsky, automates accounting functions and data mining functions. The main functions of the created and implemented software product are accounting of agricultural producers, established cooperatives and their participation in events organized by the competence center and other organizations, accounting of grantees cooperating with the competence center, and the implementation of cluster data analysis methods to identify potential members of an agricultural cooperative.
APA, Harvard, Vancouver, ISO, and other styles
30

Li, Peijun, Chen Hao, Zhaoyuan Liu, and Tao Liu. "Implementation and optimization of hybrid parallel strategy in HNET." Frontiers in Nuclear Engineering 1 (December 5, 2022). http://dx.doi.org/10.3389/fnuen.2022.1058525.

Full text
Abstract:
For the self-developed three-dimensional whole-core High-fidelity NEutron Transport calculation program HNET, although numerical acceleration algorithms can improve the computational performance of high-fidelity neutron transport in terms of algorithms and models, it still faces critical issues such as long computing times and enormous memory requirements. The rapid development of high-performance clusters provides a foundation for the application of massively parallel computing. Most current MOC programs achieve efficient parallelism over a single type of variable, and spatial domain decomposition methods are the most common parallel schemes. However, this parallelism is limited and cannot fully utilize current state-of-the-art computer resources. To solve this problem, hybrid parallel strategies are implemented in HNET to further expand the degree of parallelism, improve the speed of computation, and reduce the memory requirement. A hybrid MPI/OpenMP method based on domain decomposition and characteristic rays is proposed for the method of characteristics (MOC). For domain decomposition, the simulation domain is divided into spatial subdomains, with each subdomain handled by a different process. On this basis, characteristic-ray parallelism is implemented, taking advantage of the inherent parallelism of the characteristic rays. Meanwhile, the optimization of the hybrid parallel strategy further improves the computation speed by eliminating atomic operations and using private pointer arrays and other techniques. In addition, in the framework of the generalized equivalence theory (GET) based two-level coarse mesh finite difference method (CMFD), solving the CMFD linear system also consumes a certain amount of time. Hence, for CMFD, a hybrid MPI/OpenMP method based on domain decomposition and secondary domain decomposition can be used. Using the secondary domain decomposition method, each subdomain is divided into sub-subdomains, with each sub-subdomain handled by a thread, which allows CMFD to utilize the resources of characteristic-ray parallelism in MOC and also increases the speed of the CMFD computation. Numerical results show that for both steady-state and transient calculations, the hybrid MPI/OpenMP implementation in HNET can further expand the parallelism and accelerate the computation. It can make full use of parallel resources and achieve large-scale parallelism.
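The two-level pattern described above, MPI ranks owning spatial subdomains and OpenMP threads sharing the independent characteristic rays inside each subdomain, has roughly the shape sketched below; the ray-sweep kernel and the reduction standing in for subdomain coupling are placeholders of our own, not the HNET code.

#include <mpi.h>
#include <cstdio>

// Placeholder for tracing one characteristic ray through the local subdomain.
static double sweep_ray(int ray_id) { return 1.0e-3 * ray_id; }

int main(int argc, char** argv) {
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    int rank = 0, nprocs = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    // Level 1: each MPI rank owns one spatial subdomain (implicit here: the rank id).
    // Level 2: the rays crossing that subdomain are independent and shared by threads.
    const int rays_in_subdomain = 1000;
    double local_flux = 0.0;
    #pragma omp parallel for reduction(+ : local_flux)
    for (int r = 0; r < rays_in_subdomain; ++r)
        local_flux += sweep_ray(r);

    // A global reduction stands in for the subdomain-boundary coupling of a real sweep.
    double global_flux = 0.0;
    MPI_Allreduce(&local_flux, &global_flux, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0) std::printf("accumulated flux: %.3f\n", global_flux);

    MPI_Finalize();
    return 0;
}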
APA, Harvard, Vancouver, ISO, and other styles
31

Steinegger, Martin, Markus Meier, Milot Mirdita, Harald Vöhringer, Stephan J. Haunsberger, and Johannes Söding. "HH-suite3 for fast remote homology detection and deep protein annotation." BMC Bioinformatics 20, no. 1 (September 14, 2019). http://dx.doi.org/10.1186/s12859-019-3019-7.

Full text
Abstract:
Background: HH-suite is a widely used open source software suite for sensitive sequence similarity searches and protein fold recognition. It is based on pairwise alignment of profile Hidden Markov models (HMMs), which represent multiple sequence alignments of homologous proteins. Results: We developed a single-instruction multiple-data (SIMD) vectorized implementation of the Viterbi algorithm for profile HMM alignment and introduced various other speed-ups. These accelerated the search methods HHsearch by a factor of 4 and HHblits by a factor of 2 over the previous version 2.0.16. HHblits3 is ∼10× faster than PSI-BLAST and ∼20× faster than HMMER3. Jobs to perform HHsearch and HHblits searches with many query profile HMMs can be parallelized over cores and over cluster servers using OpenMP and message passing interface (MPI). The free, open-source, GPLv3-licensed software is available at https://github.com/soedinglab/hh-suite. Conclusion: The added functionalities and increased speed of HHsearch and HHblits should facilitate their use in large-scale protein structure and function prediction, e.g. in metagenomics and genomics projects.
APA, Harvard, Vancouver, ISO, and other styles
32

Hashim, Jamal Hisham, Mohammad Adam Adman, Zailina Hashim, Mohd Firdaus Mohd Radi, and Soo Chen Kwan. "COVID-19 Epidemic in Malaysia: Epidemic Progression, Challenges, and Response." Frontiers in Public Health 9 (May 7, 2021). http://dx.doi.org/10.3389/fpubh.2021.560592.

Full text
Abstract:
COVID-19 pandemic is the greatest communicable disease outbreak to have hit Malaysia since the 1918 Spanish Flu which killed 34,644 people or 1% of the population of the then British Malaya. In 1999, the Nipah virus outbreak killed 105 Malaysians, while the SARS outbreak of 2003 claimed only 2 lives. The ongoing COVID-19 pandemic has so far claimed over 100 Malaysian lives. There were two waves of the COVID-19 cases in Malaysia. First wave of 22 cases occurred from January 25 to February 15 with no death and full recovery of all cases. The ongoing second wave, which commenced on February 27, presented cases in several clusters, the biggest of which was the Sri Petaling Tabligh cluster with an infection rate of 6.5%, and making up 47% of all cases in Malaysia. Subsequently, other clusters appeared from local mass gatherings and imported cases of Malaysians returning from overseas. Healthcare workers carry high risks of infection due to the daily exposure and management of COVID-19 in the hospitals. However, 70% of them were infected through community transmission and not while handling patients. In vulnerable groups, the incidence of COVID-19 cases was highest among the age group 55 to 64 years. In terms of fatalities, 63% were reported to be aged above 60 years, and 81% had chronic comorbidities such as diabetes, hypertension, and heart diseases. The predominant COVID-19 strain in Malaysia is strain B, which is found exclusively in East Asia. However, strain A, which is mostly found in the USA and Australia, and strain C in Europe were also present. To contain the epidemic, Malaysia implemented a Movement Control Order (MCO) beginning on March 18 in 4 phases over 2 months, ending on May 12. In terms of economic impacts, Malaysia lost RM2.4 billion a day during the MCO period, with an accumulated loss of RM63 billion up to the end of April. Since May 4, Malaysia has relaxed the MCO and opened up its economic sector to relieve its economic burden. Currently, the best approach to achieving herd immunity to COVID-19 is through vaccination rather than by acquiring it naturally. There are at least two candidate vaccines which have reached the final stage of human clinical trials. Malaysia's COVID-19 case fatality rate is lower than what it is globally; this is due to the successful implementation of early preparedness and planning, the public health and hospital system, comprehensive contact tracing, active case detection, and a strict enhanced MCO.
APA, Harvard, Vancouver, ISO, and other styles
33

Lysenko, Viktor G., Aleksandra A. Levitskaia, Valery A. Nikolayev, Olga A. Chopik, Oleg Yu Pokhorukov, Oleg A. Masyukov, and Elena L. Rudneva. "Development of networking cooperation for implementing advanced professional training programs." Revista on line de Política e Gestão Educacional, May 1, 2021, 961–80. http://dx.doi.org/10.22633/rpge.v25iesp.2.15280.

Full text
Abstract:
Nowadays, Russian vocational education is overcoming a stage of profound transformations associated with the peculiarities of the post-industrial society, whose main characteristics are the computerization and digitalization of the economy. Transformations in the digital economy determine new requirements for specialists' training, their competencies, and qualifications. The rapid changes in socio-economic conditions cause the need to transform the vocational training system to meet the demand for specialists with competencies that correspond to current technologies and methods of production. The solution to the relevant problems is facilitated by vocational education's devotion to the "anticipation/advance" and "interaction" principles, which are being successfully implemented in the activities of the advanced vocational training centers opened in Russia in 2019. The advanced vocational training ensures the development of new and promising professions in great demand by the regional economy. Simultaneously, the implementation of the interaction principle allows ensuring interdepartmental coordination when fulfilling the social order for vocational education made by the state, society, and individuals. The article considers forms and features of the interaction between educational organizations and partners, including social partnership, networking cooperation, public and private partnership, and educational and technological clusters. The paper presents the performance results of the Kemerovo region center for advanced vocational training concerning the creation of a regional network of educational organizations and enterprises involved in the joint development and implementation of advanced vocational training programs intended for specialist training.
APA, Harvard, Vancouver, ISO, and other styles
34

Stefan, Mihaela S., Penelope S. Pekow, Christopher M. Shea, Ashley M. Hughes, Nicholas S. Hill, Jay S. Steingrub, Mary Jo S. Farmer, et al. "Update to the study protocol for an implementation-effectiveness trial comparing two education strategies for improving the uptake of noninvasive ventilation in patients with severe COPD exacerbation." Trials 22, no. 1 (December 2021). http://dx.doi.org/10.1186/s13063-021-05855-9.

Full text
Abstract:
Background: There is strong evidence that noninvasive ventilation (NIV) improves the outcomes of patients hospitalized with severe COPD exacerbation, and NIV is recommended as the first-line therapy for these patients. Yet, several studies have demonstrated substantial variation in NIV use across hospitals, leading to preventable morbidity and mortality. In addition, prior studies suggested that efforts to increase NIV use in COPD need to account for the complex and interdisciplinary nature of NIV delivery and the need for team coordination. Therefore, our initial project aimed to compare two educational strategies: online education (OLE) and interprofessional education (IPE), which targets complex team-based care in NIV delivery. Due to the impact of the COVID-19 pandemic on recruitment and the planned intervention, we have made several changes in the study design, statistical analysis, and delivery of implementation strategies, as outlined in the methods. Methods: We originally proposed a two-arm, pragmatic, cluster, randomized hybrid implementation-effectiveness trial comparing two education strategies to improve NIV uptake in patients with severe COPD exacerbation in 20 hospitals with a low baseline rate of NIV use. Due to logistical constraints and slow recruitment, we changed the study design to an open-cohort stepped-wedge design with three steps, which will allow the institutions to enroll when they are ready to participate. Only the IPE strategy will be implemented, and the education will be provided in an online virtual format. Our primary outcome will be the hospital-level risk-standardized NIV proportion for the period post-IPE training, along with the change in rate from the period prior to training. Aim 1 will compare the change over time of NIV use among patients with COPD in the stepped-wedge design. Aim 2 will explore the mediators’ role (respiratory therapist autonomy and team functionality) on the relationship between the implementation strategies and effectiveness. Finally, in Aim 3, through interviews with providers, we will assess the acceptability and feasibility of the educational training. Conclusion: The changes in study design will result in several limitations. Most importantly, the hospitals in the three cohorts are not randomized, as they enroll based on their readiness. Second, the delivery of the IPE is virtual, and it is not known if remote education is conducive to team building. However, this study will be among the first to carefully test the impact of IPE in the inpatient setting and may generalize to other interventions directed to seriously ill patients. Trial registration: ClinicalTrials.gov NCT04206735. Registered on December 20, 2019.
APA, Harvard, Vancouver, ISO, and other styles
35

Haynie, Aisha, Sherry Jin, Leann Liu, Sherrill Pirsamadi, Benjamin Hornstein, April Beeks, Sarah Milligan, et al. "Public Health Surveillance in a Large Evacuation Shelter Post Hurricane Harvey." Online Journal of Public Health Informatics 10, no. 1 (May 22, 2018). http://dx.doi.org/10.5210/ojphi.v10i1.8955.

Full text
Abstract:
Objective 1) Describe HCPH’s disease surveillance and prevention activities within the NRG Center mega-shelter; 2) Present surveillance findings with an emphasis on sharing tools that were developed and may be utilized for future disaster response efforts; 3) Discuss successes achieved, challenges encountered, and lessons learned from this emergency response. Introduction Hurricane Harvey made landfall along the Texas coast on August 25th, 2017, as a Category 4 storm. It is estimated that the ensuing rainfall caused record flooding of at least 18 inches in 70% of Harris County. Over 30,000 residents were displaced and 50 deaths occurred due to the devastation. At least 53 temporary refuge shelters opened in various parts of Harris County to accommodate displaced residents. On the evening of August 29th, Harris County and community partners set up a 10,000-bed mega-shelter at NRG Center in an effort to centralize refuge operations. Harris County Public Health (HCPH) was responsible for round-the-clock surveillance to monitor resident health status and prevent communicable disease outbreaks within the mega-shelter. This was accomplished through direct and indirect resident health assessments, along with coordinated prevention and disease control efforts. Over HCPH’s 20-day active response, and despite the identification of two relatively small but potentially worrisome communicable disease outbreaks, no large-scale disease outbreaks occurred within the NRG Center mega-shelter. Methods Active surveillance was conducted in the NRG shelter to rapidly detect communicable and high-consequence illness and to prevent disease transmission. An online survey tool and a novel epidemiology consulting method were developed to aid in this surveillance. Surveillance included daily review of onsite medical, mental health, pharmacy, and vaccination activities, as well as nightly cot-to-cot resident health surveys. Symptoms of infectious disease, exacerbation of chronic disease, and mental health issues among evacuees were closely monitored. Rapid epidemiology consultations were performed for shelter residents displaying symptoms consistent with communicable illness or other signs of distress during nightly cot surveys. Onsite rapid assay tests and public health laboratory testing were used to confirm disease diagnoses. When indicated, disease control measures were implemented and residents were referred for further evaluation. Frequencies and percentages were used in the descriptive analysis. Results Harris County’s NRG Center mega-shelter housed 3,365 evacuees at its peak. A total of 3,606 household health surveys were completed during 20 days of active surveillance, representing 7,152 individual resident evaluations and 395 epidemiology consultations. Multifaceted surveillance uncovered influenza-like illness and gastrointestinal (GI) complaints, revealing an influenza A outbreak of 20 cases, 3 isolated cases of strep throat, and a norovirus cluster of 5 cases. Disease control activities included the creation of respiratory and GI isolation rooms, provision of over 771 influenza vaccinations, generous distribution of hand sanitizer throughout the shelter, placement of hygiene signage, and frequent bilingual public health public service announcements in the dormitory areas. No widespread outbreaks of communicable disease occurred.
Additionally, a number of shelter residents were referred to the clinic after reporting exacerbation of chronic conditions or mental health concerns, including one individual with suicidal ideation. Conclusions Effective public health surveillance and implementation of disease control measures in disaster shelters are critical to detecting and preventing communicable illness. HCPH’s rigorous surveillance and response system in the NRG Center mega-shelter, including the online survey tool and novel consultation method, resulted in timely identification and isolation of patients with gastrointestinal and influenza-like illness. These were likely key factors in the successful prevention of widespread disease transmission. Additional success factors included productive partnerships with onsite clinical and pharmacy teams, cooperative and engaged shelter leadership, synergistic internal surveillance team dynamics, the availability of student volunteers, sufficient quantities of influenza vaccine, and access to mobile survey technology. Challenges, mostly related to the scope and magnitude of the response, included a lack of pre-designed survey tools, relatively new staff without significant disaster experience, and the simultaneous management of multiple surveillance activities within the community. Personal hurricane-related losses experienced by HCPH staff also affected response efforts. HCPH’s disaster response experience at the NRG mega-shelter and the surveillance tools it developed can serve as a planning guide for future public health emergencies in Harris County and other jurisdictions.
APA, Harvard, Vancouver, ISO, and other styles
36

Hill, Benjamin Mako. "Revealing Errors." M/C Journal 10, no. 5 (October 1, 2007). http://dx.doi.org/10.5204/mcj.2703.

Full text
Abstract:
Introduction In The World Is Not a Desktop, Mark Weiser, the principal scientist and manager of the computer science laboratory at Xerox PARC, stated that "a good tool is an invisible tool." Weiser cited eyeglasses as an ideal technology because with spectacles, he argued, "you look at the world, not the eyeglasses." Although Weiser’s work at PARC played an important role in the creation of the field of "ubiquitous computing", his ideal is widespread in many areas of technology design. Through repetition, and by design, technologies blend into our lives. While technologies, and communications technologies in particular, have a powerful mediating impact, many of the most pervasive effects are taken for granted by most users. When technology works smoothly, its nature and effects are invisible. But technologies do not always work smoothly. A tiny fracture or a smudge on a lens renders glasses quite visible to the wearer. The Microsoft Windows "Blue Screen of Death" on a subway screen in Seoul (Photo credit: Wikimedia Commons). Anyone who has seen the famous "Blue Screen of Death"—the iconic signal of a Microsoft Windows crash—on a public screen or terminal knows how errors can thrust the technical details of previously invisible systems into view. Nobody knows that their ATM runs Windows until the system crashes. Of course, the operating system chosen for a sign or bank machine has important implications for its users. Windows, or an alternative operating system, creates affordances and imposes limitations. Faced with a crashed ATM, a consumer might ask herself whether, with its rampant viruses and security holes, she should really trust an ATM running Windows. Technologies make previously impossible actions possible and many actions easier. In the process, they frame and constrain possible actions. They mediate. Communication technologies allow users to communicate in new ways but constrain communication in the process. In a very fundamental way, communication technologies define what their users can say, to whom they say it, and how they can say it—and what, to whom, and how they cannot. Humanities scholars understand the power, importance, and limitations of technology and technological mediation. Weiser hypothesised that "to understand invisibility the humanities and social sciences are especially valuable, because they specialise in exposing the otherwise invisible." However, technology activists, like those at the Free Software Foundation (FSF) and the Electronic Frontier Foundation (EFF), understand this power of technology as well. Largely constituted by technical members, both organisations, like humanists studying technology, have struggled to communicate their messages to a less-technical public. Before one can argue for the importance of individual control over who owns technology, as both the FSF and EFF do, an audience must first appreciate the power and effect that their technology and its designers have. To understand the power that technology has over its users, users must first see the technology in question. Most users do not. Errors are under-appreciated and under-utilised in their ability to reveal the technology around us. By painting a picture of how certain technologies facilitate certain mistakes, one can better show how technology mediates. By revealing errors, scholars and activists can reveal previously invisible technologies and their effects more generally.
Errors can reveal technology—and its power and can do so in ways that users of technologies confront daily and understand intimately. The Misprinted Word Catalysed by Elizabeth Eisenstein, the last 35 years of print history scholarship provides both a richly described example of technological change and an analysis of its effects. Unemphasised in discussions of the revolutionary social, economic, and political impact of printing technologies is the fact that, especially in the early days of a major technological change, the artifacts of print are often quite similar to those produced by a new printing technology’s predecessors. From a reader’s purely material perspective, books are books; the press that created the book is invisible or irrelevant. Yet, while the specifics of print technologies are often hidden, they are often exposed by errors. While the shift from a scribal to print culture revolutionised culture, politics, and economics in early modern Europe, it was near-invisible to early readers (Eisenstein). Early printed books were the same books printed in the same way; the early press was conceived as a “mechanical scriptorium.” Shown below, Gutenberg’s black-letter Gothic typeface closely reproduced a scribal hand. Of course, handwriting and type were easily distinguishable; errors and irregularities were inherent in relatively unsteady human hands. Side-by-side comparisons of the hand-copied Malmesbury Bible (left) and the black letter typeface in the Gutenberg Bible (right) (Photo credits Wikimedia Commons & Wikimedia Commons). Printing, of course, introduced its own errors. As pages were produced en masse from a single block of type, so were mistakes. While a scribe would re-read and correct errors as they transcribed a second copy, no printing press would. More revealingly, print opened the door to whole new categories of errors. For example, printers setting type might confuse an inverted n with a u—and many did. Of course, no scribe made this mistake. An inverted u is only confused with an n due to the technological possibility of letter flipping in movable type. As print moved from Monotype and Linotype machines, to computerised typesetting, and eventually to desktop publishing, an accidentally flipped u retreated back into the realm of impossibility (Mergenthaler, Swank). Most readers do not know how their books are printed. The output of letterpresses, Monotypes, and laser printers are carefully designed to produce near-uniform output. To the degree that they succeed, the technologies themselves, and the specific nature of the mediation, becomes invisible to readers. But each technology is revealed in errors like the upside-down u, the output of a mispoured slug of Monotype, or streaks of toner from a laser printer. Changes in printing technologies after the press have also had profound effects. The creation of hot-metal Monotype and Linotype, for example, affected decisions to print and reprint and changed how and when it is done. New mass printing technologies allowed for the printing of works that, for economic reasons, would not have been published before. While personal computers, desktop publishing software, and laser printers make publishing accessible in new ways, it also places real limits on what can be printed. Print runs of a single copy—unheard of before the invention of the type-writer—are commonplace. But computers, like Linotypes, render certain formatting and presentation difficult and impossible. 
Errors provide a space where the particulars of printing make technologies visible in their products. An inverted u exposes a human typesetter, a letterpress, and a hasty error in judgment. Encoding errors and botched smart quotation marks—a ? in place of a “—are only possible with a computer. Streaks of toner are only produced by malfunctioning laser printers. Dust can reveal the photocopied provenance of a document. Few readers reflect on the power or importance of the particulars of the technologies that produced their books. In part, this is because the technologies are so hidden behind their products. Through errors, these technologies and the power they have over the "what" and "how" of printing are exposed. For scholars and activists attempting to expose exactly this, errors are an under-exploited opportunity. Typing Mistyping While errors have a profound effect on media consumption, their effect is equally important, and perhaps more strongly felt, when they occur during media creation. Like all mediating technologies, input technologies make it easier or more difficult to create certain messages. It is, for example, much easier to write a letter with a keyboard than it is to type a picture. It is much more difficult to write in languages with frequent use of accents on an English language keyboard than it is on a European keyboard. But while input systems like keyboards have a powerful effect on the nature of the messages they produce, they are invisible to recipients of messages. Except when the messages contain errors. Typists are much more likely to confuse letters in close proximity on a keyboard than people writing by hand or setting type. As keyboard layouts switch between countries and languages, new errors appear. The following is from a personal email: hez, if there’s not a subversion server handz, can i at least have the root password for one of our machines? I read through the instructions for setting one up and i think i could do it. [emphasis added] The email was quickly typed and, in two places, confuses the character y with z. Separated by five characters on QWERTY keyboards, these two letters are not easily mistaken or mistyped. However, their positions are swapped between the German and English keyboard layouts. In fact, the author was an American typing in a Viennese Internet cafe. The source of his repeated error was his false expectations—his familiarity with one keyboard layout in the context of another. The error revealed the context, both keyboard layouts, and his dependence on a particular keyboard. With the error, the keyboard, previously invisible, was exposed as an inter-mediator with its own particularities and effects. This effect does not change with mobile devices, where new input methods have introduced powerful new ways of communicating. SMS messages on mobile phones are constrained in length to 160 characters. The result has been new styles of communication using SMS that some have gone so far as to call a new language or dialect, TXTSPK (Thurlow). Yet while they are obvious to social scientists, the profound effects of text message technologies on communication are unfelt by most users, who simply see the messages themselves. More visible is the fact that input from a phone keypad has opened the door to errors that reveal input technology and its effects.
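The y-for-z substitutions in that email are mechanical enough to simulate. The following sketch is hypothetical and not from the article: it models an American with QWERTY muscle memory typing on an Austrian QWERTZ keyboard, where the physical keys producing y and z are swapped, so an intended "hey" and "handy" come out as "hez" and "handz". Only that single swap is modelled, not the layouts' other differences.

```python
# Hypothetical sketch (not from the article): simulate QWERTY-trained fingers
# on a QWERTZ keyboard. The keys that produce "y" and "z" trade places between
# the two layouts, so only that swap is modelled here.
SWAP_YZ = str.maketrans("yzYZ", "zyZY")

def qwerty_habits_on_qwertz(intended: str) -> str:
    """Return what actually appears when QWERTY habits are used on a QWERTZ keyboard."""
    return intended.translate(SWAP_YZ)

if __name__ == "__main__":
    print(qwerty_habits_on_qwertz("hey, if there's not a subversion server handy"))
    # -> hez, if there's not a subversion server handz
```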
In the standard method of SMS input, users press or hold buttons to cycle through the letters associated with numbers on a numeric keypad (e.g., 2 represents A, B, and C; to produce a single C, a user presses 2 three times). This system makes it easy to confuse characters based on a shared association with a single number. Tegic’s popular T9 software allows users to type in words by pressing the number associated with each letter of each word in quick succession. T9 uses a database to pick the most likely word that maps to that sequence of numbers. While the system allows for quick input of words and phrases on a phone keypad, it also allows for the creation of new types of errors. A user trying to type me might accidentally write of because both words are mapped to the combination of 6 and 3 and because of is a more common word in English. T9 might confuse snow and pony while no human, and no other input method, would. Users composing SMS messages are constrained by the technology and its design. The fact that text messages must be short and the difficult nature of phone-based input methods have led to unique and highly constrained forms of communication like TXTSPK (Sutherland). Yet, while the influence of these input technologies is profound, users are rarely aware of it. Errors provide a situation where the particularities of a technology become visible, and an opportunity for users to connect with scholars exposing the effects of technology and with activists arguing for increased user control. Google News Denuded As technologies become more complex, they often become more mysterious to their users. While such technologies are not invisible, users know little about the way they work, both because users become accustomed to them and because the technological specifics are hidden inside companies, behind web interfaces, within compiled software, and in "black boxes" (Latour). Errors can help reveal these technologies and expose their nature and effects. One such system, Google News, aggregates news stories and is designed to make it easy to read multiple stories on the same topic. The system works with "topic clusters" that attempt to group articles covering the same news event. The more items in a news cluster (especially from popular sources) and the closer together they appear in time, the higher confidence Google’s algorithms have in the "importance" of a story and the higher the likelihood that the cluster of stories will be listed on the Google News page. While the decision to include or remove individual sources is made by humans, the act of clustering is left to Google’s software. Because computers cannot "understand" the text of the articles being aggregated, clustering happens less intelligently. We know that clustering is primarily based on comparison of shared text and keywords—especially proper nouns. This process is aided by the widespread use of wire services like the Associated Press and Reuters, which provide article text used, at least in part, by large numbers of news sources. Google has been reluctant to divulge the implementation details of its clustering engine, but users have been able to deduce the description above, and much more, by watching how Google News works and, more importantly, how it fails. For example, we know that Google News looks for shared text and keywords because text that deviates heavily from other articles is not "clustered" appropriately—even if it is extremely similar semantically.
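Because Google has never published its clustering engine, any code can only mimic the behaviour deduced above. The toy sketch below is a hypothetical illustration, not Google's implementation: it groups headlines by crude word overlap, gives extra weight to shared capitalised tokens standing in for proper nouns, and shows how a semantically similar but differently worded story falls out of the cluster, exactly the failure mode described here.

```python
# Hypothetical toy illustrating the deduced behaviour; NOT Google's algorithm.
import re

def tokens(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def proper_nouns(text):
    # Crude stand-in for proper-noun detection: any capitalised word.
    return set(re.findall(r"\b[A-Z][a-z]+\b", text))

def similarity(a, b):
    ta, tb = tokens(a), tokens(b)
    jaccard = len(ta & tb) / max(1, len(ta | tb))
    shared_names = len(proper_nouns(a) & proper_nouns(b))
    return jaccard + 0.1 * shared_names   # extra weight for shared names (assumption)

def cluster(headlines, threshold=0.3):
    clusters = []
    for h in headlines:
        for c in clusters:
            if any(similarity(h, other) >= threshold for other in c):
                c.append(h)
                break
        else:
            clusters.append([h])
    return clusters

headlines = [
    "Iran offers to share nuclear technology",
    "Iran says it is willing to share nuclear technology",
    "Tehran signals openness on atomic know-how",  # same event, reworded
]
for group in cluster(headlines):
    print(group)
```

Raising or lowering the overlap threshold reproduces both failure modes: too strict and rewritten stories escape the cluster; too loose and superficially similar stories are grouped together.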
In this vein, blogger Philipp Lenssen gives advice to news sites that want to stand out in Google News: Of course, stories don’t have to be exactly the same to be matched—but if they are too different, they’ll also not appear in the same group. If you want to stand out in Google News search results, make your article be original, or else you’ll be collapsed into a cluster where you may or may not appear on the first results page. While a human editor has no trouble understanding that an article using different terms (and different, but equally appropriate, proper nouns) is discussing the same issue, the software behind Google News is more fragile. As a result, Google News fails to connect linked stories that no human editor would miss. A section of a screenshot of Google News cluster aggregation showing what appears to be an error. But just as importantly, Google News can connect stories that most human editors will not. Google News’s clustering of two stories, by Al Jazeera on how "Iran offers to share nuclear technology" and by the Guardian on how "Iran threatens to hide nuclear program," seems at first glance to be a mistake. Hiding and sharing are diametrically opposed and mutually exclusive. But while it is true that most human editors would not cluster these stories, it is less clear that it is, in fact, an error. Investigation shows that the two articles are about the release of a single statement by the government of Iran on the same day. The spin in each is significant enough, and different enough, that one could argue the aggregation of those stories was incorrect—or not. The error reveals details about the way that Google News works and about its limitations. It reminds readers of Google News of the technological nature of their news’ mediation and gives them a taste of the type of selection—and mis-selection—that goes on out of view. Users of Google News might be prompted to compare the system to other, more human methods. Ultimately, it can remind them of the power that Google News (and humans in similar roles) has over our understanding of news and the world around us. These are all familiar arguments to social scientists of technology and echo the arguments of technology activists. By focusing on similar errors, both groups can connect to users less used to thinking in these terms. Conclusion Reflecting on the role of the humanities in a world of increasingly invisible technology for the blog "Humanities, Arts, Science and Technology Advanced Collaboratory," Duke English professor Cathy Davidson writes: When technology is accepted, when it becomes invisible, [humanists] really need to be paying attention. This is one reason why the humanities are more important than ever. Analysis—qualitative, deep, interpretive analysis—of social relations, social conditions, in a historical and philosophical perspective is what we do so well. The more technology is part of our lives, the less we think about it, the more we need rigorous humanistic thinking that reminds us that our behaviours are not natural but social, cultural, economic, and with consequences for us all. Davidson concisely points out the strength and importance of the humanities in evaluating technology. She is correct; users of technologies do not frequently analyse the social relations, conditions, and effects of the technology they use. Activists at the EFF and FSF argue that this lack of critical perspective leads to exploitation of users (Stallman).
But users, and the technology they use, are only susceptible to this type of analysis when they understand the applicability of these analyses to their technologies. Davidson leaves open the more fundamental question: How will humanists first reveal technology so that they can reveal its effects? Scholars and activists must do more than contextualise and describe technology. They must first render invisible technologies visible. As the revealing nature of errors in printing systems, input systems, and "black box" software systems like Google News shows, errors represent a point where invisible technology is already visible to users. As such, these errors, and countless others like them, can be treated as the tip of an iceberg. They represent an important opportunity for humanists and activists to further expose technologies and the beginning of a process that aims to reveal much more. References Davidson, Cathy. "When Technology Is Invisible, Humanists Better Get Busy." HASTAC. (2007). 1 September 2007 <http://www.hastac.org/node/779>. Eisenstein, Elizabeth L. The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early-Modern Europe. Cambridge, UK: Cambridge University Press, 1979. Latour, Bruno. Pandora’s Hope: Essays on the Reality of Science Studies. Harvard UP, 1999. Lenssen, Philipp. "How Google News Indexes." Google Blogscoped. 2006. 1 September 2007 <http://blogoscoped.com/archive/2006-07-28-n49.html>. Mergenthaler, Ottmar. The Biography of Ottmar Mergenthaler, Inventor of the Linotype. New ed. New Castle, Delaware: Oak Knoll Books, 1989. Monotype: A Journal of Composing Room Efficiency. Philadelphia: Lanston Monotype Machine Co., 1913. Stallman, Richard M. Free Software, Free Society: Selected Essays of Richard M. Stallman. Boston, Massachusetts: Free Software Foundation, 2002. Sutherland, John. "Cn u txt?" Guardian Unlimited. London, UK. 2002. Swank, Alvin Garfield, and United Typothetae America. Linotype Mechanism. Chicago, Illinois: Dept. of Education, United Typothetae America, 1926. Thurlow, C. "Generation Txt? The Sociolinguistics of Young People’s Text-Messaging." Discourse Analysis Online 1.1 (2003). Weiser, Mark. "The World Is Not a Desktop." ACM Interactions 1.1 (1994): 7-8.
APA, Harvard, Vancouver, ISO, and other styles