Journal articles on the topic 'Parallel and sequential regions'


Listed below are the top 50 journal articles on the topic 'Parallel and sequential regions.' Each entry gives the bibliographic reference followed by its abstract.


1

del Rio Astorga, David, Manuel F. Dolz, Luis Miguel Sánchez, J. Daniel García, Marco Danelutto, and Massimo Torquati. "Finding parallel patterns through static analysis in C++ applications." International Journal of High Performance Computing Applications 32, no. 6 (March 9, 2017): 779–88. http://dx.doi.org/10.1177/1094342017695639.

Abstract:
Since the ‘free lunch’ of processor performance is over, parallelism has become the new trend in hardware and architecture design. However, parallel resources deployed in data centers are underused in many cases, given that sequential programming is still deeply rooted in current software development. To address this problem, new methodologies and techniques for parallel programming have been progressively developed. For instance, parallel frameworks offering programming patterns allow concurrency to be expressed in applications so as to better exploit parallel hardware. Nevertheless, a large portion of production software, from a broad range of scientific and industrial areas, is still developed sequentially. Considering that these software modules contain thousands, or even millions, of lines of code, an extremely large amount of effort is needed to identify parallel regions. To pave the way in this area, this paper presents the Parallel Pattern Analyzer Tool, a software component that aids the discovery and annotation of parallel patterns in source code. This tool simplifies the transformation of sequential source code into parallel code. Specifically, we provide support for identifying Map, Farm, and Pipeline parallel patterns and evaluate the quality of the detection for a set of different C++ applications.
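The Map pattern that the tool above detects corresponds to a loop whose iterations are mutually independent. As a minimal, hypothetical sketch (not the authors' tool or their C++ framework syntax), the refactoring such a detection enables looks like this in Python:

```python
from concurrent.futures import ThreadPoolExecutor

def transform(x):
    # per-element kernel: each call is independent of the others,
    # which is what makes the surrounding loop a Map candidate
    return x * x + 1

data = list(range(8))

# sequential form: the kind of loop a static analyzer would flag
seq = [transform(x) for x in data]

# Map-pattern form: the same elements, computed by a worker pool
with ThreadPoolExecutor(max_workers=4) as pool:
    par = list(pool.map(transform, data))

assert seq == par  # the refactoring must preserve the result
```

Farm and Pipeline follow the same principle with task-level and stage-level decomposition, respectively.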
2

Qawasmeh, Ahmad, Salah Taamneh, Ashraf H. Aljammal, Nabhan Hamadneh, Mustafa Banikhalaf, and Mohammad Kharabsheh. "Parallelism exploration in sequential algorithms via animation tool." Multiagent and Grid Systems 17, no. 2 (August 23, 2021): 145–58. http://dx.doi.org/10.3233/mgs-210347.

Abstract:
Different high-performance techniques, such as profiling, tracing, and instrumentation, have been used to tune and enhance the performance of parallel applications. However, these techniques do not show how to explore the potential for parallelism in a given application. Animating and visualizing the execution of a sequential algorithm provides a thorough understanding of its usage and functionality. In this work, an interactive web-based educational animation tool was developed to assist users in analyzing sequential algorithms to detect parallel regions, regardless of the parallel programming model used. The tool simplifies the learning of algorithms and helps students analyze programs efficiently. Our statistical t-test study on a sample of students showed a significant improvement in their perception of the mechanism and parallelism of applications and an increase in their willingness to learn algorithms and parallel programming.
3

Sajan, Priya P., and S. S. Kumar. "GVF snake algorithm-a parallel approach." International Journal of Engineering & Technology 7, no. 1.1 (December 21, 2017): 101. http://dx.doi.org/10.14419/ijet.v7i1.1.9206.

Abstract:
Multicore architecture is an emerging computer technology in which multiple processing elements act as independent processing cores sharing a common memory. Digital image segmentation is a widely used medical imaging application to extract regions of interest. The GVF active contour is a region-based segmentation technique that extracts curved and irregularly shaped regions by diffusing gradient vectors under the influence of internal and external forces. It requires prior knowledge of the geometric position and anatomical structures to locate the specific region defined within an image domain, and it involves complex mathematical calculations that consume a large amount of CPU time, which can adversely affect the overall efficiency of the process. With the advancements in multicore technology, this processing delay can be reduced by parallelizing the computation of the GVF field over the specific region of interest to be segmented. OpenMP is a shared-memory parallel programming interface that supports multicore parallelism through an extensive and powerful set of directives and APIs. This article provides a high-level overview of OpenMP and its effectiveness and ease of use in parallelizing existing sequential methods via instruction-, data-, and loop-level parallelism. Performance is compared between sequential versions of the program written in Matlab, Java, and C and the proposed parallelized OpenMP version, as well as across operating systems such as Windows and Linux.
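The loop-level parallelism described above exploits the fact that rows (or pixels) of a field update can be processed independently. OpenMP expresses this in C/C++ with a `parallel for` directive; as a language-neutral sketch of the same idea (the per-row function and data here are illustrative, not the actual GVF math), a row-wise computation can be distributed across threads:

```python
from concurrent.futures import ThreadPoolExecutor

def gradient_1d(row):
    # toy stand-in for a per-row field computation (not real GVF math):
    # forward differences along the row
    return [b - a for a, b in zip(row, row[1:])]

image = [[1, 2, 4, 7], [0, 5, 5, 6]]

# rows are independent, so they can be distributed across threads,
# analogous to an OpenMP "parallel for" over the row index in C
with ThreadPoolExecutor() as pool:
    field = list(pool.map(gradient_1d, image))
```

Since no row depends on another, the parallel result matches the sequential one row for row.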
4

Zabiri, Haslinda, M. Ariff, Lemma Dendena Tufa, and Marappagounder Ramasamy. "A Comparison Study between Integrated OBFARX-NN and OBF-NN for Modeling of Nonlinear Systems in Extended Regions of Operation." Applied Mechanics and Materials 625 (September 2014): 382–85. http://dx.doi.org/10.4028/www.scientific.net/amm.625.382.

Abstract:
In this paper, the combination of linear and nonlinear models in parallel for nonlinear system identification is investigated. A residuals-based sequential identification algorithm is developed using the parallel integration of a linear orthonormal basis filter autoregressive with exogenous input (OBFARX) model and a nonlinear neural network (NN) model. The model's performance is then compared against the previously developed parallel OBF-NN model in a nonlinear CSTR case study over extended regions of operation (i.e., extrapolation capability).
5

Akyildiz, Ömer Deniz, Dan Crisan, and Joaquín Míguez. "Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization." Statistics and Computing 30, no. 6 (July 29, 2020): 1645–63. http://dx.doi.org/10.1007/s11222-020-09964-4.

Abstract:
We introduce and analyze a parallel sequential Monte Carlo methodology for the numerical solution of optimization problems that involve the minimization of a cost function that consists of the sum of many individual components. The proposed scheme is a stochastic zeroth-order optimization algorithm which demands only the capability to evaluate small subsets of components of the cost function. It can be depicted as a bank of samplers that generate particle approximations of several sequences of probability measures. These measures are constructed in such a way that they have associated probability density functions whose global maxima coincide with the global minima of the original cost function. The algorithm selects the best performing sampler and uses it to approximate a global minimum of the cost function. We prove analytically that the resulting estimator converges to a global minimum of the cost function almost surely and provide explicit convergence rates in terms of the number of generated Monte Carlo samples and the dimension of the search space. We show, by way of numerical examples, that the algorithm can tackle cost functions with multiple minima or with broad “flat” regions which are hard to minimize using gradient-based techniques.
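The "bank of samplers, select the best" structure described above can be caricatured with a much cruder zeroth-order scheme. The sketch below only illustrates that structure, not the paper's algorithm: each sampler jitters a particle cloud, keeps the lowest-cost half, and clones it, and the best sampler's best particle is returned (the cost function and all parameters are made up):

```python
import random

def cost(x):
    # toy cost function with its global minimum at x = 2.0 (made up)
    return (x - 2.0) ** 2

def run_sampler(rng, iters=200, n=50):
    # one sampler of the bank: jitter the particles, sort by cost,
    # then clone the best half -- a crude zeroth-order stand-in for
    # the paper's sequential Monte Carlo machinery
    particles = [rng.uniform(-10.0, 10.0) for _ in range(n)]
    for _ in range(iters):
        moved = [p + rng.gauss(0.0, 0.3) for p in particles]
        moved.sort(key=cost)
        particles = moved[: n // 2] * 2  # resampling step
    return min(particles, key=cost)

# bank of independent samplers; the best performer supplies the answer
bank = [run_sampler(random.Random(seed)) for seed in range(4)]
best = min(bank, key=cost)
assert abs(best - 2.0) < 0.1
```

Because each sampler only evaluates the cost function pointwise, no gradient is ever required, which is the sense in which the method is zeroth-order.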
6

Malakar, Preeti, Thomas George, Sameer Kumar, Rashmi Mittal, Vijay Natarajan, Yogish Sabharwal, Vaibhav Saxena, and Sathish S. Vadhiyar. "A Divide and Conquer Strategy for Scaling Weather Simulations with Multiple Regions of Interest." Scientific Programming 21, no. 3-4 (2013): 93–107. http://dx.doi.org/10.1155/2013/682356.

Abstract:
Accurate and timely prediction of weather phenomena, such as hurricanes and flash floods, requires high-fidelity, compute-intensive simulations of multiple finer regions of interest within a coarse simulation domain. Current weather applications execute these nested simulations sequentially using all the available processors, which is sub-optimal due to their sub-linear scalability. In this work, we present a strategy for the parallel execution of multiple nested-domain simulations based on partitioning the 2-D processor grid into disjoint rectangular regions associated with each domain. We propose a novel combination of performance prediction, processor allocation methods, and topology-aware mapping of the regions on torus interconnects. Experiments on IBM Blue Gene systems using WRF show that the proposed strategies result in performance improvements of up to 33% with topology-oblivious mapping and up to an additional 7% with topology-aware mapping over the default sequential strategy.
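The core of the partitioning step above is proportional allocation: each nested domain receives a share of the processor grid in line with its predicted load, so all domains can run concurrently instead of back-to-back. A toy version of that allocation (the function name and load model are hypothetical, not from the paper) might look like:

```python
def split_procs(total_procs, loads):
    # proportional allocation: each nested domain gets a share of the
    # processor grid matching its predicted load, at least one
    # processor each (load model is a made-up stand-in for the
    # paper's performance-prediction step)
    total = float(sum(loads))
    alloc = [max(1, round(total_procs * w / total)) for w in loads]
    while sum(alloc) > total_procs:  # trim any rounding overshoot
        alloc[alloc.index(max(alloc))] -= 1
    return alloc

# two nested domains, one predicted three times as costly as the other
assert split_procs(64, [3.0, 1.0]) == [48, 16]
```

The paper's actual strategy additionally carves these shares into rectangles of the 2-D grid and maps them topology-aware onto the torus; the sketch covers only the proportional split.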
7

Chapuis, Guillaume, Mathilde Le Boudic-Jamin, Rumen Andonov, Hristo Djidjev, and Dominique Lavenier. "Parallel Seed-Based Approach to Multiple Protein Structure Similarities Detection." Scientific Programming 2015 (2015): 1–12. http://dx.doi.org/10.1155/2015/279715.

Abstract:
Finding similarities between protein structures is a crucial task in molecular biology. Most of the existing tools require proteins to be aligned in an order-preserving way and find only single alignments even when multiple similar regions exist. We propose a new seed-based approach that discovers multiple pairs of similar regions. Its computational complexity is polynomial, and it comes with a quality guarantee: the returned alignments have root mean squared deviations (both coordinate-based and internal-distances-based) lower than a given threshold, if such alignments exist. We do not require the alignments to be order-preserving (i.e., we consider nonsequential alignments), which makes our algorithm suitable for detecting similar domains when comparing multidomain proteins as well as for detecting structural repetitions within a single protein. Because the search space for nonsequential alignments is much larger than for sequential ones, the computational burden is addressed by extensive use of parallel computing techniques: coarse-grain parallelism making use of available CPU cores and fine-grain parallelism exploiting bit-level concurrency as well as vector instructions.
8

Cruz, Henry, Martina Eckert, Juan M. Meneses, and J. F. Martínez. "Fast Evaluation of Segmentation Quality with Parallel Computing." Scientific Programming 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/5767521.

Abstract:
In digital image processing and computer vision, a fairly frequent task is the performance comparison of different algorithms on enormous image databases. This task is usually time-consuming and tedious, so any tool that simplifies the work is welcome. To achieve efficient and more practical handling of this normally tedious evaluation, we implemented an automatic detection system with the help of MATLAB®’s Parallel Computing Toolbox™. The key parts of the system have been parallelized to achieve, on the one hand, simultaneous execution and analysis of segmentation algorithms and, on the other hand, evaluation of detection accuracy for nonforested regions as a study case. As a positive side effect, CPU usage was reduced and processing time decreased significantly, by 68.54% compared to sequential processing (i.e., executing the system with each algorithm one by one).
9

Li, Shaoda, Mingzhe Liu, Min Zhou, and Yizhang Yin. "Study of Random Particle Traffic on a Multi-Lattice Junction." Modern Physics Letters B 27, no. 09 (March 15, 2013): 1350063. http://dx.doi.org/10.1142/s0217984913500632.

Abstract:
In this paper, we investigate particle traffic on an m-input n-output (MINO) junction using totally asymmetric exclusion processes (TASEPs) under random sequential update. The model is suitable for describing biological transport. A general theoretical solution for the traffic dynamics of TASEPs is developed based on a mean-field approximation. It is found that the low-density and high-density regions can be calculated qualitatively and quantitatively once the numbers of inputs m and/or outputs n are determined. The phase diagram, system current, and density profiles are obtained through theoretical analysis and supported by Monte Carlo simulations. A comparison of an m-input n-output TASEP junction under random sequential and parallel updates is also reported.
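Under a random sequential update, one lattice site is chosen uniformly at random per elementary step, and a particle there hops to its right neighbour if the exclusion rule allows. A minimal periodic-lattice sketch of that update (illustrative only; the paper's junction geometry and boundary rates are not modeled here):

```python
import random

def tasep_step(lattice, rng):
    # random sequential update: choose one site uniformly at random; a
    # particle there hops to its right neighbour (periodic boundary)
    # only if that neighbour is empty -- the exclusion rule
    n = len(lattice)
    i = rng.randrange(n)
    j = (i + 1) % n
    if lattice[i] == 1 and lattice[j] == 0:
        lattice[i], lattice[j] = 0, 1

rng = random.Random(1)
ring = [1, 1, 0, 0, 1, 0, 0, 0]
for _ in range(1000):
    tasep_step(ring, rng)

# exclusion conserves particle number and forbids double occupancy
assert sum(ring) == 3 and all(s in (0, 1) for s in ring)
```

A parallel update would instead attempt hops at all sites simultaneously within each time step; the contrast between the two schemes is what the abstract's final comparison concerns.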
10

Yashin, Sergei N., Egor V. Koshelev, and Aleksandr V. Kuptsov. "Developing an Industry and Innovation Cluster Strategy Through the Parallel and Sequential Real Options Method." Finance and Credit 27, no. 8 (August 30, 2021): 1724–47. http://dx.doi.org/10.24891/fc.27.8.1724.

Abstract:
Subject. This article considers the issues of formation of an innovation and industrial cluster's strategy through the parallel and sequential real options method. Objectives. The article aims to develop an innovation and industrial cluster's strategy formation methodology using parallel and sequential real options. Methods. For the study, we used the compound real options technique. Results. The article presents an original methodology for developing an innovation and industrial cluster's development strategy, taking into account the assessment of the priorities of the cluster itself and the region where it is located. The process of forming a strategy for the development of a pilot cluster of the electric power industry in the Nizhny Novgorod Oblast, represented by PAO TNS Energo NN, is considered as a case study of the implementation of the presented methodology. Conclusions and Relevance. The presented methodology will help further realize the innovative potential available in the region. The results obtained can be useful to public authorities when planning the development of industrial and innovation clusters and the harmonious development of the country's regions.
11

Zhang, Mengchuang, Shasha Xia, Xiaochuan Li, Qin Yao, Yang Xu, and Zhiping Yin. "Systematic Reliability-Based Multidisciplinary Optimization by Parallel Adaptive Importance Candidate Region." Aerospace 9, no. 5 (April 26, 2022): 240. http://dx.doi.org/10.3390/aerospace9050240.

Abstract:
Reliability-based design optimization (RBDO) has become a prevalent design approach in aeronautical and aerospace engineering. Its main problem is that it is impractical in complex cases with multiple failure regions, especially in multi-objective optimization. Active learning methods can adapt the number of samples to reach a relatively acceptable accuracy. The problem with RBDO using the traditional active learning Kriging (ALK) method is that the design space is generally static and only one training point is selected at a time, which is not reasonable from the standpoint of importance sampling and parallel calculation; as a consequence, the achievable accuracy improvement is limited. In this paper, we investigate how to obtain optimal sample sets for the design space and the reliability-assessment space in parallel, simultaneously. A strategy of parallel adaptive importance candidate (PAIC) region with ALK is proposed, and a sequential optimization and reliability assessment (SORA) method is modified to efficiently improve the accuracy. Importance sampling is used as a demonstration for the modified, more accurate SORA. The method is then verified on mathematical cases and the scooping system of an amphibious aircraft.
14

Lindborg, B. "Polymerase Domains of Human Immunodeficiency Virus Type 1 Reverse Transcriptase and Herpes Simplex Virus Type 1 DNA Polymerase: Their Predicted Three-Dimensional Structures and some Putative Functions in Comparison with E. Coli DNA Polymerase I. A Critical Survey." Antiviral Chemistry and Chemotherapy 3, no. 4 (August 1992): 223–41. http://dx.doi.org/10.1177/095632029200300405.

Abstract:
Hypothetical three-dimensional models for the entire polymerase domain of HIV-1 reverse transcriptase (HIV RT) and conserved regions of HSV-1 DNA polymerase (HSV pol) were created, primarily from literature data on mutations and principles of protein structure, and compared with those of E. coli DNA polymerase I (E. coli pol I). The corresponding parts, performing similar functions, were found to be analogous, not homologous, in structure with different β topologies and sequential arrangement. The polymerase domain of HSV pol is shown to form an anti-parallel β-sheet with α-helices, but with a topology different from that of the Klenow fragment of E. coli pol I. The main part of the polymerase domain of HIV RT is made up of a basically parallel β-sheet and α-helices with a topology similar to the nucleotide-binding p21 ras proteins. The putative functions of some conserved or invariant amino acids in the three polymerase families are discussed.
15

Telenkov, M. P., and Yu A. Mityagin. "Sequential Resonant Tunneling Between Landau Levels in GaAs/AlGaAs Superlattices in Strong Tilted Magnetic and Electric Fields." International Journal of Modern Physics B 21, no. 08n09 (April 10, 2007): 1594–99. http://dx.doi.org/10.1142/s0217979207043269.

Abstract:
The transverse resonant tunneling transport and electric-field domain formation in GaAs/AlGaAs superlattices were investigated in a strong tilted magnetic field. The magnetic-field component parallel to the structure layers causes intensive tunneling transitions between Landau levels with Δn≠0, resulting in considerable "inhomogeneous" broadening of the intersubband tunneling resonance as well as a shift of the resonance toward higher electric fields. This leads to noticeable changes in the I-V characteristics of the superlattice, namely a smoothing of the periodic NDC structure on the plateau-like regions caused by the formation of electric-field domains and a shift of the plateaus toward higher applied voltage. The predicted behavior of the I-V characteristics in a magnetic field was confirmed experimentally.
16

Ardell, J. L., and W. C. Randall. "Selective vagal innervation of sinoatrial and atrioventricular nodes in canine heart." American Journal of Physiology-Heart and Circulatory Physiology 251, no. 4 (October 1, 1986): H764—H773. http://dx.doi.org/10.1152/ajpheart.1986.251.4.h764.

Abstract:
Parasympathetic pathways mediating chronotropic and dromotropic responses to cervical vagal stimulation were determined from sequential, restricted, intrapericardial dissection around major cardiac vessels. Although right cervical vagal input evoked significantly greater bradycardia, supramaximal electrical stimulation of either vagus produced similar ventricular rates, both with and without simultaneous atrial pacing. Dissection of the triangular fat pad at the junction of the inferior vena cava-inferior left atrium (IVC-ILA) invariably eliminated all vagal input to the atrioventricular (AV) nodal region. Yet IVC-ILA dissection had minimal influence on evoked-chronotropic responses to either cervical vagal or stellate ganglia stimulation. Respective intrapericardial projection pathways, from either right or left vagi, are sufficiently distinct to allow unilateral parasympathetic denervation of the sinoatrial (SA) and atrioventricular (AV) nodal regions. Left vagal projections to the SA and AV nodal regions course primarily along and between the right pulmonary artery and left superior pulmonary vein. Right vagal projections to the SA and AV nodal regions are somewhat more diffuse but concentrate around the right pulmonary vein complex and adjacent segments of the right pulmonary artery. We conclude there are parallel, yet functionally distinct, inputs from right and left vagi to the SA and AV nodal regions.
17

Buettner, Reinhard, Katharina Koenig, Martin Peifer, Katrin Stamm, Marc Christiaan Allardt Bos, Lucia Nogova, Thomas Zander, et al. "High-throughput parallel amplicon sequencing of common driver mutations from FFPE lung cancer samples in molecular pathologic routine diagnostics for a regional health care provider network." Journal of Clinical Oncology 31, no. 15_suppl (May 20, 2013): e12517-e12517. http://dx.doi.org/10.1200/jco.2013.31.15_suppl.e12517.

Abstract:
Background: Treatment paradigms for non-small-cell lung cancer have shifted from being histology-based towards incorporating molecular subtypes involving particular genetic alterations, such as mutations in EGFR or translocations of ALK. The list of targetable lesions is rapidly increasing and includes mutations in genes such as EGFR, HER2, KRAS, ALK, BRAF, PIK3CA, AKT1, ROS1, NRAS, FGFR1, and MAP2K1. Analysis of these potential targets is becoming a challenge in terms of workload, tissue availability, and cost. Within the Network Genomic Medicine Lung Cancer (NGM), a regional molecular screening network of the Center for Integrated Oncology Köln Bonn, we aimed to improve on the sequential analysis of a set of 9 target amplicons by Sanger sequencing using benchtop ultra-deep parallel sequencing platforms. We aimed to reduce (1) the time required for comprehensive molecular diagnostics and (2) the minimal amount of formalin-fixed, paraffin-embedded (FFPE) derived input DNA, (3) while at the same time increasing the number of target regions analysed. Methods: We established a multiplex PCR to amplify up to 640 lung-cancer-relevant target regions from as little as 20 ng of FFPE-derived tumor DNA. The amplicon libraries were ligated to adapters encompassing identifier sequences that allowed multiplexing of up to 48 patients. The resulting libraries were sequenced on a benchtop Illumina platform (MiSeq). Mutations identified by parallel sequencing were confirmed by Sanger sequencing. Results: 330 patients were analyzed by both traditional Sanger sequencing of 9 amplicons and the newly established parallel sequencing protocol. The time needed to complete the mutation screening was reduced significantly, from 21 working days to 7. At least 300 ng of DNA was needed to complete the analysis of 9 amplicons by Sanger sequencing, compared with 20 to 100 ng for up to 640 amplicons analyzed by parallel sequencing.
Conclusions: The new multiplex PCR-based parallel sequencing protocol allows rapid, comprehensive mutation testing in routine molecular pathology diagnostics, even on small transbronchial biopsies.
18

Tiwari, Suresh C., and Paul B. Green. "Shoot initiation on a Graptopetalum leaf: sequential scanning electron microscopic analysis for epidermal division patterns and quantitation of surface growth (kinematics)." Canadian Journal of Botany 69, no. 10 (October 1, 1991): 2302–19. http://dx.doi.org/10.1139/b91-291.

Abstract:
A nondestructive mold-and-cast procedure allows repeated imaging of growing plant surfaces by scanning electron microscopy. Individual cells and their clonal progeny can be recognized in successive images. This allows comprehensive analysis of cell lineage on the epidermis. It also provides data for the quantitative characterization of surface growth (kinematics). The technique and both types of analysis were applied to a small meristematic surface of a Graptopetalum leaf as it develops from a flat field of parallel cell files into a stem and leaves, all with radial symmetry. New organ surfaces formed from groups of cells that were not clones of the cells initially present; hence cell lineage was not important. In the revision of symmetry, some regions retain the original cell-file polarity while others change it by 90°. Areas that do not change have mainly transverse anticlinal divisions. Areas that change maximally show repeated longitudinal divisions, making a rib-meristem pattern with new cross walls normal to the direction of future growth. Growth direction soon changes. This occurs in regions known to undergo a 90° shift in cellulose reinforcement direction. It is concluded that distinctive anticlinal division behavior accompanies lateral formation and may be significant in the mechanism. Key words: anticlinal divisions, Graptopetalum, kinematics, morphogenesis, shoot initiation, sequential scanning electron microscopy.
19

Menco, B. P. "Tight-junctional strands first appear in regions where three cells meet in differentiating olfactory epithelium: a freeze-fracture study." Journal of Cell Science 89, no. 4 (April 1, 1988): 495–505. http://dx.doi.org/10.1242/jcs.89.4.495.

Abstract:
Tight junctions of the olfactory epithelium of rat embryos were studied at the 14th day of gestation and during their subsequent development. Two different epithelial morphologies could be distinguished at the 14th gestational day. In one group of embryos the epithelial surface appeared undifferentiated, with tight-junctional strands found exclusively in regions where three cells met. The main orientation of these strands is in a direction parallel to the longitudinal orientation of the epithelial cells. These junctions resemble tight junctions that interconnect three cells, i.e. tricellular tight junctions, in that respect. However, unlike these the junctions mainly have single strands of particles, whereas tricellular junctions usually consist of paired strands of particles. Tight-junctional strands were completely absent in areas where two cells met. These areas, i.e. those of incipient bicellular tight junctions, had gap-junction-like aggregates of intramembranous particles. Another group of 14-day-old embryos displayed a differentiating olfactory epithelial surface with bicellular as well as tricellular tight-junctional strands. The latter ones were paired. Here too the tight-junctional belts displayed some gap-junction-like aggregates of particles, but there were considerably fewer of these than earlier. As one or the other tight-junctional appearance was always seen in a single freeze-fracture replica, it is reasonable to assume that the two tight-junctional appearances reflect a sequential pattern of differentiation peculiar to the whole surface of the olfactory epithelium, i.e. to surfaces of receptor cells as well as to surfaces of supporting cells. It would appear that, at the onset of olfactory epithelial differentiation, tight junctions first interconnect cells in regions where three cells meet and that tricellular strand formation precedes the formation of bicellular strands. 
When strands were present at the 14th day of embryonic development, their numbers were lower than those found later. However, strand packing, expressed as the density per micrometre of strands parallel to the epithelial surface, increased beginning at the 16th day of embryonic development.
20

Wang, Jung-Hua, Chun-Shun Tseng, Sih-Yin Shen, and Ya-Yun Jheng. "Self-Organizing Fusion Neural Networks." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 6 (July 20, 2007): 610–19. http://dx.doi.org/10.20965/jaciii.2007.p0610.

Abstract:
This paper presents a self-organizing fusion neural network (SOFNN) effective in performing fast clustering and segmentation. Based on a counteracting learning scheme, SOFNN employs two parameters that together control the training in a counteracting manner to obviate the problems of over-segmentation and under-segmentation. In particular, a simultaneous region-based updating strategy is adopted to facilitate an interesting fusion effect useful for identifying regions comprising an object in a self-organizing way. To achieve reliable merging, a dynamic merging criterion based on both intra-regional and inter-regional local statistics is used. Such extension in adjacency not only helps achieve more accurate segmentation results but also improves input-noise tolerance. Through iterating the three phases of simultaneous updating, self-organizing fusion, and extended merging, the training process converges without manual intervention, conveniently obviating the need to pre-specify the terminating number of objects. Unlike existing methods that merge regions sequentially, all regions in SOFNN can be processed in parallel fashion, thus offering great potential for a fully parallel hardware implementation.
APA, Harvard, Vancouver, ISO, and other styles
21

Jabalameli, Amirhossein, and Aman Behal. "From Single 2D Depth Image to Gripper 6D Pose Estimation: A Fast and Robust Algorithm for Grabbing Objects in Cluttered Scenes." Robotics 8, no. 3 (July 30, 2019): 63. http://dx.doi.org/10.3390/robotics8030063.

Full text
Abstract:
In this paper, we investigate the problem of grasping previously unseen objects in unstructured environments which are cluttered with multiple objects. Object geometry, reachability, and force-closure analysis are considered to address this problem. A framework is proposed for grasping unknown objects by localizing contact regions on the contours formed by a set of depth edges generated from a single-view 2D depth image. Specifically, contact regions are determined based on edge geometric features derived from analysis of the depth map data. Finally, the performance of the approach is successfully validated by applying it to scenes with both single and multiple objects, in both simulation and experiments. Using sequential processing in MATLAB running on a 4th-generation Intel Core Desktop, simulation results with the benchmark Object Segmentation Database show that the algorithm takes 281 ms on average to generate the 6D robot pose needed to attach with a pair of viable grasping edges that satisfy reachability and force-closure conditions. Experimental results in the Assistive Robotics Laboratory at UCF using a Kinect One sensor and a Baxter manipulator outfitted with a standard parallel gripper showcase the feasibility of the approach in grasping previously unseen objects from uncontrived multi-object settings.
APA, Harvard, Vancouver, ISO, and other styles
22

McDermott, M., S. K. Prasad, S. Shekhar, and X. Zhou. "INTERESTING SPATIO-TEMPORAL REGION DISCOVERY COMPUTATIONS OVER GPU AND MAPREDUCE PLATFORMS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-4/W2 (July 10, 2015): 35–41. http://dx.doi.org/10.5194/isprsannals-ii-4-w2-35-2015.

Full text
Abstract:
Discovery of interesting paths and regions in spatio-temporal data sets is important to many fields, such as the earth and atmospheric sciences, GIS, public safety, and public health, both as a goal in itself and as a preliminary step in a larger series of computations. This discovery is usually an exhaustive procedure that quickly becomes extremely time consuming with traditional paradigms and hardware, and given the rapidly growing sizes of today’s data sets it is quickly outpacing the growth of computational capacity. In our previous work (Prasad et al., 2013a) we achieved a 50-fold speedup over the sequential implementation using a single GPU. We were able to achieve near-linear speedup over this result on interesting path discovery by using Apache Hadoop to distribute the workload across multiple GPU nodes. Leveraging the parallel architecture of GPUs, we were able to drastically reduce the computation time of a 3-dimensional spatio-temporal interest region search on a single tile of normalized difference vegetation index data for Saudi Arabia. We further saw an almost linear speedup in compute performance by distributing this workload across several GPUs with a simple MapReduce model. This increases the speed of processing 10-fold over the comparable sequential implementation while simultaneously increasing the amount of data being processed 384-fold. This allowed us to process the entirety of the selected data set instead of a constrained window.
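Figures like those above follow from the standard definitions of speedup and parallel efficiency. The timings below are illustrative assumptions, not measurements from the paper.

```python
# Minimal sketch of the standard speedup/efficiency definitions behind
# results like "50x on one GPU, near-linear scaling over GPU nodes".
def speedup(t_sequential: float, t_parallel: float) -> float:
    return t_sequential / t_parallel

def efficiency(s: float, n_workers: int) -> float:
    return s / n_workers

s_gpu = speedup(3600.0, 72.0)           # e.g. 1 h sequential vs 72 s on one GPU
s_cluster = speedup(3600.0, 9.0)        # same job on a hypothetical 8-node cluster
eff = efficiency(s_cluster / s_gpu, 8)  # 8x over one GPU on 8 nodes -> 1.0
```

An efficiency near 1.0 is exactly what "almost linear speedup" means: each added node contributes nearly its full share of throughput.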
APA, Harvard, Vancouver, ISO, and other styles
23

Meyer, M., K. D. Schuster, H. Schulz, M. Mohr, and J. Piiper. "Alveolar slope and dead space of He and SF6 in dogs: comparison of airway and venous loading." Journal of Applied Physiology 69, no. 3 (September 1, 1990): 937–44. http://dx.doi.org/10.1152/jappl.1990.69.3.937.

Full text
Abstract:
Series (Fowler) dead space (VD) and slope of the alveolar plateau of two inert gases (He and SF6) with similar blood-gas partition coefficients (approximately 0.01) but different diffusivities were analyzed in 10 anesthetized paralyzed mechanically ventilated dogs (mean body wt 20 kg). Single-breath constant-flow expirograms were simultaneously recorded in two conditions: 1) after equilibration of lung gas with the inert gases at tracer concentrations [airway loading (AL)] and 2) during steady-state elimination of the inert gases continuously introduced into venous blood by a membrane oxygenator and partial arteriovenous bypass [venous loading (VL)]. VD was consistently larger for SF6 than for He, but there was no difference between AL and VL. The relative alveolar slope, defined as increment of partial pressure per increment of expired volume and normalized to mixed expired-inspired partial pressure difference, was larger by a factor of two in VL than in AL for both He and SF6. The He-to-SF6 ratio of relative alveolar slope was generally smaller than unity in both VL and AL. Whereas unequal ventilation-volume distribution combined with sequential emptying of parallel lung regions appears to be responsible for the sloping alveolar plateau during AL, the steeper slope during VL is attributed to the combined effects of continuing gas exchange and ventilation-perfusion inequality coupled with sequential emptying. The differences between He and SF6 point to the contributing role of diffusion-dependent mechanisms in intrapulmonary gas mixing.
APA, Harvard, Vancouver, ISO, and other styles
24

Xing, F., D. J. Foran, L. Yang, and X. Qi. "A Fast, Automatic Segmentation Algorithm for Locating and Delineating Touching Cell Boundaries in Imaged Histopathology." Methods of Information in Medicine 51, no. 03 (2012): 260–67. http://dx.doi.org/10.3414/me11-02-0015.

Full text
Abstract:
Summary. Background: Automated analysis of imaged histopathology specimens could potentially provide support for improved reliability in detection and classification in a range of investigative and clinical cancer applications. Automated segmentation of cells in the digitized tissue microarray (TMA) is often the prerequisite for quantitative analysis. However, overlapping cells usually pose significant challenges for traditional segmentation algorithms. Objectives: In this paper, we propose a novel, automatic algorithm to separate overlapping cells in stained histology specimens acquired using bright-field RGB imaging. Methods: It starts by systematically identifying salient regions of interest throughout the image based upon their underlying visual content. The segmentation algorithm subsequently performs a quick, voting-based seed detection. Finally, the contour of each cell is obtained using a repulsive level set deformable model with the seeds generated in the previous step. We compared the experimental results with the most current literature, and the pixel-wise accuracy between human experts’ annotations and those generated using the automatic segmentation algorithm. Results: The method was tested with 100 image patches containing more than 1000 overlapping cells. The overall precision and recall of the developed algorithm are 90% and 78%, respectively. We also implemented the algorithm on GPU. The parallel implementation is 22 times faster than its C/C++ sequential implementation. Conclusion: The proposed segmentation algorithm can accurately detect and effectively separate each of the overlapping cells. GPU is proven to be an efficient parallel platform for overlapping cell segmentation.
APA, Harvard, Vancouver, ISO, and other styles
25

Quanrud, D. M., M. M. Karpiscak, K. E. Lansey, and R. G. Arnold. "Behavior of organic carbon during subsurface wetland treatment in the Sonoran Desert." Water Science and Technology 44, no. 11-12 (December 1, 2001): 267–72. http://dx.doi.org/10.2166/wst.2001.0839.

Full text
Abstract:
We examined the fate of organics during wetland treatment of secondary effluent and groundwater (control) flows in parallel, research-scale, subsurface-flow (SSF) wetland raceways at the Constructed Ecosystem Research Facility (CERF) located in Tucson, Arizona. The CERF facility enabled us to distinguish experimentally among effects on effluent quality due to season-dependent processes of evapotranspiration (ET) and wetlands-derived production of organics. Organics of wastewater and wetlands origin were compared in terms of their contributions to dissolved organic carbon (DOC) in wetland effluent. Elevated temperatures and associated biochemical activities increased DOC levels in wetland effluents during summer. In other words, DOC removal efficiency was negatively correlated with temperature. The contributions of ET and wetland-derived organics to the elevation of DOC in wetland effluents during summer were roughly comparable. The elevation of organic carbon concentration during wetland polishing of wastewater effluent will lead to higher levels of disinfection by-products when treated waters are chlorinated prior to reuse. Results of this work are relevant to water managers in arid regions who may incorporate wetlands into sequential wastewater treatments leading to potable reuse of reclaimed water.
APA, Harvard, Vancouver, ISO, and other styles
26

Götze, J., and U. Kempe. "A comparison of optical microscope- and scanning electron microscope-based cathodoluminescence (CL) imaging and spectroscopy applied to geosciences." Mineralogical Magazine 72, no. 4 (August 2008): 909–24. http://dx.doi.org/10.1180/minmag.2008.072.4.909.

Full text
Abstract:
Abstract. Cathodoluminescence (CL) imaging and spectroscopy are outstanding methods in several fields of geosciences. Cathodoluminescence can be examined using a wide variety of electron-beam equipment. Of special interest to geologists are optical microscopes (OMs) equipped with an electron gun, scanning electron microscopes (SEMs), and electron microprobes. Despite the similar kind of excitation, the results obtained may show marked differences. These are related to the use of a focused or defocused, scanned or stationary electron beam and to the kind of signal acquisition. Images obtained by OM-CL (hot or cold acceleration) and SEM-CL differ due to different spatial resolution, true-colour, grey-scale, or monochromatic detection, contrast inversion, phosphorescence effects, etc. Instrumentation used for spectroscopic studies may differ in sequential or parallel signal acquisition, wavelength range, spectral resolution, and the kind of analytical spot limitation. This is particularly important when investigating transient CL, rare earth element (REE) emissions, or luminescence in the near-UV and IR regions, as well as samples with small grain sizes and contrasting CL behaviour of adjacent mineral phases. In the present study, the influence of analytical parameters is demonstrated for certain mineral examples including zircon, fluorite, apatite, feldspar, quartz, corundum, kaolinite, and dickite.
APA, Harvard, Vancouver, ISO, and other styles
27

Manjunathaiah, M., and Denis A. Nicole. "Precise Analysis of Array Usage in Scientific Programs." Scientific Programming 6, no. 2 (1997): 229–42. http://dx.doi.org/10.1155/1997/312872.

Full text
Abstract:
The automatic transformation of sequential programs for efficient execution on parallel computers involves a number of analyses and restructurings of the input. Some of these analyses are based on computing array sections, a compact description of a range of array elements. Array sections describe the set of array elements that are either read or written by program statements. These sections can be compactly represented using shape descriptors such as regular sections, simple sections, or generalized convex regions. However, binary operations such as Union performed on these representations do not satisfy a straightforward closure property; e.g., if the operands to Union are convex, the result may be nonconvex. Approximations are therefore used to enforce this closure property. These approximations introduce imprecision into the analyses and, furthermore, the imprecisions resulting from successive operations have a cumulative effect. Delayed merging is a technique suggested and used in some existing analyses to minimize the effects of approximation. However, this technique does not guarantee an exact solution in a general setting. This article presents a generalized technique to compute Union precisely, overcoming these imprecisions.
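The closure problem described above can be made concrete with 1-D regular sections. This is a hedged sketch, not the article's generalized technique: `bounding_union` is a hypothetical conservative approximation that shows how imprecision creeps in when the exact union is not itself a regular section.

```python
# Sketch of the closure problem: the exact union of two 1-D regular
# sections (lo, hi, step) need not be a regular section, so analyses
# fall back to a bounding over-approximation.
from math import gcd

def elements(sec):
    lo, hi, step = sec
    return set(range(lo, hi + 1, step))

def bounding_union(a, b):
    # Conservative step: it must divide both strides AND the offset
    # between the two lower bounds, otherwise elements would be missed.
    lo, hi = min(a[0], b[0]), max(a[1], b[1])
    step = gcd(gcd(a[2], b[2]), abs(b[0] - a[0])) or 1
    return (lo, hi, step)

a, b = (0, 8, 4), (1, 9, 4)               # {0, 4, 8} and {1, 5, 9}
exact = elements(a) | elements(b)          # 6 elements, not a regular section
approx = elements(bounding_union(a, b))    # (0, 9, 1): all 10 elements
```

The over-approximation is sound (it contains every exact element) but imprecise, and repeated Union operations compound this imprecision, which is what delayed merging tries to limit.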
APA, Harvard, Vancouver, ISO, and other styles
28

Ibrahim, Yahya, Balázs Nagy, and Csaba Benedek. "Deep Learning-Based Masonry Wall Image Analysis." Remote Sensing 12, no. 23 (November 29, 2020): 3918. http://dx.doi.org/10.3390/rs12233918.

Full text
Abstract:
In this paper we introduce a novel machine learning-based fully automatic approach for the semantic analysis and documentation of masonry wall images, performing in parallel automatic detection and virtual completion of occluded or damaged wall regions, and brick segmentation leading to an accurate model of the wall structure. For this purpose, we propose a four-stage algorithm which comprises three interacting deep neural networks and a watershed transform-based brick outline extraction step. At the beginning, a U-Net-based sub-network performs initial wall segmentation into brick, mortar and occluded regions, which is followed by a two-stage adversarial inpainting model. The first adversarial network predicts the schematic mortar-brick pattern of the occluded areas based on the observed wall structure, providing in itself valuable structural information for archeological and architectural applications. The second adversarial network predicts the pixels’ color values, yielding a realistic visual experience for the observer. Finally, using the neural network outputs as markers in a watershed-based segmentation process, we generate the accurate contours of the individual bricks, both in the originally visible and in the artificially inpainted wall regions. Note that while the first three stages implement a sequential pipeline, they interact through dependencies of their loss functions, allowing hidden feature dependencies between the different network components to be taken into account. For training and testing the network, a new dataset has been created, and an extensive qualitative and quantitative evaluation versus the state-of-the-art is given. The experiments confirmed that the proposed method outperforms the reference techniques both in terms of wall structure estimation and regarding the visual quality of the inpainting step; moreover, it can be robustly used for various different masonry wall types.
APA, Harvard, Vancouver, ISO, and other styles
29

Negrini, D., M. Del Fabbro, C. Gonano, S. Mukenge, and G. Miserocchi. "Distribution of diaphragmatic lymphatic lacunae." Journal of Applied Physiology 72, no. 3 (March 1, 1992): 1166–72. http://dx.doi.org/10.1152/jappl.1992.72.3.1166.

Full text
Abstract:
The morphology of the submesothelial lymphatic lacunae on the pleural and peritoneal surface over the tendinous and muscular portion of the diaphragm was studied in 10 anesthetized rabbits. The lymphatic network was evidenced by injecting 1 ml of colloidal carbon solution in the pleural (n = 5) or the peritoneal (n = 5) space. After 1 h of spontaneous breathing, the animal was killed and the diaphragm was fixed in situ by injection of approximately 5 ml of fixative in pleural and peritoneal spaces. Then both cavities were opened and the diaphragm was excised and pinned to a support. According to which cavity had received the injection, the peritoneal or the pleural side of the diaphragm was scanned by sequential imaging of the whole surface by use of a video camera connected to a stereomicroscope and to a video monitor. The anatomic design appeared as a network of lacunae running either parallel or perpendicular to the major axis of the tendinous or muscular fibers. The lacunae were more densely distributed on the tendinous peritoneal area than on the pleural one. Scanty lacunae were seen on the muscular regions of both diaphragmatic sides, characterized by large areas without lacunae. The average density of lacunae on tendinous and muscular regions was 6 and 1.7/cm2 for the pleural side and 25 and 3.4/cm2 for the peritoneal side, respectively. The average width of lacunae was 137.9 +/- 1.6 and 108.8 +/- 1.7 microns on the tendinous pleural and the peritoneal side, respectively, and 163 +/- 1.8 microns on the muscular portion of the pleural and peritoneal surfaces.
APA, Harvard, Vancouver, ISO, and other styles
30

Zhang, C., Y. Peng, J. Chu, C. A. Shoemaker, and A. Zhang. "Integrated hydrological modelling of small- and medium-sized water storages with application to the upper Fengman Reservoir Basin of China." Hydrology and Earth System Sciences 16, no. 11 (November 6, 2012): 4033–47. http://dx.doi.org/10.5194/hess-16-4033-2012.

Full text
Abstract:
Abstract. Hydrological simulation in regions with a large number of water storages is difficult due to inaccurate water storage data. To address this issue, this paper presents an improved version of SWAT2005 (Soil and Water Assessment Tool, version 2005) using Landsat, a satellite-based dataset, an empirical storage classification method and some empirical relationships to estimate water storage and release from the various sizes of flow detention and regulation facilities. The SWAT2005 is enhanced by three features: (1) a realistic representation of the relationships between the surface area and volume of each type of water storages, ranging from small-sized flow detention ponds to medium- and large-sized reservoirs with the various flow regulation functions; (2) water balance and transport through a network combining both sequential and parallel streams and storage links; and (3) calibrations for both physical and human interference parameters. Through a real-world watershed case study, it is found that the improved SWAT2005 more accurately models small- and medium-sized storages than the original model in reproducing streamflows in the watershed. The improved SWAT2005 can be an effective tool to assess the impact of water storage on hydrologic processes, which has not been well addressed in the current modelling exercises.
APA, Harvard, Vancouver, ISO, and other styles
31

Zhang, C., Y. Peng, J. Chu, and C. A. Shoemaker. "Integrated hydrological modelling of small- and medium-sized water storages with application to the upper Fengman Reservoir Basin of China." Hydrology and Earth System Sciences Discussions 9, no. 3 (March 28, 2012): 4001–43. http://dx.doi.org/10.5194/hessd-9-4001-2012.

Full text
Abstract:
Abstract. Hydrological simulation in regions with a large number of water storages is difficult due to the inaccurate water storage data, including both topologic parameters and operational rules. To address this issue, this paper presents an improved version of SWAT2005 (Soil and Water Assessment Tool, version 2005) using the satellite-based dataset Landsat, an empirical storage classification method, and some empirical relationships to estimate water storage and release from the various levels of flow regulation facilities. The improved SWAT2005 is characterised by three features: (1) a realistic representation of the relationships between the water surface area and volume of each type of water storage, ranging from small-sized ponds for water flow regulation to large-sized and medium-sized reservoirs for water supply and hydropower generation; (2) water balance and transport through a network combining both sequential and parallel streams and storage links; and (3) calibrations for the physical parameters and the human interference parameters. Both the original and improved SWAT2005 are applied to the upper Fengman Reservoir Basin, and the results of these applications are compared. The improved SWAT2005 accurately models small- and medium-sized storages, indicating a significantly improved performance from that of the original model in reproducing streamflows.
APA, Harvard, Vancouver, ISO, and other styles
32

Steger, David J., Martina I. Lefterova, Lei Ying, Aaron J. Stonestrom, Michael Schupp, David Zhuo, Adam L. Vakoc, et al. "DOT1L/KMT4 Recruitment and H3K79 Methylation Are Ubiquitously Coupled with Gene Transcription in Mammalian Cells." Molecular and Cellular Biology 28, no. 8 (February 19, 2008): 2825–39. http://dx.doi.org/10.1128/mcb.02076-07.

Full text
Abstract:
ABSTRACT The histone H3 lysine 79 methyltransferase DOT1L/KMT4 can promote an oncogenic pattern of gene expression through binding with several MLL fusion partners found in acute leukemia. However, the normal function of DOT1L in mammalian gene regulation is poorly understood. Here we report that DOT1L recruitment is ubiquitously coupled with active transcription in diverse mammalian cell types. DOT1L preferentially occupies the proximal transcribed region of active genes, correlating with enrichment of H3K79 di- and trimethylation. Furthermore, Dot1l mutant fibroblasts lacked H3K79 di- and trimethylation at all sites examined, indicating that DOT1L is the sole enzyme responsible for these marks. Importantly, we identified chromatin immunoprecipitation (ChIP) assay conditions necessary for reliable H3K79 methylation detection. ChIP-chip tiling arrays revealed that levels of all degrees of genic H3K79 methylation correlate with mRNA abundance and dynamically respond to changes in gene activity. Conversion of H3K79 monomethylation into di- and trimethylation correlated with the transition from low- to high-level gene transcription. We also observed enrichment of H3K79 monomethylation at intergenic regions occupied by DNA-binding transcriptional activators. Our findings highlight several similarities between the patterning of H3K4 methylation and that of H3K79 methylation in mammalian chromatin, suggesting a widespread mechanism for parallel or sequential recruitment of DOT1L and MLL to genes in their normal “on” state.
APA, Harvard, Vancouver, ISO, and other styles
33

Wragg, David, Matthew O'Brien, Marco Di Michiel, Francesca Lønstad Bleken, Helmer Fjellvåg, and Unni Olsbye. "Processing Extremely Large Powder Diffraction Datasets Using The Rietveld Method." Acta Crystallographica Section A Foundations and Advances 70, a1 (August 5, 2014): C502. http://dx.doi.org/10.1107/s2053273314094972.

Full text
Abstract:
Modern in situ synchrotron powder diffraction experiments can produce massive volumes of data which are of suitable quality for real structural information to be extracted. Parametric Rietveld refinement (Stinton, 2007) is an ideal method for dealing with such datasets, as a huge number of diffraction patterns can be processed in parallel while the number of refined parameters is reduced by linking between scans. The time saving compared to earlier sequential methods of refinement using batch files is very significant, with processing times reduced from weeks to a few hours. The stability of the parametric method allows not only extraction of information from data with very weak trends but also refinement of entire slices of a tomographic map, including the regions with zero diffraction. The power of the technique will be illustrated by examples from reactor scanning experiments with high time resolution (Wragg, 2012, 2013) as well as more conventional in situ powder diffraction and operando experiments combining diffraction with mass spectrometry. The extraction of structural information from complete tomographic datasets and reconstructions with real structural parameters will also be demonstrated. The figure shows time- and space-resolved c-axis data for the SAPO-34 catalyst during methanol-to-olefin conversion, together with mass spectrometry data collected during the experiment.
APA, Harvard, Vancouver, ISO, and other styles
34

Mayo, K. H., R. C. Cavalli, A. R. Peters, R. Boelens, and R. Kaptein. "Sequence-specific 1H-n.m.r. assignments and peptide backbone conformation in rat epidermal growth factor." Biochemical Journal 257, no. 1 (January 1, 1989): 197–205. http://dx.doi.org/10.1042/bj2570197.

Full text
Abstract:
The solution conformation of rat epidermal growth factor (EGF) has been investigated by proton n.m.r. techniques. Two-dimensional proton n.m.r. experiments have allowed sequential resonance assignments to be made for most protons. On the basis of these assignments, two regions of anti-parallel beta-sheet structure have been derived from the n.m.r. data. A beta-sheet segment running from about V19 to V23 (capital letters refer to amino acids in the single-letter notation) is folded onto a beta-sheet segment running from R28 to N32 and joined by a chain reversal from E24 to D27. A second region involves a beta-turn from V34 to Y37, which starts a short beta-sheet up to G39, followed by a chain reversal up to Q43, which leads to folding of the C-terminal beta-sheet segment, i.e. H44-R45, running antiparallel to the short Y37 beta-sheet segment. The N-terminal segment up to G18 exists in a multiple bend conformation and is folded onto the V19-V23/R28-N32 beta-sheet such that Y10, Y13, Y22 and Y29 are proximal to each other. Structural comparison of rat, murine and human EGFs indicates a number of highly conserved structural features common to at least these species of EGF.
APA, Harvard, Vancouver, ISO, and other styles
35

Nakano, Teppei, Takashi Morie, Makoto Nagata, and Atsushi Iwata. "A Cellular-Automaton-Type Region Extraction Algorithm and its FPGA Implementation." Journal of Robotics and Mechatronics 17, no. 4 (August 20, 2005): 378–86. http://dx.doi.org/10.20965/jrm.2005.p0378.

Full text
Abstract:
This paper proposes a new region extraction algorithm and its digital LSI architecture based on cellular-automaton operation for very fast image processing. The algorithm sequentially extracts each region defined by a closed boundary. Digital logic simulation using Verilog-HDL shows that the proposed circuit, with pixel-parallel operation, can operate 100 times faster than serial labeling for a 100×100-pixel image. We implemented the proposed circuit in an FPGA for 30×30-pixel image processing. In an experiment with the FPGA, five regions were successfully extracted one by one within 6 μs at a clock frequency of 25 MHz.
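Pixel-parallel region labeling of the kind simulated above can be sketched in software as a cellular automaton in which every cell repeatedly adopts the minimum label among its non-boundary neighbours. This is an illustrative assumption, not the paper's LSI circuit: it labels connected components rather than reproducing the sequential closed-boundary extraction, and all names are hypothetical.

```python
# Cellular-automaton-style labeling sketch: each generation, every region
# pixel takes the minimum label among itself and its 4-neighbours;
# iteration stops at a fixed point (all updates use the previous generation,
# mimicking the synchronous, pixel-parallel hardware update).
def ca_label(grid):
    """grid: 2-D list, 1 = region pixel, 0 = boundary/background."""
    h, w = len(grid), len(grid[0])
    label = [[r * w + c if grid[r][c] else None for c in range(w)]
             for r in range(h)]                  # unique label per pixel
    changed = True
    while changed:
        changed = False
        new = [row[:] for row in label]
        for r in range(h):
            for c in range(w):
                if label[r][c] is None:
                    continue
                best = label[r][c]
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w and label[rr][cc] is not None:
                        best = min(best, label[rr][cc])
                if best != new[r][c]:
                    new[r][c], changed = best, True
        label = new
    return label

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
lab = ca_label(img)  # two separate components converge to two labels
```

Because each cell's update depends only on its immediate neighbourhood in the previous generation, every pixel can be updated simultaneously in hardware, which is the source of the large speedup over serial labeling.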
APA, Harvard, Vancouver, ISO, and other styles
36

Issa, John B., Gilad Tocker, Michael E. Hasselmo, James G. Heys, and Daniel A. Dombeck. "Navigating Through Time: A Spatial Navigation Perspective on How the Brain May Encode Time." Annual Review of Neuroscience 43, no. 1 (July 8, 2020): 73–93. http://dx.doi.org/10.1146/annurev-neuro-101419-011117.

Full text
Abstract:
Interval timing, which operates on timescales of seconds to minutes, is distributed across multiple brain regions and may use distinct circuit mechanisms as compared to millisecond timing and circadian rhythms. However, its study has proven difficult, as timing on this scale is deeply entangled with other behaviors. Several circuit and cellular mechanisms could generate sequential or ramping activity patterns that carry timing information. Here we propose that a productive approach is to draw parallels between interval timing and spatial navigation, where direct analogies can be made between the variables of interest and the mathematical operations necessitated. Along with designing experiments that isolate or disambiguate timing behavior from other variables, new techniques will facilitate studies that directly address the neural mechanisms that are responsible for interval timing.
APA, Harvard, Vancouver, ISO, and other styles
37

Beketnova, Yuliya. "Methodology for analyzing financial monitoring data on the example of business entities." Economics and the Mathematical Methods 57, no. 3 (2021): 32. http://dx.doi.org/10.31857/s042473880014912-2.

Full text
Abstract:
Abstract. The purpose of the article is to propose a methodology for analyzing financial monitoring data that accounts for the need to process large volumes of heterogeneous data and the latency of the sought characteristics, while also satisfying time and resource constraints on processing. It is necessary to move from sequential expert inspections of objects to parallel, massive, automated inspections, taking into account modern methodological and instrumental capabilities in the context of the digital transformation of public administration. The lack of a methodology for data analysis in the field of financial monitoring prevents widespread automation of situation assessment and decision making at different hierarchical levels of public administration, and of the formation of integral assessments of business entities, which determines the timeliness and importance of this study. The problem of optimally selecting business entities for priority verification on the basis of financial monitoring information is posed both in substantive terms and mathematically. The article illustrates the proposed methodology using data on business entities. Using the principal component method of factor analysis, an integral indicator of the deviant component of an economic entity's activity was found. The obtained estimates are verified using pattern recognition theory, and their internal convergence is confirmed. On the basis of the obtained measures of deviant activity of economic entities, a map of regional propensity for money laundering has been synthesized.
The practical value of this approach is that ranking regions by their susceptibility to money laundering makes it possible to develop recommendations for improving the current practice of conducting financial investigations and for redistributing resources efficiently.
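An integral indicator of the kind described above can be sketched as each entity's score on the first principal component of its standardized features. This is an illustrative assumption, not the article's exact procedure; the data and function names are hypothetical.

```python
# Sketch: integral indicator as the first principal component of
# standardized risk features, via power iteration on the covariance matrix.
def integral_indicator(rows):
    """rows: list of feature vectors, one per business entity."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    stds = [(sum((r[j] - means[j]) ** 2 for r in rows) / n) ** 0.5 or 1.0
            for j in range(d)]
    z = [[(r[j] - means[j]) / stds[j] for j in range(d)] for r in rows]
    cov = [[sum(z[i][a] * z[i][b] for i in range(n)) / n for b in range(d)]
           for a in range(d)]
    w = [1.0] * d  # power iteration for the leading eigenvector
    for _ in range(100):
        w = [sum(cov[a][b] * w[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]
    # each entity's projection onto the first component is its indicator
    return [sum(z[i][j] * w[j] for j in range(d)) for i in range(n)]

# three entities, two correlated risk features; the third entity stands out
scores = integral_indicator([[1.0, 2.0], [1.1, 2.1], [5.0, 9.0]])
```

Ranking entities (or aggregating their scores by region) then gives the kind of priority ordering and regional map the article describes.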
APA, Harvard, Vancouver, ISO, and other styles
38

Tsubaki, Junko, Vivian Hwa, Stephen M. Twigg, and Ron G. Rosenfeld. "Differential Activation of the IGF Binding Protein-3 Promoter by Butyrate in Prostate Cancer Cells." Endocrinology 143, no. 5 (May 1, 2002): 1778–88. http://dx.doi.org/10.1210/endo.143.5.8766.

Full text
Abstract:
Abstract Sodium butyrate (NaB), a dietary micronutrient, is a potent growth inhibitor that initiates cell differentiation in many cell types, including prostate cancer cells. The molecular mechanisms by which these effects occur remain largely unknown. In this study, we investigated the effects of NaB on the expression of IGF binding protein (IGFBP)-3, a known growth regulator, in two human prostate cancer cell lines (PC-3 and LNCaP). Treatment with NaB (0–10 mm) caused a dose-dependent stimulation of IGFBP-3 mRNA expression and parallel increases in protein levels. A specific histone deacetylase inhibitor, trichostatin A (TSA) similarly induced IGFBP-3 expression, indicating that histone hyperacetylation may be critical in the regulation of IGFBP-3 expression. To investigate the molecular mechanism of NaB-regulated IGFBP-3 expression, 1.87 kb of the human IGFBP-3 gene promoter was cloned into the pGL2-basic luciferase reporter vector. In both PC-3 and LNCaP cells, NaB (10 mm) significantly increased luciferase activity 20- to 30-fold, compared with the untreated control. However, using 5′ sequential deletion constructs of the IGFBP-3 promoter, the NaB response sequences in the IGFBP-3 promoter were different in PC-3 and LNCaP cells. Our studies identified a region, −75 to +69 from the start of transcription (+1), that is fully inducible by NaB treatment in LNCaP cells, but not in PC-3 cells. Unlike other well characterized NaB-regulated genes, Sp1 DNA sequences are not involved in NaB up-regulation of IGFBP-3 gene in LNCaP cells. Further deletion studies identified two independent regions critical for NaB-induced transactivation in LNCaP cells. These regions contain consensus binding sites for p53 and GATA, respectively, but mutational analyses and gel shift assays suggested that, while the p53 response element is required for NaB responsiveness, neither p53 nor GATA are involved. 
In summary, we have demonstrated that 1) NaB significantly up-regulates IGFBP-3 mRNA and protein levels in PC-3 and LNCaP prostate cancer cells; and 2) novel butyrate-responsive elements lacking consensus Sp1 sites are used in LNCaP cells.
APA, Harvard, Vancouver, ISO, and other styles
39

Lespay-Rebolledo, Carolyne, Andrea Tapia-Bustos, Ronald Perez-Lobos, Valentina Vio, Emmanuel Casanova-Ortiz, Nancy Farfan-Troncoso, Marta Zamorano-Cataldo, et al. "Sustained Energy Deficit Following Perinatal Asphyxia: A Shift towards the Fructose-2,6-bisphosphatase (TIGAR)-Dependent Pentose Phosphate Pathway and Postnatal Development." Antioxidants 11, no. 1 (December 29, 2021): 74. http://dx.doi.org/10.3390/antiox11010074.

Full text
Abstract:
Labor and delivery entail a complex and sequential metabolic and physiologic cascade, culminating in most circumstances in successful childbirth, although delivery can be a risky episode if oxygen supply is interrupted, resulting in perinatal asphyxia (PA). PA causes an energy failure, leading to cell dysfunction and death if re-oxygenation is not promptly restored. PA is associated with long-term effects, challenging the ability of the brain to cope with stressors occurring throughout life. We review here relevant targets responsible for metabolic cascades linked to neurodevelopmental impairments that we have identified with a model of global PA in rats. Severe PA induces a sustained effect on redox homeostasis, increasing oxidative stress and decreasing metabolic and tissue antioxidant capacity in vulnerable brain regions, which persists for weeks after the insult. Catalase activity is decreased in the mesencephalon and hippocampus of PA-exposed (AS) neonates compared to controls (CS), in parallel with increased cleaved caspase-3 levels, associated with decreased glutathione reductase and glutathione peroxidase activity, a shift towards the TIGAR-dependent pentose phosphate pathway, and delayed calpain-dependent cell death. The brain damage continues long after the re-oxygenation period, extending for weeks after PA and affecting neurons and glial cells, including myelination in grey and white matter. The resulting vulnerability was investigated with organotypic cultures built from AS and CS rat newborns, showing that substantia nigra TH-dopamine-positive cells from AS animals were more vulnerable to 1 mM H2O2 than those from CS animals. Several therapeutic strategies are discussed, including hypothermia, N-acetylcysteine, memantine, nicotinamide, and intranasally administered mesenchymal stem cell secretomes, which show promise for clinical translation.
APA, Harvard, Vancouver, ISO, and other styles
40

Pancerz, Krzysztof, and Andrew Schumann. "Slime Mould Games Based on Rough Set Theory." International Journal of Applied Mathematics and Computer Science 28, no. 3 (September 1, 2018): 531–44. http://dx.doi.org/10.2478/amcs-2018-0041.

Full text
Abstract:
We define games on the medium of plasmodia of slime mould, unicellular organisms that look like giant amoebae. The plasmodia try to occupy all the food pieces they can detect; thus, two different plasmodia can compete with each other. In particular, we consider game-theoretically how plasmodia of Physarum polycephalum and Badhamia utricularis fight for food. Placing food pieces at different locations determines the behavior of the plasmodia. In this way, we can program the plasmodia of Physarum polycephalum and Badhamia utricularis by placing food, and we can examine their motion as a Physarum machine: an abstract machine where states are represented as food pieces and transitions among states are represented as movements of plasmodia from one piece to another. Hence, this machine is treated as a natural transition system. The behavior of the Physarum machine in the form of a transition system can be interpreted in terms of rough set theory, which enables modeling some ambiguities in the motions of plasmodia. The problem is that there is always an ambiguity as to which direction of plasmodium propagation is currently chosen: one or several concurrent ones, i.e., whether we deal with a sequential, concurrent or massively parallel motion. We propose to manage this ambiguity using rough set theory. Firstly, we define the region of plasmodium interest as a rough set; secondly, we consider concurrent transitions determined by these regions as a context-based game; thirdly, we define strategies in this game as a rough set; fourthly, we show how these results can be interpreted as a Go game.
APA, Harvard, Vancouver, ISO, and other styles
41

Kelly, J. T., A. K. Prasad, and A. S. Wexler. "Detailed flow patterns in the nasal cavity." Journal of Applied Physiology 89, no. 1 (July 1, 2000): 323–37. http://dx.doi.org/10.1152/jappl.2000.89.1.323.

Full text
Abstract:
The human nasal cavity filters and conditions inspired air while providing olfactory function. Detailed experimental study of nasal airflow patterns has been limited because of the complex geometry of the nasal cavity. In this work, particle image velocimetry was used to determine two-dimensional instantaneous velocity vector fields in parallel planes throughout a model of the nasal cavity that was subjected to a nonoscillatory flow rate of 125 ml/s. The model, which was fabricated from 26 computed tomography scans by using rapid prototyping techniques, is a scaled replica of a human right nasal cavity. The resulting vector plots show that the flow is laminar and regions of highest velocity are in the nasal valve and in the inferior airway. The relatively low flow in the olfactory region appears to protect the olfactory bulb from particulate pollutants. Low flows were also observed in the nasal meatuses, whose primary function has been the subject of debate. Comparison of sequentially recorded data suggests a steady flow.
APA, Harvard, Vancouver, ISO, and other styles
42

Zhou, Xin, Xiaomei Liao, Lauren M. Kunz, Sharon-Lise T. Normand, Molin Wang, and Donna Spiegelman. "A maximum likelihood approach to power calculations for stepped wedge designs of binary outcomes." Biostatistics 21, no. 1 (August 1, 2018): 102–21. http://dx.doi.org/10.1093/biostatistics/kxy031.

Full text
Abstract:
In stepped wedge designs (SWD), clusters are randomized to the time period during which new patients will receive the intervention under study, in a sequential rollout over time. By the study's end, patients at all clusters receive the intervention, eliminating ethical concerns related to withholding potentially efficacious treatments. This is a practical option in many large-scale public health implementation settings. Little statistical theory for these designs exists for binary outcomes. To address this, we utilized a maximum likelihood approach and developed numerical methods to determine the asymptotic power of the SWD for binary outcomes. We studied how the power of a SWD for detecting risk differences varies as a function of the number of clusters, cluster size, the baseline risk, the intervention effect, the intra-cluster correlation coefficient, and the time effect. We studied the robustness of power to the assumed form of the distribution of the cluster random effects, as well as how power is affected by variable cluster size. SWD power is sensitive to neither, in contrast to the parallel cluster randomized design, which is highly sensitive to variable cluster size. We also found that the approximate weighted least squares approach of Hussey and Hughes (2007, Design and analysis of stepped wedge cluster randomized trials. Contemporary Clinical Trials 28, 182–191) for binary outcomes under-estimates the power in some regions of the parameter space, and over-estimates it in others. The new method was applied to the design of a large-scale intervention program on post-partum intra-uterine device insertion services for preventing unintended pregnancy in the first 1.5 years following childbirth in Tanzania, where it was found that the previously available method under-estimated the power.
APA, Harvard, Vancouver, ISO, and other styles
43

Boulton, G. S., and C. D. Clark. "The Laurentide ice sheet through the last glacial cycle: the topology of drift lineations as a key to the dynamic behaviour of former ice sheets." Transactions of the Royal Society of Edinburgh: Earth Sciences 81, no. 4 (1990): 327–47. http://dx.doi.org/10.1017/s0263593300020836.

Full text
Abstract:
Study of satellite images from most of the area of the Canadian mainland once covered by the Laurentide ice sheet reveals a complex pattern of superimposed drift lineations. They are believed to have formed subglacially and parallel to ice flow. Aerial photographs reveal patterns of superimposition which permit the sequence of lineation patterns to be identified. The sequential lineation patterns are interpreted as evidence of shifting patterns of flow in an evolving ice sheet. Flow stages are recognised which reflect roughly synchronous integrated patterns of ice sheet flow. Comparison with stratigraphic sections in the Hudson Bay Lowlands suggests that all the principal stages may have formed during the last, Wisconsinan, glacial cycle. Analogy between flow stage lineation patterns and the form and flow patterns of modern ice sheets permits reconstruction of patterns of ice divides and centres of mass which moved by 1000–2000 km during the glacial period. There is evidence that during the early Wisconsinan, ice sheet formation in Keewatin may have been independent of that in Labrador–Quebec, and that these two ice masses joined to form a major early Wisconsinan ice sheet. Subsequently the western dome decayed whilst the eastern dome remained relatively stable. A western dome then re-formed, and fused with the eastern dome to form the late Wisconsinan ice sheet before final decay. Because of strong coupling between three-dimensional ice sheet geometry and atmospheric circulation, it is suggested that the major changes of geometry must have been associated with large-scale atmospheric circulation changes. Lineation patterns suggest very little erosional/depositional activity in ice divide regions, and can be used to reconstruct large-scale patterns of erosion/deposition. The sequence of flow stages through time provides an integrative framework allowing sparse stratigraphic data to be used most efficiently in reconstructing ice sheet history in time and space.
APA, Harvard, Vancouver, ISO, and other styles
44

Lesmana, Harry, Georgios E. Christakopoulos, Katie Giger Seu, Mary Risinger, Hatice Duzkale, Neha Dagaonkar, Kejian Zhang, and Theodosia A. Kalfa. "Clinical Application of Massively Parallel Sequencing in the Diagnosis of Hereditary Hemolytic and Dyserythropoietic Anemias." Blood 128, no. 22 (December 2, 2016): 4746. http://dx.doi.org/10.1182/blood.v128.22.4746.4746.

Full text
Abstract:
The Hereditary Hemolytic Anemias (HHAs) are a genetically heterogeneous group of anemias characterized by decreased red blood cell (RBC) survival because of defects in hemoglobin, RBC membrane proteins or enzymes. The diagnosis of this group of disorders is complex and challenging, requiring analysis of the morphology of RBCs, hemoglobin electrophoresis, and a battery of phenotypic assays. The phenotypic analysis is often problematic in transfusion-dependent patients or at times of presentation with a hemolytic crisis, as transfused blood or reticulocytosis confounds diagnostic testing. Molecular genetic testing has grown in popularity in the diagnosis of hereditary hemolytic anemias as it is not affected by transfusions or other clinical variables and provides additional insight into the mechanism of the disease. We have developed a Next Generation Sequencing (NGS) panel for HHA due to RBC membrane disorders and enzymopathies and congenital dyserythropoietic anemias (CDA). CDAs, although collectively rare, are included in the panel as they are occasionally misdiagnosed as hereditary spherocytosis (HS) due to their clinical characteristics of hemolysis, increased osmotic fragility, and splenomegaly, albeit with inadequate reticulocytosis. We reviewed the results of 282 sequential HHA/CDA panel tests for patients with a suspected HHA or CDA diagnosis, performed and interpreted at Cincinnati Children's Hospital Medical Center between 1/2013 and 5/2016. Forty-three samples were omitted from the final analysis due to diagnosis of other disorders, indicating that negative results were true negatives. For the analysis of the remaining 239 panels, all results were reviewed and categorized based on the type of testing ordered: comprehensive HHA/CDA (32 genes), RBC membrane disorders (13 genes), RBC enzyme disorders (14 genes), or CDA (6 genes). The protein-coding exons plus 25 bases of exon-intron junction as well as promoter sequences were included in the design.
Genomic DNA was isolated from blood and target regions were enriched using the Haloplex technology. Enriched samples were then sequenced on an Illumina MiSeq benchtop sequencer with 150 base pair, paired-end reads. Sequencing reads were aligned to the human genome reference sequence, and analysis of coverage and variants was completed using NextGENe software. All positive findings were confirmed by Sanger sequencing. These 239 panels included 159 (66.5%) comprehensive HHA/CDA panels, 41 (17.2%) RBC membrane disorder panels, 10 (4.2%) RBC enzyme disorder panels, and 29 (12.1%) CDA panels. Overall, a diagnosis was confirmed or identified in 135 (56.5%) patients, with specific genotypes as follows: hereditary spherocytosis in 52 patients; hereditary elliptocytosis in 15 patients; hereditary pyropoikilocytosis in 7 patients; hereditary stomatocytosis/xerocytosis in 12 patients; South East Asian Ovalocytosis in 1 patient; G6PD deficiency in 15 patients; pyruvate kinase deficiency in 17 patients; other rare RBC enzymopathies in 6 patients; and CDA in 10 patients. The clinical performance of the RBC membrane disorder and RBC enzyme disorder panels was comparable, reaching a final diagnosis in 68–70% of cases, while the CDA panel confirmed a final diagnosis in only 20% of suspected cases. The overall low prevalence, the complexity of diagnosis given findings of dyserythropoiesis in bone marrow studies of patients with severe HHA, and evidence of locus heterogeneity in CDA might explain this result. Among patients with suspected RBC membrane disorders, approximately 14% were eventually diagnosed with hereditary xerocytosis (HX). An HX diagnosis is critical to make in such patients, since splenectomy is contraindicated due to the high risk of life-threatening thrombophilia complications. In more than half (56.5%) of all cases with suspected hereditary hemolytic anemia, genetic testing provided or confirmed the diagnosis and optimized patients' clinical management.
Further genetic counseling and testing for other at-risk family members was made possible by achieving a molecular diagnosis. Genetic testing substantially altered management in approximately 14% of cases with suspected RBC membrane disorders due to the diagnosis of HX. In conclusion, genetic testing has significant clinical utility and may facilitate and improve diagnosis, prognosis and management considerations in patients with hereditary hemolytic or dyserythropoietic anemia. Disclosures: No relevant conflicts of interest to declare.
APA, Harvard, Vancouver, ISO, and other styles
45

Jeon, Suyeon, and Yong Seok Heo. "Efficient Multi-Scale Stereo-Matching Network Using Adaptive Cost Volume Filtering." Sensors 22, no. 15 (July 23, 2022): 5500. http://dx.doi.org/10.3390/s22155500.

Full text
Abstract:
While recent deep learning-based stereo-matching networks have shown outstanding advances, there are still some unsolved challenges. First, most state-of-the-art stereo models employ 3D convolutions for 4D cost volume aggregation, which limits the deployment of these networks in resource-limited mobile environments owing to heavy consumption of computation and memory. Although there are some efficient networks, most of them still require too heavy a computational cost to incorporate them into mobile computing devices in real time. Second, most stereo networks indirectly supervise cost volumes through a disparity regression loss by using the softargmax function. This causes problems in ambiguous regions, such as the boundaries of objects, because there are many possibilities for unreasonable cost distributions, which results in an overfitting problem. A few works deal with this problem by generating an artificial cost distribution using only the ground truth disparity value, which is insufficient to fully regularize the cost volume. To address these problems, we first propose an efficient multi-scale sequential feature fusion network (MSFFNet). Specifically, we connect multi-scale SFF modules in parallel with a cross-scale fusion function to generate a set of cost volumes with different scales. These cost volumes are then effectively combined using the proposed interlaced concatenation method. Second, we propose an adaptive cost-volume-filtering (ACVF) loss function that directly supervises our estimated cost volume. The proposed ACVF loss directly adds constraints to the cost volume using the probability distribution generated from the ground truth disparity map and that estimated from a teacher network which achieves higher accuracy. Results of several experiments using representative datasets for stereo matching show that our proposed method is more efficient than previous methods.
Our network architecture consumes fewer parameters and generates reasonable disparity maps with faster speed compared with the existing state-of-the-art stereo models. Concretely, our network achieves 1.01 EPE with a runtime of 42 ms, 2.92M parameters, and 97.96G FLOPs on the Scene Flow test set. Compared with PSMNet, our method is 89% faster and 7% more accurate with 45% fewer parameters.
APA, Harvard, Vancouver, ISO, and other styles
46

Stupak, Eugeniuš, and Romualdas Baušys. "GENERATION OF THE UNSTRUCTURED FE-GRIDS FOR COMPLEX 2D OBJECTS/NESTRUKTŪRINIŲ BE TINKLŲ GENERAVIMAS SUDĖTINGIEMS DVIMAČIAMS OBJEKTAMS." JOURNAL OF CIVIL ENGINEERING AND MANAGEMENT 6, no. 1 (February 28, 2000): 17–24. http://dx.doi.org/10.3846/13921525.2000.10531559.

Full text
Abstract:
For the numerical simulation of engineering problems, the finite element method (FEM) is among the most popular approaches. One of the main concerns in a finite element analysis is the adequacy of the finite element grid. The accuracy of the FEM depends on the size, shape and placement of the elements. On the other hand, the total computational cost is determined by the total number of elements in the FE model. Increased accuracy can be obtained by a global reduction of the element size, but at drastically increased computational cost. Thus, in many engineering applications it is desirable to generate a non-uniform FE mesh, with a finer grid in the regions where the accuracy of the numerical simulation matters most and a coarser grid in the other regions. In this paper we present a new approach to grid generation for multimaterial or multidomain engineering systems by the advancing front technique. This technique has proved successful in generating unstructured meshes in two and three dimensions [1–9]. The algorithm of the technique is summarised in section 2. Common to all approaches of advancing front mesh generation is that the generation problem is divided into three parts: first, the specification of the mesh size attributes; second, the discretisation of the boundaries; and, third, the discretisation of the interior of the domain. In the advancing front technique the front is defined as the boundary between the gridded and ungridded region. The key algorithmic step that must be addressed in advancing front methods is the proper introduction of new elements into the ungridded region. For triangular and tetrahedral grids the elements are introduced sequentially, one at a time. The most obvious advantage of the advancing front method is that it directly incorporates free-form geometry. Direct implementation of the advancing front technique for multimaterial or multidomain engineering applications is still challenging.
Grid generation where several materials or domains are in contact must ensure the compatibility of nodes on common boundary segments (nodes on common boundary segments must be in the same positions). The advancing front technique does not handle non-convex domains directly, so in the first step a non-convex domain of discretisation is decomposed into a few convex subdomains. The subdomain of interest must be defined by describing a coarse background mesh of triangle elements, covering the entire multidomain region, which forms the input for finite element analysis. In this work, a black-box-architecture expert system has been developed which incorporates information about the object geometry as well as the boundary and loading conditions and the distribution of material characteristics to generate an a priori mesh (before the finite element analysis is carried out) which is more refined around the critical regions (singularities, re-entrant corners, regions with high stress concentration, etc.) of the problem domain. This system uses a new concept of substructuring to locate the critical regions in the domain and to assign priority and mesh size to them. This involves the decomposition of the original structure into substructures (or primitives) for which an initial and approximate analysis can be performed by using analytical solutions and heuristics. When incorporated into and compared with the traditional approach to adaptive finite element analysis, it is expected that the proposed approach, which starts the process with near-optimal meshes, will be more accurate and efficient. Several numerical examples are presented and discussed. The examples demonstrate that our approach can generate compatible meshes for multimaterial or multidomain problems. The quality of the meshes is good; there are no ill-shaped elements. With the proposed expert system we can generate the mesh for any complex structure.
The generation of 2D meshes is only the first step in using the proposed expert system; in future we shall extend it to 3D meshes. During the last decade a lot of research has been devoted to extending the advancing front technique to parallel computers [8, 10, 11], but the application of the technique to parallel processors is still challenging. In fact, we have to determine how to minimise inter-processor communication during mesh generation of subdomains. The proposed expert system for grid generation of complex structures is well suited to parallel computers. In the first step the domain of discretisation is decomposed into subdomains and all the surfaces defining the boundaries of the subdomains to be gridded are triangulated. Afterwards all subdomains can be meshed concurrently and no more inter-processor communication is required. The master task sends the worker tasks information about the division of the common boundaries and the data of each subdomain. The worker tasks receive their subdomain data and mesh their subdomains. Later the master receives the information from the worker tasks and joins the gridded subdomains into one structure, ensuring the compatibility of nodes on common boundaries. Thus the suggested expert system minimises communication and computation costs. The implementation of the expert system on parallel processors is to be done in the future.
APA, Harvard, Vancouver, ISO, and other styles
47

Knight, Samantha JL, Elham Sadighi Akha, Adele Timbs, Tariq Enver, Andrew R. Pettitt, Jenny Taylor, Chris S. Hatton, and Anna Schuh. "Identification of Novel Recurrent Copy Number Variations and Regions of Copy-Neutral Loss of Heterozygosity by High Resolution Genomic Array in Pre-Treatment and Relapsed B-CLL." Blood 114, no. 22 (November 20, 2009): 1098. http://dx.doi.org/10.1182/blood.v114.22.1098.1098.

Full text
Abstract:
Abstract 1098, Poster Board I-120. Background: B-cell chronic lymphocytic leukaemia (B-CLL) is the most common form of adult leukaemia in the Western World. It is a heterogeneous disease, and important biological and clinical differences have been identified. However, the molecular mechanisms underlying the emergence and maintenance of B-CLL after treatment remain elusive. Array-based comparative genomic hybridization (aCGH) has revolutionized our ability to perform genome-wide analyses of copy number variation (CNV) within cancer genomes. Single Nucleotide Polymorphism arrays (aSNP) provide genotyping and copy number variation data and detect regions of copy-neutral Loss of Heterozygosity (cnLOH), with the potential to indicate genes involved in leukaemia pathogenesis. Both technologies are evolving rapidly, and emerging platforms are thought to allow high-resolution (HR) detection of abnormalities down to a single gene level. Aim: The aim of the current study was therefore to test a HR-aCGH and a HR-aSNP platform for their ability to detect large and small CNVs and regions of cnLOH in B-CLL. More specifically, we wanted to: Method: We used a high-resolution 244K aCGH platform and a 1Mio SNP array in parallel to test and characterize enriched B-CLL peripheral blood samples (>80% CD19+;CD5+) from 44 clinically annotated patients collected at our institution. To distinguish CNVs seen commonly in the general population, the results were compared with 'in house' control data sets and the Database of Genomic Variants (http://projects.tcag.ca/variation/). Results: Our results show that large abnormalities, already noted by FISH, were reliably identified, and the boundaries of abnormalities at 11q22.3, 13q14.2 and 17p could be defined more precisely.
In addition, novel and recurrent CNVs within the sample set were identified (1p33; 3p24.3; 3p14.2; 4q12; 4q13.3; 6q21; 6q27; 8p22; 10q24; 11p15.4; 11q12; 11q13.4; 11q14.1; 11q22.1; 11q23.3; 13q14.11; 14q21.1; 15q15.1; 15q25.3; 17p13.3; 17q22; 18p11.32; 18p23; 19p13.13; 19p13.12; 19p13.32; 22q11.21; 22q11.22). Interestingly, some of these abnormalities contain single-gene alterations involving oncogenes, chemokine receptors, kinases and transcription factors important in B cell development and differentiation. Assessment of smaller CNVs (less than 10 consecutive oligonucleotides) also revealed recurrent CNVs involving single genes that clustered according to function and pathways. Comparison of paired pre-treatment and relapse samples showed differences in large CNVs in 6 out of the 14 pairs, with the majority being losses within the relapse sample. In particular, relapse samples contained new losses within 2q33.1-2q37.1; 4q13.2-4q13.3; 5q31.3-5q34; 7q36.3; 10q23.1-10q25.1; 11q12.3 and multiple losses within 13q14.1-13q14.3. Taken together, these data indicate that genomic instability plays a role in clonal evolution and selection after treatment in at least some patients. Analysis of a bigger cohort of matched pre-treatment and relapse samples is on-going. The importance of copy-neutral LOH in B-CLL has been a subject of debate. Using the 1Mio HR-aSNP, we were able to detect multiple regions of cnLOH throughout the genome. Examination of the four regions that are known to have prognostic significance when deleted identified cnLOH involving 13q11-13q34(ter) and cnLOH of 13q21.1-q34(ter) outside the FISH region. Deletions of the 17p13.1 locus including the p53 gene confer poor prognosis in B-CLL and direct treatment decisions. Interestingly, we were able to identify cnLOH involving this region in 5% of samples. In addition, we also noticed cnLOH in 17p13.2 containing genes previously implicated in cancer.
The exact pathogenetic and prognostic implications of these findings remain to be established. Conclusion: Using HR-aCGH and HR-aSNP we have identified novel recurrent CNVs and regions of cnLOH in patients with B-CLL. Sequential analysis of the same patients over time suggests that, at least in some patients, clonal complexity and dynamics are driven by genomic instability. Disclosures: No relevant conflicts of interest to declare.
APA, Harvard, Vancouver, ISO, and other styles
48

Aubele, Michaela, Margaret Cummings, Axel Walch, Horst Zitzelsberger, Jörg Nährig, Heinz Höfler, and Martin Werner. "Heterogeneous Chromosomal Aberrations in Intraductal Breast Lesions Adjacent to Invasive Carcinoma." Analytical Cellular Pathology 20, no. 1 (2000): 17–24. http://dx.doi.org/10.1155/2000/930246.

Full text
Abstract:
There is evidence that breast cancer is a heterogeneous disease both phenotypically and molecular-biologically. So far, heterogeneity at the molecular biological level has not been investigated in potential precursor lesions, such as ductal hyperplasia (DH) and ductal carcinoma in situ (DCIS). In this study we applied comparative genomic hybridization (CGH) to formalin-fixed, paraffin-embedded breast tissue with DH and DCIS, adjacent to invasive ductal carcinoma (IDC), to screen these potential precursor lesions for whole-genome chromosomal imbalances. Laser microdissection was used to select pure cell populations from the sections. Isolated DNA was amplified by degenerate oligonucleotide primed PCR (DOP-PCR) and further processed for CGH analysis. Investigating multiple samples (n=25) from four patients, we found an average of 5.6 ± 0.9 (mean ± SEM) chromosomal imbalances already present in DH. In the twelve DCIS lesions an average of 10.8 (±0.9) aberrations was identified, with 14.8 (±0.8) aberrations in the four adjacent IDC lesions. The increasing number of chromosomal changes in parallel with the histopathological sequence corroborates the hypothesis that the carcinomas may have developed through a sequential progression from normal to proliferative epithelium and eventually into carcinoma. However, heterogeneous results were identified in the multiple samples per entity from the same patient, mainly in the DCIS samples in the chromosomal regions 6p, 9p, 11q, 16p and 17q, and in the DH samples in 3p, 16p and 17q. This heterogeneity was most pronounced within the DH samples and less so in the DCIS and IDC samples. The only aberration consistently found in all samples – even in all DH samples – was amplification of the 20q13 region. Our results demonstrate that the applied combination of laser microdissection, DOP-PCR and CGH may serve to analyse breast carcinogenesis pathways in suitable histological material.
However, so far it is unclear how to handle heterogeneous results, and these make the identification of relevant changes more difficult. Setting a threshold and evaluating only those chromosomal changes which are present in a majority of samples may be one possibility. This involves, however, the risk that infrequent but possibly significant aberrations may be missed. Figures on http://www.esacp.org/acp/2000/20-1/aubele.htm.
APA, Harvard, Vancouver, ISO, and other styles
49

Kay, Andrew, and Peter Lupton. "Sequential to parallel buffer refinement." Formal Aspects of Computing 4, no. 5 (September 1992): 487–92. http://dx.doi.org/10.1007/bf01211395.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Kim, Jeong Yong, Nicholas Mazzoleni, and Matthew Bryant. "Modeling of Resistive Forces and Buckling Behavior in Variable Recruitment Fluidic Artificial Muscle Bundles." Actuators 10, no. 3 (February 26, 2021): 42. http://dx.doi.org/10.3390/act10030042.

Full text
Abstract:
Fluidic artificial muscles (FAMs), also known as McKibben actuators, are a class of fiber-reinforced soft actuators that can be pneumatically or hydraulically pressurized to produce muscle-like contraction and force generation. When multiple FAMs are bundled together in parallel and selectively pressurized, they can act as a multi-chambered actuator with bioinspired variable recruitment capability. The variable recruitment bundle consists of motor units (MUs) – groups of one or more FAMs – that are independently pressurized depending on the force demand, similar to how groups of muscle fibers are sequentially recruited in biological muscles. As the active FAMs contract, the inactive/low-pressure units are compressed, causing them to buckle outward, which increases the spatial envelope of the actuator. Additionally, a FAM compressed past its individual free strain applies a force that opposes the overall force output of the active FAMs. In this paper, we propose a model to quantify this resistive force observed in inactive and low-pressure FAMs and study its implications for the performance of a variable recruitment bundle. The resistive force behavior is divided into post-buckling and post-collapse regions, and a piecewise model is devised. An empirically based correction method is proposed to improve the model's fit to experimental data. Analysis of a bundle with resistive effects reveals a phenomenon, unique to variable recruitment bundles, defined as free strain gradient reversal.
APA, Harvard, Vancouver, ISO, and other styles
