Follow this link to see other types of publications on the topic: Concurrent Component-Based Systems.

Journal articles on the topic "Concurrent Component-Based Systems"

Create a spot-on citation in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Concurrent Component-Based Systems".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read online its abstract whenever available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Cleaveland, Rance. "Specification formalisms for component-based concurrent systems". ACM SIGSOFT Software Engineering Notes 25, no. 1 (January 2000): 42–43. http://dx.doi.org/10.1145/340855.340876.

Full text
2

Kapová, Lucia, and Steffen Becker. "Systematic Refinement of Performance Models for Concurrent Component-based Systems". Electronic Notes in Theoretical Computer Science 264, no. 1 (August 2010): 73–90. http://dx.doi.org/10.1016/j.entcs.2010.07.006.

Full text
3

Li, Yi, Weidi Sun, and Meng Sun. "Mediator: A component-based modeling language for concurrent and distributed systems". Science of Computer Programming 192 (June 2020): 102438. http://dx.doi.org/10.1016/j.scico.2020.102438.

Full text
4

Ali, Awad, Mohammed Bakri Bashir, Alzubair Hassan, Rafik Hamza, Samar M. Alqhtani, Tawfeeg Mohmmed Tawfeeg, and Adil Yousif. "Design-Time Reliability Prediction Model for Component-Based Software Systems". Sensors 22, no. 7 (April 6, 2022): 2812. http://dx.doi.org/10.3390/s22072812.

Full text
Abstract
Software reliability is prioritised as the most critical quality attribute. Reliability prediction models help prevent software failures, which can cause vital events and disastrous consequences in safety-critical applications or even in businesses. Predicting reliability during design allows software developers to avoid potential design problems, which can otherwise result in reconstructing an entire system when discovered at later stages of the software development life-cycle. Several reliability models have been built to predict reliability during software development. However, several issues still exist in these models. Current models suffer from a scalability issue, namely the modeling of large systems, and existing scalability solutions usually come at a high computational cost. Secondly, consideration of the nature of concurrent applications in reliability prediction is another issue. We propose a reliability prediction model that enhances scalability by introducing a system-level scenario synthesis mechanism that mitigates complexity. Additionally, the proposed model supports modeling the nature of concurrent applications through the adaptation of a formal statistical distribution for scenario combination. The proposed model was evaluated using sensor-based case studies. The experimental results show the effectiveness of the proposed model in terms of computational cost reduction compared to similar models; this reduction is the main parameter for scalability enhancement. In addition, the presented work can let system developers know up to which load their system will remain reliable by observing the reliability value in several running scenarios.
5

Bajunaid, Noor, and Daniel A. Menascé. "Efficient modeling and optimizing of checkpointing in concurrent component-based software systems". Journal of Systems and Software 139 (May 2018): 1–13. http://dx.doi.org/10.1016/j.jss.2018.01.032.

Full text
6

Pham, Thanh-Trung, Xavier Défago, and Quyet-Thang Huynh. "Reliability prediction for component-based software systems: Dealing with concurrent and propagating errors". Science of Computer Programming 97 (January 2015): 426–57. http://dx.doi.org/10.1016/j.scico.2014.03.016.

Full text
7

Autili, Marco, Leonardo Mostarda, Alfredo Navarra, and Massimo Tivoli. "Synthesis of decentralized and concurrent adaptors for correctly assembling distributed component-based systems". Journal of Systems and Software 81, no. 12 (December 2008): 2210–36. http://dx.doi.org/10.1016/j.jss.2008.04.006.

Full text
8

Aoumeur, Nasreddine, and Gunter Saake. "Dynamically evolving concurrent information systems specification and validation: a component-based Petri nets proposal". Data & Knowledge Engineering 50, no. 2 (August 2004): 117–73. http://dx.doi.org/10.1016/j.datak.2003.10.005.

Full text
9

Chen, Bin, Jie Hu, Jin Qi, and Weixing Chen. "Concurrent multi-process graph-based design component synthesis: Framework and algorithm". Engineering Applications of Artificial Intelligence 97 (January 2021): 104051. http://dx.doi.org/10.1016/j.engappai.2020.104051.

Full text
10

Pujari, Niharika, Abhishek Ray, and Jagannath Singh. "An efficient and precise dynamic slicing for concurrent component-oriented programs". International Journal of Knowledge-based and Intelligent Engineering Systems 25, no. 4 (February 18, 2022): 449–64. http://dx.doi.org/10.3233/kes-210088.

Full text
Abstract
This paper proposes a dynamic slicing algorithm, along with its implementation, for concurrent component-oriented programs (CCOPs) with multiple threads. To represent a CCOP effectively, an intermediate graph called the Concurrent Component Dependency Graph (CCmDG) is developed, which integrates the system dependence graphs (SDGs) of individual components with their interfaces. It also introduces new dependence edges that connect the dependence graph of each component to the interface. Based on this graph, a dynamic slicing algorithm called Concurrent Components Dynamic Slicing (CCmDS) is proposed, which computes the slice by marking the executed nodes at run time. To check the competence of the algorithm, five case studies were considered and compared with an existing technique. The study found that the proposed algorithm yields smaller, more precise slices than the existing algorithm in less time.
11

Bertoni, Alessandro, and Marco Bertoni. "Supporting Early Stage Set-Based Concurrent Engineering with Value Driven Design". Proceedings of the Design Society: International Conference on Engineering Design 1, no. 1 (July 2019): 2367–76. http://dx.doi.org/10.1017/dsi.2019.243.

Full text
Abstract
Set-Based Concurrent Engineering is commonly adopted to drive the development of complex products and systems. However, its application requires design information about a future product that is often not mature enough in the early design stages and that does not encompass a service- and lifecycle-oriented perspective. Manufacturers need to understand, from the early design stages, how customer value is created along the lifecycle of a product from a hardware and service perspective, and how to use such information to screen radically new technologies, trade off promising design configurations and commit to a design concept. The paper presents an approach for the multidisciplinary value assessment of design concepts in sub-system design, encompassing high-level concept screening and the trade-off of different design concepts, and enabling the integration of value model results into a Set-Based Concurrent Engineering process. The approach is described through its application in the case study of the development of a subsystem component for a commercial aircraft engine.
12

Rahimi, Shahram, Rishath A. S. Rias, and Elham S. Khorasani. "An Open-Bisimilarity Based Automated Verification Tool for π-Calculus Family of Process Calculi". International Journal of Software Science and Computational Intelligence 4, no. 1 (January 2012): 55–83. http://dx.doi.org/10.4018/jssci.2012010103.

Full text
Abstract
The complexity of designing concurrent and highly evolving interactive systems has grown to a point where system verification has become a hurdle. Fortunately, formal verification methods have arrived at the right time. They detect errors, inconsistencies and incompleteness at early development stages of a system formally modeled using a formal specification language. The π-calculus (Milner, 1999) is one such formal language, providing a strong mathematical base that can be used for verifying system specifications. But manually verifying the specifications of concurrent systems is very tedious and error-prone work, especially if the specifications are large. Consequently, an automated verification tool is essential for efficient system design and development. In addition, formal verification tools are a vital ingredient in fully harnessing the potential of component-based software composition. The authors developed such an automated verification tool, which is highly portable and seamlessly integrates with the visualization, reduction and performance evaluation tools introduced earlier (Ahmad & Rahimi, 2008; Rahimi, 2006; Rahimi et al., 2001, 2008) to provide a comprehensive tool for designing and analyzing multi-process/agent systems. The Open-Bisimulation concept (Sangiorgi, 1996) is utilized as the theoretical base for the design and implementation of the tool, which incorporates an expert system implemented in the Java Expert System Shell (JESS) (Friedman-Hill, 2003).
13

Lowe, Gavin. "Parameterized verification of systems with component identities, using view abstraction". International Journal on Software Tools for Technology Transfer 24, no. 2 (February 26, 2022): 287–324. http://dx.doi.org/10.1007/s10009-022-00648-0.

Full text
Abstract
The parameterized verification problem seeks to verify all members of some collection of systems. We consider the parameterized verification problem applied to systems that are composed of an arbitrary number of component processes, together with some fixed processes. The components are taken from one or more families, each family representing one role in the system; all components within a family are symmetric to one another. Processes communicate via synchronous message passing. In particular, each component process has an identity, which may be included in messages, and passed to third parties. We extend Abdulla et al.'s technique of view abstraction, together with techniques based on symmetry reduction, to this setting. We give an algorithm and implementation that allows such systems to be verified for an arbitrary number of components: we do this for both safety and deadlock-freedom properties. We apply the techniques to a number of examples. We can model both active components, such as threads, and passive components, such as nodes in a linked list: thus our approach allows the verification of unbounded concurrent datatypes operated on by an unbounded number of threads. We show how to combine view abstraction with additional techniques in order to deal with other potentially infinite aspects of the analysis: for example, we deal with potentially infinite specifications, such as a datatype being a queue; and we deal with unbounded types of data stored in a datatype.
14

Chen, Ye, and Zhelong Wang. "A hierarchical method for human concurrent activity recognition using miniature inertial sensors". Sensor Review 37, no. 1 (January 16, 2017): 101–9. http://dx.doi.org/10.1108/sr-05-2016-0085.

Full text
Abstract
Purpose: Existing studies on human activity recognition using inertial sensors mainly discuss single activities. However, human activities are rather concurrent. A person could be walking while brushing their teeth or lying while making a call. The purpose of this paper is to explore an effective way to recognize concurrent activities. Design/methodology/approach: Concurrent activities usually involve behaviors from different parts of the body, which are mainly dominated by the lower limbs and upper body. For this reason, a hierarchical method based on artificial neural networks (ANNs) is proposed to classify them. At the lower level, the state of the lower limbs to which a concurrent activity belongs is first recognized by means of one ANN using simple features. Then, the upper-level systems further distinguish between the upper limb movements and infer the specific concurrent activity using features processed by principal component analysis. Findings: An experiment is conducted to collect realistic data from five sensor nodes placed on subjects' wrist, arm, thigh, ankle and chest. Experimental results indicate that the proposed hierarchical method can distinguish between 14 concurrent activities with a high classification rate of 92.6 per cent, which significantly outperforms the single-level recognition method. Practical implications: In the future, the research may play an important role in many ways, such as daily behavior monitoring, smart assisted living, postoperative rehabilitation and eldercare support. Originality/value: To provide more accurate information on people's behaviors, human concurrent activities are discussed and effectively recognized by using a hierarchical method.
15

Xu, Mingdi, Zhaoyang Jin, Shengjie Ye, and Haipeng Fan. "Characteristic Canonical Analysis-Based Attack Detection of Industrial Control Systems in the Geological Drilling Process". Processes 12, no. 9 (September 23, 2024): 2053. http://dx.doi.org/10.3390/pr12092053.

Full text
Abstract
Modern industrial control systems (ICSs), which consist of sensor nodes, actuators, and buses, contribute significantly to the enhancement of production efficiency. Massive node arrangements, security vulnerabilities, and complex operating status characterize ICSs, which lead to a threat to the industrial processes’ stability. In this work, a condition-monitoring method for ICSs based on canonical variate analysis with probabilistic principal component analysis is proposed. This method considers the essential information of the operating data. Firstly, the one-way analysis of variance method is utilized to select the major variables that affect the operating performance. Then, a concurrent monitoring model based on probabilistic principal component analysis is established on both the serially correlated canonical subspace and its residual subspace, which is divided by canonical variate analysis. After that, monitoring statistics and control limits are constructed. Finally, the effectiveness and superiority of the proposed method are validated through comparisons with actual drilling operations. The method has better sensitivity than traditional monitoring methods. The experimental result reveals that the proposed method can effectively monitor the operating performance in a drilling process with its highest accuracy of 92.31% and a minimum monitoring delay of 11 s. The proposed method achieves much better effectiveness through real-world process scenarios due to its distributed structural division and the characteristic canonical analysis conducted in this paper.
16

Dong, Shengli, Xinghan Xu, Yuhang Chen, Yifang Zhang, and Shengzheng Wang. "Double-Layer Distributed and Integrated Fault Detection Strategy for Non-Gaussian Dynamic Industrial Systems". Entropy 26, no. 10 (September 25, 2024): 815. http://dx.doi.org/10.3390/e26100815.

Full text
Abstract
Currently, with the increasing scale of industrial systems, multisensor monitoring data exhibit large-scale dynamic Gaussian and non-Gaussian concurrent complex characteristics. However, the traditional principal component analysis method is based on Gaussian distribution and uncorrelated assumptions, which are greatly limited in practice. Therefore, developing a new fault detection method for large-scale Gaussian and non-Gaussian concurrent dynamic systems is one of the urgent challenges to be addressed. To this end, a double-layer distributed and integrated data-driven strategy based on Laplacian score weighting and integrated Bayesian inference is proposed. Specifically, in the first layer of the distributed strategy, we design a Jarque–Bera test module to divide all multisensor monitoring variables into Gaussian and non-Gaussian blocks, successfully solving the problem of different data distributions. In the second layer of the distributed strategy, we design a dynamic augmentation module to solve dynamic problems, a K-means clustering module to mine local similarity information of variables, and a Laplace scoring module to quantitatively evaluate the structural retention ability of variables. Therefore, this double-layer distributed strategy can simultaneously combine the different distribution characteristics, dynamism, local similarity, and importance of variables, comprehensively mining the local information of the multisensor data. In addition, we develop an integrated Bayesian inference strategy based on detection performance weighting, which can emphasize the differential contribution of local models. Finally, the fault detection results for the Tennessee Eastman production system and a diesel engine working system validate the superiority of the proposed method.
17

Sanan, David, Yongwang Zhao, Shang-Wei Lin, and Liu Yang. "CSim 2". ACM Transactions on Programming Languages and Systems 43, no. 1 (April 2021): 1–46. http://dx.doi.org/10.1145/3436808.

Full text
Abstract
To make the verification of large and complex concurrent systems feasible and scalable, compositional techniques are necessary even at the highest abstraction layers. When focusing on the lowest software abstraction layers, such as the implementation or the machine code, the high level of detail of those layers makes the direct verification of properties very difficult and expensive, so techniques that simplify verification on these layers are essential. One technique to tackle this challenge is top-down verification, where, by means of simulation, properties verified on the top layers (representing abstract specifications of a system) are propagated down to the lowest layers (an implementation of the top layers). Needless to say, simulation of concurrent systems implies a greater level of complexity, and compositional techniques for checking simulation between layers are also desirable when seeking both feasibility and scalability of the refinement verification. In this article, we present CSim 2, a compositional rely-guarantee-based framework for the top-down verification of complex concurrent systems in the Isabelle/HOL theorem prover. CSim 2 uses CSimpl, a language with a high degree of expressiveness designed for the specification of concurrent programs. Thanks to its expressiveness, CSimpl is able to model many of the features found in real-world programming languages, like exceptions, assertions, and procedures. CSim 2 provides a framework for the verification of rely-guarantee properties to compositionally reason on CSimpl specifications. Focusing on top-down verification, CSim 2 provides a simulation-based framework for the preservation of CSimpl rely-guarantee properties from specifications to implementations. By using the simulation framework, properties proven on the top layers (abstract specifications) are compositionally propagated down to the lowest layers (source or machine code) in each concurrent component of the system. Finally, we show the usability of CSim 2 by running a case study over two CSimpl specifications of an ARINC 653 communication service. In this case study, we prove a complex property on a specification, and we use CSim 2 to preserve the property on lower abstraction layers.
18

Ahzaliza, Dian, Hasan Maksum, Wakhinuddin Wakhinuddin, and Eko Indrawan. "Evaluation of Learning Program Subjects for Building Utility Systems Based on Facilities and Infrastructure Standards Using the CIPPO Model at SMK Negeri 2 Banda Aceh". Jurnal Pendidikan Teknologi Kejuruan 5, no. 1 (February 16, 2022): 1–7. http://dx.doi.org/10.24036/jptk.v5i1.24723.

Full text
Abstract
Based on interviews at SMK Negeri 2 Banda Aceh in 2021, 4 out of 25 students did not pass the skills test in the subject of building utility systems, with an average knowledge score of 79.2 and an average skill score of 78.4. One of the factors is the inadequate infrastructure for carrying out the learning process, especially practicum learning. The purpose of the study was to evaluate the learning process of building utility system subjects in the Construction and Property Engineering Expertise Program using the CIPPO model at SMK Negeri 2 Banda Aceh. This is evaluation research using the CIPPO model, namely evaluation of context, input, process, product and outcome. The study is a mixed method with a Concurrent Triangulation Strategy. Sources of qualitative data were the Principal, the Deputy Principal for Facilities and Infrastructure, the Head of the Construction and Property Engineering Expertise Program, the Head of Lab, and 5 teachers who teach building utility systems at SMK Negeri 2 Banda Aceh. Quantitative data were taken from 40 students of classes XII and XIII. The results showed that the context component was in the sufficient category (73.59%), the input component in the sufficient category (75.69%), the process component in the sufficient category (72.01%), the product component in the sufficient category (66.5%), and the outcome component also in the sufficient category (73.25%). Because all components are in the sufficient category, it is recommended to make appropriate improvements to improve student achievement in the subject of building utility systems in the Construction and Property Engineering Expertise Program at SMK Negeri 2 Banda Aceh.
19

CAO, JIAN, and SHENSHENG ZHANG. "AN INTEGRATED MULTI-AGENT CSCW SYSTEM FOR CONCURRENT PRODUCT DEVELOPMENT". International Journal of Information Technology & Decision Making 01, no. 03 (September 2002): 423–40. http://dx.doi.org/10.1142/s0219622002000270.

Full text
Abstract
Product development capability is increasingly important for an enterprise in a knowledge-based economy. In the philosophy of concurrent engineering, product development should be carried out in a concurrent way, and computer support is necessary for Concurrent Product Development (CPD). As an excellent tool for meeting complex needs, CSCW has been used in CPD. But nearly all CSCW systems developed so far concentrate on a more or less narrow sub-field of cooperative work; thus, the need for integrated CSCW applications is apparent. The agent is a suitable programming paradigm that can be used to meet these complex needs. In this paper, a P-PROCE (Process, Product, Resource, Organization, Control & Evaluation) model is first introduced for CPD. By categorizing the agents of the multi-agent system (MAS) into different types according to the P-PROCE model and offering a structure for the MAS, CPD is mapped to an MAS. Cooperation among agents is very important for an MAS, so a two-layer cooperation structure is proposed. In the macro layer, agent-based workflow controls the CPD process, and in the micro layer, the entity agents interact with each other directly to fulfill the task. The key issues of these two cooperation layers are discussed, and a component-based agent structure and an implemented case are also provided.
20

VERLINDEN, NICO, and DIRK JANSSENS. "Algebraic properties of processes for Local Action Systems". Mathematical Structures in Computer Science 12, no. 4 (August 2002): 423–48. http://dx.doi.org/10.1017/s096012950100353x.

Full text
Abstract
Graph rewriting has been used extensively to model the behaviour of concurrent systems and to provide a formal semantics for them. In this paper, we investigate processes for Local Action Systems (LAS); LAS generalize several types of graph rewriting based on node replacement and embedding. An important difference between processes for Local Action Systems and the process notions that have been introduced for other systems, for example, Petri nets, is the presence of a component describing the embedding mechanism. The aim of the paper is to develop a methodology for dealing with this embedding mechanism: we introduce a suitable representation (a dynamic structure) for it, and then investigate the algebraic properties of this representation. This leads to a simple characterization of the configurations of a process and to a number of equational laws for dynamic structures. We illustrate the use of these laws by providing an equational proof of one of the basic results for LAS processes, namely that the construction yielding the result graph of a process behaves well with respect to the sequential composition of processes.
21

San, Khin Thida, Sun Ju Mun, Yeong Hun Choe, and Yoon Seok Chang. "UAV Delivery Monitoring System". MATEC Web of Conferences 151 (2018): 04011. http://dx.doi.org/10.1051/matecconf/201815104011.

Full text
Abstract
UAV-based delivery systems are increasingly being used in the logistics field, particularly to achieve faster last-mile delivery. This study develops a UAV delivery system that manages delivery order assignments, autonomous flight operation, real time control for UAV flights, and delivery status tracking. To manage the delivery item assignments, we apply the concurrent scheduler approach with a genetic algorithm. The present paper describes real time flight data based on a micro air vehicle communication protocol (MAVLink). It also presents the detailed hardware components used for the field tests. Finally, we provide UAV component analysis to choose the suitable components for delivery in terms of battery capacity, flight time, payload weight and motor thrust ratio.
22

Uzunidis, Dimitris, Fotini Apostolopoulou, Gerasimos Pagiatakis, and Alexandros Stavdas. "Analysis of Available Components and Performance Estimation of Optical Multi-Band Systems". Eng 2, no. 4 (November 8, 2021): 531–43. http://dx.doi.org/10.3390/eng2040034.

Full text
Abstract
Optical multi-band (OMB) systems exploit the low-loss spectrum of the single mode fiber (SMF) and are key enablers to increase the transportation capacity and node connectivity of already deployed systems. The realization of OMB systems is mainly based on the technological advances on the component and system level, and for this purpose, a broad gamut of various structural elements, such as transceivers, amplifiers, filters, etc. have been commercialized already or are close to commercialization. This wide range of options, which aid in unlocking the concurrent transmission in all amplification bands, is reviewed here for the first time, whilst their pros and cons as well as their limitations are discussed. Furthermore, the needs for additional components in order to fully exploit the ≈390 nm low-loss wavelength range of SMF, which spans from 1260 to 1650 nm, are highlighted. Finally, based on a physical layer formalism, which incorporates the impact of the most important physical layer constraints for an OMB system, the attainable capacity and transparent reach of each amplification band are quantified.
23

Otal, Antonio, Francisco Celada, Jose Chimeno, Javier Vijande, Santiago Pellejero, Maria-Jose Perez-Calatayud, Elena Villafranca et al. "Review on Treatment Planning Systems for Cervix Brachytherapy (Interventional Radiotherapy): Some Desirable and Convenient Practical Aspects to Be Implemented from Radiation Oncologist and Medical Physics Perspectives". Cancers 14, no. 14 (July 17, 2022): 3467. http://dx.doi.org/10.3390/cancers14143467.

Full text
Abstract
Intracavitary brachytherapy (BT, Interventional Radiotherapy, IRT), plays an essential role in the curative intent of locally advanced cervical cancer, for which the conventional approach involves external beam radiotherapy with concurrent chemotherapy followed by BT. This work aims to review the different methodologies used by commercially available treatment planning systems (TPSs) in exclusive magnetic resonance imaging-based (MRI) cervix BT with interstitial component treatments. Practical aspects and improvements to be implemented into the TPSs are discussed. This review is based on the clinical expertise of a group of radiation oncologists and medical physicists and on interactive demos provided by the software manufacturers. The TPS versions considered include all the new tools currently in development for future commercial releases. The specialists from the supplier companies were asked to propose solutions to some of the challenges often encountered in a clinical environment through a questionnaire. The results include not only such answers but also comments by the authors that, in their opinion, could help solve the challenges covered in these questions. This study summarizes the possibilities offered nowadays by commercial TPSs, highlighting the absence of some useful tools that would notably improve the planning of MR-based interstitial component cervix brachytherapy.
24

Fernando, GVC, and Teguh Kristian Perdamaian. "Integrating palliative care into primary healthcare systems: Advocacy efforts, milestones and challenges in Asia". Malaysian Family Physician 19 (October 23, 2024): 61. https://doi.org/10.51866/cm0007.

Full text
Abstract
Palliative care is a vital component of primary healthcare systems, especially in Asia, where the ageing population is expected to increase significantly in the coming years. Integrating palliative care into primary healthcare systems is a crucial strategy for achieving universal access to palliative care. It is necessary to take concurrent actions to achieve this integration, including integrating palliative care into public health policies, educating primary healthcare workers, establishing appropriate service structures and ensuring the availability of controlled medications. Healthcare professionals involved with primary care, often led by physicians, play a significant role in driving the implementation of primary palliative care in Asia, as evidenced by their involvement in community- and home-based palliative care in India and primary palliative care for patients with cancer in Indonesia. However, there are challenges associated with implementing these actions in each country. Therefore, it is crucial to examine the ongoing advocacy efforts, milestones, obstacles and strategies that shape this process in the Asian context.
25

Parrot, Olivier, Claude De Paoli, Alain Rouge and Catherine Dutey. "4.3.2 Assessing the relevance of systems engineering for electrical commercial product development". INCOSE International Symposium 10, no. 1 (July 2000): 328–35. http://dx.doi.org/10.1002/j.2334-5837.2000.tb00394.x.

Abstract
Abstract: Traditionally, MGE UPS Systems, like most commercial product suppliers, used to rely on stamped-commercial product development practices to manage time-to-market. But practices like concurrent engineering, project-based and steering-committee-driven organization, and system life-cycle are no longer sufficient to answer ever-growing global market constraints. New practices, as well as the solving of concurrent engineering side effects, are becoming unavoidable. This concerns in particular the introduction of global systems optimization vs. component optimization, which traditionally drives commercial product development. Systems engineering offers potential solutions to that issue, although it has so far been used for types of development and types of system whose features are almost the opposite of ours (large vs. small systems, long vs. fast development, mono-customer vs. market). Time is a very precious resource for companies like ours; it is very difficult to find the time and motivation to put new practices in use, especially practices that somehow tell you to "waste a bit of your time at the beginning of the project to save more later on" (not an idea to be easily accepted by a product developer or manager!). This is why we carried out the SYRENA experiment, to address, in technical, managerial and cultural terms, the adoption of some systems engineering practices to develop small electrical systems faster. This paper gives a bird's-eye view of the findings of the SYRENA project w.r.t. the technical, managerial and cultural acceptance of systems engineering. From a technical standpoint, the experiment focused more specifically on the interface between marketing and engineering and addressed the interface of the system-level requirements engineering process.
26

Fernandes Costa, Tássio, Álvaro Sobrinho, Lenardo Chaves e Silva, Leandro Dias da Silva and Angelo Perkusich. "Coloured Petri Nets-Based Modeling and Validation of Insulin Infusion Pump Systems". Applied Sciences 12, no. 3 (January 29, 2022): 1475. http://dx.doi.org/10.3390/app12031475.

Abstract
Safety and effectiveness are crucial quality attributes for insulin infusion pump systems. Therefore, regulatory agencies require the quality evaluation and approval of such systems before the market to decrease the risk of harm, motivating the usage of a formal Model-Based Approach (MBA) to improve quality. Nevertheless, using a formal MBA increases costs and development time because it requires expert knowledge and thorough analyses of behaviors. We aim to assist the quality evaluation of such systems in a cost-effective and time-efficient manner, providing re-usable project artifacts by applying our proposed approach (named MBA with CPN—MBA/CPN). We defined a Coloured Petri nets MBA and a case study on a commercial insulin infusion pump system to verify and validate a reference model (as a component of MBA/CPN), describing quality assessment scenarios. We also conducted an empirical evaluation to verify the productivity and reusability of modelers when using the reference model. Such a model is relevant to reason about behaviors and quality evaluation of such concurrent and complex systems. During the empirical evaluation, using the reference model, 66.7% of the 12 interviewed modelers stated no effort, while 8.3% stated low effort, 16.7% medium effort, and 8.3% considerable effort. Based on the modelers’ knowledge, we implemented a web-based application to assist them in re-using our proposed approach, enabling simulation-based training. Although a reduced number of modelers experimented with our approach, such an evaluation provided insights to improve the MBA/CPN. Given the empirical evaluation and the case study results, MBA/CPN showed to be relevant to assess the quality of insulin infusion pump systems.
27

Olzak, L. A., J. P. Thomas and T. D. Wickens. "Simultaneously Judging Contrast in Different Orientation and Frequency Bands". Perception 25, no. 1_suppl (August 1996): 116. http://dx.doi.org/10.1068/v96l0604.

Abstract
Previous discrimination experiments suggest that suprathreshold contrast signals carried by orthogonally oriented gratings are combined when component frequencies are similar, but contrast signals are not combined across frequency bands regardless of orientation. We investigated neural and attentional processes underlying these findings in a concurrent-response paradigm. Test stimuli were composed of two superimposed sinusoidal gratings. In one condition, two 3 cycles deg−1 gratings were superimposed at orthogonal orientations to form plaids. In the other condition, the component gratings were 3 cycles deg−1 and 15 cycles deg−1, both vertical. In each condition, each component independently took one of two slightly different contrast values, combined all possible ways to create four stimuli. On each trial, one stimulus appeared for 1 s. Observers made two contrast discrimination judgments, one on the vertical (or low frequency) component, the other based on the horizontal (or high frequency) grating. Highly correlated response patterns and the collapse of two decision axes into nearly one confirmed that observers were unable to make independent judgments of contrast on orthogonally oriented components of similar frequency. Analyses suggested that decisions were based primarily on a single neural signal representing the sum or average of the two contrasts present. When cues to discrimination were in different frequency bands, observers showed a marked inability to perform the simultaneous judgment task, choosing idiosyncratic strategies to maximise performance. Analyses indicated that the contrast information was processed through separate and independent pathways, but that information from the two bands was not simultaneously available to the observer.
28

Brugali, Davide and Nico Hochgeschwender. "Software Product Line Engineering for Robotic Perception Systems". International Journal of Semantic Computing 12, no. 01 (March 2018): 89–107. http://dx.doi.org/10.1142/s1793351x18400056.

Abstract
Control systems for autonomous robots are concurrent, distributed, embedded, real-time and data-intensive software systems. A real-world robot control system is composed of tens of software components. For each component providing robotic functionality, tens of different implementations may be available. The difficult challenge in robotic system engineering consists in selecting a coherent set of components which provide the functionality required by the application requirements, taking into account their mutual dependencies. This challenge is exacerbated by the fact that robotics system integrators and application developers are usually not specifically trained in software engineering. In various application domains, software product line (SPL) development has proven to be the most effective approach to face this kind of challenge. In a previous paper [D. Brugali and N. Hochgeschwender, Managing the functional variability of robotic perception systems, in First IEEE Int. Conf. Robotic Computing, 2017, pp. 277–283] we presented a model-based approach to the development of SPLs for robotic perception systems, which integrates two modeling technologies developed by the authors: the HyperFlex toolkit [L. Gherardi and D. Brugali, Modeling and reusing robotic software architectures: The HyperFlex toolchain, in IEEE Int. Conf. Robotics and Automation, 2014, pp. 6414–6420] and the Robot Perception Specification Language (RPSL) [N. Hochgeschwender, S. Schneider, H. Voos and G. K. Kraetzschmar, Declarative specification of robot perception architectures, in 4th Int. Conf. Simulation, Modeling, and Programming for Autonomous Robots, 2014, pp. 291–302]. This paper extends our previous work by illustrating the entire development process of an SPL for robot perception systems with a real case study.
29

Borja, V., R. Bell and J. A. Harding. "Assisting design for manufacture using the data model driven approach". Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 215, no. 12 (December 1, 2001): 1757–71. http://dx.doi.org/10.1177/095440540121501209.

Abstract
The data model driven approach argues that computer aided engineering systems should be based on information data models in order to properly support the concurrent design of products. These models are the foundation for database representations of products and factories, and enable information sharing across unlinked software applications that address different stages of the product life cycle. This paper presents a product data model capable of capturing product life cycle information, and in particular its utilization for representing manufacturing information is described. A manufacturing data model that depicts the capabilities of manufacturing cells in terms of their processes and resources is also introduced. The potential benefits of using these data models to support design for manufacture are shown through a case study. The case study includes implementation of the models, their utilization representing a product and three manufacturing facilities, and demonstrates their value in the redesign of a component.
30

Trad, Antoine. "A Relational DataBase based Enterprise Transformation Projects". International Journal of Mathematics and Computers in Simulation 17 (June 14, 2023): 1–11. http://dx.doi.org/10.46300/9102.2023.17.1.

Abstract
Enterprise Transformation Projects (ETPs) are important for ensuring long-term business sustainability and operational excellence, but these projects are complex to finalize and have a high failure rate. Transformation complexities are related to various concurrent factors like the use of sets of incoherent commercial tools/products, simplistic gap estimations, status evaluations, needed cross-functional skills, and many others. Therefore, there is a need to implement an In-House Implemented (IHI) methodology and framework to support ETPs. But such IHI solutions take a long time to implement and test, and this article proposes a realistic solution based on DataBases (DBs), or more precisely Relational DBs (RDBs). RDB-based IHI solutions and concepts can be gradually built on the usage of internal information systems without the need for continuous colossal investments in external products. The proposed RDB concept tries to show that it can support an ETP because the RDB is a component that is used in all ETP operations and subsystems. RDBs contain all the needed information, structures, integrity-check mechanisms, and applied mathematical constructs. The proposed RDB-based ETP (RDBbETP) concept adopts a polymathic-holistic approach, which uses iterative change and implementation phases. The RDBbETP uses the author's Applied Holistic Mathematical Model (AHMM) to interface with and manage the RDB (AHMM4RDB).
31

Ginting, Mimaika Luluina, Chek Hooi Wong, Zoe Zon Be Lim, Robin Wai Munn Choo, Sheena Camilla Hirose Carlsen, Grace Sum and Hubertus Johannes Maria Vrijhoef. "A Patient-Centred Medical Home Care Model for Community-Dwelling Older Adults in Singapore: A Mixed-Method Study on Patient's Care Experience". International Journal of Environmental Research and Public Health 19, no. 8 (April 14, 2022): 4778. http://dx.doi.org/10.3390/ijerph19084778.

Abstract
Patient-Centred Medical Home (PCMH) is a strategy to enhance patient-centredness to improve care experience. We aimed to understand patient experience of an integrated PCMH model for complex community-dwelling older adults in Singapore. We used a mixed-method design with a prospective single-group pre-post quantitative component and a concurrent qualitative component. Participants were administered the validated Consumer Assessment of Health Providers and Systems Clinician & Group Survey (CG-CAHPS) at baseline (N = 184) and 6-month (N = 166) post-enrolment. We conducted focus group discussions (FGDs) on a purposive sample of 24 participants. Both methods suggest better care experience in PCMH relative to usual care. There were improvements in the CG-CAHPS measures on patient–provider communication, care coordination, office staff interactions, support for patients in caring for their own health, and provider rating in PCMH relative to usual care. In the FGDs, participants reported benefits of consolidated appointments and positive experience in sustained patient–provider relationship, shared-decision making, and family/caregiver engagement in PCMH. Participants may not fully comprehend the concept of integrated care, hindering both the effective communication of the intended care model and perceived benefits such as the provision of multidisciplinary team-based care.
32

Hartle, Larissa, Liana Mendes-Santos, Eduarda Barbosa, Giulia Balboni and Helenice Charchat-Fichman. "Evidence of the validity of a novel version of the computerized cognitive screening battery CompCog". Dementia & Neuropsychologia 15, no. 4 (December 2021): 485–96. http://dx.doi.org/10.1590/1980-57642021dn15-040010.

Abstract
ABSTRACT Although the availability of the computer-based assessment has increased over the years, neuropsychology has not carried out a significant paradigm shift since the personal computer’s popularization in the 1980s. To keep up with the technological advances of healthcare and neuroscience in general, more efforts must be made in the field of clinical neuropsychology to develop and validate new and more technology-based instruments, especially considering new variables and paradigms when compared to paper and pencil tests. Objective: This study’s objective was to produce concurrent validity evidence of the novel version of the computerized cognitive screening battery CompCog. Methods: Participants performed a traditional paper and pencil neuropsychological testing session and another session where CompCog was administrated. The data of a total of 50 young adult college students were used in the analyses. Results: Results have shown moderate and strong correlations between CompCog’s tasks and their equivalents considering paper and pencil tests. Items clustered in agreement with the subtest division in a principal component analysis. Conclusions: The findings suggest that CompCog is valid for measuring the cognitive processes its tasks intend to evaluate.
33

Schultz, Emily B., J. Clint Iles, Thomas G. Matney, Andrew W. Ezell, James S. Meadows, Ted D. Leininger, W. Cade Booth and J. Paul Jeffreys. "Stand-Level Growth and Yield Component Models for Red Oak–Sweetgum Forests on Mid-South Minor Stream Bottoms". Southern Journal of Applied Forestry 34, no. 4 (November 1, 2010): 161–75. http://dx.doi.org/10.1093/sjaf/34.4.161.

Abstract
Abstract Greater emphasis is being placed on Southern bottomland hardwood management, but relatively few growth and yield prediction systems exist that are based on sufficient measurements. We present the aggregate stand-level expected yield and structural component equations for a red oak (Quercus section Lobatae)–sweetgum (Liquidambar styraciflua L.) growth and yield model. Measurements from 638 stand-level observations on 258 distinct permanent growth and yield plots collected in 1981, 1988, 1994, and 2006 in minor stream bottoms in Mississippi and Alabama provided data for model development. Equations for average height of dominant and codominant red oaks, trees/ac, arithmetic mean diameter, quadratic mean diameter, and volume were selected on the basis of significance of independent variables, coefficient of determination, index of fit, and biological validity assessment. These models produce expected average yields for combined species or species groups in naturally developing stands and provide an average baseline for individuals managing their lands for the red oak–sweetgum complex. Models will be integrated with log grade volume and diameter distribution models that are in concurrent development to produce a growth and yield system capable of comparing management alternatives on a financial basis.
34

Ibrahim, Fausat Motunrayo, Benson Osikabor, Bolanle Tawakalitu Olatunji, Grace Oluwatobi Ogunwale and Olawale Julius Aluko. "Forest in the Context of Social Change: Traditional Orientation and Forest Mystification in a Nigerian Forest-Reserve Setting". Changing Societies & Personalities 5, no. 3 (October 11, 2021): 496. http://dx.doi.org/10.15826/csp.2021.5.3.147.

Abstract
This article exposits the mystification of forests among people residing in proximity to a forest reserve in southwestern Nigeria. The theory of material engagement and the ecology of human development support the position that the forest is a classical motivator of traditional culture. Still, socio-cultural change is prevalent. As an element of this change, forest-based social cognition warrants systematic examination in the interest of environmental sustainability. This is because the concurrent conveyance of sustainability-promoting immaterial culture across generations is a component of the pathway to a sustainable future. Moreover, systems theory posits that social events affect each other. Since social change is not solitary but encompassing, forest mystification was examined along with other indicators of traditional orientation, including attitudes towards religion, ageing and gender, and cultural enthusiasm. The results indicate that forest mystification is still strong and connected with orientations towards ageing and cultural enthusiasm. This exemplifies the Yorùbá social context's manifestation of continuity, as opposed to change, in forest culture, and stands in solidarity with traditional African mentality.
35

Yusim, Vyacheslav and Vadim Svirchevskiy. "The Relationship of Economy and Industry Development Macroconstants". Economics 5, no. 2 (April 17, 2017): 29–38. http://dx.doi.org/10.12737/25147.

Abstract
The article substantiates benchmarks for the reindustrialization of the Russian economy, where reindustrialization is understood as a technological base capable of the concurrent development of industry and the economy as a whole. The authors examine the global trend of decline in the share of industry in the gross product of developed countries, with the aim of finding indicators that link the development of industry with the development of the economy as a whole. Based on the analysis of statistical development indicators of large, technologically developed countries, it is demonstrated that the long-term acceleration of economies is based on accelerating their industrial component. It is proved that there is a single quantitative indicator of the quality of an economy and of its industrial sector, characterized by the ability of these macrosystems to accelerate their development over the longer term. The article substantiates that industrial policy must be based on previously unknown indicative development guidelines: the development macroconstants of economies and of their industrial sectors. A range of quantitative ratios between economy quality indicators and industry quality indicators prevails for most of the major technologically leading countries of the world. The authors consider the roots of resistance to the development of the technological environment and show that there are promising developments by Russian researchers on technological reform in large socioeconomic systems and on techniques for provoking positive changes in a backward technological habitat.
36

HAIDER, NEENA B., PAUL DEMARCO, ARNE M. NYSTUEN, XIAONA HUANG, RICHARD S. SMITH, MAUREEN A. MCCALL, JÜRGEN K. NAGGERT and PATSY M. NISHINA. "The transcription factor Nr2e3 functions in retinal progenitors to suppress cone cell generation". Visual Neuroscience 23, no. 6 (November 2006): 917–29. http://dx.doi.org/10.1017/s095252380623027x.

Abstract
The transcription factor Nr2e3 is an essential component for the development and specification of rod and cone photoreceptors; however, the mechanism through which it acts is not well understood. In this study, we use Nr2e3 rd7/rd7 mice, which harbor a mutation in Nr2e3, to serve as a model for the human retinal disease Enhanced S-Cone Syndrome. Our studies reveal that NR2E3 is expressed in late retinal progenitors and differentiating photoreceptors of the developing retina and localized to the cell bodies of mature rods and cones. In particular, we demonstrate that the abnormal increase in cone photoreceptors observed in Nr2e3 rd7/rd7 mice arises from ectopic mitotic progenitor cells that are present in the outer nuclear layer of the mature Nr2e3 rd7/rd7 retina. A prolonged phase of proliferation is observed, followed by abnormal retinal lamination with fragmented and disorganized photoreceptor synapses that result in a progressive loss of rod and cone function. An extended and pronounced wave of apoptosis is also detected at P30 and temporally correlates with the phase of prolonged proliferation. Approximately twice as many apoptotic cells were detected as proliferating cells. This wave of apoptosis appears to affect both rod and cone cells and thus may account for the concurrent loss of rod and cone function. We further show that Nr2e3 rd7/rd7 cones do not express rod-specific genes and Nr2e3 rd7/rd7 rods do not express cone-specific genes. Our studies suggest that, based on its temporal and spatial expression, NR2E3 acts simultaneously in different cell types: in late mitotic progenitors, newly differentiating post-mitotic cells, and mature rods and cones. In particular, this study reveals that the function of NR2E3 in mitotic progenitors is to repress the cone generation program. NR2E3 is thus one of the few genes known to influence the competency of retinal progenitors while simultaneously directing rod and cone differentiation.
37

KOUZAPAS, DIMITRIOS, NOBUKO YOSHIDA, RAYMOND HU and KOHEI HONDA. "On asynchronous eventful session semantics". Mathematical Structures in Computer Science 26, no. 2 (November 10, 2014): 303–64. http://dx.doi.org/10.1017/s096012951400019x.

Abstract
Event-driven programming is one of the major paradigms in concurrent and communication-based programming, where events are typically detected as the arrival of messages on asynchronous channels. Unfortunately, the flexibility and performance of traditional event-driven programming come at the cost of more complex programs: low-level APIs and the obfuscation of event-driven control flow make programs difficult to read, write and verify. This paper introduces a π-calculus with session types that models event-driven session programming (called ESP) and studies its behavioural theory. The main characteristics of the ESP model are asynchronous, order-preserving message passing, non-blocking detection of event/message arrivals and dynamic inspection of session types. Session types offer formal safety guarantees, such as communication and event-handling safety, and programmatic benefits that overcome problems with existing event-driven programming languages and techniques. The new typed bisimulation theory developed for the ESP model is distinct from standard synchronous or asynchronous bisimulation, capturing the semantic nature of eventful session-based processes. The bisimilarity coincides with reduction-closed barbed congruence. We demonstrate the features and benefits of ESP and the behavioural theory through two key use cases. First, we examine an encoding and the semantic behaviour of the event selector, a central component of general event-driven systems, providing core results for verifying type-safe event-driven applications. Second, we examine the Lauer–Needham duality, building on the selector encoding and bisimulation theory to prove that a systematic transformation from multithreaded to event-driven session processes is type- and semantics-preserving.
38

Garcia Figueiredo-Pinto, Danilo, Ip-Shing Fan and Fernando Teixeira Mendes Abrahão. "Operational Availability Optimization Model Based on the Integration of Predictive and Scheduled Maintenance". PHM Society European Conference 6, no. 1 (June 29, 2021): 11. http://dx.doi.org/10.36001/phme.2021.v6i1.2816.

Abstract
Health monitoring technologies and data analytics are increasingly widespread in the aviation industry following the growth in the capacity and speed of abundant and accurate data generation and transmission from the aircraft systems. These advances are fueling a change process in aircraft maintenance strategy towards a more proactive, precise, and effective approach consolidated in the concepts of Integrated Vehicle Health Monitoring (IVHM) and Prognostics and Health Management (PHM). Following that, several model-based and data-driven prognostics methods for Remaining Useful Life (RUL) estimation have been developed in the pursuit of improving predictive maintenance interventions for different types of components. Recent papers showcased the significant challenges faced to achieve forecast accuracy as posed by the inherent uncertainty involved in the functional dynamics of complex systems. This work acknowledges these difficulties and tackles variability by embracing it in the methodology deployed by means of considering in its framework the confidence intervals associated with the estimates for a predefined level of confidence. Nevertheless, the ability to pinpoint times-to-failure by itself is arguably not enough to yield better operational results and improve support levels of service given that scattered standalone interventions may even cause failure occurrences and total downtime to increase. This study demonstrates the rationale behind those effects and exposes the necessity for a method for achieving a compromise to optimally accommodate the concurrent economic, reliability and maintainability goals which are, respectively, the maximization of component useful life expenditure, the minimization of the running-into-failure risk and the minimization of total downtime. 
Further on, the article explores the problem in detail, identifying the key parameters pointed out in the literature that need to be addressed by the modelling process to ensure the soundness of the method. The text then proposes a solution consisting of an innovative analytical model that optimizes operational availability through the dynamic allocation of flight hours to each aircraft in a fleet, based on the integration of predictive and scheduled maintenance, minimizing total downtime while accounting for prognostic uncertainty and the associated risk of failure and of incurring corrective maintenance. The intended outcome is the capability of providing dynamic maintenance plans specially adjusted to each tail number according to its assessed health status and a calculated prognostic that considers predetermined future flights specifically attributed to optimize the overall availability of the fleet. An illustrative case study involving multiple components with different aging parameters, equipping the aircraft of a small military fleet operating from a single base, was used to test the solution and produced results that corroborate the validity of the approach adopted and demonstrate the model's value and effectiveness. The results also indicate there is significant potential to expand the study and encourage its further development to contemplate multiple-base scenarios and to incorporate more detailed aspects such as task location within the aircraft, availability of spare parts and resources in general, and out-of-phase items, as well as its implementation together with a simulation tool to generalize its application. The main contributions of the study are twofold: it adds to the theoretical complexity by tackling systems of systems instead of the predominant single-component approach, and it provides a model with an optimizing objective function to improve maintenance planning in real life.
39

Shah, Ismail, Hasnain Iftikhar, Sajid Ali and Depeng Wang. "Short-Term Electricity Demand Forecasting Using Components Estimation Technique". Energies 12, no. 13 (July 1, 2019): 2532. http://dx.doi.org/10.3390/en12132532.

Abstract
Currently, in most countries, the electricity sector is liberalized, and electricity is traded in deregulated electricity markets. In these markets, electricity demand is determined the day before the physical delivery through (semi-)hourly concurrent auctions. Hence, accurate forecasts are essential for efficient and effective management of power systems. The electricity demand and prices, however, exhibit specific features, including non-constant mean and variance, calendar effects, multiple periodicities, high volatility, jumps, and so on, which complicate the forecasting problem. In this work, we compare different modeling techniques able to capture the specific dynamics of the demand time series. To this end, the electricity demand time series is divided into two major components: deterministic and stochastic. Both components are estimated using different regression and time series methods with parametric and nonparametric estimation techniques. Specifically, we use linear regression-based models (local polynomial regression models based on different types of kernel functions: tri-cubic, Gaussian, and Epanechnikov), spline function-based models (smoothing splines, regression splines), and traditional time series models (autoregressive moving average, nonparametric autoregressive, and vector autoregressive). Within the deterministic part, special attention is paid to the estimation of the yearly cycle, as it was previously ignored by many authors. This work considers electricity demand data from the Nordic electricity market for the period covering 1 January 2013–31 December 2016. To assess the one-day-ahead out-of-sample forecasting accuracy, Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE) are calculated. The results suggest that the proposed component-wise estimation method is extremely effective at forecasting electricity demand.
Further, vector autoregressive modeling combined with spline function-based regression gives superior performance compared with the rest.
40

Bache, Martin R., Christopher D. Newton, John Paul Jones, Stephen Pattison, Louise Gale, Pascual Ian Nicholson and Eleri Weston. "Advances in Damage Monitoring Techniques for the Detection of Failure in SiCf/SiC Ceramic Matrix Composites". Ceramics 2, no. 2 (May 15, 2019): 347–71. http://dx.doi.org/10.3390/ceramics2020028.

Abstract
From a disruptive perspective, silicon carbide (SiC)-based ceramic matrix composites (CMCs) provide a considerable temperature and weight advantage over existing material systems and are increasingly finding application in aerospace, power generation and high-end automotive industries. The complex structural architecture and inherent processing artefacts within CMCs combine to induce inhomogeneous deformation and damage prior to ultimate failure. Sophisticated mechanical characterisation is vital in support of a fundamental understanding of deformation in CMCs. On the component scale, “damage tolerant” design and lifing philosophies depend upon laboratory assessments of macro-scale specimens, incorporating typical fibre architectures and matrix under representative stress-strain states. This is important if CMCs are to be utilised to their full potential within industrial applications. Bulk measurements of strain via extensometry or even localised strain gauging would fail to characterise the ensuing inhomogeneity when performing conventional mechanical testing on laboratory scaled coupons. The current research has, therefore, applied digital image correlation (DIC), electrical resistance monitoring and acoustic emission techniques to the room and high-temperature assessment of ceramic matrix composites under axial tensile and fatigue loading, with particular attention afforded to a silicon carbide fibre-reinforced silicon carbide composite (SiCf/SiC) variant. Data from these separate monitoring techniques plus ancillary use of X-ray computed tomography, in-situ scanning electron microscopy and optical inspection were correlated to monitor the onset and progression of damage during mechanical loading. The benefits of employing a concurrent, multi-technique approach to monitoring damage in CMCs are demonstrated.
41

Lee, Hae-Jun. "Dynamic Context Awareness of Universal Middleware based for IoT SNMP Service Platform". Tehnički glasnik 17, no. 2 (May 13, 2023): 185–91. http://dx.doi.org/10.31803/tg-20221221115431.

Abstract
This study focuses on a Universal Middleware design for an IoT (Internet of Things) service gateway, realized as an implementation module of a convergence platform. IoT service gateways of this kind require a dynamic module system in which modules can be mounted at runtime and their status recognized over a remote network protocol. Cross-platform distributed computing supports this dynamic environment, with Universal Middleware acting as a substitute for a fixed network stack. Distributed technologies commonly used in embedded systems, such as CORBA (Common Object Request Broker Architecture), RMI (Remote Method Invocation), and DCE (Distributed Computing Environment), provide dynamic service interfaces and device object contexts. However, these technologies standardize neither application services, communication protocols, nor data, and they offer only limited inter-system scalability. In particular, if standardization of modules based on hardware and software components can be supported, an IoT service module can be simplified and configured as an independent service module. This paper proposes a design method for Universal Middleware that gives IoT modules and service gateways the scalability needed to configure the operating environment, and that may therefore serve as an alternative. The design provides a standardized interface between hardware and software components for convergence services, along with a framework for system construction. The proposed Universal Middleware Framework standardizes dynamic modules for network protocols, application service modules such as JINI (Apache River), UPnP (Universal Plug & Play), and SLP (Service Location Protocol) bundles that provide communication facilities, and persistent data modules.
In this IoT gateway, management builds on the Universal Middleware framework, and each management operation and application service component can be cross-executed over SNMP (Simple Network Management Protocol) versions 1, 2, and 3. The SNMP extension service modules cross-support one another, and independent system meta-information allows a life-cycle management component to be built through analysis of MIB (Management Information Base) information units. In this design, the MIB works with the Dispatcher, which supports multiple concurrent SNMP messages by receiving incoming messages and managing the transfer of PDUs (Protocol Data Units) across an RFC 1906 network. The results show that, using Universal Middleware, dynamic context objects with mechanisms and tools to publish information allow the IoT to standardize module interfaces to external service clients as a convergence of hardware and software platforms.
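The MIB/Dispatcher interaction described in this abstract can be sketched as a small toy in Python. The OIDs, values, and dispatch logic below are illustrative assumptions, not the paper's implementation; a real SNMP agent parses BER-encoded PDUs from the network, while this sketch only models the dispatcher's role of serializing concurrent requests against a MIB.

```python
import threading

# Toy MIB: OID string -> value. Real MIBs are tree-structured; a flat
# dict is enough to illustrate the lookup role the MIB plays.
MIB = {
    "1.3.6.1.2.1.1.1.0": "IoT gateway, Universal Middleware build",  # sysDescr-like
    "1.3.6.1.2.1.1.3.0": 421337,                                     # sysUpTime-like
}

class Dispatcher:
    """Receives PDU-like requests and serializes access to the MIB,
    mirroring the dispatcher's role of managing concurrent messages."""

    def __init__(self, mib):
        self._mib = mib
        self._lock = threading.Lock()

    def handle(self, pdu):
        # pdu: {"type": "get", "oid": "..."} -> response PDU dict
        with self._lock:
            value = self._mib.get(pdu["oid"])
        status = "noSuchName" if value is None else "noError"
        return {"type": "response", "oid": pdu["oid"],
                "value": value, "error-status": status}

dispatcher = Dispatcher(MIB)
results = {}

def worker(oid):
    results[oid] = dispatcher.handle({"type": "get", "oid": oid})

# Several concurrent "SNMP" requests hitting the same dispatcher.
threads = [threading.Thread(target=worker, args=(oid,))
           for oid in ("1.3.6.1.2.1.1.1.0", "1.3.6.1.2.1.1.3.0", "1.2.3")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results["1.3.6.1.2.1.1.3.0"]["value"])   # 421337
print(results["1.2.3"]["error-status"])        # noSuchName
```

The lock stands in for whatever serialization a real agent applies; the point is only that one dispatcher can safely field requests arriving from several threads at once.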
42

Reiser, Jakob, Zhennan Lai, Xian-Yang Zhang, and Roscoe O. Brady. "Development of Multigene and Regulated Lentivirus Vectors". Journal of Virology 74, no. 22 (November 15, 2000): 10589–99. http://dx.doi.org/10.1128/jvi.74.22.10589-10599.2000.

Abstract
Previously we described safe and efficient three-component human immunodeficiency virus type 1 (HIV-1)-based gene transfer systems for delivery of genes into nondividing cells (H. Mochizuki, J. P. Schwartz, K. Tanaka, R. O. Brady, and J. Reiser, J. Virol. 72:8873–8883, 1998). To apply such vectors in anti-HIV gene therapy strategies and to express multiple proteins in single target cells, we have engineered HIV-1 vectors for the concurrent expression of multiple transgenes. Single-gene vectors, bicistronic vectors, and multigene vectors expressing up to three exogenous genes under the control of two or three different transcriptional units, placed within the viral gag-pol coding region and/or the viral nef and env genes, were designed. The genes encoding the enhanced version of green fluorescent protein (EGFP), mouse heat-stable antigen (HSA), and bacterial neomycin phosphotransferase were used as models whose expression was detected by fluorescence-activated cell sorting, fluorescence microscopy, and G418 selection. Coexpression of these reporter genes in contact-inhibited primary human skin fibroblasts (HSFs) persisted for at least 6 weeks in culture. Coexpression of the HSA and EGFP reporter genes was also achieved following cotransduction of target cells using two separate lentivirus vectors encoding HSA and EGFP, respectively. For the regulated expression of transgenes, tetracycline (Tet)-regulatable lentivirus vectors encoding the reverse Tet transactivator (rtTA) and EGFP controlled by a Tet-responsive element (TRE) were constructed. A binary HIV-1-based vector system consisting of a lentivirus encoding rtTA and a second lentivirus harboring a TRE driving the EGFP reporter gene was also designed. Doxycycline-modulated expression of the EGFP transgene was confirmed in transduced primary HSFs. These versatile vectors can potentially be used in a wide range of gene therapy applications.
43

Olzak, Lynn A., and Thomas D. Wickens. "Discrimination of Complex Patterns: Orientation Information is Integrated across Spatial Scale; Spatial-Frequency and Contrast Information are Not". Perception 26, no. 9 (September 1997): 1101–20. http://dx.doi.org/10.1068/p261101.

Abstract
Real-world objects are complex, containing information at multiple orientations and spatial scales. It is well established that at initial cortical stages of processing, local information about an image is separately represented at multiple spatial scales. However, it is not yet established how these early representations are later integrated across scale to signal useful information about complex stimulus features, such as edges and textures. In the studies reported here, we investigate the scale-integration processes involved in distinguishing among complex patterns. We use a concurrent-response paradigm in which observers simultaneously judge two components of compound gratings that differ widely in spatial frequency. In different experiments, each component takes one of two slightly different values along the dimensions of spatial frequency, contrast, or orientation. Using analyses developed within the framework of a multivariate extension of signal-detection theory, we ask how information about the frequency, contrast, or orientation of the components is or is not integrated across the two grating components. Our techniques permit us to isolate and identify interactions due to excitatory or inhibitory processes from effects due to noise, and to separately assess any attentional limitations that might occur in processing. Results indicate that orientation information is fully integrated across spatial scales within a limited orientation band and that decisions are based entirely on the summed information. Information about spatial frequency and contrast is not summed over spatial scale; cross-scale results show sensory independence. However, our results suggest that observers cannot simultaneously use information about frequency or contrast when it is presented at different spatial scales. Our results provide direct evidence for the existence of a higher-level summing circuit tailored to signal information about orientation. 
The properties of this mechanism differ substantially from edge-detector mechanisms proposed by Marr and others.
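The signal-detection framework these analyses build on can be illustrated with the standard univariate sensitivity index, d′ = z(hit rate) − z(false-alarm rate). The sketch below is the ordinary single-dimension case with made-up response rates, not the authors' multivariate extension for concurrent judgments.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index from signal-detection theory:
    d' = z(H) - z(F), where z is the inverse standard-normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical observer: 84% hits, 16% false alarms.
# z(0.84) is about +0.99 and z(0.16) about -0.99, so d' is about 1.99,
# i.e., the two stimulus distributions sit ~2 standard deviations apart.
print(round(d_prime(0.84, 0.16), 2))
```

In the multivariate extension used in the paper, each concurrent judgment contributes its own decision axis, and correlations between the axes diagnose whether information is summed across components; the univariate index above is the building block of that analysis.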
44

Defer, E., J. P. Pinty, S. Coquillat, J. M. Martin, S. Prieur, S. Soula, E. Richard, et al. "An overview of the lightning and atmospheric electricity observations collected in Southern France during the HYdrological cycle in Mediterranean EXperiment (HyMeX), Special Observation Period 1". Atmospheric Measurement Techniques Discussions 7, no. 8 (August 4, 2014): 8013–65. http://dx.doi.org/10.5194/amtd-7-8013-2014.

Abstract
The PEACH (Projet en Electricité Atmosphérique pour la Campagne HyMeX – the Atmospheric Electricity Project of HyMeX Program) project is the Atmospheric Electricity component of the HyMeX (Hydrology cycle in the Mediterranean Experiment) experiment and is dedicated to the observation of both lightning activity and electrical state of continental and maritime thunderstorms in the area of the Mediterranean Sea. During the HyMeX SOP1 (Special Observation Period; 5 September–6 November 2012), four European Operational Lightning Locating Systems (OLLSs) (ATDNET, EUCLID, LINET, ZEUS) and the HyMeX Lightning Mapping Array network (HyLMA) were used to locate and characterize the lightning activity over the Southeastern Mediterranean at flash, storm and regional scales. Additional research instruments like slow antennas, video cameras, micro-barometer and microphone arrays were also operated. All these observations in conjunction with operational/research ground-based and airborne radars, rain gauges and in situ microphysical records aimed at characterizing and understanding electrically active and highly precipitating events over Southeastern France that often lead to severe flash floods. Simulations performed with Cloud Resolving Models like Meso-NH and WRF are used to interpret the results and to investigate further the links between dynamics, microphysics, electrification and lightning occurrence. A description of the different instruments deployed during the field campaign as well as the available datasets is given first. Examples of concurrent observations from radio frequency to acoustic for regular and atypical lightning flashes are then presented showing a rather comprehensive description of lightning flashes available from the SOP1 records. Then examples of storms recorded during HyMeX SOP1 over Southeastern France are briefly described to highlight the unique and rich dataset collected.
Finally the next steps of the work required for the delivery of reliable lightning-derived products to the HyMeX community are discussed.
45

Baxter, Douglas A., Carmen C. Canavier, John W. Clark, and John H. Byrne. "Computational Model of the Serotonergic Modulation of Sensory Neurons in Aplysia". Journal of Neurophysiology 82, no. 6 (December 1, 1999): 2914–35. http://dx.doi.org/10.1152/jn.1999.82.6.2914.

Abstract
Serotonergic modulation of the sensory neurons that mediate the gill- and tail-withdrawal reflexes of Aplysia is a useful model system for studies of neuronal plasticity that contributes to learning and memory. The effects of serotonin (5-HT) are mediated, in part, via two protein kinases (protein kinase A, PKA, and protein kinase C, PKC), which in turn modulate at least four membrane currents, including a S ("serotonin-sensitive") K+ current (I_K,S), a steeply voltage-dependent K+ current (I_K-V), a slow component of the Ca2+-activated K+ current (I_K,Ca-S), and an L-type Ca2+ current (I_Ca-L). The present study investigated how the modulation of these currents altered the spike duration and excitability of sensory neurons and examined the relative contributions of PKA- and PKC-mediated effects to the actions of 5-HT. A Hodgkin-Huxley-type model was developed that described the ionic conductances in the somata of sensory neurons. The descriptions of these currents and their modulation were based largely on voltage-clamp data from sensory neurons. Simulations were performed with the program SNNAP (Simulator for Neural Networks and Action Potentials). The model was sufficient to replicate empirical data describing the membrane currents, action potential waveform, and excitability, as well as their modulation by application of 5-HT, increased levels of cyclic AMP, or application of active phorbol esters. In the model, modulation of I_K-V by PKC played a dominant role in 5-HT-induced spike broadening, whereas the concurrent modulation of I_K,S and I_K,Ca-S by PKA primarily accounted for 5-HT-induced increases in excitability. Finally, simulations indicated that a PKC-induced increase in excitability resulted from decreases of I_K,S and I_K,Ca-S, which was likely the indirect result of cross-talk between the PKC and PKA systems.
The results provide several predictions that warrant additional experimental investigation and illustrate the importance of considering indirect as well as direct effects of modulatory agents on the modulation of membrane currents.
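The Hodgkin-Huxley-style mechanism behind the spike-broadening result can be sketched in a few lines: a voltage-gated K+ current of the form I_K = gK·n⁴·(V − E_K), with kinase modulation represented as a scaling of the maximal conductance gK. All parameters below are invented for illustration; this is a generic HH-type scheme, not SNNAP or the authors' fitted model.

```python
import math

# Generic HH-style K+ current: I_K = gK * n^4 * (V - E_K).
# Kinase modulation is represented as scaling the maximal
# conductance gK; every number here is a made-up illustration.

E_K = -80.0  # K+ reversal potential (mV)

def n_inf(V):
    # Sigmoidal steady-state activation (hypothetical midpoint/slope)
    return 1.0 / (1.0 + math.exp(-(V + 30.0) / 9.0))

def simulate_n(V, dt=0.01, t_end=20.0, tau=4.0):
    """Euler-integrate dn/dt = (n_inf(V) - n) / tau from n = 0."""
    n, t = 0.0, 0.0
    while t < t_end:
        n += dt * (n_inf(V) - n) / tau
        t += dt
    return n

def i_k(V, gK):
    return gK * simulate_n(V) ** 4 * (V - E_K)

# Depolarized step to V = 0 mV; "5-HT/PKC modulation" is modeled as
# a 40% reduction of gK, which shrinks the repolarizing K+ current,
# the mechanism behind spike broadening in models of this type.
control = i_k(0.0, gK=10.0)
modulated = i_k(0.0, gK=6.0)
print(modulated < control)  # True: less repolarizing current after modulation
```

With less outward K+ current available during the falling phase, repolarization slows and the action potential broadens; the full model couples several such currents, but the scaling idea is the same.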
46

Usanova, Elena. "Formation of the Basic Level of Geometry and Graphics Competence of Students in E-Learning". Geometry & Graphics 4, no. 1 (March 17, 2016): 64–72. http://dx.doi.org/10.12737/18059.

Abstract
The dynamic development of high-tech engineering based on the CE/PLM methodology requires intensified training of engineering personnel and improvement of its quality. Informatization of geometric-graphic training, using graphic means of representing educational information (various forms of information compression) and CAD systems, allows teachers to communicate with students more productively in e-learning. This requires a valid scientific rationale for the possibility of effectively forming geometric-graphic competence in a blended-learning format under informatization of geometric-graphic training. In the 2014/2015 academic year, a comparative experiment was conducted at Kazan National Research Technical University to assess the efficiency of forming geometric-graphic competence in traditional and e-learning settings. The effectiveness (quality, intensity, efficiency) of basic geometric and graphic training in the blended-learning format is confirmed by the results of the experiment, provided that comfortable, equally significant training and informational interaction between student, teacher, and interactive electronic educational resources is ensured in personal learning and in problem-based teamwork using the project method. By organizing learning activities in this format, the task can be set of training a target group (team) of developers of technical objects whose participants form a group project mentality already at the learning stage. Assessments of some criteria in the component composition of personal qualities for monitoring the formation of geometric-graphic competence of future graduates, such as motivation and quality of thinking, which characterize integrity and the subject's adaptability to professional activity in concurrent engineering, have not yet been sufficiently developed by specialists in the field of engineering psychology.
It is not yet possible to fully automate the monitoring of the formation of geometric-graphic competence.
47

Nazarpour, Hosein, Yliès Falcone, Saddek Bensalem, and Marius Bozga. "Concurrency-preserving and sound monitoring of multi-threaded component-based systems: theory, algorithms, implementation, and evaluation". Formal Aspects of Computing 29, no. 6 (March 6, 2017): 951–86. http://dx.doi.org/10.1007/s00165-017-0422-6.

48

Berg, Peter, and Michael L. Pace. "Continuous measurement of air–water gas exchange by underwater eddy covariance". Biogeosciences 14, no. 23 (December 11, 2017): 5595–606. http://dx.doi.org/10.5194/bg-14-5595-2017.

Abstract
Exchange of gases, such as O2, CO2, and CH4, over the air–water interface is an important component in aquatic ecosystem studies, but exchange rates are typically measured or estimated with substantial uncertainties. This diminishes the precision of common ecosystem assessments associated with gas exchanges such as primary production, respiration, and greenhouse gas emission. Here, we used the aquatic eddy covariance technique – originally developed for benthic O2 flux measurements – right below the air–water interface (∼ 4 cm) to determine gas exchange rates and coefficients. Using an acoustic Doppler velocimeter and a fast-responding dual O2–temperature sensor mounted on a floating platform the 3-D water velocity, O2 concentration, and temperature were measured at high-speed (64 Hz). By combining these data, concurrent vertical fluxes of O2 and heat across the air–water interface were derived, and gas exchange coefficients were calculated from the former. Proof-of-concept deployments at different river sites gave standard gas exchange coefficients (k600) in the range of published values. A 40 h long deployment revealed a distinct diurnal pattern in air–water exchange of O2 that was controlled largely by physical processes (e.g., diurnal variations in air temperature and associated air–water heat fluxes) and not by biological activity (primary production and respiration). This physical control of gas exchange can be prevalent in lotic systems and adds uncertainty to assessments of biological activity that are based on measured water column O2 concentration changes. For example, in the 40 h deployment, there was near-constant river flow and insignificant winds – two main drivers of lotic gas exchange – but we found gas exchange coefficients that varied by several fold.
This was presumably caused by the formation and erosion of vertical temperature–density gradients in the surface water driven by the heat flux into or out of the river that affected the turbulent mixing. This effect is unaccounted for in widely used empirical correlations for gas exchange coefficients and is another source of uncertainty in gas exchange estimates. The aquatic eddy covariance technique allows studies of air–water gas exchange processes and their controls at an unparalleled level of detail. A finding related to the new approach is that heat fluxes at the air–water interface can, contrary to those typically found in the benthic environment, be substantial and require correction of O2 sensor readings using high-speed parallel temperature measurements. Fast-responding O2 sensors are inherently sensitive to temperature changes, and if this correction is omitted, temperature fluctuations associated with the turbulent heat flux will mistakenly be recorded as O2 fluctuations and bias the O2 eddy flux calculation.
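The eddy covariance flux underlying these measurements is the time-averaged product of the fluctuating vertical velocity and concentration, F = ⟨w′c′⟩, after Reynolds decomposition of each series about its mean. The sketch below computes it from a synthetic high-rate record; the velocity and O2 numbers are fabricated for illustration, not field data.

```python
import random
from statistics import mean

def eddy_flux(w, c):
    """Eddy covariance flux F = <w'c'>: Reynolds-decompose each series
    about its mean, then average the product of the fluctuations."""
    if len(w) != len(c):
        raise ValueError("series must be the same length")
    wbar, cbar = mean(w), mean(c)
    return mean((wi - wbar) * (ci - cbar) for wi, ci in zip(w, c))

# Synthetic 64 Hz record: correlated fluctuations in vertical velocity
# w (m/s) and O2 concentration c (umol/L) produce a nonzero flux.
# All magnitudes here are invented.
random.seed(1)
n = 64 * 60                                   # one minute at 64 Hz
turb = [random.gauss(0.0, 1.0) for _ in range(n)]
w = [0.02 * t for t in turb]                  # w' tracks the turbulence
c = [250.0 + 0.5 * t + random.gauss(0, 0.1) for t in turb]

F = eddy_flux(w, c)
print(F > 0)  # True: w' and c' co-vary, giving a net upward O2 flux
```

In practice the series are first detrended and rotated, and (as the abstract notes) the O2 sensor readings must be corrected for temperature sensitivity before this covariance is taken, otherwise the heat flux leaks into the O2 flux estimate.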
49

Abdoli, Mohammad, Karl Lapo, Johann Schneider, Johannes Olesch, and Christoph K. Thomas. "Toward quantifying turbulent vertical airflow and sensible heat flux in tall forest canopies using fiber-optic distributed temperature sensing". Atmospheric Measurement Techniques 16, no. 3 (February 14, 2023): 809–24. http://dx.doi.org/10.5194/amt-16-809-2023.

Abstract
The paper presents a set of fiber-optic distributed temperature sensing (FODS) experiments to expand the existing microstructure approach for horizontal turbulent wind direction by adding measurements of the turbulent vertical component, as well as the turbulent sensible heat flux. We address the observational challenge of isolating and quantifying the weaker vertical turbulent motions from the much stronger mean advective horizontal flow signals. In the first part of this study, we test the ability of a cylindrical shroud to reduce the horizontal wind speed while keeping the vertical wind speed unaltered. A white shroud with a rigid support structure and 0.6 m diameter was identified as the most promising setup, in which the correlation of flow properties between shrouded and reference systems is maximized. The optimum shroud setup reduces the horizontal wind standard deviation by 35 %, has a coefficient of determination of 0.972 for vertical wind standard deviations, and an RMSE of less than 0.018 ms−1 when compared to the reference. Spectral analysis showed a fixed ratio of spectral energy reduction in the low frequencies, e.g., <0.5 Hz, for temperature and wind components, momentum, and sensible heat flux. Unlike at low frequencies, the ratios decrease exponentially in the high frequencies, which means the shroud dampens the high-frequency eddies with a timescale <6 s, considering both spectra and cospectra together. In the second part, the optimum shroud configuration was installed around a heated fiber-optic cable with attached microstructures in a forest to validate our findings. While this setup failed to isolate the magnitude and sign of the vertical wind perturbations from FODS in the shrouded portion, concurrent observations from an unshrouded part of the FODS sensor in the weak-wind subcanopy of the forest (12–17 m above ground level) yielded physically meaningful measurements of the vertical motions associated with coherent structures.
These organized turbulent motions have distinct sweep and ejection phases. These strong flow signals allow for detecting the turbulent vertical airflow at least 60 % of the time and 71 % when conditional sampling was applied. Comparison of the vertical wind perturbations against those from sonic anemometry yielded correlation coefficients of 0.35 and 0.36, which increased to 0.53 and 0.62 for conditional sampling. This setup enabled computation of eddy covariance-based direct sensible heat flux estimates solely from FODS, which are reported here as a methodological and computational novelty. Comparing them against those from eddy covariance using sonic anemometry yielded an encouraging agreement in both magnitude and temporal variability for selected periods.
50

Liu, Shixuan, Tianle Pu, Li Zeng, Yunfei Wang, Haoxiang Cheng, and Zhong Liu. "Reinforcement Learning-Based Network Dismantling by Targeting Maximum-Degree Nodes in the Giant Connected Component". Mathematics 12, no. 17 (September 6, 2024): 2766. http://dx.doi.org/10.3390/math12172766.

Abstract
Tackling the intricacies of network dismantling in complex systems poses significant challenges. This task has relevance across various practical domains, yet traditional approaches focus primarily on singular metrics, such as the number of nodes in the Giant Connected Component (GCC) or the average pairwise connectivity. In contrast, we propose a unique metric that concurrently targets nodes with the highest degree and reduces the GCC size. Given the NP-hard nature of optimizing this metric, we introduce MaxShot, an innovative end-to-end solution that leverages graph representation learning and reinforcement learning. Through comprehensive evaluations on both synthetic and real-world datasets, our method consistently outperforms leading benchmarks in accuracy and efficiency. These results highlight MaxShot’s potential as a superior approach to effectively addressing the network dismantling problem.
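The metric being optimized, removing high-degree nodes while shrinking the GCC, has a natural greedy baseline: repeatedly delete the maximum-degree node inside the current giant component. The sketch below implements that baseline in plain Python (BFS for components); it is the kind of comparison heuristic such papers benchmark against, not the MaxShot learning method itself.

```python
from collections import deque

def components(adj, alive):
    """Connected components over the alive nodes, found via BFS."""
    seen, comps = set(), []
    for s in alive:
        if s in seen:
            continue
        comp, q = {s}, deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v in alive and v not in seen:
                    seen.add(v)
                    comp.add(v)
                    q.append(v)
        comps.append(comp)
    return comps

def greedy_dismantle(adj, target_gcc):
    """Remove the max-degree node of the giant component until the
    GCC has at most target_gcc nodes; returns the removal order."""
    alive = set(adj)
    removed = []
    while True:
        gcc = max(components(adj, alive), key=len)
        if len(gcc) <= target_gcc:
            return removed
        # Degree counted only over still-alive neighbors.
        u = max(gcc, key=lambda x: sum(v in alive for v in adj[x]))
        alive.discard(u)
        removed.append(u)

# Toy graph: a hub (node 0) joined to five leaves and a 2-node tail;
# deleting the degree-6 hub collapses the giant component in one step.
adj = {0: [1, 2, 3, 4, 5, 6], 1: [0], 2: [0], 3: [0], 4: [0],
       5: [0, 6], 6: [0, 5]}
order = greedy_dismantle(adj, target_gcc=2)
print(order[0])  # 0: the hub is removed first
```

The greedy rule is myopic, which is exactly the weakness learned dismantlers such as MaxShot aim to overcome by scoring nodes with a graph representation trained end to end.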