Journal articles on the topic 'Input-output specification model'


Consult the top 50 journal articles for your research on the topic 'Input-output specification model.'


1

Alem, Habtamu. "Effects of model specification, short-run, and long-run inefficiency: an empirical analysis of stochastic frontier models." Agricultural Economics (Zemědělská ekonomika) 64, no. 11 (November 26, 2018): 508–16. http://dx.doi.org/10.17221/341/2017-agricecon.

Abstract:
This paper examines recent advances in stochastic frontier (SF) models and their implications for the performance of Norwegian crop-producing farms. In contrast to previous studies, we used a cost function in a multiple input-output framework to estimate both long-run (persistent) and short-run (transient) inefficiency. The empirical analysis is based on unbalanced farm-level panel data for 1991–2013 with 3,885 observations from 455 Norwegian farms specialising in crop production. We estimated seven SF panel data models, grouped into four categories according to the assumptions made about the nature of inefficiency. The estimated cost efficiency scores varied from 53% to 95%, showing that the results are sensitive to how inefficiency is modeled and interpreted.
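As background (and not a claim about the exact specifications estimated in the paper), one widely used cost-frontier form that separates the two inefficiency components is the four-error-component model
\[
\ln C_{it} = f(y_{it}, w_{it}; \beta) + \mu_i + \eta_i + v_{it} + u_{it},
\]
where \(\mu_i\) is a random firm effect, \(\eta_i \ge 0\) is persistent (long-run) inefficiency, \(u_{it} \ge 0\) is transient (short-run) inefficiency and \(v_{it}\) is statistical noise; cost efficiency is then estimated as \(\exp\{-(\eta_i + u_{it})\}\).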
2

Xia, Xue, Yan Ru Zhong, Yu Chu Qin, and Liu Jing Ji. "Research on Operational Model of New-Generation GPS Based on Dynamic Description Logic." Applied Mechanics and Materials 128-129 (October 2011): 702–5. http://dx.doi.org/10.4028/www.scientific.net/amm.128-129.702.

Abstract:
Operations and operators are key technologies in the new-generation geometrical product specification and verification (GPS). To address the geometrical specification of product functional requirements and the ambiguity of product specifications, this paper uses a new method based on dynamic description logic to describe the fundamental concepts of geometrical specification. It analyzes the geometrical features of geometrical product functional specifications. By establishing a model of the operations, describing the input and output parameters in the specification and verification process, and analyzing the preconditions and results of executing the specification and verification operators, the paper simplifies the operation results and overcomes the ambiguity and inconsistency of the product specification process. Finally, it takes the specification of perpendicularity as an example to demonstrate the feasibility and validity of the method.
3

Mo, Yuchang, and Xinmin Yang. "A New Approach to Verify Statechart Specifications for Reactive Systems." International Journal of Software Engineering and Knowledge Engineering 18, no. 06 (September 2008): 785–802. http://dx.doi.org/10.1142/s0218194008003908.

Abstract:
Application domains such as communication networks and embedded controllers for telephony, automobiles, trains, and avionics require very high-quality reactive systems, so an important phase in the development of reactive systems is the verification of their conceptual models before implementation. Normally, in the requirements analysis phase, a system property can be represented as an input and output labeled transition system (IOLTS), a transition system labeled by the inputs and outputs exchanged between the system and the environment. This paper proposes an approach to verify Statechart specifications for reactive systems against IOLTS property specifications. We develop a suitable semantic model, observable semantics: an abstract semantics that describes only externally observable behavior and suffers less from the state-explosion problem by reducing infinite or large state spaces to small ones. We then propose two methods to verify the conformance between a given IOLTS property specification and a Statechart specification: two-phase verification and on-the-fly verification. Compared with two-phase verification, on-the-fly verification needs less storage and computation time, especially when the target Statechart specification is very large or likely to contain many errors.
4

Többen, Johannes Reinhard, Martin Distelkamp, Britta Stöver, Saskia Reuschel, Lara Ahmann, and Christian Lutz. "Global Land Use Impacts of Bioeconomy: An Econometric Input–Output Approach." Sustainability 14, no. 4 (February 9, 2022): 1976. http://dx.doi.org/10.3390/su14041976.

Abstract:
Many countries have set ambitious targets for the development of a bioeconomy that not only ensures sufficient production of high-quality foods but also contributes to decarbonization, green jobs and reduced import dependency through biofuels and advanced biomaterials. However, feeding a growing and increasingly affluent world population and providing additional biomass for a future bioeconomy, all within planetary boundaries, constitute an enormous challenge for achieving the Sustainable Development Goals (SDG). Global economic models mapping the complex network of global supply, such as multiregional input–output (MRIO) or computable general equilibrium (CGE) models, have been the workhorses for monitoring the past as well as possible future impacts of the bioeconomy. These approaches, however, have often been criticized for their relatively low level of detail on agriculture and energy, or for their lack of an empirical basis for the specification of agents' economic behavior. In this paper, we address these issues and present a hybrid macro-econometric model that combines a comprehensive mapping of the world economy with highly detailed submodules of agriculture and the energy sector in physical units based on FAO and IEA data. We showcase the model in a case study on the future global impacts of the EU's bioeconomy transformation and find small positive economic impacts at the cost of a considerable increase in land use, mostly outside of Europe.
5

Deman, S. "Stability of Supply Coefficients and Consistency of Supply-Driven and Demand-Driven Input—Output Models." Environment and Planning A: Economy and Space 20, no. 6 (June 1988): 811–16. http://dx.doi.org/10.1068/a200811.

Abstract:
The motivation behind this paper is to review the debate on inconsistencies in, and growing concern over, the validity of the supply-driven model as an alternative to the Leontief demand-driven model for interindustry analysis. Both the Leontief demand-driven specification and the Ghoshian supply-constrained allocation alternative are critically discussed, and the various claims and counterclaims in the literature on this issue are addressed. A general condition is derived for consistency between the two approaches, and a theorem is stated that provides greater confidence in the use of the supply-driven model for interindustry analysis.
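For reference, the two specifications discussed here are conventionally written (in standard notation, not the paper's) as
\[
x = (I - A)^{-1} f \quad \text{(Leontief, demand-driven)}, \qquad
x' = v' (I - B)^{-1} \quad \text{(Ghosh, supply-driven)},
\]
where \(A\) holds input (technical) coefficients \(a_{ij} = z_{ij}/x_j\), \(B\) holds output (allocation) coefficients \(b_{ij} = z_{ij}/x_i\), \(x\) is gross output, \(f\) is final demand and \(v\) is primary inputs; the consistency question is essentially about when the supply coefficients in \(B\) can be treated as stable.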
6

Tezak, Nikolas, Armand Niederberger, Dmitri S. Pavlichin, Gopal Sarma, and Hideo Mabuchi. "Specification of photonic circuits using quantum hardware description language." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 370, no. 1979 (November 28, 2012): 5270–90. http://dx.doi.org/10.1098/rsta.2011.0526.

Abstract:
Following the simple observation that the interconnection of a set of quantum optical input–output devices can be specified using structural mode VHSIC hardware description language, we demonstrate a computer-aided schematic capture workflow for modelling and simulating multi-component photonic circuits. We describe an algorithm for parsing circuit descriptions to derive quantum equations of motion, illustrate our approach using simple examples based on linear and cavity-nonlinear optical components, and demonstrate a computational approach to hierarchical model reduction.
7

Ören, Tuncer. "Coupling concepts for simulation: A systematic and comprehensive view and advantages with declarative models." International Journal of Modeling, Simulation, and Scientific Computing 05, no. 02 (February 25, 2014): 1430001. http://dx.doi.org/10.1142/s1793962314300015.

Abstract:
A brief review of the importance of simulation-based engineering and science (including the social sciences) is followed by a historical perspective on model-based simulation. Section 2 covers declarative modeling of component systems and its advantages for self-documentation and for computer-aided checks and coupling; as an example of declarative modeling, the General System Theory implementor (GEST) is given. In Sec. 3, basic concepts for the coupling of component models and rules for computer-assisted coupling specification are explained. Section 4 is devoted to possible computerized checks in couplings of declarative models, such as: (1) automatic unit checking to avoid meaningless input/output matching at the time of coupling specification, (2) automatic threshold checking to provide warnings and/or to avoid disasters, and (3) automatic unit conversion for convenience in using library models. Section 5 is about several layers of nested couplings for modeling systems of systems. In Sec. 6, two types of variable couplings are discussed: (1) couplings with variable connections (to allow input/output relations of models to depend on time or state conditions) and (2) couplings with variable component models (to allow component (or coupled) models to be switched based on time or state conditions). Section 7 is on the use of multimodels as component models in couplings. Section 8 is on types of inputs and their use in couplings, as well as on external inputs to simulation studies. In Sec. 9, conclusions and future work for complex systems are outlined; in particular, the value of simulation systems engineering, as well as of understanding and avoiding misunderstanding in cognitive and emotive simulations, is stressed. Appendix A lists almost 50 types of couplings, and Appendix B lists over 50 terms related to couplings in modeling and simulation. To show the richness of the "input" concept, which is important in the specification of input/output relations of component models, Appendix C lists almost 150 types of inputs. The information shared in this article may be useful in developing advanced modeling and simulation software, tools, and environments.
8

Gnatenko, Anton Romanovich, and Vladimir Anatolyevich Zakharov. "On the Satisfiability and Model Checking for one Parameterized Extension of Linear-time Temporal Logic." Modeling and Analysis of Information Systems 28, no. 4 (December 18, 2021): 356–71. http://dx.doi.org/10.18255/1818-1015-2021-4-356-371.

Abstract:
Sequential reactive systems are computer programs or hardware devices that process flows of input data or control signals and output streams of instructions or responses. When designing such systems, one needs formal specification languages capable of expressing the relationships between the input and output flows. Previously, we introduced a family of such specification languages based on the temporal logics $LTL$, $CTL$ and $CTL^*$ combined with regular languages. A characteristic feature of these new extensions of conventional temporal logics is that temporal operators and basic predicates are parameterized by regular languages. In our earlier papers, we estimated the expressive power of the new temporal logic $Reg$-$LTL$ and introduced a model checking algorithm for $Reg$-$LTL$, $Reg$-$CTL$, and $Reg$-$CTL^*$. The main issue that remained unclear is the complexity of the decision problems for these logics. In this paper, we give a complete solution to the satisfiability checking and model checking problems for $Reg$-$LTL$ and prove that both problems are PSPACE-complete. The computational hardness of the problems under consideration is easily proved by reducing to them the intersection emptiness problem for families of regular languages. The main result of the paper is an algorithm for reducing the satisfiability checking of $Reg$-$LTL$ formulas to the emptiness problem for Büchi automata of relatively small size, and a description of a technique that allows one to check the emptiness of the obtained automata within space polynomial in the size of the input formulas.
9

Polson, Rudolph A., and C. Richard Shumway. "Structure of South Central Agricultural Production." Journal of Agricultural and Applied Economics 22, no. 2 (December 1990): 153–63. http://dx.doi.org/10.1017/s1074070800001905.

Abstract:
Using a dual economic specification of a multiproduct technology, the structure of agricultural production was tested for five South Central states (Texas, Oklahoma, Arkansas, Mississippi, and Louisiana). A comprehensive set of output supplies and input demands comprised the estimation equations in each state. Evidence of nonjoint production in a subset of commodities was detected in four of the five states. Several commodities also satisfied sufficient conditions for consistent aggregation. However, the specific outputs satisfying each structural property varied by state. Sufficient conditions for consistent geographic aggregation across the states were not satisfied. These results provide empirical guidance and important cautions for legitimately simplifying state-level model specifications of southern agricultural production.
10

Hucka, Michael, Frank T. Bergmann, Andreas Dräger, Stefan Hoops, Sarah M. Keating, Nicolas Le Novère, Chris J. Myers, et al. "Systems Biology Markup Language (SBML) Level 2 Version 5: Structures and Facilities for Model Definitions." Journal of Integrative Bioinformatics 12, no. 2 (June 1, 2015): 731–901. http://dx.doi.org/10.1515/jib-2015-271.

Abstract:
Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.
11

Dwisaputra, Indra, Tareg Mahmoud, Meti Megayanti, Irvan Budiawan, and Pranoto Hidaya R. "Pengaruh Jumlah Input Dan Membership Function Fuzzy Logic Control Pada Robot Keseimbangan Beroda Dua" [The Effect of the Number of Inputs and Membership Functions of Fuzzy Logic Control on a Two-Wheeled Balancing Robot]. Manutech : Jurnal Teknologi Manufaktur 8, no. 02 (May 7, 2019): 19–24. http://dx.doi.org/10.33504/manutech.v8i02.16.

Abstract:
In this paper, a dynamic model of a two-wheeled balancing robot is developed and two types of fuzzy logic controller (FLC) are designed, both using the Mamdani method. The first FLC uses the pendulum tilt angle theta (θ) as its only input and outputs the motor torque required to keep the robot balanced. The second FLC uses two inputs, theta (θ) and the change in theta (θ), with the motor torque again as the output. The plant model and the FLCs are built in Matlab Simulink. The first case uses one input with 5 membership functions (MFs); the second uses two inputs with 5 and 7 MFs. The characteristics and effects of changing the inputs and MFs are simulated in Simulink and compared. Increasing the number of inputs reduces the motor specification required to balance the robot, while increasing the number of MFs improves controller performance, reaching the settling time faster.
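To make the one-input case concrete, the following is a minimal, hypothetical Mamdani-style sketch in Python (triangular membership functions, max-min inference and centroid defuzzification); the universes of discourse, membership functions and rule base used in the paper are not reproduced here and will differ.

import numpy as np

def tri(x, a, b, c):
    # Triangular membership function with feet at a and c and peak at b.
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

# Hypothetical universe of discourse for the output torque (N*m).
torque = np.linspace(-5.0, 5.0, 501)

# Five hypothetical fuzzy labels for the tilt angle theta (rad) and the torque.
theta_sets = {"NB": (-0.4, -0.2, -0.1), "NS": (-0.2, -0.1, 0.0),
              "Z": (-0.05, 0.0, 0.05), "PS": (0.0, 0.1, 0.2),
              "PB": (0.1, 0.2, 0.4)}
torque_sets = {"NB": (-5, -3, -1), "NS": (-2, -1, 0), "Z": (-0.5, 0, 0.5),
               "PS": (0, 1, 2), "PB": (1, 3, 5)}

# One-input rule base: the larger the tilt, the larger the restoring torque.
rules = {"NB": "NB", "NS": "NS", "Z": "Z", "PS": "PS", "PB": "PB"}

def flc_torque(theta_value):
    # Mamdani inference: fuzzify the input, clip each rule's consequent by the
    # rule's firing strength, aggregate with max, defuzzify by centroid.
    aggregated = np.zeros_like(torque)
    for theta_label, torque_label in rules.items():
        firing = tri(theta_value, *theta_sets[theta_label])
        clipped = np.minimum(firing, tri(torque, *torque_sets[torque_label]))
        aggregated = np.maximum(aggregated, clipped)
    if aggregated.sum() == 0.0:
        return 0.0
    return float((torque * aggregated).sum() / aggregated.sum())

print(flc_torque(0.12))  # torque command for a 0.12 rad tilt

The two-input case would add the change in theta as a second antecedent and a two-dimensional rule table (5 and 7 MFs in the paper's second case).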
12

Chan, Kelvin, Raymond Chan, and Mila Nikolova. "A Convex Model for Edge-Histogram Specification with Applications to Edge-Preserving Smoothing." Axioms 7, no. 3 (August 2, 2018): 53. http://dx.doi.org/10.3390/axioms7030053.

Abstract:
The goal of edge-histogram specification is to find an image whose edge image has a histogram that matches a given edge-histogram as closely as possible. Mignotte proposed a non-convex model for the problem in 2012. In his work, edge magnitudes of an input image are first modified by histogram specification to match the given edge-histogram. Then, a non-convex model is minimized to find an output image whose edge-histogram matches the modified edge-histogram. The non-convexity of the model hinders the computations and the inclusion of useful constraints such as the dynamic range constraint. In this paper, instead of considering edge magnitudes, we directly consider the image gradients and propose a convex model based on them. Furthermore, we include additional constraints in our model based on different applications. The convexity of our model allows us to compute the output image efficiently using either the Alternating Direction Method of Multipliers or the Fast Iterative Shrinkage-Thresholding Algorithm. We consider several applications in edge-preserving smoothing, including image abstraction, edge extraction, details exaggeration, and document scan-through removal. Numerical results are given to illustrate that our method successfully produces decent results efficiently.
13

Grobelna, Iwona, and Paweł Szcześniak. "Interpreted Petri Nets Applied to Autonomous Components within Electric Power Systems." Applied Sciences 12, no. 9 (May 9, 2022): 4772. http://dx.doi.org/10.3390/app12094772.

Abstract:
In this article, interpreted Petri nets are applied to the area of power and energy systems. These kinds of nets, equipped with input and output signals for communication with the environment, have so far proved to be useful in the specification of control systems and cyber–physical systems (in particular, the control part), but they have not been used in power systems themselves. Here, interpreted Petri nets are applied to the specification of autonomous parts within power and energy systems. An electric energy storage (EES) system is presented as an application system for the provision of a system service for stabilizing the power of renewable energy sources (RES) or highly variable loads. The control algorithm for the EES is formally written as an interpreted Petri net, allowing it to benefit from existing analysis and verification methods. In particular, essential properties of such specifications can be checked, including, e.g., liveness, safety, reversibility, and determinism. This enables early detection of possible structural errors. The results indicate that interpreted Petri nets can be successfully used to model and analyze autonomous control components within power energy systems.
14

Sun, Tao, and Xinming Ye. "A Model Reduction Method for Parallel Software Testing." Journal of Applied Mathematics 2013 (2013): 1–13. http://dx.doi.org/10.1155/2013/595897.

Abstract:
Modeling and testing parallel software systems is difficult because the number of states and execution sequences grows significantly due to parallel behaviors. In this paper, a model reduction method based on Coloured Petri Nets (CPN) is presented, which generates a functionality-equivalent and trace-equivalent model of smaller scale. Model-based testing of parallel software systems becomes much easier after the model is reduced by this method. Specifically, a formal model of the software system specification is constructed based on CPN. The places in the model are then divided into input places, output places, and internal places, and the transitions into input transitions, output transitions, and internal transitions. Internal places and internal transitions can be reduced if their preconditions match, with additional operations performed to preserve functionality equivalence and trace equivalence. If the place and the transition are in a parallel structure, many execution sequences are removed from the state space. We have proved the equivalence and analyzed the reduction effort, so the same testing result can be obtained with a much lower testing workload. Finally, practical case studies and a performance analysis show that the method is effective.
15

Bochdansky, Alexander B., and Don Deibel. "Consequences of model specification for the determination of gut evacuation rates: redefining the linear model." Canadian Journal of Fisheries and Aquatic Sciences 58, no. 5 (May 1, 2001): 1032–42. http://dx.doi.org/10.1139/f01-041.

Abstract:
The combination of gut contents and evacuation rate is an important tool to determine in situ feeding rates of many organisms. Traditionally used equations of gut evacuation models, however, have made a systematic comparison among models difficult. We changed the notation of the linear gut evacuation model to provide an algorithm compatible with the commonly used exponential model. Using examples from the literature, we demonstrate that prolonged retention of food after a true linear pattern of gut evacuation can be mistaken for an exponential pattern. In many of these examples, use of an exponential gut evacuation model causes overestimation of food consumption by approximately twofold. In situations where the entire gastrointestinal tract is examined instead of the stomach only, the time course of the gut content is best described by a plug-flow reactor for which the "input = output" rule applies. In contrast, the assumption of a "proportional release" must be met rigorously for the exponential model to be valid, which requires more complex feedback mechanisms than the simple plug-flow reactor. The new algorithm for the linear model is also consistent with the empirical observation that the gut evacuation rate is proportional to the initial gut content.
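For orientation (generic forms, not the paper's exact notation), the two competing descriptions of gut content \(S(t)\) after feeding stops are
\[
S(t) = S_0 - Rt \quad \text{(linear)} \qquad \text{and} \qquad S(t) = S_0\, e^{-kt} \quad \text{(exponential)},
\]
so evacuation proceeds at a constant rate \(R\) in the linear case but in proportion to the remaining content, \(kS(t)\), in the exponential case; fitting the wrong form is what produces the roughly twofold differences in estimated consumption described above.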
16

Chen, Ley, S. Askarian, M. Mohammadzaheri, and F. Samadi. "Simulation and Experimental Study of Inverse Heat Conduction Problem." Advanced Materials Research 233-235 (May 2011): 2820–23. http://dx.doi.org/10.4028/www.scientific.net/amr.233-235.2820.

Abstract:
In this paper, a neural network method is proposed to solve a one-dimensional inverse heat conduction problem (IHCP). The method relies on input/output data of an unknown system to create an intelligent neural network model. Multilayer perceptrons with recurrent properties are utilised in the model, and prepared input/output data are used to train the network. Reliable checking processes are also offered to justify the robustness of the method. A numerical sequential function specification (SFS) method is used as another technique to solve the IHCP. The numerical result is compared with that of the proposed method, and good agreement is shown between the two methods. However, the numerical method can only be used to solve the IHCP off-line due to its high computational requirements, whereas the proposed neural network method can be used in real-time situations, as shown in the experimental tests.
17

Hucka, Michael, Frank T. Bergmann, Stefan Hoops, Sarah M. Keating, Sven Sahle, James C. Schaff, Lucian P. Smith, and Darren J. Wilkinson. "The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core." Journal of Integrative Bioinformatics 12, no. 2 (June 1, 2015): 382–549. http://dx.doi.org/10.1515/jib-2015-266.

Abstract:
Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.
18

Jankowski-Mihułowicz, Piotr, Wojciech Lichoń, and Mariusz Węglarski. "Numerical Model of Directional Radiation Pattern Based on Primary Antenna Parameters." International Journal of Electronics and Telecommunications 61, no. 2 (June 1, 2015): 191–97. http://dx.doi.org/10.1515/eletel-2015-0025.

Abstract:
A new numerical model of a directional radiation pattern, in which part of the energy is emitted in side and back lobes, is presented in this paper. The model input values are determined on the basis of primary parameters that can be read from the datasheets of the antennas used. A dedicated software tool, NmAntPat, has been developed to carry out the described task. The model, program and output files can easily be incorporated into analyses of radio-wave propagation in any algorithm or numerical calculation. A comparison of the graphical plots obtained from measurements, manufacturers' data specification notes and modelling results confirms the correctness of the model.
19

Java, Oskars. "THE SPECIFICATION OF HYDROLOGICAL MODEL REQUIREMENTS FOR BOG RESTORATION." SOCIETY. TECHNOLOGY. SOLUTIONS. Proceedings of the International Scientific Conference 1 (April 17, 2019): 22. http://dx.doi.org/10.35363/via.sts.2019.18.

Abstract:
INTRODUCTION. Within the scope of biodiversity and sustainable ecosystem development, the restoration of bog ecosystems is important because reducing the drainage effect on a bog lowers the negative impact on adjacent intact or relatively intact raised bogs and other wetland hydrological regimes. Degraded bogs are mires with a disturbed natural hydrological regime, or those partly exploited for peat extraction; however, the hydrological regime can be restored, and peat formation is expected within 30 years. The restoration of a bog's hydrological regime can be accelerated by filling up the drainage ditches. In researching the scientific literature, the author has found no evidence of a system dynamics model developed to simulate tree cutting intensity in degraded bogs after filling the drainage ditches in order to speed up the restoration of the hydrological regime, so this approach is an innovative way of solving the problem. Bog hydrological systems are complex systems with many components, thus an interdisciplinary approach must be applied which combines hydrology, biology, geography and meteorology with computer science. Requirements specification techniques are a useful tool for determining the elements that shape a bog's hydrological system and interact with each other, thus providing the design for a simulation model.

MATERIAL AND METHODS. In the opinion of the author, the most suitable requirements specification tool for determining the components forming the bog hydrological system is Object-Oriented Analysis and Design (OOAD), because it is applicable both in system dynamics and in object modelling systems. Based on OOAD, it will be possible to build system dynamics models in the STELLA system dynamics environment and in the GEOframe NewAGE modelling system, which is based on an object modelling system framework. OOAD principles are fundamentally based on real-world objects (Powell-Morse, 2017) - in this case, the elements forming a bog's hydrological system. OOAD combines all behaviours, characteristics and states into one analysis process, rather than splitting them up into separate stages, as many other methodologies would do (Powell-Morse, 2017). OOAD can be divided into two parts: Object-Oriented Analysis (OOA) and Object-Oriented Design (OOD). The products of OOA serve as models from which we may start an object-oriented design; the products of OOD can then be used as blueprints for completely implementing a system using object-oriented programming methods (Booch, 1998). In the study of the boundaries of the bog hydrological model, theoretical methods such as case study and content analysis were mainly used - specifically evaluative, explorative and instrumental review methods.

RESULTS. This study helped to understand the complex interrelationships that exist between different elements within a bog's hydrological system. The bog hydrological system boundaries were clarified, and the simulation model specification requirements were determined.

DISCUSSION. The next step is to develop simulation models in STELLA and in the GEOframe NewAGE modelling system and compare their performance. These simulation models will represent water movement in a bog's hydrological system, from water input by precipitation to water output through interception, sublimation, evaporation, transpiration, lake outflow and overland flow. The input data will be loaded manually from the QGIS Open Source Geographic Information System and Excel databases. It will be possible to generate output data in the form of frequency tables, graphical analyses, review tables, GIS raster files and others.

CONCLUSION. The determination of tree thinning intensity in degraded bogs using modelling is a new, innovative approach which should allow the water level of ecosystems to be restored faster and more efficiently, thus increasing natural diversity, improving the quality of life of local people and promoting the recreational value of bogs.
20

Busso, Thierry, and Luc Thomas. "Using Mathematical Modeling in Training Planning." International Journal of Sports Physiology and Performance 1, no. 4 (December 2006): 400–405. http://dx.doi.org/10.1123/ijspp.1.4.400.

Abstract:
This report aims to discuss the strengths and weaknesses of the application of systems modeling to analyze the effects of training on performance. The simplifications inherent to the modeling approach are outlined to question the relevance of the models to predict athletes’ responses to training. These simplifications include the selection of the variables assigned to the system’s input and output, the specification of model structure, the collection of data to estimate the model parameters, and the use of identified models and parameters to predict responses. Despite the gain in insight to understand the effects of an intensification or reduction of training, the existing models would not be accurate enough to make predictions for a particular athlete in order to monitor his or her training.
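A classical example of the kind of system model under discussion (shown here only as an illustration, not as the authors' recommended specification) is the two-component impulse-response model, in which modelled performance \(p(t)\) responds to the series of training loads \(w(s)\):
\[
p(t) = p_0 + k_1 \sum_{s<t} w(s)\, e^{-(t-s)/\tau_1} - k_2 \sum_{s<t} w(s)\, e^{-(t-s)/\tau_2},
\]
with a slowly decaying fitness term (gain \(k_1\), time constant \(\tau_1\)) and a faster-decaying fatigue term (gain \(k_2\), time constant \(\tau_2 < \tau_1\)); the parameters must be identified from each athlete's own training and performance data, which is precisely where the limitations discussed above arise.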
21

Gnatenko, Anton Romanovich, and Vladimir Anatolyevich Zakharov. "On the Model Checking Problem for Some Extension of CTL*." Modeling and Analysis of Information Systems 27, no. 4 (December 20, 2020): 428–41. http://dx.doi.org/10.18255/1818-1015-2020-4-428-441.

Abstract:
Sequential reactive systems include programs and devices that work with two streams of data and convert input streams of data into output streams. Such information processing systems include controllers, device drivers, and computer interpreters. The results of the operation of such computing systems are infinite sequences of pairs of events of the request-response type, and, therefore, finite transducers are most often used as formal models for them. The behavior of transducers is represented by binary relations on infinite sequences, and so traditional applied temporal logics (like HML, LTL, CTL, mu-calculus) are poorly suited as specification languages, since omega-languages, not binary relations on omega-words, are used for the interpretation of their formulae. To provide temporal logics with the ability to define properties of transformations that characterize the behavior of reactive systems, we introduced new extensions of these logics, which have two distinctive features: 1) temporal operators are parameterized, and languages in the input alphabet of transducers are used as parameters; 2) languages in the output alphabet of transducers are used as basic predicates. Previously, we studied the expressive power of the new extensions Reg-LTL and Reg-CTL of the well-known temporal logics of linear and branching time, LTL and CTL, in which only regular languages were allowed for the parameterization of temporal operators and basic predicates. We discovered that such a parameterization increases the expressive capabilities of temporal logic but preserves the decidability of the model checking problem. For the logics mentioned above, we have developed algorithms for the verification of finite transducers. At the next stage of our research on the new extensions of temporal logic designed for the specification and verification of sequential reactive systems, we studied the verification problem for these systems using the temporal logic Reg-CTL*, which is an extension of the generalized computation tree logic CTL*. In this paper we present an algorithm for checking the satisfiability of Reg-CTL* formulae on models of finite state transducers and show that this problem belongs to the complexity class ExpSpace.
22

Konstantinidis, Stavros, Nelma Moreira, and Rogério Reis. "Randomized generation of error control codes with automata and transducers." RAIRO - Theoretical Informatics and Applications 52, no. 2-3-4 (April 2018): 169–84. http://dx.doi.org/10.1051/ita/2018015.

Abstract:
We introduce the concept of an -maximal error-detecting block code, for some parameter in (0,1), in order to formalize the situation where a block code is close to maximal with respect to being error-detecting. Our motivation for this is that it is computationally hard to decide whether an error-detecting block code is maximal. We present an output-polynomial time randomized algorithm that takes as input two positive integers N, ℓ and a specification of the errors permitted in some application, and generates an error-detecting, or error-correcting, block code of length ℓ that is 99%-maximal, or contains N words with a high likelihood. We model error specifications as (nondeterministic) transducers, which allow one to represent any rational combination of substitution and synchronization errors. We also present some elements of our implementation of various error-detecting properties and their associated methods. Then, we show several tests of the implemented randomized algorithm on various error specifications. A methodological contribution is the presentation of how various desirable error combinations can be expressed formally and processed algorithmically.
23

Unno, Hiroshi, Naoshi Tabuchi, and Naoki Kobayashi. "Verification of tree-processing programs via higher-order mode checking." Mathematical Structures in Computer Science 25, no. 4 (November 10, 2014): 841–66. http://dx.doi.org/10.1017/s0960129513000054.

Abstract:
We propose a new method to verify that a higher-order, tree-processing functional program conforms to an input/output specification. Our method reduces the verification problem to multiple verification problems for higher-order multi-tree transducers, which are then transformed into higher-order recursion schemes and model-checked. Unlike previous methods, our new method can deal with arbitrary higher-order functional programs manipulating algebraic data structures, as long as certain invariants on intermediate data structures are provided by a programmer. We have proved the soundness of the method and implemented a prototype verifier.
24

Shan, Yan, Mingbin Huang, Paul Harris, and Lianhai Wu. "A Sensitivity Analysis of the SPACSYS Model." Agriculture 11, no. 7 (July 3, 2021): 624. http://dx.doi.org/10.3390/agriculture11070624.

Abstract:
A sensitivity analysis is critical for determining the relative importance of model parameters in terms of their influence on the simulated outputs of a process-based model. In this study, a sensitivity analysis of the SPACSYS model, first published in Ecological Modelling (Wu, et al., 2007), was conducted with respect to changes in 61 input parameters and their influence on 27 output variables. The analysis was conducted in a 'one at a time' manner and objectively assessed through a single statistical diagnostic (normalized root mean square deviation), which ranked parameters according to their influence on each output variable in turn. A winter wheat field experiment provided the case study data. Two sets of weather elements representing different climatic conditions and four different soil types were specified; the results indicated that these specifications had little influence on the identification of the most sensitive parameters. Soil conditions and management were found to affect the ranking of parameter sensitivities more strongly than weather conditions for the selected outputs. Parameters related to drainage were strongly influential for simulations of soil water dynamics, wheat yield and biomass, runoff, and leaching from soil during individual and consecutive growing years. Wheat yield and biomass simulations were sensitive to the 'ammonium immobilised fraction' parameter related to soil mineralization and immobilisation. Simulations of CO2 release from the soil and of soil nutrient pool changes were most sensitive to external nutrient inputs and to the processes of denitrification, mineralization, and decomposition. This study provides important evidence of which SPACSYS parameters require the most care in their specification. Moving forward, this evidence can help direct efficient sampling and lab analyses for increased accuracy of such parameters. The results provide a useful reference for model users on which parameters are most influential for different simulation goals, which in turn supports better informed decision making for farmers and government policy alike.
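For reference, one common form of the diagnostic (the paper's exact normalisation may differ) compares a baseline output series \(y_t\) with the series \(\tilde{y}_t\) obtained after perturbing a single parameter:
\[
\mathrm{NRMSD} = \frac{1}{\bar{y}} \sqrt{\frac{1}{n} \sum_{t=1}^{n} (\tilde{y}_t - y_t)^2},
\]
so larger values indicate that the perturbed parameter exerts a stronger influence on that output variable.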
25

Brody, Dorje C. "Modelling election dynamics and the impact of disinformation." Information Geometry 2, no. 2 (October 7, 2019): 209–30. http://dx.doi.org/10.1007/s41884-019-00021-2.

Abstract:
Complex dynamical systems driven by the unravelling of information can be modelled effectively by treating the underlying flow of information as the model input; the complicated dynamical behaviour of the system is then derived as an output. Such an information-based approach is in sharp contrast to the conventional mathematical modelling of information-driven systems, whereby one attempts to come up with essentially ad hoc models for the outputs. Here, the dynamics of electoral competition is modelled by specifying the flow of information relevant to the election. The seemingly random evolution of the election poll statistics is then derived as a model output, which in turn is used to study election prediction, the impact of disinformation, and the optimal strategy for information management in an election campaign.
26

Ivester, R., and K. Danai. "Automatic Tuning and Regulation of Injection Molding by the Virtual Search Method." Journal of Manufacturing Science and Engineering 120, no. 2 (May 1, 1998): 323–29. http://dx.doi.org/10.1115/1.2830130.

Abstract:
Methodical specification of process inputs for injection molding is hindered by the absence of accurate analytical models. For these processes, the input variables are assigned either by trial and error, based on heuristic knowledge of an experienced operator, or by statistical Design of Experiments (DOE) methods which construct a comprehensive empirical model between the inputs and part quality attributes. In this paper, an iterative method of input selection (tuning) referred to as the Virtual Search Method (VSM) is introduced that conducts most of the search for appropriate machine inputs in a ‘virtual’ environment provided by an approximate input-output (I-O) model. VSM applies the inputs to the process only when it has exhausted the search based on the current I-O model. It evaluates the quality of inputs from the search and updates the I-O model for the next round of search based on measurements of part quality attributes (e.g., size tolerances and surface integrity) after each process iteration. According to this strategy, VSM updates the model only when needed, and thus selectively develops the model as required for tuning the process. This approach has been shown to lead to shorter tuning sessions than required by DOE methods.
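A rough sketch of the tuning loop described above, with a hypothetical process and a simple linear surrogate standing in for the approximate I-O model (the actual VSM search and model-update rules are more elaborate):

import numpy as np

rng = np.random.default_rng(0)

def process(u):
    # Stand-in for the real injection-moulding machine: maps machine inputs u
    # to measured part-quality attributes. Its form is unknown to the tuner.
    return np.array([1.5 * u[0] - 0.3 * u[1] ** 2 + 0.05,
                     0.4 * u[0] * u[1] + 0.8 * u[1]]) + rng.normal(0.0, 0.01, 2)

target = np.array([1.0, 0.5])  # desired part-quality attributes
candidates = np.array(np.meshgrid(np.linspace(0.0, 2.0, 41),
                                  np.linspace(0.0, 2.0, 41))).reshape(2, -1).T

# A few exploratory trials on the real process to seed the approximate model.
U = [np.array([0.5, 0.5]), np.array([1.5, 1.0]), np.array([1.0, 1.5])]
Y = [process(u) for u in U]

for iteration in range(10):
    # Fit the approximate input-output model y ~ c + B u from all trials so far.
    X = np.hstack([np.ones((len(U), 1)), np.array(U)])
    coef, *_ = np.linalg.lstsq(X, np.array(Y), rcond=None)
    # "Virtual" search: exhaust the candidate grid using the model only.
    pred = np.hstack([np.ones((len(candidates), 1)), candidates]) @ coef
    u_next = candidates[np.argmin(np.linalg.norm(pred - target, axis=1))]
    # Only now apply the chosen inputs to the real process and measure quality.
    y_next = process(u_next)
    U.append(u_next)
    Y.append(y_next)
    if np.linalg.norm(y_next - target) < 0.02:  # within tolerance: stop tuning
        break

print("chosen inputs:", U[-1], "measured quality:", Y[-1])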
27

Duan, Xiaofang, Jinhe Zhang, Ping Sun, Honglei Zhang, Chang Wang, Ya-Yen Sun, Manfred Lenzen, Arunima Malik, Shanshan Cao, and Yue Kan. "Carbon Emissions of the Tourism Telecoupling System: Theoretical Framework, Model Specification and Synthesis Effects." International Journal of Environmental Research and Public Health 19, no. 10 (May 14, 2022): 5984. http://dx.doi.org/10.3390/ijerph19105984.

Abstract:
The flows of people and material attributed to international tourism exert a major impact on the global environment, and tourism carbon emissions are the main indicator in this context. However, previous studies have focused on estimating the emissions of destinations, ignoring the embodied emissions in tourists' origins and other areas. This study provides a comprehensive framework of a tourism telecoupling system. Taking China's international tourism as an example, we estimate the carbon emissions of its tourism telecoupling system based on the Tourism Satellite Account and an input–output model. We find that (1) the proposed tourism telecoupling system provides a new perspective for analyzing the carbon emissions of a tourism system; the sending system (origins) and the indirect spillover system (resource suppliers) have been ignored in previous studies. (2) In the telecoupling system of China's international tourism, the emission reduction effect of the sending system is significant. (3) The spatial transfer effects of environmental responsibility in the direct spillover system (transit) and the indirect spillover system are remarkable. (4) A large carbon trade is embodied in international tourism. This study draws attention to the carbon emissions of tourists' origins and to the carbon trade embodied in tourism flows.
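The accounting behind such estimates is the standard environmentally extended input–output relation (stated generically here, not as the paper's exact model),
\[
E = \hat{e}\,(I - A)^{-1} f,
\]
where \(\hat{e}\) is the diagonal matrix of sectoral emission intensities (emissions per unit of output), \((I - A)^{-1}\) is the Leontief inverse and \(f\) is the final demand vector associated with tourism, for instance as delimited by a Tourism Satellite Account; attributing \(E\) to the sending, receiving and spillover systems is what the telecoupling framework adds.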
28

JEUSFELD, MANFRED A., and UWE A. JOHNEN. "AN EXECUTABLE META MODEL FOR RE-ENGINEERING OF DATABASE SCHEMAS." International Journal of Cooperative Information Systems 04, no. 02n03 (June 1995): 237–58. http://dx.doi.org/10.1142/s021884309500010x.

Abstract:
A logical database schema, e.g. a relational one, is the implementation of a specification, e.g. an entity-relationship diagram. Upcoming new data models require a cost-effective method for mapping from one data model to the other. We present an approach where the mapping process is divided into three parts. The first part reformulates the source and target data models into a so-called meta model. The second part classifies the input schema into the meta model, yielding a data model-independent representation. The third part synthesizes the output schema in terms of the target data model. The meta model, the data models as well as the schemas are all represented in the logic-based formalism of O-Telos. Its ability to quantify across data model concepts is the key to classifying schema elements independently of their data model. A prototype has been implemented on top of the deductive object base manager ConceptBase for the mapping of relational schemas to entity-relationship diagrams. From this, a C++-based tool has been derived as part of a commercial CASE environment for database applications.
29

Christen, Marc, Perry Bartelt, and Julia Kowalski. "Back calculation of the In den Arelen avalanche with RAMMS: interpretation of model results." Annals of Glaciology 51, no. 54 (2010): 161–68. http://dx.doi.org/10.3189/172756410791386553.

Abstract:
Two- and three-dimensional avalanche dynamics models are being increasingly used in hazard-mitigation studies. These models can provide improved and more accurate results for hazard mapping than the simple one-dimensional models presently used in practice. However, two- and three-dimensional models generate an extensive amount of output data, making the interpretation of simulation results more difficult. To perform a simulation in three-dimensional terrain, numerical models require a digital elevation model, specification of avalanche release areas (spatial extent and volume), selection of solution methods, finding an adequate calculation resolution and, finally, the choice of friction parameters. In this paper, the importance and difficulty of correctly setting up and analysing the results of a numerical avalanche dynamics simulation is discussed. We apply the two-dimensional simulation program RAMMS to the 1968 extreme avalanche event In den Arelen. We show the effect of model input variations on simulation results and the dangers and complexities in their interpretation.
30

Sziray, József. "A Test Model for Hardware and Software Systems." Journal of Advanced Computational Intelligence and Intelligent Informatics 8, no. 5 (September 20, 2004): 523–29. http://dx.doi.org/10.20965/jaciii.2004.p0523.

Abstract:
The paper is concerned with the general aspects of testing complex hardware and software systems. First, a mapping scheme serving as a test model is presented for an arbitrarily given system. This scheme describes the one-to-one correspondence between the input and output domains of the system, in which the test inputs and fault classes are also involved. The presented test model incorporates both the verification and the validation schemes for hardware and software. The significance of the model is that it facilitates a clear differentiation between verification and validation tests, which is important and useful in the process of test design and evaluation. On the other hand, the model provides a clear overview of the various test sets and their purposes, which helps in organizing and applying them. The second part of the paper examines the case when the hardware and software are designed using formal specification. Here the consequences and problems of formal methods, and their impacts on verification and validation, are discussed.
31

Banda, Peter, Christof Teuscher, and Matthew R. Lakin. "Online Learning in a Chemical Perceptron." Artificial Life 19, no. 2 (April 2013): 195–219. http://dx.doi.org/10.1162/artl_a_00105.

Abstract:
Autonomous learning implemented purely by means of a synthetic chemical system has not been previously realized. Learning promotes reusability and minimizes the system design to simple input-output specification. In this article we introduce a chemical perceptron, the first full-featured implementation of a perceptron in an artificial (simulated) chemistry. A perceptron is the simplest system capable of learning, inspired by the functioning of a biological neuron. Our artificial chemistry is deterministic and discrete-time, and follows Michaelis-Menten kinetics. We present two models, the weight-loop perceptron and the weight-race perceptron, which represent two possible strategies for a chemical implementation of linear integration and threshold. Both chemical perceptrons can successfully identify all 14 linearly separable two-input logic functions and maintain high robustness against rate-constant perturbations. We suggest that DNA strand displacement could, in principle, provide an implementation substrate for our model, allowing the chemical perceptron to perform reusable, programmable, and adaptable wet biochemical computing.
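For context, Michaelis-Menten kinetics (textbook form, not the article's specific reaction network) gives the rate of an enzyme-catalysed reaction as
\[
v = \frac{V_{\max}\,[S]}{K_M + [S]},
\]
so a chemical perceptron built on such reactions performs its weighted integration and thresholding through concentration-dependent reaction rates rather than through explicit arithmetic.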
32

Hoffman, Lesa. "On the Interpretation of Parameters in Multivariate Multilevel Models Across Different Combinations of Model Specification and Estimation." Advances in Methods and Practices in Psychological Science 2, no. 3 (August 2, 2019): 288–311. http://dx.doi.org/10.1177/2515245919842770.

Abstract:
The increasing availability of software with which to estimate multivariate multilevel models (also called multilevel structural equation models) makes it easier than ever before to leverage these powerful techniques to answer research questions at multiple levels of analysis simultaneously. However, interpretation can be tricky given that different choices for centering model predictors can lead to different versions of what appear to be the same parameters; this is especially the case when the predictors are latent variables created through model-estimated variance components. A further complication is a recent change to Mplus (Version 8.1), a popular software program for estimating multivariate multilevel models, in which the selection of Bayesian estimation instead of maximum likelihood results in different lower-level predictors when random slopes are requested. This article provides a detailed explication of how the parameters of multilevel models differ as a function of the analyst’s decisions regarding centering and the form of lower-level predictors (i.e., observed or latent), the method of estimation, and the variant of program syntax used. After explaining how different methods of centering lower-level observed predictor variables result in different higher-level effects within univariate multilevel models, this article uses simulated data to demonstrate how these same concepts apply in specifying multivariate multilevel models with latent lower-level predictor variables. Complete data, input, and output files for all of the example models have been made available online to further aid readers in accurately translating these central tenets of multivariate multilevel modeling into practice.
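To illustrate the centering issue in its simplest form (a generic two-level example, not one of the article's models), a lower-level predictor \(x_{ij}\) can be split into within- and between-cluster parts,
\[
x_{ij} = (x_{ij} - \bar{x}_{\cdot j}) + \bar{x}_{\cdot j}, \qquad
y_{ij} = \gamma_{00} + \beta_W\,(x_{ij} - \bar{x}_{\cdot j}) + \beta_B\,\bar{x}_{\cdot j} + u_{0j} + e_{ij},
\]
whereas entering the uncentered \(x_{ij}\) alongside \(\bar{x}_{\cdot j}\) turns the coefficient on \(\bar{x}_{\cdot j}\) into a contextual effect (the between effect minus the within effect); latent-centering approaches replace the observed cluster mean \(\bar{x}_{\cdot j}\) with a model-estimated one, which is exactly where the software- and estimator-dependent differences described above arise.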
33

Firlej, M., and W. Kresse. "JAVA-LIBRARY FOR THE ACCESS, STORAGE AND EDITING OF CALIBRATION METADATA OF OPTICAL SENSORS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 2, 2016): 3–7. http://dx.doi.org/10.5194/isprs-archives-xli-b1-3-2016.

Abstract:
The standardization of the calibration of optical sensors in photogrammetry and remote sensing has been discussed for more than a decade. Projects of the German DGPF and the European EuroSDR led to the abstract International Technical Specification ISO/TS 19159-1:2014 “Calibration and validation of remote sensing imagery sensors and data – Part 1: Optical sensors”. This article presents the first software interface for read- and write-access to all metadata elements standardized in ISO/TS 19159-1. The interface is based on an xml-schema that was automatically derived by ShapeChange from the UML model of the Specification. The software interface serves two cases. First, the more than 300 standardized metadata elements are stored individually according to the xml-schema. Second, camera manufacturers use many administrative data that are not part of ISO/TS 19159-1. The new software interface provides a mechanism for the input, storage, editing, and output of both types of data. Finally, an output channel towards a usual calibration protocol is provided. The interface is written in Java. The article also addresses observations made when analysing ISO/TS 19159-1 and compiles a list of proposals for maturing the document, i.e. for an updated version of the Specification.
34

Klein, Lawrence R. "Financial Options for Economic Development (The Quaid-i-Azam Lecture)." Pakistan Development Review 30, no. 4I (December 1, 1991): 369–93. http://dx.doi.org/10.30541/v30i4ipp.369-393.

Abstract:
As a model builder, I feel comfortable in analyzing economic development through the construction and use of 2-gap mathematical-statistical models. This serves as a paradigm for the modelling of developing countries. All systems have a core, and although analysis of developing economies must take many interrelated processes into account simultaneously, the more complex systems can usually be reduced to a simplified core of broad macroeconomic relationships. The 2-gap model is, of course, only a starting point because the analysis must deal with such sectors as demographics, family budgets, and the formation of market prices - possibly only relative or real prices. Such a system looks at the economic development issues in physical terms, with some real (relative) prices for allocation theory. A great deal of interesting material can be prepared along these lines for guidance in the development process. The building blocks are: (i) Production functions for introducing technological constraints, perhaps extended to include an input-output component; (ii) Conditions of marginal productivity, i.e., optimality in reaching production decisions both for output and input; (iii) Population dynamics and more general demographic processes extending to labour supply, immigration, emigration, and distribution of income/wealth; (iv) The conditions for consumer choice, generating ultimately large-scale demand systems, starting with family budget analysis. As in the case of production analysis, optimality decisions guide model specification; and (v) Trade systems showing how exportable surpluses are created and offset
36

Lawrence, A. J., and C. J. Harris. "Auto-Tuning of Low-Order Controllers by Direct Manipulation of Closed-Loop Time Domain Measures." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 210, no. 1 (February 1996): 13–30. http://dx.doi.org/10.1243/pime_proc_1996_210_433_02.

Full text
Abstract:
Many controller tuners are based on linear models of both the controller and the process. Desired performance is often predetermined or adjusted in a manner that is not directly related to the desired response. All physical processes contain non-linearities, commonly of the actuator-saturating type, and many controllers contain heuristics for implementation in real systems, such as anti-integral wind-up in PID (proportional-integral-derivative) controllers. For different processes a range of closed-loop response shapes are desired, often described by features of the response such as rise time, overshoot and settling time. This paper investigates the possibility of basing controller tuning on closed-loop system response data such that desired performance is incorporated directly in terms of familiar time domain features or labels, thus eliminating the need for a mathematical process model and repeated tuning reformulations to achieve the desired performance. A controller tuning method named label-based neuro-tuning (LBNT) is developed and analysed by application to PID controller tuning for process models indicative of real process behaviour. Simulations and numerical investigation indicate that LBNT is a viable technique for the tuning of low-order SISO (single-input, single-output) controllers. Tuning is straightforward, flexible and copes well with process parametric changes and performance specification reformulation. The drawbacks are a complicated pretune phase, a limited selection of suitable labels and a difficulty in defining general classes of tuning problems for its application. The technique is not based on the assumption of process linearity, but due to the inability to characterize classes of input signals and operating points the types of process non-linearity are restricted. The controller may be non-linear, but must be structurally predetermined, and an input/output process model of arbitrary structure is required.
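A minimal sketch of extracting the time-domain "labels" the paper works with, assuming a sampled unit-step response and a conventional 2% settling band (the thresholds and the second-order test signal are my choices, not the authors'):

```python
import numpy as np

def step_response_labels(t, y, settle_band=0.02):
    """Rise time (10-90%), percent overshoot and settling time of a step response."""
    y_final = y[-1]                      # assume the response has settled by the end
    # Rise time: first crossing of 10% to first crossing of 90% of the final value
    t10 = t[np.argmax(y >= 0.1 * y_final)]
    t90 = t[np.argmax(y >= 0.9 * y_final)]
    rise_time = t90 - t10
    # Overshoot as a percentage of the final value
    overshoot = max(0.0, (y.max() - y_final) / y_final * 100.0)
    # Settling time: last instant the response lies outside the +/- settle_band tube
    outside = np.abs(y - y_final) > settle_band * abs(y_final)
    settling_time = t[np.where(outside)[0][-1]] if outside.any() else t[0]
    return rise_time, overshoot, settling_time

# Example: sampled second-order underdamped step response
t = np.linspace(0, 10, 2001)
wn, zeta = 2.0, 0.3
wd = wn * np.sqrt(1 - zeta**2)
y = 1 - np.exp(-zeta * wn * t) * (np.cos(wd * t) + zeta / np.sqrt(1 - zeta**2) * np.sin(wd * t))
print(step_response_labels(t, y))
```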
APA, Harvard, Vancouver, ISO, and other styles
37

Bădin, Luiza, and Léopold Simar. "A BIAS-CORRECTED NONPARAMETRIC ENVELOPMENT ESTIMATOR OF FRONTIERS." Econometric Theory 25, no. 5 (October 2009): 1289–318. http://dx.doi.org/10.1017/s0266466609090513.

Full text
Abstract:
In efficiency analysis, the production frontier is defined as the set of the most efficient alternatives among all possible combinations in the input-output space. The nonparametric envelopment estimators rely on the assumption that all the observations fall on the same side of the frontier. The free disposal hull (FDH) estimator of the attainable set is the smallest free disposal set covering all the observations. By construction, the FDH estimator is an inward-biased estimator of the frontier. The univariate extreme values representation of the FDH allows us to derive a bias-corrected estimator for the frontier. The presentation is based on a probabilistic formulation where the input-output pairs are realizations of independent random variables drawn from a joint distribution whose support is the production set. The bias-corrected estimator shares the asymptotic properties of the FDH estimator. But in finite samples, Monte Carlo experiments indicate that our bias-corrected estimator reduces significantly not only the bias of the FDH estimator but also its mean squared error, with no computational cost. The method is also illustrated with a real data example. A comparison with the parametric stochastic frontier indicates that the parametric approach can easily fail under wrong specification of the model.
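For concreteness, here is a minimal sketch of the output-oriented, single-output FDH frontier estimate that the bias correction starts from; the variable names and the single-output restriction are mine, not the authors':

```python
import numpy as np

def fdh_frontier(X, y, x0):
    """FDH estimate of the maximal attainable output at input level x0.

    X : (n, p) observed input vectors, y : (n,) observed outputs.
    The FDH frontier at x0 is the largest observed output among units whose
    inputs are all dominated by x0 (free disposability of inputs and outputs).
    """
    dominated = np.all(X <= x0, axis=1)          # units attainable at input level x0
    if not dominated.any():
        return np.nan                            # x0 lies below every observation
    return y[dominated].max()

# Tiny example with two inputs and one output
X = np.array([[2.0, 1.0], [3.0, 2.0], [5.0, 4.0]])
y = np.array([1.5, 2.0, 3.5])
print(fdh_frontier(X, y, np.array([4.0, 3.0])))  # -> 2.0
```

Because this envelope hugs the observations from the inside, it is inward-biased — which is exactly what the bias-corrected estimator in the paper addresses.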
APA, Harvard, Vancouver, ISO, and other styles
38

Anderson, Robert, Zhou Wei, Ian Cox, Malcolm Moore, and Florence Kussener. "Monte Carlo Simulation Experiments for Engineering Optimisation." Studies in Engineering and Technology 2, no. 1 (July 22, 2015): 97. http://dx.doi.org/10.11114/set.v2i1.901.

Full text
Abstract:
Design of Experiments (DoE) is widely used in design, manufacturing and quality management. The resulting data is usually analysed with multiple linear regression to generate polynomial equations that describe the relationship between process inputs and outputs. These equations enable us to understand how input values affect the predicted value of one or more outputs and find good set points for the inputs. However, to develop robust manufacturing processes, we also need to understand how variation in these inputs appears as variation in the output. This understanding allows us to define set points and control tolerances for the inputs that will keep the outputs within their required specification windows. Tolerance analysis provides a powerful way of finding input settings and ranges that minimise output variation to produce a process that is robust. In many practical applications, tolerance analysis exploits Monte Carlo simulation of the polynomial model generated from DoEs. This paper briefly describes tolerance analysis and then shows how Monte Carlo simulation experiments using space-filling designs can be used to find the input settings that result in a robust process. Using this approach, engineers can quickly and easily identify the key inputs responsible for transferring undesired variation to their process outputs and identify the set points and ranges that make their process as robust as possible. If the process is not sufficiently robust, they can rationally investigate different strategies to improve it. A case study approach is used to aid explanation and understanding.
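A minimal sketch of the Monte Carlo step described here, assuming a fitted quadratic DoE model for one output and normally distributed input variation; the coefficients, set points, tolerances and specification window below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def doe_model(x1, x2):
    """Hypothetical polynomial fitted from a DoE (coefficients are illustrative)."""
    return 50 + 4.0 * x1 - 2.5 * x2 + 1.2 * x1 * x2 - 0.8 * x1**2

# Input set points and tolerances (standard deviations) under study
setpoints = {"x1": 1.0, "x2": 2.0}
sigmas = {"x1": 0.05, "x2": 0.10}
lsl, usl = 48.0, 56.0          # required specification window for the output

# Propagate input variation through the polynomial model
n = 100_000
x1 = rng.normal(setpoints["x1"], sigmas["x1"], n)
x2 = rng.normal(setpoints["x2"], sigmas["x2"], n)
y = doe_model(x1, x2)

print("output mean/std:", y.mean(), y.std())
print("fraction within spec:", np.mean((y >= lsl) & (y <= usl)))
```

Repeating this simulation over a space-filling grid of candidate set points and tolerances is what lets the engineer pick the combination that keeps the output most reliably inside the specification window.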
APA, Harvard, Vancouver, ISO, and other styles
39

Adjir, Noureddine, Pierre de Saqui-Sannes, and Kamel Mustapha Rahmouni. "Conformance Testing of Preemptive Real-Time Systems." International Journal of Embedded and Real-Time Communication Systems 4, no. 4 (October 2013): 1–26. http://dx.doi.org/10.4018/ijertcs.2013100101.

Full text
Abstract:
The paper presents an approach for model-based black-box conformance testing of preemptive real-time systems using Labeled Prioritized Time Petri Nets with Stopwatches (LPrSwTPN). These models specify not only system/environment interactions and time constraints but also suspend/resume operations in real-time systems. The test specification used to generate test primitives, to check the correctness of system responses and to draw test verdicts is an LPrSwTPN made up of two concurrent sub-nets that respectively specify the system under test and its environment. The algorithms used in the TINA model analyzer have been extended to support concurrent composed subnets. Relativized stopwatch timed input/output conformance serves as the notion of implementation correctness; essentially, it is timed trace inclusion taking environment assumptions into account. Assuming the modelled systems are non-deterministic and partially observable, the paper proposes a test generation and execution algorithm that is based on symbolic techniques, implements an online testing policy, and outputs test results for the (part of the) selected environment.
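At its most stripped down, the conformance relation used — timed trace inclusion relative to the environment — can be illustrated as below. This sketch works over finite sets of already-enumerated timed traces and ignores stopwatches, priorities and symbolic state handling entirely, so it is only a conceptual illustration, not the paper's algorithm.

```python
# A timed trace is a tuple of (delay, action) pairs, e.g. ((1.0, "req"), (0.5, "grant")).

def conforms(impl_traces, spec_traces, env_traces):
    """Relativized trace inclusion: every implementation trace that the chosen
    environment can drive must also be a trace allowed by the specification."""
    relevant = impl_traces & env_traces
    return relevant <= spec_traces

spec = {((1.0, "req"), (0.5, "grant"))}
impl = {((1.0, "req"), (0.5, "grant")), ((1.0, "req"), (2.0, "grant"))}
env  = {((1.0, "req"), (0.5, "grant")), ((1.0, "req"), (2.0, "grant"))}
print(conforms(impl, spec, env))   # False: the late grant is not allowed by the spec
```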
APA, Harvard, Vancouver, ISO, and other styles
40

Kaino, Toshihiro, Ken Urata, Shinichi Yoshida, and Kaoru Hirota. "Improved Debt Rating Model Using Choquet Integral." Journal of Advanced Computational Intelligence and Intelligent Informatics 9, no. 6 (November 20, 2005): 615–21. http://dx.doi.org/10.20965/jaciii.2005.p0615.

Full text
Abstract:
An improved long-term debt rating model using the Choquet integral is proposed, where the input is qualitative and quantitative data about corporations and the output is the Moody's long-term debt rating. The fuzzy measure, which gives the importance of each qualitative and quantitative attribute, is derived by a neural network method. Moreover, differentiation of the Choquet integral is applied to the long-term debt ratings, where this derivative indicates how much the evaluation of each attribute influences the rating of the corporation. A long-term debt rating model using the Choquet integral was proposed by Kaino and Hirota [1]. Under ambiguous information that cannot be expressed by a statistical model, that model enabled analysis of the influence of a specific variable and opened a new possibility in the field of credit risk measurement. However, in order to develop a practical system for small and medium-sized corporations with many needs, the model must be improved so that it can adapt to a changing market and many types of industry. Moreover, the model is modified by implementing a process similar to that of an actual rating provider in order to raise the relevance ratio. The advanced model proposed herein is more precise than the conventional model based on a two-layer neural network.
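A minimal sketch of the discrete Choquet integral that aggregates the rating attributes; the attribute names and the fuzzy-measure values below are invented for illustration, whereas in the paper the measure is derived by a neural network method:

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of `scores` (dict: criterion -> value in [0, 1])
    with respect to the fuzzy measure `mu` (maps frozenset of criteria -> [0, 1])."""
    ordered = sorted(scores.items(), key=lambda kv: kv[1])   # values in ascending order
    remaining = set(scores)
    total, prev = 0.0, 0.0
    for criterion, value in ordered:
        total += (value - prev) * mu(frozenset(remaining))   # weight the jump by mu of the upper set
        prev = value
        remaining.discard(criterion)
    return total

# Illustrative 2-criterion measure expressing positive interaction between the criteria
measure = {
    frozenset(): 0.0,
    frozenset({"leverage"}): 0.3,
    frozenset({"profitability"}): 0.4,
    frozenset({"leverage", "profitability"}): 1.0,
}
print(choquet_integral({"leverage": 0.6, "profitability": 0.8}, measure.get))  # -> 0.68
```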
APA, Harvard, Vancouver, ISO, and other styles
41

McCarthy, William E. "The REA Modeling Approach to Teaching Accounting Information Systems." Issues in Accounting Education 18, no. 4 (November 1, 2003): 427–41. http://dx.doi.org/10.2308/iace.2003.18.4.427.

Full text
Abstract:
The REA model was first conceptualized in a 1982 paper in The Accounting Review as a framework for building accounting systems in a shared data environment, both within enterprises and between enterprises. The model's core feature was an object pattern consisting of two mirror-image constellations that represented semantically the input and output components of a business process. The REA acronym derives from that pattern's structure, which consisted of economic Resources, economic Events, and economic Agents. Simultaneously with its research publication, REA began to be used as a framework for teaching accounting information systems (AIS), originally at Michigan State University and then gradually at other colleges and universities. In its extended form, the REA model integrates the teaching of accounting transaction structures, commitment and business policy specification, business process engineering, and enterprise value chain construction. As of 2003, REA modeling is used in a variety of AIS courses and featured in a variety of AIS textbooks, both in the United States and internationally.
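Purely as an illustration of the mirror-image object pattern — the class and field names below are mine, not McCarthy's — the Resources–Events–Agents core of an exchange might be sketched as:

```python
from dataclasses import dataclass

@dataclass
class Agent:          # economic agent (e.g. customer, sales clerk)
    name: str

@dataclass
class Resource:       # economic resource (e.g. inventory, cash)
    name: str

@dataclass
class EconomicEvent:  # economic event (e.g. sale, cash receipt)
    name: str
    resource: Resource            # stock-flow: the resource affected by the event
    provider: Agent               # participating agents
    recipient: Agent

@dataclass
class ExchangeProcess:
    """Duality: an increment event mirrored by a decrement event."""
    increment: EconomicEvent      # what the enterprise receives
    decrement: EconomicEvent      # what the enterprise gives up

cash, goods = Resource("cash"), Resource("inventory")
customer, clerk = Agent("customer"), Agent("sales clerk")
sale = ExchangeProcess(
    increment=EconomicEvent("cash receipt", cash, customer, clerk),
    decrement=EconomicEvent("sale", goods, clerk, customer),
)
```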
APA, Harvard, Vancouver, ISO, and other styles
42

Susetyo, D. B., Y. A. Lumban-Gaol, and I. Sofian. "PROTOTYPE OF NATIONAL DIGITAL ELEVATION MODEL IN INDONESIA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4 (September 19, 2018): 609–13. http://dx.doi.org/10.5194/isprs-archives-xlii-4-609-2018.

Full text
Abstract:
Although medium-scale mapping has been completed for the entire territory of Indonesia, there has never been a DEM unification to produce a seamless national DEM. The main problem in generating a national DEM is the multiplicity of data sources: each has its own specification, so unifying the data is not easy. This research aims to generate a global DEM database for Indonesia; because its coverage is limited to one country, it is called the National DEM. This study focuses on the northern part of Sumatra island, precisely on the boundary between Aceh and North Sumatra provinces. The principal method in this study was to rebuild the DEM data by considering the height difference between the ground elevation from masspoints and the surface elevation from the DSM. After setting a certain threshold value, a filtering process was performed, and the output was generated by a gridding process. Validation was done by two methods: visual inspection and statistical analysis. From visual inspection, the National DEM is smoother than the input data while the character of the data is maintained and the landscape of the input DSM is still visible. From statistical analysis against 142 GCPs, the root mean square error is 2.237 m, and the vertical accuracy based on the Indonesian mapping accuracy standard is 3.679 m. The result is good for a medium-scale base map and, based on the standards in Indonesia, the data can be used for mapping at 1:25,000 scale.
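The reported figures are consistent with computing the RMSE against the ground control points and converting it to a 90%-confidence vertical accuracy (LE90 ≈ 1.6449 × RMSEz); a minimal sketch, with the checkpoint arrays assumed to be available:

```python
import numpy as np

def vertical_accuracy(dem_heights, gcp_heights):
    """RMSEz against ground control points and the derived LE90 vertical accuracy."""
    errors = np.asarray(dem_heights) - np.asarray(gcp_heights)
    rmse_z = np.sqrt(np.mean(errors ** 2))
    le90 = 1.6449 * rmse_z          # 90%-confidence vertical accuracy
    return rmse_z, le90

# With the paper's RMSE of 2.237 m this conversion gives an LE90 of about 3.679 m.
print(vertical_accuracy([10.1, 12.4, 9.8], [9.0, 10.5, 11.0]))  # toy checkpoints
```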
APA, Harvard, Vancouver, ISO, and other styles
43

Gong, W., Q. Duan, J. Li, C. Wang, Z. Di, Y. Dai, A. Ye, and C. Miao. "Multi-objective parameter optimization of common land model using adaptive surrogate modelling." Hydrology and Earth System Sciences Discussions 11, no. 6 (June 24, 2014): 6715–51. http://dx.doi.org/10.5194/hessd-11-6715-2014.

Full text
Abstract:
Parameter specification usually has a significant influence on the performance of land surface models (LSMs). However, estimating the parameters properly is a challenging task for the following reasons: (1) LSMs usually have many adjustable parameters (20–100 or even more), leading to the curse of dimensionality in the parameter input space; (2) LSMs usually have many output variables involving the water/energy/carbon cycles, so that calibrating LSMs is actually a multi-objective optimization problem; (3) regional LSMs are expensive to run, while conventional multi-objective optimization methods need a huge number of model runs (typically 10^5–10^6), which makes parameter optimization computationally prohibitive. An uncertainty quantification framework was developed to meet these challenges: (1) use parameter screening to reduce the number of adjustable parameters; (2) use surrogate models to emulate the response of the dynamic model to variation of the adjustable parameters; (3) use an adaptive strategy to promote the efficiency of surrogate-model-based optimization; (4) use a weighting function to transform the multi-objective optimization into a single-objective optimization. In this study, we demonstrate the uncertainty quantification framework on a single-column case study of a land surface model, the Common Land Model (CoLM), and evaluate the effectiveness and efficiency of the proposed framework. The results indicate that this framework can achieve an optimal parameter set using only 411 model runs in total, and that it is worth extending to other large, complex dynamic models, such as regional land surface models, atmospheric models and climate models.
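Step (4) of the framework — collapsing the multi-objective calibration into a single objective through a weighting function — can be sketched as follows; the objective names, normalization choices and weights are illustrative, not those used for CoLM:

```python
import numpy as np

def weighted_single_objective(objectives, weights, ref_worst, ref_best):
    """Normalize each objective to [0, 1] against reference values and
    collapse them into one scalar to be minimized by the surrogate optimizer."""
    objectives = np.asarray(objectives, dtype=float)
    ref_worst = np.asarray(ref_worst, dtype=float)
    ref_best = np.asarray(ref_best, dtype=float)
    normalized = (objectives - ref_best) / (ref_worst - ref_best)   # 0 = best, 1 = worst
    return float(np.dot(weights, normalized))

# Three hypothetical error metrics (e.g. latent heat, sensible heat, soil moisture RMSE)
obj = [12.0, 30.0, 0.08]
print(weighted_single_objective(obj, weights=[0.4, 0.4, 0.2],
                                ref_worst=[20.0, 50.0, 0.15],
                                ref_best=[5.0, 10.0, 0.02]))
```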
APA, Harvard, Vancouver, ISO, and other styles
44

Suebsomran, Anan. "Heading Control of a RC Helicopter by Fuzzy Logic." Applied Mechanics and Materials 278-280 (January 2013): 1466–72. http://dx.doi.org/10.4028/www.scientific.net/amm.278-280.1466.

Full text
Abstract:
This research aims to control the heading of a small RC helicopter using a fuzzy logic control approach. Such a vehicle is inherently a nonlinear system; linearizing it would require obtaining a mathematical model of the system and its parameters. Because of the difficulty of obtaining such a model and its parameters, we develop heading control of a small RC helicopter using fuzzy logic control, which avoids the need for an explicit model of the vehicle. The performance of the system is specified as a time-domain design specification. The rule base of the fuzzy system relates the input and output membership functions of the fuzzy inference system. A second-order response is specified, and the error e and the change of error (Δe) are used to construct the rules of the fuzzy inference system with the Mamdani reasoning scheme. The defuzzification step applies the center of gravity (COG) method to compute the output that compensates the error and change-of-error signals. Finally, the experimental results reveal that tracking of the desired heading command is achieved with the desired control performance in heading control of a small RC helicopter.
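A minimal Mamdani-style sketch with error and change-of-error inputs, triangular membership functions and center-of-gravity defuzzification; the membership parameters and the three rules are invented for illustration, and the paper's actual rule base is larger:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def heading_controller(e, de):
    """Tiny Mamdani controller: inputs e, de in [-1, 1], output command in [-1, 1]."""
    u = np.linspace(-1.0, 1.0, 201)                     # output universe of discourse
    neg = lambda x: tri(x, -2.0, -1.0, 0.0)             # input membership functions
    zero = lambda x: tri(x, -1.0, 0.0, 1.0)
    pos = lambda x: tri(x, 0.0, 1.0, 2.0)
    # Three illustrative rules: min for AND, clipping for implication, max aggregation
    agg = np.zeros_like(u)
    agg = np.maximum(agg, np.minimum(min(neg(e), neg(de)), tri(u, 0.0, 1.0, 2.0)))    # push positive
    agg = np.maximum(agg, np.minimum(min(zero(e), zero(de)), tri(u, -1.0, 0.0, 1.0)))  # hold
    agg = np.maximum(agg, np.minimum(min(pos(e), pos(de)), tri(u, -2.0, -1.0, 0.0)))   # push negative
    # Center-of-gravity (centroid) defuzzification
    return float(np.sum(u * agg) / np.sum(agg)) if agg.sum() > 0 else 0.0

print(heading_controller(e=0.4, de=0.1))
```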
APA, Harvard, Vancouver, ISO, and other styles
45

Sungheetha, Akey, and Rajesh Sharma R. "Design of Effective Smart Communication System for Impaired People." December 2020 2, no. 4 (March 8, 2021): 181–94. http://dx.doi.org/10.36548/jeea.2020.4.006.

Full text
Abstract:
In communication, holding a conversation between a hearing person and a deaf or mute person is still a challenging task. A mute person can use hand-gesture language within their own community, but not with others. This research article focuses on minimizing this difficulty between the two communities with a smart glove device, and the authors believe the proposed model will have a positive impact on the deaf and mute community. The smart glove contains input, control, and output modules to acquire, process, and display the data, respectively. The proposed model helps these communities interact with each other continuously and without error. The glove is constructed with well-specified flex sensors: small changes in flex-sensor resistance correspond to changes in the gesture, so the hand orientation is computed reliably and gives better results than existing methods. Wireless connectivity is provided by Bluetooth. Gestures are assigned to letters of the alphabet; the recognized sign produces an audible output and is shown in the display section of the proposed model. The experimental setup gives good results. This research work focuses on recognition rate, accuracy, and efficiency; a good recognition rate supports continuous conversation between the two persons. Moreover, this article compares the recognition rate, accuracy, and efficiency of the proposed model with an existing model.
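As a rough illustration only — the sensor count, calibration values and letter templates below are hypothetical, not taken from the article — mapping a set of flex-sensor readings to a letter can be done by nearest-template matching:

```python
import numpy as np

# Hypothetical calibrated bend values (one per finger) for a few letters
TEMPLATES = {
    "A": np.array([0.9, 0.9, 0.9, 0.9, 0.2]),
    "B": np.array([0.1, 0.1, 0.1, 0.1, 0.8]),
    "C": np.array([0.5, 0.5, 0.5, 0.5, 0.5]),
}

def recognize_letter(readings):
    """Pick the letter whose stored template is closest to the current readings."""
    readings = np.asarray(readings, dtype=float)
    return min(TEMPLATES, key=lambda k: np.linalg.norm(TEMPLATES[k] - readings))

print(recognize_letter([0.85, 0.95, 0.9, 0.88, 0.25]))   # -> "A"
```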
APA, Harvard, Vancouver, ISO, and other styles
46

Lee, Edward T., and Fred Y. Wu. "Algorithms for simple object reconstruction using the largest possible object approach." Robotica 10, no. 4 (July 1992): 377–81. http://dx.doi.org/10.1017/s0263574700008213.

Full text
Abstract:
SUMMARY: Recently, three-dimensional motion analysis and shape recovery have attracted growing attention as promising avenues of approach to image understanding, object reconstruction and computer vision for robotic systems. The image generation problem and the model generation problem are presented. More specifically, the inputs to the image generator are an old image, an object model, a motion specification, and hidden-line and hidden-surface algorithms; the output is a new image. Since the object model is given, a top-down approach is usually used. On the other hand, for the model generation problem, the input is an image sequence and the output is an object model. Since the object model is not given, a bottom-up approach is usually used. In this paper, the largest possible object approach is proposed and its advantages are stated: 1. The approach may be applicable to objects with planar as well as nonplanar surfaces. 2. The approach may be applicable when there is more than one face change per frame. 3. The approach may be applicable when the camera is moving. 4. The approach may be applicable when the object is measured by several measuring stations. Using this approach, algorithms for simple object reconstruction given a sequence of pictures are presented together with illustrative examples. The relevance and importance of this work are discussed. The results of this paper may have useful applications in object reconstruction, pictorial data reduction and computer vision for robotic systems.
APA, Harvard, Vancouver, ISO, and other styles
47

Souilem, Malek, Jai Narayan Tripathi, Rui Melicio, Wael Dghais, Hamdi Belgacem, and Eduardo M. G. Rodrigues. "Neural-Network Based Modeling of I/O Buffer Predriver under Power/Ground Supply Voltage Variations." Sensors 21, no. 18 (September 10, 2021): 6074. http://dx.doi.org/10.3390/s21186074.

Full text
Abstract:
This paper presents a neural-network-based nonlinear behavioral model of an I/O buffer that accounts for timing distortion introduced by the nonlinear switching behavior of the predriver electrical circuit under power and ground supply voltage (PGSV) variations. The model structure, the I/O device characterization and the extraction procedure are described. The last stage of the I/O buffer is modelled as nonlinear current-voltage (I-V) and capacitance-voltage (C-V) functions capturing the nonlinear dynamic impedances of the pull-up and pull-down transistors. The mathematical model structure of the predriver was derived from an analysis of the large-signal switching behavior of the electrical circuit. Accordingly, a generic, surrogate multilayer neural network (NN) structure was considered in this work. Time-series data reflecting the nonlinear switching behavior of the multistage predriver circuit under PGSV variations were used to train the NN model. The proposed model was implemented in a time-domain solver and validated against the reference transistor-level (TL) model and the state-of-the-art input/output buffer information specification (IBIS) behavioral model under different scenarios. Jitter analysis was performed using eye diagrams plotted for different metric values.
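The abstract describes a recurrent (LSTM-type) network trained on time series of the predriver's switching behavior under PGSV variation; a generic sketch of such a sequence-to-sequence regressor — the layer sizes, feature layout and training details below are assumptions, not the paper's architecture — could look like:

```python
import torch
import torch.nn as nn

class PredriverNN(nn.Module):
    """Maps a window of (input voltage, VDD, VSS) samples to the predriver output waveform."""
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, n_features)
        seq, _ = self.lstm(x)
        return self.head(seq).squeeze(-1)  # (batch, time) predicted output samples

model = PredriverNN()
waveforms = torch.randn(8, 200, 3)         # dummy batch: 8 windows of 200 samples each
loss = nn.functional.mse_loss(model(waveforms), torch.randn(8, 200))
loss.backward()                            # one standard supervised training step
```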
APA, Harvard, Vancouver, ISO, and other styles
48

Loong, Y. T., M. Dahari, H. J. Yap, and H. Y. Chong. "Modeling and Simulation of Solar Powered Hydrogen System." Applied Mechanics and Materials 315 (April 2013): 128–35. http://dx.doi.org/10.4028/www.scientific.net/amm.315.128.

Full text
Abstract:
Solar energy is a natural resource and can be harnessed to provide clean electricity for hydrogen production. However, control problems remain because PV output power varies widely with solar radiation levels. To investigate this problem, a model is needed in which the PV power plant is integrated with production and storage components such as a hydrogen generator, an electrolyzer, the PV system, and a hydrogen tank. The focus of the present paper is to develop an integrated simulation model that is effective and user friendly for predicting the dynamic behavior of a solar-powered hydrogen system. Solar irradiance and temperature are the independent variables that act as inputs to the model. The results can be used for analyzing system performance, and also for sizing and designing the system. The integrated model is created on the Matlab Simulink simulation platform, which allows users to define the system specification. Particular information about the system, such as the power generated by the solar system and the volumetric hydrogen production, can be clearly shown and determined on the simulation platform.
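A very simplified sketch of the energy chain described — PV power from irradiance and temperature, then hydrogen production through an electrolyzer; the efficiency figures, temperature coefficient and the 39.4 kWh/kg higher heating value of hydrogen used below are generic assumptions, not values from the paper:

```python
def pv_power_kw(irradiance_w_m2, cell_temp_c, area_m2=50.0,
                eff_stc=0.17, temp_coeff=0.004):
    """PV electrical power with a simple linear temperature derating (illustrative)."""
    derate = 1.0 - temp_coeff * (cell_temp_c - 25.0)
    return irradiance_w_m2 * area_m2 * eff_stc * derate / 1000.0   # kW

def hydrogen_rate_kg_per_h(power_kw, electrolyzer_eff=0.65, hhv_kwh_per_kg=39.4):
    """Hydrogen mass flow produced by an electrolyzer fed with `power_kw`."""
    return power_kw * electrolyzer_eff / hhv_kwh_per_kg

p = pv_power_kw(irradiance_w_m2=800.0, cell_temp_c=40.0)
print(p, "kW ->", hydrogen_rate_kg_per_h(p), "kg H2 per hour")
```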
APA, Harvard, Vancouver, ISO, and other styles
49

Sulaiha Sulaiman, Nurul, Khairiyah Mohd-Yusof, and Asngari Mohd-Saion. "Quality Prediction Modeling of Palm Oil Refining Plant in Malaysia Using Artificial Neural Network Models." International Journal of Engineering & Technology 7, no. 3.26 (August 14, 2018): 19. http://dx.doi.org/10.14419/ijet.v7i3.26.17454.

Full text
Abstract:
Malaysia is currently one of the biggest producers and exporters of palm oil and palm oil products. With the growth of the palm oil industry in Malaysia, the quality of the refined oil is a major concern: off-specification products are rejected, causing a great loss in profit. In this paper, predictive modeling of refined palm oil quality in a palm oil refining plant in Malaysia is proposed for online quality monitoring. The color of the crude oil, free fatty acid (FFA) content, bleaching earth dosage, citric acid dosage, activated carbon dosage, deodorizer pressure and deodorizer temperature were studied. Industrial palm oil refinery data were used as inputs and outputs for the Artificial Neural Network (ANN) models. Various trials were examined for training all three ANN models, with the number of nodes in the hidden layer varying from 10 to 25. All three models were trained and tested to predict the FFA content and the red and yellow color quality of the refined palm oil efficiently with small error. The models can therefore be further implemented in the palm oil refinery plant as an online prediction system.
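A minimal sketch of the kind of feed-forward ANN regression described, using scikit-learn with a single hidden layer in the 10–25-node range reported; the synthetic data, scaling and split below are illustrative, whereas the paper trains on plant data for the seven listed inputs:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 7 process inputs (crude oil color, FFA, bleaching earth, citric acid,
# activated carbon, deodorizer pressure, deodorizer temperature) - synthetic here.
X = rng.normal(size=(500, 7))
y = 0.3 * X[:, 1] - 0.2 * X[:, 5] + 0.1 * X[:, 6] + rng.normal(0, 0.05, 500)  # e.g. refined-oil FFA

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

model = MLPRegressor(hidden_layer_sizes=(15,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)
print("R^2 on held-out data:", model.score(scaler.transform(X_test), y_test))
```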
APA, Harvard, Vancouver, ISO, and other styles
50

Kouhalvandi, Lida. "Directly Matching an MMIC Amplifier Integrated with MIMO Antenna through DNNs for Future Networks." Sensors 22, no. 18 (September 19, 2022): 7068. http://dx.doi.org/10.3390/s22187068.

Full text
Abstract:
Due to the exponential growth of data communications, linearity specifications are deteriorating, and in high-frequency systems impedance transformation, which governs power delivery from power amplifiers (PAs) to antennas, is becoming an increasingly important concept. Intelligence-based optimization methods can be a suitable solution for enhancing this characteristic in transceiver systems. Herein, to tackle the problems of linearity and impedance transformation, deep neural network (DNN)-based optimizations are employed. In the first phase, the antenna is modeled with an LSTM-based DNN that forecasts the load impedances over a wide frequency band. Afterwards, the PA is modeled and optimized through another LSTM-based DNN using a multivariate Newton's method, where the optimal drain impedances are predicted from the first DNN (i.e., the modeled antenna). The whole optimization methodology is executed automatically and enhances the linearity specification of the whole system. To demonstrate the novelty of the proposed method, a monolithic microwave integrated circuit (MMIC) amplifier and a multiple-input multiple-output (MIMO) antenna are designed, modeled, and optimized concurrently in the frequency band from 7.49 GHz to 12.44 GHz. The proposed method enhances the linearity of the transceiver effectively, with the DNN-based PA model providing a route to the optimal drain impedance through the DNN-based antenna model.
APA, Harvard, Vancouver, ISO, and other styles