To view other types of publications on this topic, follow the link: Constraint networks.

Journal articles on the topic "Constraint networks"

Format your source citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic "Constraint networks."

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if the relevant details are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Brosowsky, Mathis, Florian Keck, Olaf Dünkel, and Marius Zöllner. "Sample-Specific Output Constraints for Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6812–21. http://dx.doi.org/10.1609/aaai.v35i8.16841.

Abstract:
It is common practice to constrain the output space of a neural network with the final layer to a problem-specific value range. However, for many tasks it is desired to restrict the output space for each input independently to a different subdomain with a non-trivial geometry, e.g. in safety-critical applications, to exclude hazardous outputs sample-wise. We propose ConstraintNet—a scalable neural network architecture which constrains the output space in each forward pass independently. Contrary to prior approaches, which perform a projection in the final layer, ConstraintNet applies an input-dependent parametrization of the constrained output space. Thereby, the complete interior of the constrained region is covered and computational costs are reduced significantly. For constraints in form of convex polytopes, we leverage the vertex representation to specify the parametrization. The second modification consists of adding an auxiliary input in form of a tensor description of the constraint to enable the handling of multiple constraints for the same sample. Finally, ConstraintNet is end-to-end trainable with almost no overhead in the forward and backward pass. We demonstrate ConstraintNet on two regression tasks: First, we modify a CNN and construct several constraints for facial landmark detection tasks. Second, we demonstrate the application to a follow object controller for vehicles and accomplish safe reinforcement learning in this case. In both experiments, ConstraintNet improves performance and we conclude that our approach is promising for applying neural networks in safety-critical environments.
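A minimal sketch of the general idea described above (not the authors' implementation): if the per-sample constraint is a convex polytope supplied as its vertices, a softmax over the network's raw outputs yields convex-combination weights, so the prediction can never leave the polytope. The shapes, vertex data, and function names below are invented for illustration.

    import numpy as np

    def constrained_output(logits, vertices):
        """Map raw final-layer scores to a point inside a convex polytope.

        logits   : (k,) unconstrained scores
        vertices : (k, d) rows are the polytope vertices (the per-sample
                   constraint, passed in as an auxiliary input)
        A softmax turns the scores into convex-combination weights, so the
        result always lies inside the polytope spanned by the vertices.
        """
        w = np.exp(logits - logits.max())
        w /= w.sum()
        return w @ vertices

    # Toy example: constrain a 2-D prediction to a triangle.
    triangle = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    print(constrained_output(np.array([2.0, -1.0, 0.5]), triangle))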
2

Rong, Zihao, Shaofan Wang, Dehui Kong, and Baocai Yin. "Improving object detection quality with structural constraints." PLOS ONE 17, no. 5 (May 18, 2022): e0267863. http://dx.doi.org/10.1371/journal.pone.0267863.

Abstract:
Recent research has revealed that object detection networks using the simple “classification loss + localization loss” training objective are not effectively optimized in many cases, while providing additional constraints on network features could effectively improve object detection quality. Specifically, some works used constraints on training sample relations to successfully learn discriminative network features. Based on these observations, we propose Structural Constraint for improving object detection quality. Structural constraint supervises feature learning in classification and localization network branches with Fisher Loss and Equi-proportion Loss respectively, by requiring feature similarities of training sample pairs to be consistent with corresponding ground truth label similarities. Structural constraint could be applied to all object detection network architectures with the assistance of our Proxy Feature design. Our experimental results showed that the structural constraint mechanism is able to optimize object class instances’ distribution in network feature space, and consequently the detection results. Evaluations on the MSCOCO2017 and KITTI datasets showed that our structural constraint mechanism is able to assist baseline networks to outperform modern counterpart detectors in terms of object detection quality.
3

Zhang, Y., and R. H. C. Yap. "Set Intersection and Consistency in Constraint Networks." Journal of Artificial Intelligence Research 27 (December 13, 2006): 441–64. http://dx.doi.org/10.1613/jair.2058.

Abstract:
In this paper, we show that there is a close relation between consistency in a constraint network and set intersection. A proof schema is provided as a generic way to obtain consistency properties from properties on set intersection. This approach not only simplifies the understanding of and unifies many existing consistency results, but also directs the study of consistency to that of set intersection properties in many situations, as demonstrated by the results on the convexity and tightness of constraints in this paper. Specifically, we identify a new class of tree convex constraints where local consistency ensures global consistency. This generalizes row convex constraints. Various consistency results are also obtained on constraint networks where only some, in contrast to all in the existing work, constraints are tight.
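For readers unfamiliar with the convexity notions used above (background, not material from the paper): a binary constraint written as a 0/1 matrix is row convex when the 1s in every row form one contiguous block, and tree convexity relaxes "contiguous on a line" to "connected in a tree over the values". A small sketch of the row-convexity check:

    import numpy as np

    def is_row_convex(constraint):
        """True if, in every row of the 0/1 constraint matrix, the allowed
        values (the 1s) form a single contiguous block."""
        for row in np.asarray(constraint):
            ones = np.flatnonzero(row)
            if ones.size and ones[-1] - ones[0] + 1 != ones.size:
                return False
        return True

    print(is_row_convex([[0, 1, 1, 0], [1, 1, 0, 0]]))  # True
    print(is_row_convex([[1, 0, 1, 0]]))                # False: the 1s are split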
4

Kharroubi, Idris, Thomas Lim, and Xavier Warin. "Discretization and machine learning approximation of BSDEs with a constraint on the Gains-process." Monte Carlo Methods and Applications 27, no. 1 (January 15, 2021): 27–55. http://dx.doi.org/10.1515/mcma-2020-2080.

Abstract:
We study the approximation of backward stochastic differential equations (BSDEs for short) with a constraint on the gains process. We first discretize the constraint by applying a so-called facelift operator at times of a grid. We show that this discretely constrained BSDE converges to the continuously constrained one as the mesh grid converges to zero. We then focus on the approximation of the discretely constrained BSDE. For that we adopt a machine learning approach. We show that the facelift can be approximated by an optimization problem over a class of neural networks under constraints on the neural network and its derivative. We then derive an algorithm converging to the discretely constrained BSDE as the number of neurons goes to infinity. We end by numerical experiments.
5

Dechter, Rina, Itay Meiri, and Judea Pearl. "Temporal constraint networks." Artificial Intelligence 49, no. 1-3 (May 1991): 61–95. http://dx.doi.org/10.1016/0004-3702(91)90006-6.

6

Msaaf, Mohammed, and Fouad Belmajdoub. "Diagnosis of Discrete Event Systems under Temporal Constraints Using Neural Network." International Journal of Engineering Research in Africa 49 (June 2020): 198–205. http://dx.doi.org/10.4028/www.scientific.net/jera.49.198.

Abstract:
The good functioning of a discrete event system is related to how much the temporal constraints are respected. This paper gives a new approach, based on a statistical model and neural network, that allows the verification of temporal constraints in DES. We will perform an online temporal constraint checking which can detect in real time any abnormal functioning related to the violation of a temporal constraint. In the first phase, the construction of temporal constraints from statistical model is shown and after that neural networks are involved in dealing with the online temporal constraint checking.
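The abstract does not spell out the statistical model; the toy sketch below (written for this summary, with invented data and tolerance factor) shows one common way an observed duration distribution can be turned into an online temporal-constraint check.

    import statistics

    def build_constraint(durations, k=3.0):
        """Derive a [lo, hi] temporal constraint for a transition from
        observed durations, using mean +/- k standard deviations."""
        mu = statistics.mean(durations)
        sigma = statistics.stdev(durations)
        return mu - k * sigma, mu + k * sigma

    def check_online(duration, constraint):
        lo, hi = constraint
        return lo <= duration <= hi   # False flags a suspected fault

    c = build_constraint([1.9, 2.1, 2.0, 2.2, 1.8])
    print(check_online(2.05, c), check_online(3.5, c))   # True False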
7

Wang, Xiao Fei, Xi Zhang, Yue Bing Chen, Lei Zhang, and Chao Jing Tang. "Spectrum Assignment Algorithm Based on Clonal Selection in Cognitive Radio Networks." Advanced Materials Research 457-458 (January 2012): 931–39. http://dx.doi.org/10.4028/www.scientific.net/amr.457-458.931.

Abstract:
An improved-immune-clonal-selection based spectrum assignment algorithm (IICSA) in cognitive radio networks is proposed, combining graph theory and immune optimization. It uses constraint satisfaction operation to make encoded antibody population satisfy constraints, and realizes the global optimization. The random-constraint satisfaction operator and fair-constraint satisfaction operator are designed to guarantee efficiency and fairness, respectively. Simulations are performed for performance comparison between the IICSA and the color-sensitive graph coloring algorithm. The results indicate that the proposed algorithm increases network utilization, and efficiently improves the fairness.
8

Buscema, Massimo. "Constraint Satisfaction Neural Networks." Substance Use & Misuse 33, no. 2 (January 1998): 389–408. http://dx.doi.org/10.3109/10826089809115873.

9

Suter, D. "Constraint networks in vision." IEEE Transactions on Computers 40, no. 12 (1991): 1359–67. http://dx.doi.org/10.1109/12.106221.

10

Gottlob, Georg. "On minimal constraint networks." Artificial Intelligence 191-192 (November 2012): 42–60. http://dx.doi.org/10.1016/j.artint.2012.07.006.

11

Gröflin, Heinz. "On node constraint networks." Networks 15, no. 4 (1985): 469–75. http://dx.doi.org/10.1002/net.3230150408.

12

Abidi, Bahae, Abdelillah Jilbab, and Mohamed El Haziti. "Security in wireless sensor networks." International Journal of Informatics and Communication Technology (IJ-ICT) 8, no. 1 (April 1, 2019): 13. http://dx.doi.org/10.11591/ijict.v8i1.pp13-18.

Abstract:
Even in places that are difficult to reach, the new networking techniques allow the easy deployment of sensor networks, although these wireless sensor networks face many constraints. The major constraint relates to the quality of the information sent by the network. Wireless sensor networks use different methods to deliver data to the base station, and data aggregation is an important one. However, the aggregated data can be subject to several types of attacks, and providing security is necessary to resist malicious attacks and to secure communication between severely resource-constrained sensor nodes while maintaining flexibility under topology changes. Recently, several secure data aggregation schemes have been proposed for wireless sensor networks; they provide better security compared with traditional aggregation. In this paper, we focus on giving a brief overview of the various approaches used for secure data aggregation in wireless sensor networks.
13

Yang, Weijun, Xianxian Zeng, and Guanyu Lai. "A Guaranteed Approximation Algorithm for QoS Anypath Routing in WMNs." Mathematics 10, no. 23 (December 1, 2022): 4557. http://dx.doi.org/10.3390/math10234557.

Abstract:
Anypath routing is a hot research topic for QoS guarantee in wireless mesh networks (WMNs). According to time-varying characteristics of WMNs and the idea of anypath routing, a system network modeling method is proposed to address the multiple constrained optimization anypath problem. It focuses on the application of WMNs; under various QoS constraints, it satisfies a specific constraint and approaches other QoS constraints from an approximate perspective. A heuristic multi-constrained anypath algorithm with a time complexity as Dijkstra is proposed for the problem, and the algorithm is proved to be a K-1 approximation algorithm. The feasibility of the algorithm is verified, then its computational efficiency and performance are evaluated through simulation experiments, respectively. According to the application characteristics of wireless networks, the algorithm is suitable for WMNs and has good compatibility with existing routing protocols.
14

Hoernle, Nick, Rafael Michael Karampatsis, Vaishak Belle, and Kobi Gal. "MultiplexNet: Towards Fully Satisfied Logical Constraints in Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5700–5709. http://dx.doi.org/10.1609/aaai.v36i5.20512.

Abstract:
We propose a novel way to incorporate expert knowledge into the training of deep neural networks. Many approaches encode domain constraints directly into the network architecture, requiring non-trivial or domain-specific engineering. In contrast, our approach, called MultiplexNet, represents domain knowledge as a quantifier-free logical formula in disjunctive normal form (DNF) which is easy to encode and to elicit from human experts. It introduces a latent Categorical variable that learns to choose which constraint term optimizes the error function of the network and it compiles the constraints directly into the output of existing learning algorithms. We demonstrate the efficacy of this approach empirically on several classical deep learning tasks, such as density estimation and classification in both supervised and unsupervised settings where prior knowledge about the domains was expressed as logical constraints. Our results show that the MultiplexNet approach learned to approximate unknown distributions well, often requiring fewer data samples than the alternative approaches. In some cases, MultiplexNet finds better solutions than the baselines; or solutions that could not be achieved with the alternative approaches. Our contribution is in encoding domain knowledge in a way that facilitates inference. We specifically focus on quantifier-free logical formulae that are specified over the output domain of a network. We show that this approach is both efficient and general; and critically, our approach guarantees 100% constraint satisfaction in a network's output.
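A rough, simplified illustration of the idea of compiling a DNF constraint into the output (not the paper's actual architecture, latent-variable treatment, or training objective; the terms and selection rule are invented for the sketch): each disjunct can be enforced by projecting the raw output into its region, and a choice over disjuncts keeps the final output feasible.

    import numpy as np

    # DNF constraint on a scalar output: (0 <= y <= 1) OR (3 <= y <= 4)
    TERMS = [(0.0, 1.0), (3.0, 4.0)]

    def project(y, lo, hi):
        """Clamp y into one disjunct's interval so that term is satisfied."""
        return float(np.clip(y, lo, hi))

    def constrained_prediction(raw_y, target=None):
        """Project onto every DNF term, then keep the candidate closest to the
        raw output (or, during training, closest to the supervised target)."""
        candidates = [project(raw_y, lo, hi) for lo, hi in TERMS]
        ref = raw_y if target is None else target
        return min(candidates, key=lambda c: abs(c - ref))

    print(constrained_prediction(2.1))              # 3.0 (nearest feasible point)
    print(constrained_prediction(2.1, target=0.5))  # 1.0 (term picked via the target)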
15

Dunbar, R. I. M. "Structure and function in human and primate social networks: implications for diffusion, network stability and health." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 476, no. 2240 (August 2020): 20200446. http://dx.doi.org/10.1098/rspa.2020.0446.

Abstract:
The human social world is orders of magnitude smaller than our highly urbanized world might lead us to suppose. In addition, human social networks have a very distinct fractal structure similar to that observed in other primates. In part, this reflects a cognitive constraint, and in part a time constraint, on the capacity for interaction. Structured networks of this kind have a significant effect on the rates of transmission of both disease and information. Because the cognitive mechanism underpinning network structure is based on trust, internal and external threats that undermine trust or constrain interaction inevitably result in the fragmentation and restructuring of networks. In contexts where network sizes are smaller, this is likely to have significant impacts on psychological and physical health risks.
16

Xiang, Yang, Younis Mohamed, and Wanling Zhang. "Distributed constraint satisfaction with multiply sectioned constraint networks." International Journal of Information and Decision Sciences 6, no. 2 (2014): 127. http://dx.doi.org/10.1504/ijids.2014.061771.

17

Rutishauser, Ueli, Jean-Jacques Slotine, and Rodney J. Douglas. "Solving Constraint-Satisfaction Problems with Distributed Neocortical-Like Neuronal Networks." Neural Computation 30, no. 5 (May 2018): 1359–93. http://dx.doi.org/10.1162/neco_a_01074.

Abstract:
Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSP's planar four-color graph coloring, maximum independent set, and sudoku on this substrate and provide mathematical proofs that guarantee these graph coloring problems will converge to a solution. The network is composed of nonsaturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space driven through the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by nonlinear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation and offer insight into the computational role of dual inhibitory mechanisms in neural circuits.
18

Lallouet, Arnaud, and Andrei Legtchenko. "Building Consistencies for Partially Defined Constraints with Decision Trees and Neural Networks." International Journal on Artificial Intelligence Tools 16, no. 04 (August 2007): 683–706. http://dx.doi.org/10.1142/s0218213007003503.

Abstract:
Partially Defined Constraints can be used to model the incomplete knowledge of a concept or a relation. Instead of only computing with the known part of the constraint, we propose to complete its definition by using Machine Learning techniques. Since constraints are actively used during solving for pruning domains, building a classifier for instances is not enough: we need a solver able to reduce variable domains. Our technique is composed of two steps: first we learn a classifier for each constraint projection and then we transform the classifiers into a propagator. The first contribution is a generic meta-technique for classifier improvement showing performances comparable to boosting. The second lies in the ability of using the learned concept in constraint-based decision or optimization problems. We present results using Decision Trees and Artificial Neural Networks for constraint learning and propagation. It opens a new way of integrating Machine Learning in Decision Support Systems.
19

Ma, Lei, and Dapeng Li. "Adaptive Neural Networks Control Using Barrier Lyapunov Functions for DC Motor System with Time-Varying State Constraints." Complexity 2018 (2018): 1–9. http://dx.doi.org/10.1155/2018/5082401.

Abstract:
This paper proposes an adaptive neural network (NN) control approach for a direct-current (DC) system with full state constraints. To guarantee that the states always remain within the asymmetric time-varying constraint regions, an asymmetric time-varying Barrier Lyapunov Function (BLF) is employed to structure an adaptive NN controller. Since the constant constraint is only a special case of the time-varying constraint, the proposed control method is more general for dealing with constraint problems than the existing works on DC systems. To the best of our knowledge, this is the first study of such a system with time-varying constraints. Using Lyapunov analysis, all signals in the closed-loop system are proved to be bounded and the constraints are not violated. In this paper, the effectiveness of the control method is demonstrated by simulation results.
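For context, a standard symmetric log-type Barrier Lyapunov Function (textbook material, not this paper's specific asymmetric construction) for keeping a tracking error z(t) inside a time-varying bound k_b(t) is

    V(z,t) = \tfrac{1}{2}\,\ln\!\left( \frac{k_b(t)^2}{\,k_b(t)^2 - z(t)^2\,} \right), \qquad |z(t)| < k_b(t).

V is positive definite on the constrained set and grows without bound as |z| approaches k_b, so showing that V stays bounded along the closed-loop trajectories guarantees the constraint is never violated; asymmetric variants use different bounds for positive and negative errors.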
20

Monfroglio, Angelo. "Neural Networks for Constraint Satisfaction." Connection Science 5, no. 2 (January 1993): 169–87. http://dx.doi.org/10.1080/09540099308915694.

21

Li, Daqing, Guanliang Li, Kosmas Kosmidis, H. E. Stanley, Armin Bunde, and Shlomo Havlin. "Percolation of spatially constraint networks." EPL (Europhysics Letters) 93, no. 6 (March 1, 2011): 68004. http://dx.doi.org/10.1209/0295-5075/93/68004.

22

Monfroglio, Angelo. "Connectionist networks for constraint satisfaction." Neurocomputing 3, no. 1 (August 1991): 29–49. http://dx.doi.org/10.1016/0925-2312(91)90018-7.

23

Dechter, Rina, and Judea Pearl. "Tree clustering for constraint networks." Artificial Intelligence 38, no. 3 (April 1989): 353–66. http://dx.doi.org/10.1016/0004-3702(89)90037-4.

24

Meiri, Itay, Rina Dechter, and Judea Pearl. "Uncovering trees in constraint networks." Artificial Intelligence 86, no. 2 (October 1996): 245–67. http://dx.doi.org/10.1016/0004-3702(95)00102-6.

25

Khandker, M. R., and B. Sultana. "Upper Bound on Blocking Probability for Vertically Stacked Optical Banyan Networks with Link Failures and Given Crosstalk Constraint." Journal of Scientific Research 1, no. 3 (August 29, 2009): 484–94. http://dx.doi.org/10.3329/jsr.v1i3.1191.

Abstract:
Vertical stacking of multiple copies of an optical banyan network is a novel scheme for building nonblocking optical switching networks. The resulting network, namely the vertically stacked optical banyan (VSOB) network, preserves all the properties of the banyan network but increases the hardware cost significantly under the first-order crosstalk-free constraint. However, a stringent crosstalk constraint may not always be necessary. Considering that some designers may want to trade off the blocking probability and the crosstalk constraint against the hardware cost to a certain degree, the blocking behaviour of such VSOB networks has been analyzed to study network performance and to find a graceful compromise between hardware cost and blocking probability under a given crosstalk constraint or without one. In this paper, we present simulation results for the upper bound on the blocking probability of VSOB networks with link failures and a given degree of crosstalk constraint. We show how crosstalk adds a new dimension to the performance analysis of VSOB networks. The simulation results presented in this paper can guide network designers in finding the trade-off among the blocking probability, the degree of crosstalk, and the hardware cost in terms of vertical copies of the banyan network in the presence of link failures. Keywords: Banyan networks; Blocking probability; Vertical stacking; Link failures; Crosstalk.
26

Xu, Ying, Kun Wang, Changhui Jiang, Zeyu Li, Cheng Yang, Dun Liu, and Haiping Zhang. "Motion-Constrained GNSS/INS Integrated Navigation Method Based on BP Neural Network." Remote Sensing 15, no. 1 (December 27, 2022): 154. http://dx.doi.org/10.3390/rs15010154.

Abstract:
The global navigation satellite system (GNSS) and inertial navigation system (INS) integrated navigation system have been widely used in Intelligent Transportation Systems (ITSs). However, the positioning error of integrated navigation systems is rapidly divergent when GNSS outages occur. Motion constraint and back propagation (BP) neural networks can provide additional knowledge to solve this issue. However, the predictions of a neural network have outliers and motion constraint is difficult to adapt according to the motion states of vehicles and boats. Therefore, this paper fused a BP neural network with motion constraints, and proposed a motion-constrained GNSS/INS integrated navigation method based on a BP neural network (MC-BP method). The pseudo-measurement of the GNSS was predicted using a fitting model trained by the BP neural network. At the same time, the prediction outliers were detected and corrected using motion constraint. To assess the performance of the proposed method, simulated and real data experiments were conducted with a vehicle on land and a boat offshore. A classical GNSS/INS integration algorithm, a motion-constrained GNSS/INS algorithm, and the proposed method were compared through data processing. Compared with the classical GNSS/INS integration algorithm and the motion-constrained GNSS/INS algorithm, the positioning accuracies of the proposed method were improved by 90% and 64%, respectively, in the vehicle land experiment. Similar performances were found in the offshore boat experiment. Using the proposed MC-BP method, improved meter-level-positioning results can be achieved with the GNSS/INS integration algorithm when GNSS outages occur.
27

Wen, Longfei, Lingguo Cui, Senchun Chai, and Baihai Zhang. "Neighbor Constraint Assisted Distributed Localization for Wireless Sensor Networks." Mathematical Problems in Engineering 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/143938.

Abstract:
Localization is one of the most significant technologies in wireless sensor networks (WSNs) since it plays a critical role in many applications. The main idea in most localization methods is to estimate the sensor-anchor distances that are used by sensors to locate themselves. However, the distance information is always imprecise due to the measurement or estimation errors. In this work, a novel algorithm called neighbor constraint assisted distributed localization (NCA-DL) is proposed, which introduces the application of geometric constraints to these distances within the algorithm. For example, in the case presented here, the assistance provided by a neighbor will consist in formulating a linear equality constraint. These constraints can be further used to formulate optimization problems for distance estimation. Then through some optimization methods, the imprecise distances can be refined and the localization precision is improved.
28

Chen, Yourong, Zhangquan Wang, Tiaojuan Ren, Yaolin Liu, and Hexin Lv. "Maximizing Lifetime of Wireless Sensor Networks with Mobile Sink Nodes." Mathematical Problems in Engineering 2014 (2014): 1–13. http://dx.doi.org/10.1155/2014/762979.

Abstract:
In order to maximize network lifetime and balance energy consumption when sink nodes can move, maximizing lifetime of wireless sensor networks with mobile sink nodes (MLMS) is researched. The movement path selection method of sink nodes is proposed. Modified subtractive clustering method, k-means method, and nearest neighbor interpolation method are used to obtain the movement paths. The lifetime optimization model is established under flow constraint, energy consumption constraint, link transmission constraint, and other constraints. The model is solved from the perspective of static and mobile data gathering of sink nodes. Subgradient method is used to solve the lifetime optimization model when one sink node stays at one anchor location. Geometric method is used to evaluate the amount of gathering data when sink nodes are moving. Finally, all sensor nodes transmit data according to the optimal data transmission scheme. Sink nodes gather the data along the shortest movement paths. Simulation results show that MLMS can prolong network lifetime, balance node energy consumption, and reduce data gathering latency under appropriate parameters. Under certain conditions, it outperforms Ratio_w, TPGF, RCC, and GRND.
29

Suykens, J. A. K., and J. Vandewalle. "Chaos Synchronization: A Lagrange Programming Network Approach." International Journal of Bifurcation and Chaos 10, no. 04 (April 2000): 797–810. http://dx.doi.org/10.1142/s0218127400000566.

Abstract:
In this paper we interpret chaos synchronization schemes within the framework of Lagrange programming networks, which form a class of continuous-time optimization methods for solving constrained nonlinear optimization problems. From this study it follows that standard synchronization schemes can be regarded as a Lagrange programming network with soft constraining, where synchronization between state vectors is defined as a constraint to the dynamical systems. New schemes are proposed then which implement synchronization by hard and soft constraints within Lagrange programming networks. A version is derived which takes into account synchronization errors within the problem formulation. Furthermore Lagrange programming networks for achieving partial and generalized synchronization are given. The methods assume the existence of potential functions for the given systems. The proposed Lagrange programming networks with hard and soft constraining show improved performance on many simulation examples for identical and nonidentical chaotic systems. The schemes are illustrated on Chua's circuit, Lorenz attractor and n-scroll circuits.
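For reference, the generic form of a Lagrange programming network (standard background, not the paper's specific synchronization schemes): to minimize f(x) subject to g(x) = 0, the network performs gradient descent on the state and gradient ascent on the multipliers of the Lagrangian,

    L(x,\lambda) = f(x) + \lambda^{\top} g(x), \qquad \dot{x} = -\nabla_x L(x,\lambda), \qquad \dot{\lambda} = +\nabla_{\lambda} L(x,\lambda) = g(x),

so equilibria satisfy the first-order optimality conditions ("hard constraining"); "soft constraining" instead drops the multiplier dynamics and adds a fixed penalty term on g(x) to the objective.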
30

Lee, J. H. M., and V. W. L. Tam. "A Framework for Integrating Artificial Neural Networks and Logic Programming." International Journal on Artificial Intelligence Tools 04, no. 01n02 (June 1995): 3–32. http://dx.doi.org/10.1142/s0218213095000024.

Abstract:
Many real-life problems belong to the class of constraint satisfaction problems (CSP’s), which are NP-complete, and some NP-hard, in general. When the problem size grows, it becomes difficult to program solutions and to execute the solution in a timely manner. In this paper, we present a general framework for integrating artificial neural networks and logic programming so as to provide an efficient and yet expressive programming environment for solving CSP’s. To realize this framework, we propose PROCLANN, a novel constraint logic programming language. The PROCLANN language retains the simple and elegant declarative semantics of constraint logic programming. Operationally, PROCLANN uses the standard goal reduction strategy in the frontend to generate constraints, and an efficient backend constraint-solver based on artificial neural networks. Its operational semantics is probabilistic in nature. We show that PROCLANN is sound and weakly complete. A novelty of PROCLANN is that while it is a committed-choice language, PROCLANN supports non-determinism, allowing the generation of multiple answers to a query. An initial prototype implementation of PROCLANN is constructed and demonstrates empirically that PROCLANN out-performs the state of art in constraint logic programming implementation on certain hard instances of CSP’s.
31

Ho, Khuong Van, Son Que Vo, Tra Thanh Luu, and Lien Hong Pham. "On the performance of cooperative cognitive networks with selection combining and proactive relay selection." Science and Technology Development Journal 18, no. 3 (August 30, 2015): 29–38. http://dx.doi.org/10.32508/stdj.v18i3.882.

Abstract:
This paper proposes an outage analysis framework for cooperative cognitive networks with proactive relay selection and selection combining (SC) under licensed outage constraint, maximum transmit power constraint, independent non-identical (i.n.i) fading distributions, erroneous channel information, and licensed users’ interference. Towards this end, we firstly suggest power allocation for unlicensed transmitters to satisfy power constraints and account for erroneous channel information and licensed users’ interference. Then, we propose an exact closed-form outage probability formula for the unlicensed network to promptly evaluate system performance and provide useful insights into performance limits. Multiple results show performance trade-off between the unlicensed network and the licensed network, error floor in the unlicensed network, considerable system performance degradation owing to erroneous channel information and licensed users’ interference, and significant performance enhancement due to the increase in the number of relays.
32

Zuo, Jing, Xuefen Chi, Lin Guan, Hongxia Li, and Irfan Awan. "Design of Fuzzy Based Multi-Constrained Routing Protocol and the Performance Evaluation." Journal of Interconnection Networks 09, no. 04 (December 2008): 369–87. http://dx.doi.org/10.1142/s0219265908002333.

Abstract:
Single-constrained QoS routing protocols have inherent defects when applied to wireless ad hoc networks. Because only a single constraint parameter is considered, they cannot always cope well with the problems caused by the uncertainty of ad hoc networks, and they are not robust enough. In order to overcome the drawbacks of single-constrained QoS routing protocols and improve the Quality of Service (QoS) of ad hoc networks, this paper proposes a multi-constrained QoS routing protocol based on fuzzy logic. It is developed from Dynamic Source Routing (DSR). The proposed protocol is service-aware in the sense that it considers the QoS required by different types of services and takes different network state parameters as the constraint conditions for the fuzzy-based routing system. A new route discovery procedure and a novel route maintenance mechanism are designed to support the corresponding QoS requirements. The packet-sending rate is also adjusted adaptively according to the outputs of the proposed fuzzy system. The performance of the fuzzy-based DSR protocol is measured and evaluated under different conditions. Simulation results show that the improved protocol has better QoS guarantee capabilities compared to single-constrained QoS routing protocols for large-scale networks in terms of lower delay, smoother delay variation and lower packet loss rate.
33

Zhang, Yu, and Zhiming Hu. "Parameter Learning of Bayesian Network with Multiplicative Synergistic Constraints." Symmetry 14, no. 7 (July 18, 2022): 1469. http://dx.doi.org/10.3390/sym14071469.

Abstract:
Learning the conditional probability table (CPT) parameters of Bayesian networks (BNs) is a key challenge in real-world decision support applications, especially when there are limited data available. The traditional approach to this challenge is introducing domain knowledge/expert judgments that are encoded as qualitative parameter constraints. In this paper, we focus on multiplicative synergistic constraints. The negative multiplicative synergy constraint and positive multiplicative synergy constraint in this paper are symmetric. In order to integrate multiplicative synergistic constraints into the learning process of Bayesian Network parameters, we propose four methods to deal with the multiplicative synergistic constraints based on the idea of classical isotonic regression algorithm. The four methods are simulated by using the lawn moist model and Asia network, and we compared them with the maximum likelihood estimation (MLE) algorithm. Simulation results show that the proposed methods are superior to the MLE algorithm in the accuracy of parameter learning, which can improve the results of the MLE algorithm to obtain more accurate estimators of the parameters. The proposed methods can reduce the dependence of parameter learning on expert experiences. Combining these constraint methods with Bayesian estimation can improve the accuracy of parameter learning under small sample conditions.
34

Ay, Nihat. "Locality of Global Stochastic Interaction in Directed Acyclic Networks." Neural Computation 14, no. 12 (December 1, 2002): 2959–80. http://dx.doi.org/10.1162/089976602760805368.

Abstract:
The hypothesis of invariant maximization of interaction (IMI) is formulated within the setting of random fields. According to this hypothesis, learning processes maximize the stochastic interaction of the neurons subject to constraints. We consider the extrinsic constraint in terms of a fixed input distribution on the periphery of the network. Our main intrinsic constraint is given by a directed acyclic network structure. First mathematical results about the strong relation of the local information flow and the global interaction are stated in order to investigate the possibility of controlling IMI optimization in a completely local way. Furthermore, we discuss some relations of this approach to the optimization according to Linsker's Infomax principle.
35

Liu, Dongsheng, Sai Zhao, Quanzhong Li, and Jiayin Qin. "Rate Maximization for Suspicious Multicast Communication Networks with Full-Duplex Proactive Monitoring." Mobile Information Systems 2020 (July 10, 2020): 1–6. http://dx.doi.org/10.1155/2020/7847623.

Abstract:
In this paper, we investigate the optimization of the monitoring rate for a suspicious multicast communication network with a legitimate full-duplex (FD) monitor, where the FD monitor is proactive to jam suspicious receivers and eavesdrop from the suspicious transmitter simultaneously. To effectively monitor the suspicious communication over multicast networks, we maximize the monitoring rate under the outage probability constraint of the suspicious multicast communication network and the jamming power constraint at the FD monitor. The formulated optimization problem is nonconvex, and its global optimal solution is hard to obtain. Thus, we propose a constrained concave convex procedure- (CCCP-) based iterative algorithm, which is able to achieve a local optimal solution. Simulation results demonstrate that the proposed proactive eavesdropping scheme with optimal jamming power outperforms the conventional passive eavesdropping scheme.
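The constrained concave-convex procedure (CCCP) mentioned here is a general recipe: write the nonconvex objective (and constraints) as a difference of convex functions and repeatedly replace the concave part by its linearization, solving a convex problem at each step. A tiny unconstrained example of the iteration, written for this summary and unrelated to the paper's monitoring-rate problem:

    # Minimize f(x) = x**4 - x**2 with CCCP: u(x) = x**4 and v(x) = x**2 are convex,
    # f = u - v.  Each step solves  min_x u(x) - v'(x_k)*x,  i.e.  4*x**3 = 2*x_k.
    x = 1.0
    for _ in range(50):
        x = (0.5 * x) ** (1.0 / 3.0)   # closed-form minimizer of the convex surrogate
    print(round(x, 4))  # ~0.7071 = 1/sqrt(2), a stationary point of f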
36

Watts, J. W. W., G. C. C. Fleming, and Q. Lu. "Determination of Active Constraints in a Network." SPE Journal 17, no. 02 (March 29, 2012): 441–54. http://dx.doi.org/10.2118/118877-pa.

Abstract:
There is increasing interest in modeling networks of wells, including subsurface components of complex wells and surface facilities. Such modeling requires setting constraints at various points in the network. Typical constraints are maximum phase flow rates and minimum flowing pressures. A major difficulty in network calculations is determining which of these constraints is active. This paper presents a method that uses slack variables in determining active constraints. The linearized equations of interest generally come in pairs, with each pair consisting of a base equation and a constraint equation. The base equation is the equation that normally applies. The constraint equation replaces it if the constraint is active. Normally, only one of these two equations can be satisfied. The slack variable provides a way to ensure that both are satisfied, regardless of which is active. If the constraint is inactive, the slack variable is added to the constraint equation and accounts for the slack, which by definition is the amount by which the inactive equation is not satisfied. On the other hand, if the constraint is active, the slack variable is instead added to the base equation, and the constraint equation as originally written is satisfied. To obtain this behavior, we define a parameter w and add w times the slack variable to the base equation and (1 − w) times the slack variable to the constraint equation. Thus, if w = 1, the slack variable is added to the base equation, and the constraint is active. On the other hand, if w = 0, the slack variable is added to the constraint equation, and the base equation is active. The slack is always in the inactive equation. There is a w associated with each slack variable. Determining the parameter w is an iterative process. The efficiency of the process is improved by manipulating the network matrix such that we can create a Schur complement that has the slack variables as its unknowns and contains the only references to the w's. To determine the slack variables, we need only to work with this matrix, which typically is much smaller than the network matrix. The resulting method is implemented within a general purpose reservoir simulator. Testing of the method in more than 700 cases has shown it to be much more robust than an earlier heuristic procedure.
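The switching logic described above can be seen in a toy single-well sketch (invented for this summary; not the simulator's actual equations). The base equation is a linear inflow relation, the constraint is a maximum rate, and a single slack variable s together with the weight w decides which equation is enforced.

    import numpy as np

    PI, p_res, p_wf, q_max = 2.0, 100.0, 60.0, 50.0   # toy inflow data

    def solve(w):
        """Unknowns (q, s).
        Base equation:       q - PI*(p_res - p_wf) + w*s       = 0
        Constraint equation: q - q_max             + (1 - w)*s = 0
        With w = 0 the base equation holds and s is the slack in the constraint;
        with w = 1 the constraint holds and s is the slack in the base equation."""
        A = np.array([[1.0, w], [1.0, 1.0 - w]])
        b = np.array([PI * (p_res - p_wf), q_max])
        return np.linalg.solve(A, b)

    w = 0.0
    for _ in range(5):                    # iterate until the choice of w is consistent
        q, s = solve(w)
        if w == 0.0 and s < 0.0:          # constraint violated -> make it active
            w = 1.0
        elif w == 1.0 and s < 0.0:        # natural rate below the limit -> deactivate
            w = 0.0
        else:
            break
    print(q, s, w)   # q is capped at 50.0; the unconstrained rate would be 80.0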
37

Yang, Liu, Hongbin Zhu, Haifeng Wang, Kai Kang, and Hua Qian. "Data censoring with network lifetime constraint in wireless sensor networks." Digital Signal Processing 92 (September 2019): 73–81. http://dx.doi.org/10.1016/j.dsp.2019.05.004.

38

Liu, Dong, Yan Ru, Qinpeng Li, Shibin Wang, and Jianwei Niu. "Semisupervised Community Preserving Network Embedding with Pairwise Constraints." Complexity 2020 (November 10, 2020): 1–14. http://dx.doi.org/10.1155/2020/7953758.

Abstract:
Network embedding aims to learn the low-dimensional representations of nodes in networks. It preserves the structure and internal attributes of the networks while representing nodes as low-dimensional dense real-valued vectors. These vectors are used as inputs of machine learning algorithms for network analysis tasks such as node clustering, classification, link prediction, and network visualization. The network embedding algorithms, which considered the community structure, impose a higher level of constraint on the similarity of nodes, and they make the learned node embedding results more discriminative. However, the existing network representation learning algorithms are mostly unsupervised models; the pairwise constraint information, which represents community membership, is not effectively utilized to obtain node embedding results that are more consistent with prior knowledge. This paper proposes a semisupervised modularized nonnegative matrix factorization model, SMNMF, while preserving the community structure for network embedding; the pairwise constraints (must-link and cannot-link) information are effectively fused with the adjacency matrix and node similarity matrix of the network so that the node representations learned by the model are more interpretable. Experimental results on eight real network datasets show that, comparing with the representative network embedding methods, the node representations learned after incorporating the pairwise constraints can obtain higher accuracy in node clustering task and the results of link prediction, and network visualization tasks indicate that the semisupervised model SMNMF is more discriminative than unsupervised ones.
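As generic background on how must-link/cannot-link pairs are commonly fused with a node-similarity matrix (the paper's SMNMF model embeds this in a modularized NMF objective; the matrix surgery and data below are only the intuition, invented for this sketch):

    import numpy as np

    def apply_pairwise_constraints(S, must_link, cannot_link):
        """Overwrite similarity entries with prior knowledge: must-link pairs
        become maximally similar, cannot-link pairs maximally dissimilar."""
        S = S.copy()
        for i, j in must_link:
            S[i, j] = S[j, i] = 1.0
        for i, j in cannot_link:
            S[i, j] = S[j, i] = 0.0
        return S

    S = np.array([[1.0, 0.4, 0.2],
                  [0.4, 1.0, 0.5],
                  [0.2, 0.5, 1.0]])
    print(apply_pairwise_constraints(S, must_link=[(0, 2)], cannot_link=[(1, 2)]))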
39

Davies, T. H. "Freedom and Constraint in Coupling Networks." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 220, no. 7 (July 1, 2006): 989–1010. http://dx.doi.org/10.1243/09544062c09105.

Abstract:
Kirchhoff's circulation law has previously been adapted for studies of under-constraint. Here Kirchhoff's cutset law is also adapted for studies of overconstraint. The two procedures are presented in tandem to emphasise their duality. The same tasks are then undertaken using dual virtual power methods. One is a new procedure for underconstraint that introduces the concepts of cutset motion and virtual action, dual with previously reported circuit action and virtual motion. Of the four methods described, the superior ones for each task are identified.
40

Xiong, Guangleng, and Tao Li. "Robust design based on constraint networks." IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 32, no. 5 (September 2002): 596–604. http://dx.doi.org/10.1109/tsmca.2002.804814.

41

Kruschke, John K. "State transitions in constraint satisfaction networks." Behavioral and Brain Sciences 12, no. 3 (September 1989): 407–8. http://dx.doi.org/10.1017/s0140525x00056910.

42

Bettini, Claudio, X. Sean Wang, and Sushil Jajodia. "Solving multi-granularity temporal constraint networks." Artificial Intelligence 140, no. 1-2 (September 2002): 107–52. http://dx.doi.org/10.1016/s0004-3702(02)00223-0.

43

Schwalb, Eddie, and Rina Dechter. "Processing disjunctions in temporal constraint networks." Artificial Intelligence 93, no. 1-2 (June 1997): 29–61. http://dx.doi.org/10.1016/s0004-3702(97)00009-x.

44

Marín, R., M. A. Cárdenas, M. Balsa, and J. L. Sanchez. "Obtaining solutions in fuzzy constraint networks." International Journal of Approximate Reasoning 16, no. 3-4 (April 1997): 261–88. http://dx.doi.org/10.1016/s0888-613x(96)00125-9.

45

Monfroglio, Angelo. "Backpropagation networks for logic constraint solving." Neurocomputing 6, no. 1 (February 1994): 67–98. http://dx.doi.org/10.1016/0925-2312(94)90035-3.

46

Monfroglio, Angelo. "Neural networks for finite constraint satisfaction." Neural Computing & Applications 3, no. 2 (June 1995): 78–100. http://dx.doi.org/10.1007/bf01421960.

47

Condotta, Jean-François, Souhila Kaci, and Yakoub Salhi. "Optimization in temporal qualitative constraint networks." Acta Informatica 53, no. 2 (April 16, 2015): 149–70. http://dx.doi.org/10.1007/s00236-015-0228-z.

48

Meisels, Amnon, Jihad Ell-sana, and Ehud Gudes. "Decomposing and Solving Timetabling Constraint Networks." Computational Intelligence 13, no. 4 (November 1997): 486–505. http://dx.doi.org/10.1111/0824-7935.00049.

49

Hamadi, Youssef. "Interleaved Backtracking in Distributed Constraint Networks." International Journal on Artificial Intelligence Tools 11, no. 02 (June 2002): 167–88. http://dx.doi.org/10.1142/s0218213002000836.

Abstract:
The adaptation of software technology to distributed environments is an important challenge today. In this work we combine parallel and distributed search; in this way we add the potential speed-up of a parallel exploration to the processing of distributed problems. This paper extends DIBT, a distributed search procedure operating in distributed constraint networks. The extension is threefold. First, the ordered hierarchies used during backtracking are extended to remove partial orders. Second, the procedure is updated to cope with delayed-information problems arising in heterogeneous settings (we thank M. Yokoo for raising this potential problem). Third, the search is extended to simultaneously explore independent parts of a distributed search tree. The first and third points were first presented in 1999. The third improvement introduces parallelism into distributed search, which leads to Interleaved Distributed Intelligent BackTracking (IDIBT). Our results show that 1) insoluble problems do not greatly degrade performance over DIBT and 2) super-linear speed-up can be achieved when the distribution of solutions is non-uniform.
50

Hassani Bijarbooneh, Farshid. "Constraint programming for wireless sensor networks." Constraints 20, no. 4 (October 2015): 508. http://dx.doi.org/10.1007/s10601-015-9228-4.
