To see other types of publications on this topic, follow the link: Learning from Constraints.

Journal articles on the topic "Learning from Constraints"

Consult the top 50 journal articles for your research on the topic "Learning from Constraints".

Next to every entry in the list you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and so on.

You can also download the full text of a publication as a .pdf file and read its abstract online, whenever these are available in the item's metadata.

Browse journal articles from a wide range of disciplines and put together an accurately formatted bibliography.

1

Cropper, Andrew, and Rolf Morel. "Learning programs by learning from failures." Machine Learning 110, no. 4 (February 19, 2021): 801–56. http://dx.doi.org/10.1007/s10994-020-05934-z.

Abstract:
We describe an inductive logic programming (ILP) approach called learning from failures. In this approach, an ILP system (the learner) decomposes the learning problem into three separate stages: generate, test, and constrain. In the generate stage, the learner generates a hypothesis (a logic program) that satisfies a set of hypothesis constraints (constraints on the syntactic form of hypotheses). In the test stage, the learner tests the hypothesis against training examples. A hypothesis fails when it does not entail all the positive examples or entails a negative example. If a hypothesis fails, then, in the constrain stage, the learner learns constraints from the failed hypothesis to prune the hypothesis space, i.e. to constrain subsequent hypothesis generation. For instance, if a hypothesis is too general (entails a negative example), the constraints prune generalisations of the hypothesis. If a hypothesis is too specific (does not entail all the positive examples), the constraints prune specialisations of the hypothesis. This loop repeats until either (i) the learner finds a hypothesis that entails all the positive and none of the negative examples, or (ii) there are no more hypotheses to test. We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog. Popper supports infinite problem domains, reasoning about lists and numbers, learning textually minimal programs, and learning recursive programs. Our experimental results on three domains (toy game problems, robot strategies, and list transformations) show that (i) constraints drastically improve learning performance, and (ii) Popper can outperform existing ILP systems, both in terms of predictive accuracies and learning times.
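The generate-test-constrain loop described above is easy to picture on a deliberately tiny toy problem: hypotheses are integer intervals, positive examples must fall inside the interval, negative examples must not, and each failure prunes either generalisations or specialisations. The sketch below is only a schematic illustration of that control flow, with a made-up hypothesis space; it is not the Popper system itself, which represents hypotheses as logic programs and encodes the pruning constraints in answer set programming.

```python
# Toy illustration of generate-test-constrain (not the real Popper).
# Hypotheses are closed integer intervals [lo, hi]; a hypothesis "entails"
# an example when the example lies inside the interval.
from itertools import product

POS = {3, 4, 6}          # positive examples the hypothesis must entail
NEG = {1, 9}             # negative examples it must not entail
SPACE = [(lo, hi) for lo, hi in product(range(11), repeat=2) if lo <= hi]

def entails(h, x):
    lo, hi = h
    return lo <= x <= hi

def generalises(g, h):   # g entails everything h entails
    return g[0] <= h[0] and h[1] <= g[1]

pruned = set()
for h in SPACE:                                        # generate
    if h in pruned:
        continue
    too_general = any(entails(h, x) for x in NEG)      # test
    too_specific = not all(entails(h, x) for x in POS)
    if not too_general and not too_specific:
        print("solution:", h)
        break
    if too_general:      # constrain: prune all generalisations of h
        pruned |= {g for g in SPACE if generalises(g, h)}
    if too_specific:     # constrain: prune all specialisations of h
        pruned |= {s for s in SPACE if generalises(h, s)}
```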
2

Chou, Glen, Dmitry Berenson, and Necmiye Ozay. "Learning constraints from demonstrations with grid and parametric representations." International Journal of Robotics Research 40, no. 10-11 (August 13, 2021): 1255–83. http://dx.doi.org/10.1177/02783649211035177.

Abstract:
We extend the learning from demonstration paradigm by providing a method for learning unknown constraints shared across tasks, using demonstrations of the tasks, their cost functions, and knowledge of the system dynamics and control constraints. Given safe demonstrations, our method uses hit-and-run sampling to obtain lower cost, and thus unsafe, trajectories. Both safe and unsafe trajectories are used to obtain a consistent representation of the unsafe set via solving an integer program. Our method generalizes across system dynamics and learns a guaranteed subset of the constraint. In addition, by leveraging a known parameterization of the constraint, we modify our method to learn parametric constraints in high dimensions. We also provide theoretical analysis on what subset of the constraint and safe set can be learnable from safe demonstrations. We demonstrate our method on linear and nonlinear system dynamics, show that it can be modified to work with suboptimal demonstrations, and that it can also be used to learn constraints in a feature space.
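The key inference behind the sampling step can be stated compactly: if a demonstration is optimal among constraint-satisfying trajectories, then any dynamically feasible trajectory with the same endpoints but strictly lower cost must violate the unknown constraint. In schematic notation of my own (not the paper's):

\[
\xi^{*} \in \arg\min_{\xi \in \Xi_{\mathrm{safe}}} c(\xi)
\quad\Longrightarrow\quad
\forall\, \xi' \in \Xi:\;\; c(\xi') < c(\xi^{*}) \;\Rightarrow\; \xi' \notin \Xi_{\mathrm{safe}},
\]

where \(\Xi\) is the set of trajectories consistent with the known dynamics, control constraints, and boundary conditions, and \(\Xi_{\mathrm{safe}}\) is its unknown constraint-satisfying subset. Hit-and-run sampling draws such lower-cost \(\xi'\), and the integer program then selects a representation of the unsafe set consistent with all safe and sampled unsafe trajectories.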
3

Okabe, Masayuki, and Seiji Yamada. "Learning Similarity Matrix from Constraints of Relational Neighbors." Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 4 (May 20, 2010): 402–7. http://dx.doi.org/10.20965/jaciii.2010.p0402.

Abstract:
This paper describes a method for learning a similarity matrix from pairwise constraints, intended for settings such as interactive clustering, where little user feedback can be expected. Because only a small number of pairwise constraints is available, the method exploits additional constraints induced by the affinity relationship between constrained data points and their neighbors. The similarity matrix is learned by solving an optimization problem formulated as a semidefinite program, in which the additional constraints play a complementary role. Experimental results on several clustering tasks confirm the effectiveness of the proposed method and suggest that it is a promising approach.
4

Mueller, Carl L. "Abstract Constraints for Safe and Robust Robot Learning from Demonstration." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 3, 2020): 13728–29. http://dx.doi.org/10.1609/aaai.v34i10.7136.

Abstract:
My thesis research incorporates high-level abstract behavioral requirements, called ‘conceptual constraints’, into the modeling processes of robot Learning from Demonstration (LfD) techniques. My most recent work introduces an LfD algorithm called Concept Constrained Learning from Demonstration. This algorithm encodes motion planning constraints as temporal Boolean operators that enforce high-level constraints over portions of the robot's motion plan during learned skill execution. This results in more easily trained, more robust, and safer learned skills. Future work will incorporate conceptual constraints into human-aware motion planning algorithms. Additionally, my research will investigate how these concept constrained algorithms and models are best incorporated into effective interfaces for end-users.
5

Kato, Tsuyoshi, Wataru Fujibuchi, and Kiyoshi Asai. "Learning Kernels from Distance Constraints." IPSJ Digital Courier 2 (2006): 441–51. http://dx.doi.org/10.2197/ipsjdc.2.441.

6

Farina, Francesco, Stefano Melacci, Andrea Garulli, and Antonio Giannitrapani. "Asynchronous Distributed Learning From Constraints." IEEE Transactions on Neural Networks and Learning Systems 31, no. 10 (October 2020): 4367–73. http://dx.doi.org/10.1109/tnnls.2019.2947740.

7

Hammer, Rubi, Tomer Hertz, Shaul Hochstein, and Daphna Weinshall. "Category learning from equivalence constraints." Cognitive Processing 10, no. 3 (December 3, 2008): 211–32. http://dx.doi.org/10.1007/s10339-008-0243-x.

8

Armesto, Leopoldo, João Moura, Vladimir Ivan, Mustafa Suphi Erden, Antonio Sala, and Sethu Vijayakumar. "Constraint-aware learning of policies by demonstration." International Journal of Robotics Research 37, no. 13-14 (July 26, 2018): 1673–89. http://dx.doi.org/10.1177/0278364918784354.

Abstract:
Many practical tasks in robotic systems, such as cleaning windows, writing, or grasping, are inherently constrained. Learning policies subject to constraints is a challenging problem. In this paper, we propose a method of constraint-aware learning that solves the policy learning problem using redundant robots that execute a policy that is acting in the null space of a constraint. In particular, we are interested in generalizing learned null-space policies across constraints that were not known during the training. We split the combined problem of learning constraints and policies into two: first estimating the constraint, and then estimating a null-space policy using the remaining degrees of freedom. For a linear parametrization, we provide a closed-form solution of the problem. We also define a metric for comparing the similarity of estimated constraints, which is useful to pre-process the trajectories recorded in the demonstrations. We have validated our method by learning a wiping task from human demonstration on flat surfaces and reproducing it on an unknown curved surface using a force- or torque-based controller to achieve tool alignment. We show that, despite the differences between the training and validation scenarios, we learn a policy that still provides the desired wiping motion.
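The decomposition being exploited here is the usual null-space one: an observed joint-space command is the sum of a term that resolves the task constraint and a policy component projected into the constraint's null space. In schematic notation of my own, not copied from the paper:

\[
\dot{q} \;=\; A^{\dagger}(q)\, b(q) \;+\; N(q)\, \pi(q),
\qquad
N(q) \;=\; I - A^{\dagger}(q) A(q),
\]

where \(A(q)\dot{q} = b(q)\) is the constraint, \(A^{\dagger}\) its pseudoinverse, and \(\pi\) the underlying policy. The method first estimates the constraint and then fits \(\pi\) from the remaining degrees of freedom, which is what allows the learned policy to generalise to constraints not seen during training.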
9

Hewing, Lukas, Kim P. Wabersich, Marcel Menner, and Melanie N. Zeilinger. "Learning-Based Model Predictive Control: Toward Safe Learning in Control." Annual Review of Control, Robotics, and Autonomous Systems 3, no. 1 (May 3, 2020): 269–96. http://dx.doi.org/10.1146/annurev-control-090419-075625.

Abstract:
Recent successes in the field of machine learning, as well as the availability of increased sensing and computational capabilities in modern control systems, have led to a growing interest in learning and data-driven control techniques. Model predictive control (MPC), as the prime methodology for constrained control, offers a significant opportunity to exploit the abundance of data in a reliable manner, particularly while taking safety constraints into account. This review aims at summarizing and categorizing previous research on learning-based MPC, i.e., the integration or combination of MPC with learning methods, for which we consider three main categories. Most of the research addresses learning for automatic improvement of the prediction model from recorded data. There is, however, also an increasing interest in techniques to infer the parameterization of the MPC controller, i.e., the cost and constraints, that lead to the best closed-loop performance. Finally, we discuss concepts that leverage MPC to augment learning-based controllers with constraint satisfaction properties.
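For orientation, the baseline problem that all three categories build on is the standard constrained MPC program solved at every sampling instant; this is a textbook statement rather than anything specific to the review:

\[
\min_{u_0,\dots,u_{N-1}} \;\; \sum_{k=0}^{N-1} \ell(x_k, u_k) + V_f(x_N)
\quad \text{s.t.} \quad
x_{k+1} = f(x_k, u_k),\;\; x_k \in \mathcal{X},\;\; u_k \in \mathcal{U},\;\; x_0 = x(t).
\]

The three research directions then correspond, roughly, to learning the prediction model \(f\) from data, inferring the cost \(\ell, V_f\) and constraint parameterization that give the best closed-loop behaviour, and wrapping a learned controller with an MPC layer that enforces \(\mathcal{X}\) and \(\mathcal{U}\).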
10

Wu, Xintao, and Daniel Barbará. "Learning missing values from summary constraints." ACM SIGKDD Explorations Newsletter 4, no. 1 (June 2002): 21–30. http://dx.doi.org/10.1145/568574.568579.

11

Ren, Hongyu, Russell Stewart, Jiaming Song, Volodymyr Kuleshov, and Stefano Ermon. "Learning with Weak Supervision from Physics and Data-Driven Constraints." AI Magazine 39, no. 1 (March 27, 2018): 27–38. http://dx.doi.org/10.1609/aimag.v39i1.2776.

Abstract:
In many applications of machine learning, labeled data is scarce and obtaining additional labels is expensive. We introduce a new approach to supervising learning algorithms without labels by enforcing a small number of domain-specific constraints over the algorithms’ outputs. The constraints can be provided explicitly based on prior knowledge — e.g. we may require that objects detected in videos satisfy the laws of physics — or implicitly extracted from data using a novel framework inspired by adversarial training. We demonstrate the effectiveness of constraint-based learning on a variety of tasks — including tracking, object detection, and human pose estimation — and we find that algorithms supervised with constraints achieve high accuracies with only a small amount of labels, or with no labels at all in some cases.
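As a concrete flavour of label-free supervision with a physics constraint, the sketch below trains a height regressor so that its per-frame outputs follow free-fall kinematics (constant second difference of \(-g\,\Delta t^2\)). The network, the random placeholder frames, and all names are invented for illustration; this is a simplified stand-in for the kind of constraint loss the article describes, not the authors' code.

```python
# Supervise a regressor with a physics constraint instead of labels.
# A sequence of T frames is assumed to show a falling object; the network
# maps each frame to a scalar height. Data and shapes are placeholders.
import torch
import torch.nn as nn

G, DT, T = 9.81, 0.1, 20                   # gravity, frame interval, #frames
frames = torch.randn(T, 3, 32, 32)         # placeholder video frames

net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64),
                    nn.ReLU(), nn.Linear(64, 1))

def physics_loss(heights):
    # Free fall: y[t+1] - 2*y[t] + y[t-1] should equal -g * dt^2 for every t.
    second_diff = heights[2:] - 2 * heights[1:-1] + heights[:-2]
    return ((second_diff + G * DT ** 2) ** 2).mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):
    heights = net(frames).squeeze(-1)      # (T,) predicted heights
    loss = physics_loss(heights)           # constraint violation is the loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```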
12

Xu, Haoran, Xianyuan Zhan, and Xiangyu Zhu. "Constraints Penalized Q-learning for Safe Offline Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 8753–60. http://dx.doi.org/10.1609/aaai.v36i8.20855.

Abstract:
We study the problem of safe offline reinforcement learning (RL), in which the goal is to learn a policy that maximizes long-term reward while satisfying safety constraints, given only offline data and no further interaction with the environment. This setting is appealing for real-world RL applications in which data collection is costly or dangerous. Enforcing constraint satisfaction is non-trivial, especially in offline settings, as there can be a large discrepancy between the policy distribution and the data distribution, causing errors in estimating the value of safety constraints. We show that naïve approaches that combine techniques from safe RL and offline RL can only learn sub-optimal solutions. We thus develop a simple yet effective algorithm, Constraints Penalized Q-Learning (CPQ), to solve the problem. Our method admits the use of data generated by mixed behavior policies. We present a theoretical analysis and demonstrate empirically that our approach can learn robustly across a variety of benchmark control tasks, outperforming several baselines.
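The underlying problem is a constrained MDP that must be solved from a fixed dataset. In standard notation (not the paper's):

\[
\max_{\pi}\; \mathbb{E}_{\pi}\!\Big[\sum_{t} \gamma^{t}\, r(s_t, a_t)\Big]
\quad \text{s.t.} \quad
\mathbb{E}_{\pi}\!\Big[\sum_{t} \gamma^{t}\, c(s_t, a_t)\Big] \le l,
\]

with the additional offline restriction that the expectations can only be estimated from data generated by other (possibly mixed) behavior policies, which is exactly where the distribution-shift errors discussed in the abstract arise.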
13

Onishi, K. "Learning phonotactic constraints from brief auditory experience." Cognition 83, no. 1 (February 2002): B13–B23. http://dx.doi.org/10.1016/s0010-0277(01)00165-2.

14

Suraweera, Pramuditha, Geoffrey I. Webb, Ian Evans, and Mark Wallace. "Learning crew scheduling constraints from historical schedules." Transportation Research Part C: Emerging Technologies 26 (January 2013): 214–32. http://dx.doi.org/10.1016/j.trc.2012.08.002.

15

Moon, In-Ho, and Kevin Harer. "Learning from Constraints for Formal Property Checking." Journal of Electronic Testing 26, no. 2 (February 5, 2010): 243–59. http://dx.doi.org/10.1007/s10836-010-5143-1.

16

Ciravegna, Gabriele, Francesco Giannini, Stefano Melacci, Marco Maggini, and Marco Gori. "A Constraint-Based Approach to Learning and Explanation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3658–65. http://dx.doi.org/10.1609/aaai.v34i04.5774.

Abstract:
In the last few years we have seen remarkable progress arising from the idea of expressing domain knowledge through the mathematical notion of constraint. However, this progress has mostly concerned finding solutions consistent with a given set of constraints, whereas learning “new” constraints that express new knowledge is still an open challenge. In this paper we propose a novel approach to learning constraints that is based on information-theoretic principles. The basic idea consists in maximizing the transfer of information between task functions and a set of learnable constraints, implemented using neural networks subject to L1 regularization. This process leads to the unsupervised development of new constraints that are fulfilled in different sub-portions of the input domain. In addition, we define a simple procedure that can explain the behaviour of the newly devised constraints in terms of First-Order Logic formulas, thus extracting novel knowledge on the relationships between the original tasks. An experimental evaluation is provided to support the proposed approach, in which we also explore the regularization effects introduced by the proposed Information-Based Learning of Constraint (IBLC) algorithm.
17

Guo, Yufan, Roi Reichart, and Anna Korhonen. "Unsupervised Declarative Knowledge Induction for Constraint-Based Learning of Information Structure in Scientific Documents." Transactions of the Association for Computational Linguistics 3 (December 2015): 131–43. http://dx.doi.org/10.1162/tacl_a_00128.

Abstract:
Inferring the information structure of scientific documents is useful for many NLP applications. Existing approaches to this task require substantial human effort. We propose a framework for constraint learning that reduces human involvement considerably. Our model uses topic models to identify latent topics and their key linguistic features in input documents, induces constraints from this information and maps sentences to their dominant information structure categories through a constrained unsupervised model. When the induced constraints are combined with a fully unsupervised model, the resulting model challenges existing lightly supervised feature-based models as well as unsupervised models that use manually constructed declarative knowledge. Our results demonstrate that useful declarative knowledge can be learned from data with very limited human involvement.
18

Alderete, John, Paul Tupper, and Stefan A. Frisch. "Phonological constraint induction in a connectionist network: learning OCP-Place constraints from data." Language Sciences 37 (May 2013): 52–69. http://dx.doi.org/10.1016/j.langsci.2012.10.002.

19

Ahmed, Kareem, Tao Li, Thy Ton, Quan Guo, Kai-Wei Chang, Parisa Kordjamshidi, Vivek Srikumar, Guy Van den Broeck, and Sameer Singh. "PYLON: A PyTorch Framework for Learning with Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 13152–54. http://dx.doi.org/10.1609/aaai.v36i11.21711.

Abstract:
Deep learning excels at learning task information from large amounts of data, but struggles with learning from declarative high-level knowledge that can be more succinctly expressed directly. In this work, we introduce PYLON, a neuro-symbolic training framework that builds on PyTorch to augment procedurally trained models with declaratively specified knowledge. PYLON lets users programmatically specify constraints as Python functions and compiles them into a differentiable loss, thus training predictive models that fit the data whilst satisfying the specified constraints. PYLON includes both exact as well as approximate compilers to efficiently compute the loss, employing fuzzy logic, sampling methods, and circuits, ensuring scalability even to complex models and constraints. Crucially, a guiding principle in designing PYLON is the ease with which any existing deep learning codebase can be extended to learn from constraints in a few lines of code: a function that expresses the constraint, and a single line to compile it into a loss. Our demo comprises models in NLP, computer vision, logical games, and knowledge graphs that can be interactively trained using constraints as supervision.
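The pattern the abstract describes, writing a constraint as an ordinary Python function over model outputs and turning it into an extra differentiable loss term, can be sketched generically as follows. This is a hand-rolled illustration in plain PyTorch with invented names and a toy mutual-exclusion constraint; it deliberately does not use PYLON's actual API.

```python
# Generic "constraint as a loss" pattern (illustrative only, not the PYLON API).
# Constraint: no example should get high probability on classes 0 and 1 at once.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mutual_exclusion(probs):
    # Soft violation score: product of the two probabilities, averaged.
    return (probs[:, 0] * probs[:, 1]).mean()

model = nn.Linear(16, 4)                     # toy 4-class classifier
x_unlab = torch.randn(32, 16)                # unlabeled batch
x_lab, y = torch.randn(8, 16), torch.randint(0, 4, (8,))

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(100):
    probs = F.softmax(model(x_unlab), dim=-1)
    loss = F.cross_entropy(model(x_lab), y)  # small supervised term
    loss = loss + mutual_exclusion(probs)    # declarative constraint term
    opt.zero_grad()
    loss.backward()
    opt.step()
```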
20

Gnecco, Giorgio, Marco Gori, Stefano Melacci, and Marcello Sanguineti. "Foundations of Support Constraint Machines." Neural Computation 27, no. 2 (February 2015): 388–480. http://dx.doi.org/10.1162/neco_a_00686.

Abstract:
The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered around the concept of constraint, representing the interactions with the environment, and the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the agents interact with a richer environment, where abstract granules of knowledge, compactly described by different linguistic formalisms, can be translated into the unified notion of constraint for defining the hypothesis set. Constrained variational calculus is exploited to derive general representation theorems that provide a description of the optimal body of the agent (i.e., the functional structure of the optimal solution to the learning problem), which is the basis for devising new learning algorithms. We show that regardless of the kind of constraints, the optimal body of the agent is a support constraint machine (SCM) based on representer theorems that extend classical results for kernel machines and provide new representations. In a sense, the expressiveness of constraints yields a semantic-based regularization theory, which strongly restricts the hypothesis set of classical regularization. Some guidelines to unify continuous and discrete computational mechanisms are given so as to accommodate in the same framework various kinds of stimuli, for example, supervised examples and logic predicates. The proposed view of learning from constraints incorporates classical learning from examples and extends naturally to the case in which the examples are subsets of the input space, which is related to learning propositional logic clauses.
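In simplified soft-constraint form, the learning-from-constraints setting formalized here can be summarized as minimizing a regularized functional in which every granule of knowledge contributes a penalty on constraint violation over the input domain. The following is a schematic rendering in my notation, not the paper's exact functional:

\[
\min_{f_1,\dots,f_n}\;\; \sum_{k=1}^{n} \| f_k \|_{\mathcal{H}}^{2}
\;+\; \sum_{j} \lambda_j \int_{\mathcal{X}} \phi_j\big(x, f_1(x),\dots,f_n(x)\big)\, p(x)\, dx,
\]

where each \(\phi_j \ge 0\) measures how strongly the \(j\)-th constraint (a supervised example, a logic predicate, and so on) is violated at \(x\). The representer theorems discussed in the abstract characterize the minimizers of such problems as support constraint machines.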
21

Ji, Chuanyi, Robert R. Snapp, and Demetri Psaltis. "Generalizing Smoothness Constraints from Discrete Samples." Neural Computation 2, no. 2 (June 1990): 188–97. http://dx.doi.org/10.1162/neco.1990.2.2.188.

Abstract:
We study how certain smoothness constraints, for example, piecewise continuity, can be generalized from a discrete set of analog-valued data by modifying the error backpropagation learning algorithm. Numerical simulations demonstrate that by imposing two heuristic objectives — (1) reducing the number of hidden units, and (2) minimizing the magnitudes of the weights in the network — during the learning process, one obtains a network with a response function that smoothly interpolates between the training data.
22

Howard, Matthew, Stefan Klanke, Michael Gienger, Christian Goerick, and Sethu Vijayakumar. "Behaviour Generation in Humanoids by Learning Potential-Based Policies from Constrained Motion." Applied Bionics and Biomechanics 5, no. 4 (2008): 195–211. http://dx.doi.org/10.1155/2008/316371.

Abstract:
Movement generation that is consistent with observed or demonstrated behaviour is an efficient way to seed movement planning in complex, high-dimensional movement systems like humanoid robots. We present a method for learning potential-based policies from constrained motion data. In contrast to previous approaches to direct policy learning, our method can combine observations from a variety of contexts where different constraints are in force, to learn the underlying unconstrained policy in form of its potential function. This allows us to generalise and predict behaviour where novel constraints apply. We demonstrate our approach on systems of varying complexity, including kinematic data from the ASIMO humanoid robot with 22 degrees of freedom.
23

Maggini, Marco, Stefano Melacci, and Lorenzo Sarti. "Learning from pairwise constraints by Similarity Neural Networks." Neural Networks 26 (February 2012): 141–58. http://dx.doi.org/10.1016/j.neunet.2011.10.009.

24

Hüllermeier, Eyke. "Flexible constraints for regularization in learning from data." International Journal of Intelligent Systems 19, no. 6 (April 23, 2004): 525–41. http://dx.doi.org/10.1002/int.20010.

25

Yang, Qisong, Thiago D. Simão, Simon H. Tindemans, and Matthijs T. J. Spaan. "WCSAC: Worst-Case Soft Actor Critic for Safety-Constrained Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 10639–46. http://dx.doi.org/10.1609/aaai.v35i12.17272.

Abstract:
Safe exploration is regarded as a key priority area for reinforcement learning research. With separate reward and safety signals, it is natural to cast it as constrained reinforcement learning, where expected long-term costs of policies are constrained. However, it can be hazardous to set constraints on the expected safety signal without considering the tail of the distribution. For instance, in safety-critical domains, worst-case analysis is required to avoid disastrous results. We present a novel reinforcement learning algorithm called Worst-Case Soft Actor Critic, which extends the Soft Actor Critic algorithm with a safety critic to achieve risk control. More specifically, a certain level of conditional Value-at-Risk from the distribution is regarded as a safety measure to judge the constraint satisfaction, which guides the change of adaptive safety weights to achieve a trade-off between reward and safety. As a result, we can optimize policies under the premise that their worst-case performance satisfies the constraints. The empirical analysis shows that our algorithm attains better risk control compared to expectation-based methods.
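The risk measure used to tighten the constraint is the conditional value at risk of the accumulated safety cost. In standard notation (not taken from the paper), replacing the usual expected-cost constraint with a CVaR constraint gives:

\[
\operatorname{CVaR}_{\alpha}(C) \;=\; \mathbb{E}\big[\, C \mid C \ge \operatorname{VaR}_{\alpha}(C) \,\big],
\qquad
\max_{\pi}\; \mathbb{E}_{\pi}\!\Big[\sum_{t} \gamma^{t} r_t\Big]
\;\;\text{s.t.}\;\; \operatorname{CVaR}_{\alpha}\big(C^{\pi}\big) \le d,
\]

so that the constraint binds on the upper tail of the cost distribution \(C^{\pi}\) rather than only on its mean, which is what the adaptive safety weights in the algorithm enforce.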
26

Smith, Jennifer L. "From experiment results to a constraint hierarchy with the ‘Rank Centrality’ algorithm." Proceedings of the Linguistic Society of America 5, no. 1 (March 23, 2020): 144. http://dx.doi.org/10.3765/plsa.v51.4694.

Abstract:
Rank Centrality (RC; Negahban, Oh, & Shah 2017) is a rank-aggregation algorithm that computes a total ranking of elements from noisy pairwise ranking information. I test RC as an alternative to incremental error-driven learning algorithms such as GLA-MaxEnt (Boersma & Hayes 2001; Jäger 2007) for modeling a constraint hierarchy on the basis of two-alternative forced-choice experiment results. For the case study examined here, RC agrees well with GLA-MaxEnt on the ordering of the constraints, but differs somewhat on the distance between constraints; in particular, RC assigns more extreme (low) positions to constraints at the bottom of the hierarchy than GLA-MaxEnt does. Overall, these initial results are promising, and RC merits further investigation as a constraint-ranking method in experimental linguistics.
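For orientation, Rank Centrality scores items as the stationary distribution of a Markov chain whose transitions move towards the winners of pairwise comparisons. The sketch below follows that general recipe on a dense matrix of pairwise win counts; function and variable names are mine, and details such as regularization and sparse comparison graphs are simplified, so treat it as an illustration rather than a faithful reimplementation of Negahban, Oh, and Shah (2017).

```python
# Simplified Rank Centrality: scores are the stationary distribution of a
# chain that moves from item i to item j in proportion to how often j beat i.
# beats[i, j] = number of comparisons in which item i beat item j.
import numpy as np

def rank_centrality(beats, d_max=None):
    totals = beats + beats.T                     # comparisons per pair
    frac = np.divide(beats.T, totals,            # frac[i, j] = P(j beats i)
                     out=np.zeros_like(totals, dtype=float),
                     where=totals > 0)
    if d_max is None:
        d_max = int((totals > 0).sum(axis=1).max())  # max comparison degree
    P = frac / d_max                             # off-diagonal transitions
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))     # self-loops keep rows stochastic
    evals, evecs = np.linalg.eig(P.T)            # stationary distribution of P
    pi = np.abs(np.real(evecs[:, np.argmax(np.real(evals))]))
    return pi / pi.sum()

# Three constraints: 0 mostly beats 1 and 2, and 1 mostly beats 2.
beats = np.array([[0., 8., 9.],
                  [2., 0., 7.],
                  [1., 3., 0.]])
print(rank_centrality(beats))                    # item 0 should score highest
```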
27

Smith, Jennifer L. "From experiment results to a constraint hierarchy with the ‘Rank Centrality’ algorithm." Proceedings of the Linguistic Society of America 5, no. 1 (March 23, 2020): 144. http://dx.doi.org/10.3765/plsa.v5i1.4694.

Abstract:
Rank Centrality (RC; Negahban, Oh, & Shah 2017) is a rank-aggregation algorithm that computes a total ranking of elements from noisy pairwise ranking information. I test RC as an alternative to incremental error-driven learning algorithms such as GLA-MaxEnt (Boersma & Hayes 2001; Jäger 2007) for modeling a constraint hierarchy on the basis of two-alternative forced-choice experiment results. For the case study examined here, RC agrees well with GLA-MaxEnt on the ordering of the constraints, but differs somewhat on the distance between constraints; in particular, RC assigns more extreme (low) positions to constraints at the bottom of the hierarchy than GLA-MaxEnt does. Overall, these initial results are promising, and RC merits further investigation as a constraint-ranking method in experimental linguistics.
28

Diallo, Aïssatou, and Johannes Fürnkranz. "Learning Ordinal Embedding from Sets." Entropy 23, no. 8 (July 27, 2021): 964. http://dx.doi.org/10.3390/e23080964.

Abstract:
Ordinal embedding is the task of computing a meaningful multidimensional representation of objects, for which only qualitative constraints on their distance functions are known. In particular, we consider comparisons of the form “Which object from the pair (j,k) is more similar to object i?”. In this paper, we generalize this framework to the case where the ordinal constraints are not given at the level of individual points, but at the level of sets, and propose a distributional triplet embedding approach in a scalable learning framework. We show that the query complexity of our approach is on par with the single-item approach. Without having access to features of the items to be embedded, we show the applicability of our model on toy datasets for the task of reconstruction and demonstrate the validity of the obtained embeddings in experiments on synthetic and real-world datasets.
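The single-item setting that this work generalizes to sets is usually posed as finding vectors that respect triplet comparisons, for instance with a margin hinge loss (a standard formulation, not the paper's notation):

\[
\text{``}j\text{ is more similar to }i\text{ than }k\text{''}
\;\;\Longrightarrow\;\;
\ell_{ijk} \;=\; \max\!\big(0,\; \|x_i - x_j\|^{2} - \|x_i - x_k\|^{2} + m\big),
\]

minimized over the embedding vectors for all observed triplets with margin \(m > 0\). The paper replaces the per-item vectors with distributional embeddings of sets while keeping the query complexity comparable to the single-item case.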
29

Wah, B. W. "Population-based learning: a method for learning from examples under resource constraints." IEEE Transactions on Knowledge and Data Engineering 4, no. 5 (1992): 454–74. http://dx.doi.org/10.1109/69.166988.

30

Neumann, Klaus, Matthias Rolf, and Jochen Jakob Steil. "Reliable Integration of Continuous Constraints into Extreme Learning Machines." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 21, supp02 (October 31, 2013): 35–50. http://dx.doi.org/10.1142/s021848851340014x.

Abstract:
The application of machine learning methods in the engineering of intelligent technical systems often requires the integration of continuous constraints like positivity, monotonicity, or bounded curvature in the learned function to guarantee a reliable performance. We show that the extreme learning machine is particularly well suited for this task. Constraints involving arbitrary derivatives of the learned function are effectively implemented through quadratic optimization because the learned function is linear in its parameters, and derivatives can be derived analytically. We further provide a constructive approach to verify that discretely sampled constraints are generalized to continuous regions and show how local violations of the constraint can be rectified by iterative re-learning. We demonstrate the approach on a practical and challenging control problem from robotics, illustrating also how the proposed method enables learning from few data samples if additional prior knowledge about the problem is available.
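The reason the extreme learning machine fits this task is that its output is linear in the trainable output weights, so derivative constraints sampled at discrete points are linear in those weights and training becomes a quadratic program. A schematic version in my notation:

\[
\min_{\beta}\; \| H\beta - y \|^{2} + \lambda \|\beta\|^{2}
\quad \text{s.t.} \quad
\frac{\partial \hat f}{\partial x_d}(x_s) \;=\; \frac{\partial h(x_s)}{\partial x_d}^{\!\top} \beta \;\ge\; 0,
\qquad s = 1,\dots,S,
\]

where \(h(\cdot)\) is the fixed random hidden-layer feature map, \(H\) its evaluation on the training inputs, and the inequalities encode, for example, monotonicity in input dimension \(d\) at sampled points \(x_s\). The paper's additional contribution is verifying that such discretely sampled constraints generalize to continuous regions and re-learning where local violations remain.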
31

Warker, Jill A., Gary S. Dell, Christine A. Whalen, and Samantha Gereg. "Limits on learning phonotactic constraints from recent production experience." Journal of Experimental Psychology: Learning, Memory, and Cognition 34, no. 5 (2008): 1289–95. http://dx.doi.org/10.1037/a0013033.

32

O'Toole, Alice J. "Structure from Stereo by Associative Learning of the Constraints." Perception 18, no. 6 (December 1989): 767–82. http://dx.doi.org/10.1068/p180767.

Abstract:
A computational model of structure from stereo that develops smoothness constraints naturally by associative learning of a large number of example mappings from disparity data to surface depth data is proposed. Banks of disparity-selective graded response units at all spatial locations in the visual field were the input data. These cells responded to matches of luminance change at convergent, divergent, or zero offsets in the left and right ‘retina’ samples. Surfaces were created by means of a pseudo-Markov process. From these surfaces, shaded marked and unmarked surfaces were created, along with random-dot versions of the same surfaces. Learning of these example shaded and shaded marked surfaces allowed the system to solve stereo mappings both for the surfaces it had learned and for surfaces it had not learned but which had been created by the same pseudo-Markov process. Further, the model was able to solve some random-dot versions of the surfaces when the surfaces had been learned as shaded marked surfaces.
33

Gao, Shan, Chen Zu, and Daoqiang Zhang. "Learning mid-perpendicular hyperplane similarity from cannot-link constraints." Neurocomputing 113 (August 2013): 195–203. http://dx.doi.org/10.1016/j.neucom.2013.01.002.

34

Egilmez, Hilmi E., Eduardo Pavez, and Antonio Ortega. "Graph Learning From Data Under Laplacian and Structural Constraints." IEEE Journal of Selected Topics in Signal Processing 11, no. 6 (September 2017): 825–41. http://dx.doi.org/10.1109/jstsp.2017.2726975.

35

Puchkov, N. P. "Digital Didactics under Distance Learning Constraints." Voprosy sovremennoj nauki i praktiki. Universitet imeni V.I. Vernadskogo, no. 4(82) (2021): 154–64. http://dx.doi.org/10.17277/voprosy.2021.04.pp.154-164.

Abstract:
The article considers methodological approaches to overcoming the problems of digitalization of education, using the academic disciplines of mathematics and computer science as examples. It is shown that specially designed complex mathematical tasks provide a harmonious combination of the analytical reasoning inherent in classical mathematics with continually advancing methods of numerical analysis and computer modeling. The article substantiates filling educational tasks with elements of real production situations drawn from students' future professions, or from their training, following the principles of a contextual approach. The essence of the ongoing digitalization of education, and its effective use when contact work with students is limited, is considered from a constructive point of view.
36

Hayes, Bruce, and Colin Wilson. "A Maximum Entropy Model of Phonotactics and Phonotactic Learning." Linguistic Inquiry 39, no. 3 (July 2008): 379–440. http://dx.doi.org/10.1162/ling.2008.39.3.379.

Abstract:
The study of phonotactics is a central topic in phonology. We propose a theory of phonotactic grammars and a learning algorithm that constructs such grammars from positive evidence. Our grammars consist of constraints that are assigned numerical weights according to the principle of maximum entropy. The grammars assess possible words on the basis of the weighted sum of their constraint violations. The learning algorithm yields grammars that can capture both categorical and gradient phonotactic patterns. The algorithm is not provided with constraints in advance, but uses its own resources to form constraints and weight them. A baseline model, in which Universal Grammar is reduced to a feature set and an SPE-style constraint format, suffices to learn many phonotactic phenomena. In order for the model to learn nonlocal phenomena such as stress and vowel harmony, it must be augmented with autosegmental tiers and metrical grids. Our results thus offer novel, learning-theoretic support for such representations. We apply the model in a variety of learning simulations, showing that the learned grammars capture the distributional generalizations of these languages and accurately predict the findings of a phonotactic experiment.
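The core quantity of the model is easy to state: each candidate word receives a harmony score equal to the weighted sum of its constraint violations, and the maximum entropy grammar converts that score into a probability:

\[
h(x) \;=\; \sum_{i} w_i\, C_i(x),
\qquad
P(x) \;=\; \frac{\exp\big(-h(x)\big)}{\sum_{x' \in \Omega} \exp\big(-h(x')\big)},
\]

where \(C_i(x)\) counts the violations of constraint \(i\) incurred by \(x\), \(w_i \ge 0\) is that constraint's weight, and \(\Omega\) is the space of possible words. Learning selects the weights (and, in this model, the constraints themselves) so as to maximize the probability of the attested lexicon.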
37

Qin, Xingli, Lingli Zhao, Jie Yang, Pingxiang Li, Bingfang Wu, Kaimin Sun, and Yubin Xu. "Active Pairwise Constraint Learning in Constrained Time-Series Clustering for Crop Mapping from Airborne SAR Imagery." Remote Sensing 14, no. 23 (November 30, 2022): 6073. http://dx.doi.org/10.3390/rs14236073.

Abstract:
Airborne SAR is an important data source for crop mapping and has important applications in agricultural monitoring and food safety. However, the incidence-angle effects of airborne SAR imagery decrease the crop mapping accuracy. An active pairwise constraint learning method (APCL) is proposed for constrained time-series clustering to address this problem. APCL constructs two types of instance-level pairwise constraints based on the incidence angles of the samples and a non-iterative batch-mode active selection scheme: the must-link constraint, which links two objects of the same crop type with large differences in backscattering coefficients and the shapes of time-series curves; the cannot-link constraint, which links two objects of different crop types with only small differences in the values of backscattering coefficients. Experiments were conducted using 12 time-series images with incidence angles ranging from 21.2° to 64.3°, and the experimental results prove the effectiveness of APCL in improving crop mapping accuracy. More specifically, when using dynamic time warping (DTW) as the similarity measure, the kappa coefficient obtained by APCL was increased by 9.5%, 8.7%, and 5.2% compared to the results of the three other methods. It provides a new solution for reducing the incidence-angle effects in the crop mapping of airborne SAR time-series images.
38

Burness, Phillip, and Kevin McMullin. "Post-nasal voicing in Japanese classifiers as exceptional triggering: implications for Indexed Constraint Theory." Canadian Journal of Linguistics/Revue canadienne de linguistique 65, no. 4 (November 9, 2020): 471–95. http://dx.doi.org/10.1017/cnj.2020.26.

Abstract:
Indexed constraints are often used in constraint-based phonological frameworks to account for exceptions to generalizations. A point of contention in the literature on constraint indexation revolves around indexed markedness constraints. While some researchers argue that only faithfulness constraints should be indexed, others argue that markedness constraints should be eligible for indexation as well. This article presents data from Japanese for which a complete synchronic analysis requires indexed markedness constraints but argues that such constraints are only necessary in cases where a phonological repair applies across a morpheme boundary. We then demonstrate that algorithms for learning grammars with indexed constraints can be augmented with a bias towards faithfulness indexation and discuss the advantages of incorporating such a bias, as well as its implications for the debate over the permissibility of indexed markedness constraints.
39

O'Sullivan, Barry. "Automated Modelling and Solving in Constraint Programming." Proceedings of the AAAI Conference on Artificial Intelligence 24, no. 1 (July 5, 2010): 1493–97. http://dx.doi.org/10.1609/aaai.v24i1.7530.

Abstract:
Constraint programming can be divided very crudely into modeling and solving. Modeling defines the problem, in terms of variables that can take on different values, subject to restrictions (constraints) on which combinations of variables are allowed. Solving finds values for all the variables that simultaneously satisfy all the constraints. However, the impact of constraint programming has been constrained by a lack of "user-friendliness". Constraint programming has a major "declarative" aspect, in that a problem model can be handed off for solution to a variety of standard solving methods. These methods are embedded in algorithms, libraries, or specialized constraint programming languages. To fully exploit this declarative opportunity, however, we must provide more assistance and automation in the modeling process, as well as in the design of application-specific problem solvers. Automated modelling and solving in constraint programming presents a major challenge for the artificial intelligence community. Artificial intelligence, and in particular machine learning, is a natural field in which to explore opportunities for moving more of the burden of constraint programming from the user to the machine. This paper presents technical challenges in the areas of constraint model acquisition, formulation and reformulation, synthesis of filtering algorithms for global constraints, and automated solving. We also present the metrics by which success and progress can be measured.
40

Ma, Yecheng Jason, Andrew Shen, Osbert Bastani, and Dinesh Jayaraman. "Conservative and Adaptive Penalty for Model-Based Safe Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (June 28, 2022): 5404–12. http://dx.doi.org/10.1609/aaai.v36i5.20478.

Abstract:
Reinforcement Learning (RL) agents in the real world must satisfy safety constraints in addition to maximizing a reward objective. Model-based RL algorithms hold promise for reducing unsafe real-world actions: they may synthesize policies that obey all constraints using simulated samples from a learned model. However, imperfect models can result in real-world constraint violations even for actions that are predicted to satisfy all constraints. We propose Conservative and Adaptive Penalty (CAP), a model-based safe RL framework that accounts for potential modeling errors by capturing model uncertainty and adaptively exploiting it to balance the reward and the cost objectives. First, CAP inflates predicted costs using an uncertainty-based penalty. Theoretically, we show that policies that satisfy this conservative cost constraint are guaranteed to also be feasible in the true environment. We further show that this guarantees the safety of all intermediate solutions during RL training. Further, CAP adaptively tunes this penalty during training using true cost feedback from the environment. We evaluate this conservative and adaptive penalty-based approach for model-based safe RL extensively on state and image-based environments. Our results demonstrate substantial gains in sample-efficiency while incurring fewer violations than prior safe RL algorithms. Code is available at: https://github.com/Redrew/CAP
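The central device is a pessimistic cost: predicted costs are inflated by an uncertainty-dependent penalty before being used in the constrained objective, so feasibility under the inflated costs implies feasibility under the true ones. Schematically, in notation of my own:

\[
\tilde c(s,a) \;=\; \hat c(s,a) + \kappa\, u(s,a),
\qquad
\max_{\pi}\; \mathbb{E}_{\hat f,\pi}\!\Big[\sum_{t} r_t\Big]
\;\;\text{s.t.}\;\;
\mathbb{E}_{\hat f,\pi}\!\Big[\sum_{t} \tilde c(s_t,a_t)\Big] \le d,
\]

where \(\hat f\) and \(\hat c\) are the learned dynamics and cost models, \(u\) an uncertainty estimate, and \(\kappa\) the penalty weight that CAP adapts during training using true cost feedback from the environment.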
41

Bai, Wenjun, Changqin Quan, and Zhi-Wei Luo. "Improving Generative and Discriminative Modelling Performance by Implementing Learning Constraints in Encapsulated Variational Autoencoders." Applied Sciences 9, no. 12 (June 21, 2019): 2551. http://dx.doi.org/10.3390/app9122551.

Abstract:
Learning latent representations of observed data that can favour both discriminative and generative tasks remains a challenging task in artificial-intelligence (AI) research. Previous attempts that ranged from the convex binding of discriminative and generative models to the semisupervised learning paradigm could hardly yield optimal performance on both generative and discriminative tasks. To this end, in this research, we harness the power of two neuroscience-inspired learning constraints, that is, dependence minimisation and regularisation constraints, to improve generative and discriminative modelling performance of a deep generative model. To demonstrate the usage of these learning constraints, we introduce a novel deep generative model: encapsulated variational autoencoders (EVAEs) to stack two different variational autoencoders together with their learning algorithm. Using the MNIST digits dataset as a demonstration, the generative modelling performance of EVAEs was improved with the imposed dependence-minimisation constraint, encouraging our derived deep generative model to produce various patterns of MNIST-like digits. Using CIFAR-10(4K) as an example, a semisupervised EVAE with an imposed regularisation learning constraint was able to achieve competitive discriminative performance on the classification benchmark, even in the face of state-of-the-art semisupervised learning approaches.
42

Dodaro, Carmine, Thomas Eiter, Paul Ogris, and Konstantin Schekotihin. "Managing caching strategies for stream reasoning with reinforcement learning." Theory and Practice of Logic Programming 20, no. 5 (September 2020): 625–40. http://dx.doi.org/10.1017/s147106842000037x.

Abstract:
Efficient decision-making over continuously changing data is essential for many application domains such as cyber-physical systems, industry digitalization, etc. Modern stream reasoning frameworks allow one to model and solve various real-world problems using incremental and continuous evaluation of programs as new data arrives in the stream. Applied techniques use, e.g., Datalog-like materialization or truth maintenance algorithms to avoid costly re-computations, thus ensuring low latency and high throughput of a stream reasoner. However, the expressiveness of existing approaches is quite limited and, e.g., they cannot be used to encode problems with constraints, which often appear in practice. In this paper, we suggest a novel approach that uses the Conflict-Driven Constraint Learning (CDCL) to efficiently update legacy solutions by using intelligent management of learned constraints. In particular, we study the applicability of reinforcement learning to continuously assess the utility of learned constraints computed in previous invocations of the solving algorithm for the current one. Evaluations conducted on real-world reconfiguration problems show that providing a CDCL algorithm with relevant learned constraints from previous iterations results in significant performance improvements of the algorithm in stream reasoning scenarios.
43

Hong, Junyuan, Haotao Wang, Zhangyang Wang, and Jiayu Zhou. "Learning Model-Based Privacy Protection under Budget Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (May 18, 2021): 7702–10. http://dx.doi.org/10.1609/aaai.v35i9.16941.

Abstract:
Protecting privacy in gradient-based learning has become increasingly critical as more sensitive information is being used. Many existing solutions seek to protect the sensitive gradients by constraining the overall privacy cost within a constant budget, where the protection is hand-designed and empirically calibrated to boost the utility of the resulting model. However, it remains challenging to choose the proper protection adapted for specific constraints so that the utility is maximized. To this end, we propose a novel Learning-to-Protect algorithm that automatically learns a model-based protector from a set of non-private learning tasks. The learned protector can be applied to private learning tasks to improve utility within the specific privacy budget constraint. Our empirical studies on both synthetic and real datasets demonstrate that the proposed algorithm can achieve a superior utility with a given privacy constraint and generalize well to new private datasets distributed differently as compared to the hand-designed competitors.
44

Vu, Xuan-Ha, and Barry O'Sullivan. "A Unifying Framework for Generalized Constraint Acquisition." International Journal on Artificial Intelligence Tools 17, no. 05 (October 2008): 803–33. http://dx.doi.org/10.1142/s0218213008004175.

Abstract:
When a practical problem can be modeled as a constraint satisfaction problem (CSP), which is a set of constraints that need to be satisfied, it can be solved using many constraint programming techniques. In many practical applications, while users can recognize examples of where a CSP should be satisfied or violated, they cannot articulate the specification of the CSP itself. In these situations, it can be helpful if the computer can take an active role in learning the CSP from examples of its solutions and non-solutions. This is called constraint acquisition. This paper introduces a framework for constraint acquisition in which one can uniformly define and formulate constraint acquisition problems of different types as optimization problems. The difference between constraint acquisition problems within the framework is not only in the type of constraints that need to be acquired but also in the learning objective. The generic framework can be instantiated to obtain specific formulations for acquiring classical, fuzzy, weighted or probabilistic constraints. The paper shows as an example how recent techniques for acquiring classical constraints can be directly obtained from the framework. Specifically, the formulation obtained from the framework to acquire classical CSPs with the minimum number of violated examples is equivalent to a simple pseudo-boolean optimization problem, thus being efficiently solvable by using many available optimization tools. The paper also reports empirical results on constraint acquisition methods to show the utility of the framework.
45

Abed Alabaddi, Zaid Ahmad, Arwa Hisham Rahahleh, and Majd Mohammad Al-Omoush. "Blended E-Learning Constraints from the Viewpoint of Faculty Members." International Journal of Business and Management 11, no. 7 (June 21, 2016): 180. http://dx.doi.org/10.5539/ijbm.v11n7p180.

Abstract:
This research aims to identify obstacles to the use of blended e-learning at Al-Balqa Applied University from the viewpoint of faculty members, to determine the constraints that this type of e-learning faces, and to find appropriate solutions to these constraints for the future. The results also offer proposals and recommendations for increasing the effectiveness of this type of e-learning, and the research further seeks the best method of training faculty members in how to use blended e-learning. The study used a descriptive analytical approach, reviewing the literature on the subject to determine the factors influencing the phenomenon under study; a questionnaire was then developed to collect the necessary data. After analysis, the results showed that constraints relating to university support were cited most often by faculty members, followed by constraints involving students, with constraints related to infrastructure ranked last. Training and workshops were shown to be the best methods for developing faculty members' e-learning skills. The main recommendations of the study include providing attractive incentives to motivate faculty members, offering an introductory e-learning course for students, providing adequate support for content development and involving faculty members in designing content, exchanging experience between faculty members within the university with the support of the Ministry of Higher Education, and increasing the number of laboratories dedicated to blended e-learning that are available to students outside lecture times.
46

Chou, Glen, Necmiye Ozay, and Dmitry Berenson. "Learning Constraints From Locally-Optimal Demonstrations Under Cost Function Uncertainty." IEEE Robotics and Automation Letters 5, no. 2 (April 2020): 3682–90. http://dx.doi.org/10.1109/lra.2020.2974427.

47

Salih, Majid Mohammed, Usra Ahmed Jarjis, and Nidal Ali Suleiman. "E-learning, application constraints and remedies." Journal of University of Human Development 2, no. 4 (December 31, 2016): 290. http://dx.doi.org/10.21928/juhd.v2n4y2016.pp290-317.

Abstract:
This study addresses a subject of great and growing importance in recent years: e-learning, a modern educational medium produced by the knowledge revolution and by continually developing electronic technologies, which offers many individuals, including students whose circumstances might otherwise deny them the opportunity, a way to obtain academic qualifications and training. The study focuses on the main obstacles that hinder the proper application of e-learning, grouping the constraints into those related to administration, infrastructure, curricula, faculty members, and students. A questionnaire with a five-point scale was adopted, and the answers and attitudes of 50 faculty members of the surveyed university were analysed. The study reached a number of conclusions and recommendations concerning ways to support the application of e-learning at the surveyed university.
48

Zhan, Shanhua, Weijun Sun, and Peipei Kang. "Robust Latent Common Subspace Learning for Transferable Feature Representation." Electronics 11, no. 5 (March 4, 2022): 810. http://dx.doi.org/10.3390/electronics11050810.

Abstract:
This paper proposes a novel robust latent common subspace learning (RLCSL) method by integrating low-rank and sparse constraints into a joint learning framework. Specifically, we transform the data from source and target domains into a latent common subspace to perform the data reconstruction, i.e., the transformed source data is used to reconstruct the transformed target data. We impose joint low-rank and sparse constraints on the reconstruction coefficient matrix, which achieves the following objectives: (1) the data from different domains can be interlaced by using the low-rank constraint; (2) the data from different domains but with the same label can be aligned together by using the sparse constraint. In this way, the new feature representation in the latent common subspace is discriminative and transferable. To learn a suitable classifier, we also integrate the classifier learning and feature representation learning into a unified objective, and thus the high-level semantic label (the data label) is fully used to guide the learning process of these two tasks. Experiments are conducted on diverse data sets for image, object, and document classification, and encouraging experimental results show that the proposed method outperforms some state-of-the-art methods.
49

Xue, Hansheng, Vijini Mallawaarachchi, Yujia Zhang, Vaibhav Rajan, and Yu Lin. "RepBin: Constraint-Based Graph Representation Learning for Metagenomic Binning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 4 (June 28, 2022): 4637–45. http://dx.doi.org/10.1609/aaai.v36i4.20388.

Abstract:
Mixed communities of organisms are found in many environments -- from the human gut to marine ecosystems -- and can have profound impact on human health and the environment. Metagenomics studies the genomic material of such communities through high-throughput sequencing that yields DNA subsequences for subsequent analysis. A fundamental problem in the standard workflow, called binning, is to discover clusters, of genomic subsequences, associated with the constituent organisms. Inherent noise in the subsequences, various biological constraints that need to be imposed on them and the skewed cluster size distribution exacerbate the difficulty of this unsupervised learning problem. In this paper, we present a new formulation using a graph where the nodes are subsequences and edges represent homophily information. In addition, we model biological constraints providing heterophilous signal about nodes that cannot be clustered together. We solve the binning problem by developing new algorithms for (i) graph representation learning that preserves both homophily relations and heterophily constraints (ii) constraint-based graph clustering method that addresses the problems of skewed cluster size distribution. Extensive experiments, on real and synthetic datasets, demonstrate that our approach, called RepBin, outperforms a wide variety of competing methods. Our constraint-based graph representation learning and clustering methods, that may be useful in other domains as well, advance the state-of-the-art in both metagenomics binning and graph representation learning.
50

Wassermann, Gilbert, and Mark Glickman. "Automated Harmonization of Bass Lines from Bach Chorales: A Hybrid Approach." Computer Music Journal 43, no. 2-3 (June 2020): 142–57. http://dx.doi.org/10.1162/comj_a_00523.

Abstract:
In this article, a combination of two novel approaches to the harmonization of chorales in the style of J. S. Bach is proposed, implemented, and profiled. The first is the use of the bass line, as opposed to the melody, as the primary input into a chorale-harmonization algorithm. The second is a compromise between methods guided by music knowledge and by machine-learning techniques, designed to mimic the way a music student learns. Specifically, our approach involves learning harmonic structure through a hidden Markov model, and determining individual voice lines by optimizing a Boltzmann pseudolikelihood function incorporating musical constraints through a weighted linear combination of constraint indicators. Although previous generative models have focused only on codifying musical rules or on machine learning without any rule specification, by using a combination of musicologically sound constraints with weights estimated from chorales composed by Bach, we were able to produce musical output in a style that closely resembles Bach's chorale harmonizations. A group of test subjects was able to distinguish which chorales were computer generated only 51.3% of the time, a rate not significantly different from guessing.
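The scoring scheme described, a Boltzmann pseudolikelihood over a weighted linear combination of constraint indicators, has the following general shape (a generic rendering rather than the authors' exact definition):

\[
P(v \mid \text{context}) \;\propto\; \exp\!\Big( \sum_{k} w_k\, \phi_k(v, \text{context}) \Big),
\]

where each indicator \(\phi_k\) records whether a given musical constraint is satisfied by the candidate voicing \(v\), and the weights \(w_k\) are estimated from Bach's own chorale harmonizations. Voice lines are then chosen to optimize this score beneath the harmonic skeleton produced by the hidden Markov model.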