Academic literature on the topic 'Minimal subset'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Minimal subset.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Minimal subset"

1

Fomin, Fedor V., Pinar Heggernes, Dieter Kratsch, Charis Papadopoulos, and Yngve Villanger. "Enumerating Minimal Subset Feedback Vertex Sets." Algorithmica 69, no. 1 (December 15, 2012): 216–31. http://dx.doi.org/10.1007/s00453-012-9731-6.

2

SHAH, I. "DIRECT ALGORITHMS FOR FINDING MINIMAL UNSATISFIABLE SUBSETS IN OVER-CONSTRAINED CSPs." International Journal on Artificial Intelligence Tools 20, no. 01 (February 2011): 53–91. http://dx.doi.org/10.1142/s0218213011000036.

Abstract:
In many situations, an explanation of the reasons behind inconsistency in an overconstrained CSP is required. This explanation can be given in terms of minimal unsatisfiable subsets (MUSes) of constraints. This paper presents algorithms for finding minimal unsatisfiable subsets (MUSes) of constraints in overconstrained CSPs with finite domains and binary constraints. The approach followed is to generate subsets in the subset space, test them for consistency and record the inconsistent subsets found. We present three algorithms as variations of this basic approach. Each algorithm generates subsets in the subset space in a different order and curtails search by employing various search pruning mechanisms. The proposed algorithms are anytime algorithms: a time limit can be set on an algorithm's search and the algorithm can be made to find a subset of MUSes. Experimental evaluation of the proposed algorithms demonstrates that they perform two to three orders of magnitude better than the existing indirect algorithms. Furthermore, the algorithms are able to find MUSes in large CSP benchmarks.
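To make the subset-space idea above concrete, here is a minimal brute-force sketch (not the paper's algorithms, which add pruning and anytime behaviour); the `is_consistent` callback and the toy single-variable constraints below are hypothetical placeholders.

```python
from itertools import combinations

def enumerate_muses(constraint_names, is_consistent):
    """Walk the subset space by increasing size, test each subset for
    consistency, and keep every inconsistent subset that contains no
    previously found MUS; such subsets are exactly the MUSes."""
    muses = []
    for size in range(1, len(constraint_names) + 1):
        for subset in combinations(constraint_names, size):
            candidate = frozenset(subset)
            if is_consistent(candidate):
                continue
            if not any(mus <= candidate for mus in muses):
                muses.append(candidate)
    return muses

# Toy example: constraints over one integer variable x in a small domain.
constraints = {
    "x < 3": lambda x: x < 3,
    "x > 5": lambda x: x > 5,
    "x == 7": lambda x: x == 7,
}

def is_consistent(names):
    return any(all(constraints[n](x) for n in names) for x in range(10))

print(enumerate_muses(list(constraints), is_consistent))
# -> the two MUSes {'x < 3', 'x > 5'} and {'x < 3', 'x == 7'}
```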
3

Legut, Jerzy, and Maciej Wilczyński. "How to Obtain Maximal and Minimal Subranges of Two-Dimensional Vector Measures." Tatra Mountains Mathematical Publications 74, no. 1 (December 1, 2019): 85–90. http://dx.doi.org/10.2478/tmmp-2019-0022.

Abstract:
Let (X, ℱ) be a measurable space with a nonatomic vector measure µ = (µ1, µ2). Denote by R(Y) the subrange R(Y) = {µ(Z) : Z ∈ ℱ, Z ⊆ Y}. For a given p ∈ µ(ℱ) consider a family of measurable subsets ℱp = {Z ∈ ℱ : µ(Z) = p}. Dai and Feinberg proved the existence of a maximal subset Z* ∈ ℱp having the maximal subrange R(Z*) and also a minimal subset M* ∈ ℱp with the minimal subrange R(M*). We present a method of obtaining the maximal and the minimal subsets. Hence, we get simple proofs of the results of Dai and Feinberg.
4

JOHNSON, WILL. "INTERPRETABLE SETS IN DENSE O-MINIMAL STRUCTURES." Journal of Symbolic Logic 83, no. 04 (December 2018): 1477–500. http://dx.doi.org/10.1017/jsl.2018.50.

Abstract:
We give an example of a dense o-minimal structure in which there is a definable quotient that cannot be eliminated, even after naming parameters. Equivalently, there is an interpretable set which cannot be put in parametrically definable bijection with any definable set. This gives a negative answer to a question of Eleftheriou, Peterzil, and Ramakrishnan. Additionally, we show that interpretable sets in dense o-minimal structures admit definable topologies which are “tame” in several ways: (a) they are Hausdorff, (b) every point has a neighborhood which is definably homeomorphic to a definable set, (c) definable functions are piecewise continuous, (d) definable subsets have finitely many definably connected components, and (e) the frontier of a definable subset has lower dimension than the subset itself.
5

Miao, Maoxuan, Jinran Wu, Fengjing Cai, and You-Gan Wang. "A Modified Memetic Algorithm with an Application to Gene Selection in a Sheep Body Weight Study." Animals 12, no. 2 (January 15, 2022): 201. http://dx.doi.org/10.3390/ani12020201.

Abstract:
Selecting the minimal best subset out of a huge number of factors influencing the response is a fundamental and very challenging NP-hard problem: many redundant genes easily lead to over-fitting, missing an important gene can have a more detrimental impact on predictions, and exhaustive search is computationally prohibitive. We propose a modified memetic algorithm (MA) based on an improved splicing method to overcome the weak exploitation capability of the traditional genetic algorithm and to reduce the dimension of the predictor variables. The new algorithm accelerates the search for the minimal best subset of genes by incorporating the improved splicing method into a new local search operator. The improvement is also due to two further novel aspects: (a) updating subsets of genes iteratively until splicing yields no further reduction in the loss function, which increases the probability of selecting the true subsets of genes; and (b) introducing add and del operators based on backward sacrifice into the splicing method to limit the size of gene subsets. The mutation operator is replaced by the splicing method to enhance exploitation capability, and the initial individuals are improved by it to enhance the efficiency of the search. A dataset of the body weight of Hu sheep was used to evaluate the superiority of the modified MA against the genetic algorithm. According to our experimental results, the proposed optimizer can obtain a better minimal subset of genes within a few iterations, compared with all considered algorithms, including the most advanced adaptive best-subset selection algorithm.
6

HE, QING, XIU-RONG ZHAO, and ZHONG-ZHI SHI. "MINIMAL CONSISTENT SUBSET FOR HYPER SURFACE CLASSIFICATION METHOD." International Journal of Pattern Recognition and Artificial Intelligence 22, no. 01 (February 2008): 95–108. http://dx.doi.org/10.1142/s0218001408006132.

Abstract:
Hyper Surface Classification (HSC), which is based on the Jordan Curve Theorem in topology, has proven in our previous work to be a simple and effective method for classifying large databases. To select a representative subset from the original sample set, the Minimal Consistent Subset (MCS) of HSC is studied in this paper. For the HSC method, one of the most important features of the MCS is that it has the same classification model as the entire sample dataset and can fully reflect its classification ability. From this point of view, the MCS is the best way of sampling from the original dataset for HSC. Furthermore, because of the minimality of the MCS, any single or multiple deletion from it leads to a reduction in generalization ability, which can be exactly predicted by the formula proposed in this paper.
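For readers unfamiliar with the notion, the sketch below illustrates a consistent subset in the classical 1-NN sense (a greedy, Hart-style condensation). It is only a generic illustration under that assumption, unrelated to the hyper-surface construction of HSC, and the greedy result is consistent but not guaranteed to be minimal.

```python
def greedy_consistent_subset(samples, labels):
    """Greedily grow a subset of training points until the 1-NN rule over the
    subset classifies every original sample correctly (a consistent subset).
    The result is consistent but not necessarily minimal."""
    def nearest_index(x, kept):
        return min(kept, key=lambda i: abs(samples[i] - x))

    kept = [0]  # seed with the first sample
    changed = True
    while changed:
        changed = False
        for i, (x, y) in enumerate(zip(samples, labels)):
            if labels[nearest_index(x, kept)] != y:
                kept.append(i)  # subset misclassifies this sample: add it
                changed = True
    return kept

# Toy 1-D data: two well-separated classes need only one prototype each.
samples = [0.1, 0.2, 0.3, 5.0, 5.1, 5.2]
labels = ["a", "a", "a", "b", "b", "b"]
print(greedy_consistent_subset(samples, labels))  # -> [0, 3]
```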
7

Le, Dung, and Anthony J. Macula. "On the probability that subset sequences are minimal." Discrete Mathematics 207, no. 1-3 (September 1999): 285–89. http://dx.doi.org/10.1016/s0012-365x(99)00112-0.

8

Fredriksson, Kimmo. "On building minimal automaton for subset matching queries." Information Processing Letters 110, no. 24 (November 2010): 1093–98. http://dx.doi.org/10.1016/j.ipl.2010.09.014.

9

Zhao, Xiangfu, Dantong Ouyang, and Liming Zhang. "Computing all minimal hitting sets by subset recombination." Applied Intelligence 48, no. 2 (June 30, 2017): 257–70. http://dx.doi.org/10.1007/s10489-017-0971-7.

10

Ikeda, Koichiro. "Minimal but not strongly minimal structures with arbitrary finite dimensions." Journal of Symbolic Logic 66, no. 1 (March 2001): 117–26. http://dx.doi.org/10.2307/2694913.

Abstract:
An infinite structure is said to be minimal if each of its definable subsets is finite or cofinite. Modifying Hrushovski's method, we construct minimal, non-strongly-minimal structures with arbitrary finite dimensions. This gives a negative answer to a problem posed by B. I. Zilber.

Dissertations / Theses on the topic "Minimal subset"

1

Barrus, Michael David. "A forbidden subgraph characterization problem and a minimal-element subset of universal graph classes." Diss., Brigham Young University, 2004. http://contentdm.lib.byu.edu/ETD/image/etd374.pdf.

2

Barrus, Michael D. "A Forbidden Subgraph Characterization Problem and a Minimal-Element Subset of Universal Graph Classes." BYU ScholarsArchive, 2004. https://scholarsarchive.byu.edu/etd/125.

Abstract:
The direct sum of a finite number of graph classes H_1, ..., H_k is defined as the set of all graphs formed by taking the union of graphs from each of the H_i. The join of these graph classes is similarly defined as the set of all graphs formed by taking the join of graphs from each of the H_i. In this paper we show that if each H_i has a forbidden subgraph characterization then the direct sum and join of these H_i also have forbidden subgraph characterizations. We provide various results which in many cases allow us to exactly determine the minimal forbidden subgraphs for such characterizations. As we develop these results we are led to study the minimal graphs which are universal over a given list of graphs, or those which contain each graph in the list as an induced subgraph. As a direct application of our results we give an alternate proof of a theorem of Barrett and Loewy concerning a forbidden subgraph characterization problem.
3

Papacchini, Fabio. "Minimal model reasoning for modal logic." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/minimal-model-reasoning-for-modal-logic(dbfeb158-f719-4640-9cc9-92abd26bd83e).html.

Abstract:
Model generation and minimal model generation are useful for tasks such as model checking, query answering and for debugging of logical specifications. Due to this variety of applications, several minimality criteria and model generation methods for classical logics have been studied. Minimal model generation for modal logics, however, did not receive the same attention from the research community. This thesis aims to fill this gap by investigating minimality criteria and designing minimal model generation procedures for all the sublogics of the multi-modal logic S5(m) and their extensions with universal modalities. All the procedures are minimal model sound and complete, in the sense that they generate all and only minimal models. The starting point of the investigation is the definition of a Herbrand semantics for modal logics on which a syntactic minimality criterion is devised. The syntactic nature of the minimality criterion allows for an efficient minimal model generation procedure, but, on the other hand, the resulting minimal models can be redundant or semantically non-minimal with respect to each other. To overcome the syntactic limitations of the first minimality criterion, the thesis moves from minimal modal Herbrand models to semantic minimality criteria based on subset-simulation. At first, theoretical procedures for the generation of models minimal modulo subset-simulation are presented. These procedures are minimal model sound and complete, but they might not terminate. The minimality criterion and the procedures are then refined in such a way that termination can be ensured while preserving minimal model soundness and completeness.
4

Nicol, Janet L., Andrew Barss, and Jason E. Barker. "Minimal Interference from Possessor Phrases in the Production of Subject-Verb Agreement." FRONTIERS MEDIA SA, 2016. http://hdl.handle.net/10150/615107.

Abstract:
We explore the language production process by eliciting subject-verb agreement errors. Participants were asked to create complete sentences from sentence beginnings such as The elf's/elves' house with the tiny window/windows and The statue in the elf's/elves' gardens. These are subject noun phrases containing a head noun and controller of agreement (statue), and two nonheads, a "local noun" (window(s)/garden(s)), and a possessor noun (elf's/elves'). Past research has shown that a plural nonhead noun (an "attractor") within a subject noun phrase triggers the production of verb agreement errors, and further, that the nearer the attractor to the head noun, the greater the interference. This effect can be interpreted in terms of relative hierarchical distance from the head noun, or via a processing window account, which claims that during production, there is a window in which the head and modifying material may be co-active, and an attractor must be active at the same time as the head to give rise to errors. Using possessors attached at different heights within the same window, we are able to empirically distinguish these accounts. Possessors also allow us to explore two additional issues. First, case marking of local nouns has been shown to reduce agreement errors in languages with "rich" inflectional systems, and we explore whether English speakers attend to case. Secondly, formal syntactic analyses differ regarding the structural position of the possessive marker, and we distinguish them empirically with the relative magnitude of errors produced by possessors and local nouns. Our results show that, across the board, plural possessors are significantly less disruptive to the agreement process than plural local nouns. Proximity to the head noun matters: a possessor directly modifying the head noun induced a significant number of errors, but a possessor within a modifying prepositional phrase did not, though the local noun did. These findings suggest that proximity to a head noun is independent of a "processing window" effect. They also support a noun phrase-internal, case-like analysis of the structural position of the possessive ending and show that even speakers of inflectionally impoverished languages like English are sensitive to morphophonological case-like marking.
5

Bekkouche, Mohammed. "Combinaison des techniques de Bounded Model Checking et de programmation par contraintes pour l'aide à la localisation d'erreurs : exploration des capacités des CSP pour la localisation d'erreurs." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4096/document.

Abstract:
A model checker can produce a counter-example trace for an erroneous program, which is often difficult to exploit for locating errors in the source code. In this thesis, we propose an error localization algorithm based on counter-examples, named LocFaults, which combines Bounded Model Checking (BMC) with constraint satisfaction problems (CSP). The algorithm analyzes the paths of the CFG (Control Flow Graph) of the erroneous program to compute subsets of suspicious instructions that allow the program to be corrected. We generate a system of constraints for the paths of the control flow graph on which at most k conditional statements may be faulty, and then compute the MCSs (Minimal Correction Sets) of bounded size on each of these paths. Removing one of these sets of constraints yields a maximal satisfiable subset, in other words, a maximal subset of constraints satisfying the postcondition. To compute the MCSs, we extend the generic algorithm proposed by Liffiton and Sakallah in order to handle programs with numerical instructions more efficiently. The approach has been evaluated experimentally on a set of academic and realistic programs.
6

Jonsson, Robin. "Optimal Linear Combinations of Portfolios Subject to Estimation Risk." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-28524.

Abstract:
The combination of two or more portfolio rules is theoretically convex in return-risk space, which provides for a new class of portfolio rules that gives purpose to the Mean-Variance framework out-of-sample. The author investigates the performance loss from estimation risk between the unconstrained Mean-Variance portfolio and the out-of-sample Global Minimum Variance portfolio. A new two-fund rule is developed in a specific class of combined rules, between the equally weighted portfolio and a mean-variance portfolio with the covariance matrix being estimated by linear shrinkage. The study shows that this rule performs well out-of-sample when covariance estimation error and bias are balanced. The rule is performing at least as good as its peer group in this class of combined rules.
7

Tomaras, Panagiotis J. "Decomposition of general queueing network models : an investigation into the implementation of hierarchical decomposition schemes of general closed queueing network models using the principle of minimum relative entropy subject to fully decomposable constraints." Thesis, University of Bradford, 1989. http://hdl.handle.net/10454/4212.

Abstract:
Decomposition methods based on the hierarchical partitioning of the state space of queueing network models offer powerful evaluation tools for the performance analysis of computer systems and communication networks. These methods, when conventionally implemented, capture the exact solution of separable queueing network models, but their credibility differs when applied to general queueing networks. This thesis provides a universal information theoretic framework for the implementation of hierarchical decomposition schemes, based on the principle of minimum relative entropy given fully decomposable subset and aggregate utilization, mean queue length and flow-balance constraints. This principle is used, in conjunction with asymptotic connections to infinite capacity queues, to derive new closed form approximations for the conditional and marginal state probabilities of general queueing network models. The minimum relative entropy solutions are implemented iteratively at each decomposition level, involving the generalized exponential (GE) distributional model in approximating the general service and asymptotic flow processes in the network. It is shown that the minimum relative entropy joint state probability, subject to mean queue length and flow-balance constraints, is identical to the exact product-form solution obtained as if the network were separable. An investigation into the effect of different couplings of the resource units on the relative accuracy of the approximation is carried out, based on extensive experimentation. The credibility of the method is demonstrated with some illustrative examples involving first-come-first-served general queueing networks with single and multiple servers, and favourable comparisons against exact solutions and other approximations are made.
8

Tran, Quoc Huy. "Robust parameter estimation in computer vision: geometric fitting and deformable registration." Thesis, 2014. http://hdl.handle.net/2440/86270.

Abstract:
Parameter estimation plays an important role in computer vision. Many computer vision problems can be reduced to estimating the parameters of a mathematical model of interest from the observed data. Parameter estimation in computer vision is challenging, since vision data unavoidably have small-scale measurement noise and large-scale measurement errors (outliers) due to imperfect data acquisition and preprocessing. Traditional parameter estimation methods developed in the statistics literature mainly deal with noise and are very sensitive to outliers. Robust parameter estimation techniques are thus crucial for effectively removing outliers and accurately estimating the model parameters with vision data. The research conducted in this thesis focuses on single structure parameter estimation and makes a direct contribution to two specific branches under that topic: geometric fitting and deformable registration. In geometric fitting problems, a geometric model is used to represent the information of interest, such as a homography matrix in image stitching, or a fundamental matrix in three-dimensional reconstruction. Many robust techniques for geometric fitting involve sampling and testing a number of model hypotheses, where each hypothesis consists of a minimal subset of data for yielding a model estimate. It is commonly known that, due to the noise added to the true data (inliers), drawing a single all-inlier minimal subset is not sufficient to guarantee a good model estimate that fits the data well; the inliers therein should also have a large spatial extent. This thesis investigates a theoretical reasoning behind this long-standing principle, and shows a clear correlation between the span of data points used for estimation and the quality of model estimate. Based on this finding, the thesis explains why naive distance-based sampling fails as a strategy to maximise the span of all-inlier minimal subsets produced, and develops a novel sampling algorithm which, unlike previous approaches, consciously targets all-inlier minimal subsets with large span for robust geometric fitting. The second major contribution of this thesis relates to another computer vision problem which also requires the knowledge of robust parameter estimation: deformable registration. The goal of deformable registration is to align regions in two or more images corresponding to a common object that can deform nonrigidly such as a bending piece of paper or a waving flag. The information of interest is the nonlinear transformation that maps points from one image to another, and is represented by a deformable model, for example, a thin plate spline warp. Most of the previous approaches to outlier rejection in deformable registration rely on optimising fully deformable models in the presence of outliers due to the assumption of the highly nonlinear correspondence manifold which contains the inliers. This thesis makes an interesting observation that, for many realistic physical deformations, the scale of errors of the outliers usually dwarfs the nonlinear effects of the correspondence manifold on which the inliers lie. The finding suggests that standard robust techniques for geometric fitting are applicable to model the approximately linear correspondence manifold for outlier rejection. Moreover, the thesis develops two novel outlier rejection methods for deformable registration, which are based entirely on fitting simple linear models and shown to be considerably faster but at least as accurate as previous approaches.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2014
9

Guestrin, Elias Daniel. "Remote, Non-contact Gaze Estimation with Minimal Subject Cooperation." Thesis, 2010. http://hdl.handle.net/1807/24349.

Abstract:
This thesis presents a novel system that estimates the point-of-gaze (where a person is looking at) remotely while allowing for free head movements and minimizing personal calibration requirements. The point-of-gaze is estimated from the pupil and corneal reflections (virtual images of infrared light sources that are formed by reflection on the front corneal surface, which acts as a convex mirror) extracted from eye images captured by video cameras. Based on the laws of geometrical optics, a detailed general mathematical model for point-of-gaze estimation using the pupil and corneal reflections is developed. Using this model, the full range of possible system configurations (from one camera and one light source to multiple cameras and light sources) is analyzed. This analysis shows that two cameras and two light sources is the simplest system configuration that can be used to reconstruct the optic axis of the eye in 3-D space, and therefore measure eye movements, without the need for personal calibration. To estimate the point-of-gaze, a simple single-point personal calibration procedure is needed. The performance of the point-of-gaze estimation depends on the geometrical arrangement of the cameras and light sources and the method used to reconstruct the optic axis of the eye. Using a comprehensive simulation framework developed from the mathematical model, the performance of several gaze estimation methods of varied complexity is investigated for different geometrical system setups in the presence of noise in the extracted eye features, deviation of the corneal shape from the ideal spherical shape and errors in system parameters. The results of this investigation indicate the method(s) and geometrical setup(s) that are optimal for different sets of conditions, thereby providing guidelines for system implementation. Experimental results with adults, obtained with a system that follows those guidelines, exhibit RMS point-of-gaze estimation errors of 0.4-0.6º of visual angle (comparable to the best commercially available systems, which require multiple-point personal calibration procedures). Preliminary results with infants demonstrate the ability of the proposed system to record infants' visual scanning patterns, enabling applications that are very difficult or impossible to carry out with previously existing technologies (e.g., study of infants' visual and oculomotor systems).
10

Ondreka, David. "Construction of minimal gauge invariant subsets of Feynman diagrams with loops in gauge theories." Phd thesis, 2005. http://tuprints.ulb.tu-darmstadt.de/569/1/diss_ondreka.pdf.

Abstract:
In this work, we consider Feynman diagrams with loops in renormalizable gauge theories with and without spontaneous symmetry breaking. We demonstrate that the set of Feynman diagrams with a fixed number of loops, contributing to the expansion of a connected Green's function in a fixed order of perturbation theory, can be partitioned into minimal gauge invariant subsets by means of a set of graphical manipulations of Feynman diagrams, called gauge flips. To this end, we decompose the Slavnov-Taylor identities for the expansion of the Green's function in such a way that these identities can be defined for subsets of the set of all Feynman diagrams. We then prove, using diagrammatical methods, that the subsets constructed by means of gauge flips really constitute minimal gauge invariant subsets. Thereafter, we employ gauge flips in a classification of the minimal gauge invariant subsets of Feynman diagrams with loops in the Standard Model. We discuss in detail an explicit example, comparing it to the results of a computer program which has been developed in the context of the present work.

Books on the topic "Minimal subset"

1

Bade, David W. Misinformation and meaning in library catalogs. Chicago: D.W. Bade, 2003.

2

2002-2003 MAT: An exhaustive treatise on new millennium alternative tax : covering all present and possible issues with critical comments, precedents, and tests-- a veritable treasury of all that is to be known on the subject. 2nd ed. Mumbai: Snow White Publications, 2002.

3

The minimal residual QR-factorization algorithm for reliably solving subset regression problems. Moffett Field, Calif: National Aeronautics and Space Administration, Ames Research Center, 1987.

4

Verhaegen, M. H., and Ames Research Center, eds. The minimal residual QR-factorization algorithm for reliably solving subset regression problems. Moffett Field, Calif: National Aeronautics and Space Administration, Ames Research Center, 1987.

5

Hrushovski, Ehud, and François Loeser. Strongly stably dominated points. Princeton University Press, 2017. http://dx.doi.org/10.23943/princeton/9780691161686.003.0008.

Abstract:
This chapter focuses on the properties of strongly stably dominated types over valued field bases. In this setting, strong stability corresponds to a strong form of the Abhyankar property for valuations: the transcendence degrees of the extension coincide with those of the residue field extension. The chapter proves a Bertini type result and shows that the strongly stable points form a strict ind-definable subset V# of V̂. It then proves a rigidity statement for iso-definable Γ‎-internal subsets of maximal o-minimal dimension of V̂, namely that they cannot be deformed by any homotopy leaving appropriate functions invariant. The chapter also describes the closure of iso-definable Γ‎-internal sets in V# and proves that V# is exactly the union of all skeleta.
6

Karttunen, Lauri. Finite-State Technology. Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0018.

Abstract:
The article introduces the basic concepts of finite-state language processing: regular languages and relations, finite-state automata, and regular expressions. Many basic steps in language processing, ranging from tokenization, to phonological and morphological analysis, disambiguation, spelling correction, and shallow parsing, can be performed efficiently by means of finite-state transducers. The article discusses examples of finite-state languages and relations. Finite-state networks can represent only a subset of all possible languages and relations; that is, only some languages are finite-state languages. Furthermore, this article introduces two types of complex regular expressions that have many linguistic applications, restriction and replacement. Finally, the article discusses the properties of finite-state automata. The three important properties of networks are: that they are epsilon free, deterministic, and minimal. If a network encodes a regular language and if it is epsilon free, deterministic, and minimal, the network is guaranteed to be the best encoding for that language.
7

Strawson, Galen. The Minimal Subject. Oxford University Press, 2011. http://dx.doi.org/10.1093/oxfordhb/9780199548019.003.0011.

8

Georgalis, Nicholas. Mind, Language and Subjectivity: Minimal Content and the Theory of Thought. Taylor & Francis Group, 2017.


Book chapters on the topic "Minimal subset"

1

Salomaa, Arto. "Minimal Reaction Systems Defining Subset Functions." In Computing with New Resources, 436–46. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13350-8_32.

2

Fomin, Fedor V., Pinar Heggernes, Dieter Kratsch, Charis Papadopoulos, and Yngve Villanger. "Enumerating Minimal Subset Feedback Vertex Sets." In Lecture Notes in Computer Science, 399–410. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22300-6_34.

3

Bendík, Jaroslav, and Kuldeep S. Meel. "Counting Minimal Unsatisfiable Subsets." In Computer Aided Verification, 313–36. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81688-9_15.

Abstract:
Given an unsatisfiable Boolean formula F in CNF, an unsatisfiable subset of clauses U of F is called a Minimal Unsatisfiable Subset (MUS) if every proper subset of U is satisfiable. Since MUSes serve as explanations for the unsatisfiability of F, MUSes find applications in a wide variety of domains. The availability of efficient SAT solvers has aided the development of scalable techniques for finding and enumerating MUSes in the past two decades. Building on the recent developments in the design of scalable model counting techniques for SAT, Bendík and Meel initiated the study of MUS counting techniques. They succeeded in designing the first approximate MUS counter, AMUSIC, that does not rely on exhaustive MUS enumeration. AMUSIC, however, suffers from two shortcomings: the lack of exact estimates and limited scalability due to its reliance on 3-QBF solvers. In this work, we address the two shortcomings of AMUSIC by designing the first exact MUS counter, CountMUST, that does not rely on exhaustive enumeration. CountMUST circumvents the need for 3-QBF solvers by reducing the problem of MUS counting to projected model counting. While projected model counting is #NP-hard, the past few years have witnessed the development of scalable projected model counters. An extensive empirical evaluation demonstrates that CountMUST successfully returns the MUS count for 1500 instances while AMUSIC and enumeration-based techniques could only handle up to 833 instances.
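As a concrete reading of the MUS definition recalled at the start of this abstract, the following self-contained sketch checks the MUS property by brute-force truth-table search (toy scale only; the counters discussed above instead rely on SAT solvers and projected model counting).

```python
from itertools import product

def satisfiable(clauses):
    """Brute-force SAT test for a CNF given as DIMACS-style integer clauses
    (3 means x3, -3 means NOT x3)."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def is_mus(clauses):
    """U is a MUS iff U is unsatisfiable and dropping any one clause makes it
    satisfiable; monotonicity then gives satisfiability of all proper subsets."""
    if satisfiable(clauses):
        return False
    return all(satisfiable(clauses[:i] + clauses[i + 1:])
               for i in range(len(clauses)))

# (x1) AND (NOT x1 OR x2) AND (NOT x2): unsatisfiable, every 2-clause subset SAT.
print(is_mus([[1], [-1, 2], [-2]]))  # -> True
```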
4

Papacchini, Fabio, and Renate A. Schmidt. "Computing Minimal Models Modulo Subset-Simulation for Modal Logics." In Frontiers of Combining Systems, 279–94. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-40885-4_20.

5

Brandes, Ulrik, Michael Hamann, Luise Häuser, and Dorothea Wagner. "Skeleton-Based Clustering by Quasi-Threshold Editing." In Lecture Notes in Computer Science, 134–51. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-21534-6_7.

Abstract:
We consider the problem of transforming a given graph into a quasi-threshold graph using a minimum number of edge additions and deletions. Building on the previously proposed heuristic Quasi-Threshold Mover (QTM), we present improvements both in terms of running time and quality. We propose a novel, linear-time algorithm that solves the inclusion-minimal variant of this problem, i.e., a set of edge edits such that no subset of them also transforms the given graph into a quasi-threshold graph. In an extensive experimental evaluation, we apply these algorithms to a large set of graphs from different applications and find that they lead QTM to find solutions with fewer edits. Although the inclusion-minimal algorithm needs significantly more edits on its own, it outperforms the initialization heuristic previously proposed for QTM.
6

Milman, V. D. "Diameter of a minimal invariant subset of equivariant Lipschitz actions on compact subsets of ℝᵏ." In Geometrical Aspects of Functional Analysis, 13–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 1987. http://dx.doi.org/10.1007/bfb0078133.

7

Cerverón, Vicente, and Ariadna Fuertes. "Parallel Random Search and Tabu Search for the Minimal Consistent Subset Selection Problem." In Randomization and Approximation Techniques in Computer Science, 248–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/3-540-49543-6_20.

8

Jacq, Jean-José, and Christian Roux. "Automatic detection of articular surfaces in 3-D image through minimal subset random sampling." In Lecture Notes in Computer Science, 73–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/bfb0029226.

9

Bahl, Shilpa, and Sudhir Kumar Sharma. "A Minimal Subset of Features Using Correlation Feature Selection Model for Intrusion Detection System." In Advances in Intelligent Systems and Computing, 337–46. New Delhi: Springer India, 2015. http://dx.doi.org/10.1007/978-81-322-2523-2_32.

10

Karimi, Amir-Hossein, Julius von Kügelgen, Bernhard Schölkopf, and Isabel Valera. "Towards Causal Algorithmic Recourse." In xxAI - Beyond Explainable AI, 139–66. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04083-2_8.

Abstract:
Algorithmic recourse is concerned with aiding individuals who are unfavorably treated by automated decision-making systems to overcome their hardship, by offering recommendations that would result in a more favorable prediction when acted upon. Such recourse actions are typically obtained through solving an optimization problem that minimizes changes to the individual’s feature vector, subject to various plausibility, diversity, and sparsity constraints. Whereas previous works offer solutions to the optimization problem in a variety of settings, they critically overlook real-world considerations pertaining to the environment in which recourse actions are performed. The present work emphasizes that changes to a subset of the individual’s attributes may have consequential down-stream effects on other attributes, thus making recourse a fundamentally causal problem. Here, we model such considerations using the framework of structural causal models, and highlight pitfalls of not considering causal relations through examples and theory. Such insights allow us to reformulate the optimization problem to directly optimize for minimally-costly recourse over a space of feasible actions (in the form of causal interventions) rather than optimizing for minimally-distant “counterfactual explanations”. We offer both the optimization formulations and solutions to deterministic and probabilistic recourse, on an individualized and sub-population level, overcoming the steep assumptive requirements of offering recourse in general settings. Finally, using synthetic and semi-synthetic experiments based on the German Credit dataset, we demonstrate how such methods can be applied in practice under minimal causal assumptions.

Conference papers on the topic "Minimal subset"

1

Kangkan, Kamonnat, and Boontee Kruatrachue. "Minimal Consistent Subset Selection as Integer Nonlinear Programming Problem." In 2006 International Symposium on Communications and Information Technologies. IEEE, 2006. http://dx.doi.org/10.1109/iscit.2006.339886.

2

Terra-Neves, Miguel, Inês Lynce, and Vasco Manquinho. "Multi-Objective Optimization Through Pareto Minimal Correction Subsets." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/757.

Abstract:
A Minimal Correction Subset (MCS) of an unsatisfiable constraint set is a minimal subset of constraints that, if removed, makes the constraint set satisfiable. MCSs enjoy a wide range of applications, such as finding approximate solutions to constrained optimization problems. However, existing work on applying MCS enumeration to optimization problems focuses on the single-objective case. In this work, Pareto Minimal Correction Subsets (Pareto-MCSs) are proposed for approximating the Pareto-optimal solution set of multi-objective constrained optimization problems. We formalize and prove an equivalence relationship between Pareto-optimal solutions and Pareto-MCSs. Moreover, Pareto-MCSs and MCSs can be connected in such a way that existing state-of-the-art MCS enumeration algorithms can be used to enumerate Pareto-MCSs. Finally, experimental results on the multi-objective virtual machine consolidation problem show that the Pareto-MCS approach is competitive with state-of-the-art algorithms.
3

He, Qing, Xiu-Rong Zhao, and Zhong-Zhi Shi. "Sampling Based on Minimal Consistent Subset for Hyper Surface Classification." In 2007 International Conference on Machine Learning and Cybernetics. IEEE, 2007. http://dx.doi.org/10.1109/icmlc.2007.4370107.

4

Kruatrachue, Boontee, and Marut Hongsamart. "Prototype selection based on minimal consistent subset and genetic algorithms." In SICE 2008 - 47th Annual Conference of the Society of Instrument and Control Engineers of Japan. IEEE, 2008. http://dx.doi.org/10.1109/sice.2008.4654742.

5

GUTIERREZ, ANGEL, and ALFREDO SOMOLINOS. "COMPLETING A MINIMAL SUBSET OF JAVA FOR A FIRST PROGRAMMING COURSE." In Proceedings of the International Conference. WORLD SCIENTIFIC, 2001. http://dx.doi.org/10.1142/9789812810885_0027.

6

Kruatrachue, Boontee, and Teeratorn Choowong. "Prototype selection using Reinforcement Learning and Minimal Consistent Subset Identification guide." In 2010 International Conference on Control, Automation and Systems (ICCAS 2010). IEEE, 2010. http://dx.doi.org/10.1109/iccas.2010.5669919.

7

GUTIERREZ, ANGEL, and ALFREDO SOMOLINOS. "BUILDING UP A MINIMAL SUBSET OF JAVA FOR A FIRST PROGRAMMING COURSE." In Proceedings of the International Conference. WORLD SCIENTIFIC, 2001. http://dx.doi.org/10.1142/9789812810885_0026.

8

Thorup, Mikkel. "Bottom-k and priority sampling, set similarity and subset sums with minimal independence." In the 45th annual ACM symposium. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2488608.2488655.

9

Yukawa, Masahiro, and Isao Yamada. "Minimal antenna-subset selection under capacity constraint for power-efficient MIMO systems: A relaxed ℓ1 minimization approach." In 2010 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2010. http://dx.doi.org/10.1109/icassp.2010.5496109.

10

de Colnet, Alexis, and Pierre Marquis. "On the Complexity of Enumerating Prime Implicants from Decision-DNNF Circuits." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/358.

Abstract:
We consider the problem Enum·IP of enumerating prime implicants of Boolean functions represented by decision decomposable negation normal form (dec-DNNF) circuits. We study Enum·IP from dec-DNNF within the framework of enumeration complexity and prove that it is in OutputP, the class of output polynomial enumeration problems, and more precisely in IncP, the class of polynomial incremental time enumeration problems. We then focus on two closely related, but seemingly harder, enumeration problems where further restrictions are put on the prime implicants to be generated. In the first problem, one is only interested in prime implicants representing subset-minimal abductive explanations, a notion much investigated in AI for more than thirty years. In the second problem, the target is prime implicants representing sufficient reasons, a recent yet important notion in the emerging field of eXplainable AI, since they aim to explain predictions achieved by machine learning classifiers. We provide evidence showing that enumerating specific prime implicants corresponding to subset-minimal abductive explanations or to sufficient reasons is not in OutputP.

Reports on the topic "Minimal subset"

1

Sertkaya, Barış. Some Computational Problems Related to Pseudo-intents. Technische Universität Dresden, 2008. http://dx.doi.org/10.25368/2022.169.

Abstract:
We investigate the computational complexity of several decision, enumeration and counting problems related to pseudo-intents. We show that given a formal context and a set of its pseudo-intents, checking whether this context has an additional pseudo-intent is in coNP and it is at least as hard as checking whether a given simple hypergraph is saturated. We also show that recognizing the set of pseudo-intents is also in coNP and it is at least as hard as checking whether a given hypergraph is the transversal hypergraph of another given hypergraph. Moreover, we show that if any of these two problems turns out to be coNP-hard, then unless P = NP, pseudo-intents cannot be enumerated in output polynomial time. We also investigate the complexity of finding subsets of a given Duquenne-Guigues Base from which a given implication follows. We show that checking the existence of such a subset within a specified cardinality bound is NP-complete, and counting all such minimal subsets is #P-complete.
2

Kriegel, Francesco. Optimal Fixed-Premise Repairs of EL TBoxes (Extended Version). Technische Universität Dresden, 2022. http://dx.doi.org/10.25368/2022.321.

Abstract:
Reasoners can be used to derive implicit consequences from an ontology. Sometimes unwanted consequences are revealed, indicating errors or privacy-sensitive information, and the ontology needs to be appropriately repaired. The classical approach is to remove just enough axioms such that the unwanted consequences vanish. However, this is often too rough since mere axiom deletion also erases many other consequences that might actually be desired. The goal should not be to remove a minimal number of axioms but to modify the ontology such that only a minimal number of consequences is removed, including the unwanted ones. Specifically, a repair should rather be logically entailed by the input ontology, instead of being a subset. To this end, we introduce a framework for computing fixed-premise repairs of $\mathcal{EL}$ TBoxes. In the first variant the conclusions must be generalizations of those in the input TBox, while in the second variant no such restriction is imposed. In both variants, every repair is entailed by an optimal one and, up to equivalence, the set of all optimal repairs can be computed in exponential time. A prototypical implementation is provided. In addition, we show new complexity results regarding gentle repairs. This is an extended version of an article accepted at the 45th German Conference on Artificial Intelligence (KI 2022).
3

Anderson, Andrew, and Mark Yacucci. Inventory and Statistical Characterization of Inorganic Soil Constituents in Illinois: Appendices. Illinois Center for Transportation, June 2021. http://dx.doi.org/10.36501/0197-9191/21-007.

Abstract:
This report presents detailed histograms of data from the Regulated Substances Library (RSL) developed by the Illinois Department of Transportation (IDOT). RSL data are provided for state and IDOT region, IDOT district, and county spatial subsets to examine the spatial variability and its relationship to thresholds defining natural background concentrations. The RSL is comprised of surficial soil chemistry data obtained from rights-of-way (ROW) subsurface soil sampling conducted for routine preliminary site investigations. A selection of 22 inorganic soil analytes are examined in this report: Al, Sb, As, Ba, Be, Cd, Ca, Cr, Co, Cu, Fe, Pb, Mg, Mn, Hg, Ni, K, Se, Na, Tl, V, and Zn. RSL database summary statistics, mean, median, minimum, maximum, 5th percentile, and 95th percentile, are determined for Illinois counties and for recognized environmental concern, non-recognized environmental concern, and de minimis site contamination classifications.
4

Peñaloza, Rafael, and Barış Sertkaya. On the Complexity of Axiom Pinpointing in Description Logics. Technische Universität Dresden, 2009. http://dx.doi.org/10.25368/2022.173.

Abstract:
We investigate the computational complexity of axiom pinpointing in Description Logics, which is the task of finding minimal subsets of a knowledge base that have a given consequence. We consider the problems of enumerating such subsets with and without order, and show hardness results that already hold for the propositional Horn fragment, or for the Description Logic EL. We show complexity results for several other related decision and enumeration problems for these fragments that extend to more expressive logics. In particular we show that hardness of these problems depends not only on expressivity of the fragment but also on the shape of the axioms used.
5

Baader, Franz, and Rafael Peñaloza. Axiom Pinpointing in General Tableaux. Aachen University of Technology, 2007. http://dx.doi.org/10.25368/2022.159.

Abstract:
Axiom pinpointing has been introduced in description logics (DLs) to help the user to understand the reasons why consequences hold and to remove unwanted consequences by computing minimal (maximal) subsets of the knowledge base that have (do not have) the consequence in question. The pinpointing algorithms described in the DL literature are obtained as extensions of the standard tableau-based reasoning algorithms for computing consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL. The purpose of this paper is to develop a general approach for extending a tableau-based algorithm to a pinpointing algorithm. This approach is based on a general definition of "tableaux algorithms," which captures many of the known tableau-based algorithms employed in DLs, but also other kinds of reasoning procedures.
6

Baader, Franz, and Rafael Peñaloza. Pinpointing in Terminating Forest Tableaux. Technische Universität Dresden, 2008. http://dx.doi.org/10.25368/2022.166.

Abstract:
Axiom pinpointing has been introduced in description logics (DLs) to help the user to understand the reasons why consequences hold and to remove unwanted consequences by computing minimal (maximal) subsets of the knowledge base that have (do not have) the consequence in question. The pinpointing algorithms described in the DL literature are obtained as extensions of the standard tableau-based reasoning algorithms for computing consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL. The purpose of this paper is to develop a general approach for extending a tableau-based algorithm to a pinpointing algorithm. This approach is based on a general definition of "tableau algorithms," which captures many of the known tableau-based algorithms employed in DLs, but also other kinds of reasoning procedures.
7

Bingham-Koslowski, N., S. Zhang, and T. McCartney. Lower Paleozoic strata in the Labrador-Baffin Seaway (Canadian margin) and Baffin Island. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/321827.

Abstract:
Lower Paleozoic strata occur offshore Labrador (Middle to Upper Ordovician), offshore Baffin Island in western Davis Strait (Upper Ordovician), as well as onshore Baffin Island (Cambrian to Silurian). Paleozoic carbonate rocks (limestone and dolostone units) dominate with occurrences of siliciclastic strata found in the offshore Labrador subsurface (in the Freydis B-87 well) and in outcrop on Baffin Island. In the Labrador-Baffin Seaway, Lower Paleozoic strata primarily exist as isolated erosional remnants, where historically, minimal effort has been made to correlate Paleozoic outliers due to their lateral discontinuity coupled with inconsistent age data. The Lower Paleozoic of the Labrador-Baffin Seaway and Baffin Island can be viewed as two subsets that do not appear to be correlatable: the southern Lower Paleozoic of the Labrador margin and the northern Lower Paleozoic of the southeastern Baffin Shelf and onshore Baffin Island.
8

Baader, Franz, and Rafael Peñaloza. Blocking and Pinpointing in Forest Tableaux. Technische Universität Dresden, 2008. http://dx.doi.org/10.25368/2022.165.

Abstract:
Axiom pinpointing has been introduced in description logics (DLs) to help the user understand the reasons why consequences hold by computing minimal subsets of the knowledge base that have the consequence under consideration. Several pinpointing algorithms have been described as extensions of the standard tableau-based reasoning algorithms for deciding consequences from DL knowledge bases. Although these extensions are based on similar ideas, they are all introduced for a particular tableau-based algorithm for a particular DL, using their specific traits. In the past, we have developed a general approach for extending tableau-based algorithms into pinpointing algorithms. In this paper we explore some issues of termination of general tableaux and their pinpointing extensions. We also define a subclass of tableaux that allows the use of so-called blocking conditions, which stop the execution of the algorithm once a pattern is found, and adapt the pinpointing extensions accordingly, guaranteeing their correctness and termination.
9

Thomas, Sandy, Peter Gregory, Sarah O’Brien, Catriona McCallion, Ben Goodall, Chun-Han Chan, and Paul Nunn. Rapid Evidence Review 1 on the Critical Appraisal of Third-Party Evidence. Food Standards Agency, June 2021. http://dx.doi.org/10.46756/sci.fsa.elm525.

Abstract:
The Food Standards Agency (FSA) always seeks to ensure that its recommendations are made on the best available evidence. Following a request from the FSA Chair, the Science Council has sought to provide a framework that can guide those seeking to submit uncommissioned evidence to the FSA on its scientific principles and standards. The Science Council's proposed framework is based on the principles of quality, trust and robustness. By being transparent about the FSA's minimal expectations, we aim to help those who wish to submit evidence, typically in an effort to fill a perceived evidence gap or change a relevant policy or legislation. The framework also seeks to provide assurance to others on the processes in place within the FSA to assess evidence it receives. When the FSA receives evidence, it will: be transparent about how the evidence is assessed and used to develop its evidence base, policy recommendations and risk communication; assess evidence in its proper context using the principles of quality, trust and robustness; seek to minimise bias in its assessments of evidence by using professional protocols, its SACs, peer review and/or multi-disciplinary teams; and be open and transparent about the conclusions it has reached about any evidence submitted to it.
10

Ruosteenoja, Kimmo. Applicability of CMIP6 models for building climate projections for northern Europe. Finnish Meteorological Institute, September 2021. http://dx.doi.org/10.35614/isbn.9789523361416.

Abstract:
In this report, we have evaluated the performance of nearly 40 global climate models (GCMs) participating in Phase 6 of the Coupled Model Intercomparison Project (CMIP6). The focus is on the northern European area, but the ability to simulate southern European and global climate is discussed as well. Model evaluation was started with a technical control; completely unrealistic values in the GCM output files were identified by seeking the absolute minimum and maximum values. In this stage, one GCM was rejected totally, and furthermore individual output files from two other GCMs. In evaluating the remaining GCMs, the primary tool was the Model Climate Performance Index (MCPI) that combines RMS errors calculated for the different climate variables into one index. The index takes into account both the seasonal and spatial variations in climatological means. Here, MCPI was calculated for the period 1981—2010 by comparing GCM output with the ERA-Interim reanalyses. Climate variables explored in the evaluation were the surface air temperature, precipitation, sea level air pressure and incoming solar radiation at the surface. Besides MCPI, we studied RMS errors in the seasonal course of the spatial means by examining each climate variable separately. Furthermore, the evaluation procedure considered model performance in simulating past trends in the global-mean temperature, the compatibility of future responses to different greenhouse-gas scenarios and the number of available scenario runs. Daily minimum and maximum temperatures were likewise explored in a qualitative sense, but owing to the non-existence of data from multiple GCMs, these variables were not incorporated in the quantitative validation. Four of the 37 GCMs that had passed the initial technical check were regarded as wholly unusable for scenario calculations: in two GCMs the responses to the different greenhouse gas scenarios were contradictory and in two other GCMs data were missing from one of the four key climate variables. Moreover, to reduce inter-GCM dependencies, no more than two variants of any individual GCM were included; this led to an abandonment of one GCM. The remaining 32 GCMs were divided into three quality classes according to the assessed performance. The users of model data can utilize this grading to select a subset of GCMs to be used in elaborating climate projections for Finland or adjacent areas. Annual-mean temperature and precipitation projections for Finland proved to be nearly identical regardless of whether they were derived from the entire ensemble or by ignoring models that had obtained the lowest scores. Solar radiation projections were somewhat more sensitive.