Journal articles on the topic 'Approximations of practice'

Consult the top 50 journal articles for your research on the topic 'Approximations of practice.'

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Grossman, Pam, Christa Compton, Danielle Igra, Matthew Ronfeldt, Emily Shahan, and Peter W. Williamson. "Teaching Practice: A Cross-Professional Perspective." Teachers College Record: The Voice of Scholarship in Education 111, no. 9 (September 2009): 2055–100. http://dx.doi.org/10.1177/016146810911100905.

Abstract:
Background/Context: This study investigates how people are prepared for professional practice in the clergy, teaching, and clinical psychology. The work is located within research on professional education, and research on the teaching and learning of practice.

Purpose/Objective/Research Question/Focus of Study: The purpose of the study is to develop a framework to describe and analyze the teaching of practice in professional education programs, specifically preparation for relational practices.

Setting: The research took place in eight professional education programs located in seminaries, schools of professional psychology, and universities across the country.

Population/Participants/Subjects: Our research participants include faculty members, students, and administrators at each of these eight programs.

Research Design: This research is a comparative case study of professional education across three different professions: the clergy, clinical psychology, and teaching. Our data include qualitative case studies of eight preparation programs: two teacher education programs, three seminaries, and three clinical psychology programs.

Data Collection and Analysis: For each institution, we conducted site visits that included interviews with administrators, faculty, and staff; observations of multiple classes and fieldwork; and focus groups with students who were either at the midpoint or at the end of their programs.

Conclusions/Recommendations: We have identified three key concepts for understanding the pedagogies of practice in professional education: representations, decomposition, and approximations of practice. Representations of practice comprise the different ways that practice is represented in professional education and what these various representations make visible to novices. Decomposition of practice involves breaking down practice into its constituent parts for the purposes of teaching and learning. Approximations of practice refer to opportunities to engage in practices that are more or less proximal to the practices of a profession. In this article, we define and provide examples of the representation, decomposition, and approximation of practice from our study of professional education in the clergy, clinical psychology, and teaching. We conclude that, in the programs we studied, prospective teachers have fewer opportunities to engage in approximations that focus on contingent, interactive practice than do novices in the other two professions we studied.
2

Campbell, Matthew P., Erin E. Baldinger, and Foster Graif. "Representing Student Voice in an Approximation of Practice: Using Planted Errors in Coached Rehearsals to Support Teacher Candidate Learning." Mathematics Teacher Educator 9, no. 1 (September 1, 2020): 23–49. http://dx.doi.org/10.5951/mte.2020.0005.

Abstract:
Approximations of practice provide opportunities for teacher candidates (TCs) to engage in the work of teaching in situations of reduced complexity. A problem of practice for teacher educators relates to how to represent student voice in approximations to engage TCs with interactive practices in meaningful ways. In this article, we share an analysis of our use of “planted errors” in coached rehearsals with secondary mathematics TCs focused on the practice of responding to errors in whole-class discussion. We highlight how different iterations of the planted errors affect the authenticity of how student voice was represented in the rehearsals and the resulting opportunities for TC learning. We offer design considerations for coached rehearsals and other approximations of practice.
3

Beyer, Stephan, and Markus Chimani. "Strong Steiner Tree Approximations in Practice." ACM Journal of Experimental Algorithmics 24 (December 17, 2019): 1–33. http://dx.doi.org/10.1145/3299903.

4

Kavanagh, Sarah Schneider, Mike Metz, Mary Hauser, Brad Fogo, Megan Westwood Taylor, and Janet Carlson. "Practicing Responsiveness: Using Approximations of Teaching to Develop Teachers’ Responsiveness to Students’ Ideas." Journal of Teacher Education 71, no. 1 (April 18, 2019): 94–107. http://dx.doi.org/10.1177/0022487119841884.

Abstract:
As practice-based teacher education (PBTE) has become more prevalent, debates about its contribution have emerged. Critics of PBTE question whether emphasizing practice will support a technocratic approach to teacher education rather than promoting instruction that is responsive to students’ ideas. This qualitative case study was motivated by an interest in understanding whether and in what ways practice-based approaches to teacher learning can support teachers in practicing responsiveness as opposed to practicing decontextualized moves. To this end, we investigated how early-career teachers in a practice-based professional development program were supported to approximate teaching practices. We focused on the extent to which approximations of practice supported teachers to hone their skill at being responsive to students’ ideas. Findings revealed characteristics of approximations of practice that support teachers in developing their capacity to enact responsive instruction. These findings have implications for program design, teacher educator pedagogy, and future research.
5

Negrea, Jeffrey, and Jeffrey S. Rosenthal. "Approximations of geometrically ergodic reversible Markov chains." Advances in Applied Probability 53, no. 4 (November 22, 2021): 981–1022. http://dx.doi.org/10.1017/apr.2021.10.

Abstract:
A common tool in the practice of Markov chain Monte Carlo (MCMC) is to use approximating transition kernels to speed up computation when the desired kernel is slow to evaluate or is intractable. A limited set of quantitative tools exists to assess the relative accuracy and efficiency of such approximations. We derive a set of tools for such analysis based on the Hilbert space generated by the stationary distribution we intend to sample, $L_2(\pi)$. Our results apply to approximations of reversible chains which are geometrically ergodic, as is typically the case for applications to MCMC. The focus of our work is on determining whether the approximating kernel will preserve the geometric ergodicity of the exact chain, and whether the approximating stationary distribution will be close to the original stationary distribution. For reversible chains, our results extend the results of Johndrow et al. (2015) from the uniformly ergodic case to the geometrically ergodic case, under some additional regularity conditions. We then apply our results to a number of approximate MCMC algorithms.
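The trade-off the abstract describes can be made concrete with a toy experiment: run the same random-walk Metropolis sampler against an exact log-density and against a cheap perturbed surrogate, then compare the estimates. This is a minimal illustration only; the target, the perturbation, and all parameters below are hypothetical and not taken from the paper.

```python
import math
import random

def metropolis(log_target, n_steps, x0=0.0, step=1.0, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional log-density."""
    rng = random.Random(seed)
    x, samples = x0, []
    lp = log_target(x)
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)
        lq = log_target(y)
        if math.log(rng.random()) < lq - lp:  # accept/reject in log space
            x, lp = y, lq
        samples.append(x)
    return samples

# Exact target: standard normal. Approximate kernel: the same target with a
# small smooth perturbation of the log-density (a stand-in for, e.g., a
# cheap surrogate likelihood).
exact_log_pdf = lambda x: -0.5 * x * x
approx_log_pdf = lambda x: -0.5 * x * x + 0.01 * math.sin(x)

exact_mean = sum(metropolis(exact_log_pdf, 20000)) / 20000
approx_mean = sum(metropolis(approx_log_pdf, 20000)) / 20000
```

With a perturbation this small, both chains estimate the target mean to within ordinary Monte Carlo error; the paper's contribution is to quantify such discrepancies rigorously in the geometrically ergodic setting.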
6

Karakida, Ryo, and Kazuki Osawa. "Understanding approximate Fisher information for fast convergence of natural gradient descent in wide neural networks." Journal of Statistical Mechanics: Theory and Experiment 2021, no. 12 (December 1, 2021): 124010. http://dx.doi.org/10.1088/1742-5468/ac3ae3.

Abstract:
Natural gradient descent (NGD) helps to accelerate the convergence of gradient descent dynamics, but it requires approximations in large-scale deep neural networks because of its high computational cost. Empirical studies have confirmed that some NGD methods with approximate Fisher information converge sufficiently fast in practice. Nevertheless, it remains unclear from the theoretical perspective why and under what conditions such heuristic approximations work well. In this work, we reveal that, under specific conditions, NGD with approximate Fisher information achieves the same fast convergence to global minima as exact NGD. We consider deep neural networks in the infinite-width limit, and analyze the asymptotic training dynamics of NGD in function space via the neural tangent kernel. In the function space, the training dynamics with the approximate Fisher information are identical to those with the exact Fisher information, and they converge quickly. The fast convergence holds for layer-wise approximations: for instance, for the block-diagonal approximation, in which each block corresponds to a layer, as well as for the block tri-diagonal and K-FAC approximations. We also find that a unit-wise approximation achieves the same fast convergence under some assumptions. All of these different approximations have an isotropic gradient in the function space, and this plays a fundamental role in achieving the same convergence properties in training. Thus, the current study gives a novel and unified theoretical foundation with which to understand NGD methods in deep learning.
7

Glasserman, Paul, and Hui Wang. "Discretization of deflated bond prices." Advances in Applied Probability 32, no. 2 (June 2000): 540–63. http://dx.doi.org/10.1239/aap/1013540178.

Abstract:
This paper proposes and analyzes discrete-time approximations to a class of diffusions, with an emphasis on preserving certain important features of the continuous-time processes in the approximations. We start with multivariate diffusions having three features in particular: they are martingales, each of their components evolves within the unit interval, and the components are almost surely ordered. In the models of the term structure of interest rates that motivate our investigation, these properties have the important implications that the model is arbitrage-free and that interest rates remain positive. In practice, numerical work with such models often requires Monte Carlo simulation and thus entails replacing the original continuous-time model with a discrete-time approximation. It is desirable that the approximating processes preserve the three features of the original model just noted, though standard discretization methods do not. We introduce new discretizations based on first applying nonlinear transformations from the unit interval to the real line (in particular, the inverse normal and inverse logit), then using an Euler discretization, and finally applying a small adjustment to the drift in the Euler scheme. We verify that these methods enforce important features in the discretization with no loss in the order of convergence (weak or strong). Numerical results suggest that these methods can also yield a better approximation to the law of the continuous-time process than does a more standard discretization.
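The transformation idea can be sketched on a toy (0,1)-valued martingale, dX = σX(1−X) dW: applying Itô's formula to Y = logit(X) gives dY = ½σ²(2X−1) dt + σ dW, and an Euler step in Y keeps every iterate strictly inside the unit interval. This is only a simplified illustration of the transform-then-discretize approach; the paper's specific diffusions and drift adjustments are not reproduced here.

```python
import math
import random

def sigmoid(y):
    return 1.0 / (1.0 + math.exp(-y))

def logit(x):
    return math.log(x / (1.0 - x))

def simulate_transformed_euler(x0=0.5, sigma=0.5, dt=0.01, n_steps=1000, seed=1):
    """Euler scheme applied to Y = logit(X) for the (0,1)-valued martingale
    dX = sigma * X * (1 - X) dW.  Ito's formula gives
    dY = 0.5 * sigma**2 * (2X - 1) dt + sigma dW,
    so every iterate X_k = sigmoid(Y_k) stays strictly inside (0, 1),
    which a plain Euler step on X would not guarantee."""
    rng = random.Random(seed)
    y = logit(x0)
    path = [x0]
    for _ in range(n_steps):
        x = sigmoid(y)
        y += 0.5 * sigma**2 * (2.0 * x - 1.0) * dt \
             + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(sigmoid(y))
    return path
```

Mapping back through the sigmoid enforces the unit-interval constraint by construction, which is the feature-preservation property the abstract emphasizes.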
8

Glasserman, Paul, and Hui Wang. "Discretization of deflated bond prices." Advances in Applied Probability 32, no. 02 (June 2000): 540–63. http://dx.doi.org/10.1017/s0001867800010077.

9

Majda, Andrew J., Boris Gershgorin, and Yuan Yuan. "Low-Frequency Climate Response and Fluctuation–Dissipation Theorems: Theory and Practice." Journal of the Atmospheric Sciences 67, no. 4 (April 1, 2010): 1186–201. http://dx.doi.org/10.1175/2009jas3264.1.

Abstract:
The low-frequency response to changes in external forcing or other parameters for various components of the climate system is a central problem of contemporary climate change science. The fluctuation–dissipation theorem (FDT) is an attractive way to assess climate change by utilizing statistics of the present climate; with systematic approximations, it has been shown recently to have high skill for suitable regimes of an atmospheric general circulation model (GCM). Further applications of FDT to low-frequency climate response require improved approximations for FDT on a reduced subspace of resolved variables. Here, systematic mathematical principles are utilized to develop new FDT approximations on reduced subspaces and to assess the small yet significant departures from Gaussianity in low-frequency variables on the FDT response. Simplified test models mimicking crucial features in GCMs are utilized here to elucidate these issues and various FDT approximations in an unambiguous fashion. Also, the shortcomings of alternative ad hoc procedures for FDT in the recent literature are discussed here. In particular, it is shown that linear regression stochastic models for the FDT response always have no skill for a general nonlinear system for the variance response and can have poor or moderate skill for the mean response depending on the regime of the Lorenz 40-model and the details of the regression strategy. New nonlinear stochastic FDT approximations for a reduced set of variables are introduced here with significant skill in capturing the effect of subtle departures from Gaussianity in the low-frequency response for a reduced set of variables.
10

Huang, Yichen. "Matrix product state approximations: Bringing theory closer to practice." Quantum Views 3 (November 6, 2019): 26. http://dx.doi.org/10.22331/qv-2019-11-06-26.

11

Orfanos, Stefanos C. "A Comparison of Macaulay Approximations." Risks 10, no. 8 (July 29, 2022): 153. http://dx.doi.org/10.3390/risks10080153.

Abstract:
We discuss several known formulas that use the Macaulay duration and convexity of commonly used cash flow streams to approximate their net present value, and compare them with a new approximation formula that involves hyperbolic functions. Our objective is to assess the reliability of each approximation formula under different scenarios. The results in this note should be of interest to actuarial candidates and educators as well as analysts working in all areas of actuarial practice.
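As background, the standard duration–convexity approximation the paper benchmarks against is a second-order Taylor expansion of the price in the yield. The sketch below applies it to a hypothetical 10-year, 5%-coupon par bond; the paper's hyperbolic-function formula is not reproduced.

```python
def price(cashflows, y):
    """Net present value of (time, amount) cash flows at annual yield y."""
    return sum(cf / (1 + y) ** t for t, cf in cashflows)

def macaulay_duration(cashflows, y):
    """Present-value-weighted average time of the cash flows."""
    p = price(cashflows, y)
    return sum(t * cf / (1 + y) ** t for t, cf in cashflows) / p

def convexity(cashflows, y):
    """Second derivative of price with respect to yield, divided by price."""
    p = price(cashflows, y)
    return sum(t * (t + 1) * cf / (1 + y) ** (t + 2) for t, cf in cashflows) / p

def approx_price(cashflows, y0, y1):
    """Second-order Taylor approximation of the price at yield y1 around y0:
    P(y1) ~= P(y0) * (1 - D_mod * dy + 0.5 * C * dy**2)."""
    p0 = price(cashflows, y0)
    d_mod = macaulay_duration(cashflows, y0) / (1 + y0)  # modified duration
    c = convexity(cashflows, y0)
    dy = y1 - y0
    return p0 * (1 - d_mod * dy + 0.5 * c * dy ** 2)
```

For a 100-basis-point yield shift on the par bond, this approximation lands within a few hundredths of a unit of the exact repricing, which is the kind of reliability comparison the note carries out across formulas.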
12

Gonsalves, Allison J., Emily Diane Sprowls, and Dawn Wiseman. "Teaching Novice Science Teachers Online: Considerations for Practice-Based Pedagogy." LEARNing Landscapes 14, no. 1 (June 24, 2021): 111–23. http://dx.doi.org/10.36510/learnland.v14i1.1049.

Abstract:
The COVID-19 pandemic has required educators at all levels to pivot instruction online. In this article, we consider methods we adopted to engage novice science teachers in approximations of teaching, online. We describe the principles of our science teacher education program and provide a rationale for the core feature of our science teaching methods course: practice-based pedagogy. We then discuss adaptations we have made to engage novices in ambitious science teaching practices online, and the affordances and constraints the virtual context posed to these practices. We conclude with a discussion of considerations for online practice-based pedagogy.
13

Mahony, John D. "A mathematical approximation in the physical sciences." Mathematical Gazette 106, no. 566 (June 22, 2022): 220–32. http://dx.doi.org/10.1017/mag.2022.62.

Abstract:
The business of making mathematical approximations in the physical sciences has a long and noble history. For example, in the earliest days of pyramid construction in ancient Egypt it was necessary to approximate lengths required in construction, especially when they involved irrational numbers. Similarly, surveyors in early Greece seeking to lay out profiles of right-angle triangles or circles on the ground invariably ended up making approximations regarding measurements of required lengths, as indeed is the case today. Practitioners have always faced the problem of having to decide when parameters in theory have been met satisfactorily in the practice of measurement. Further, before the advent of hand-held calculators, students in schools in the UK would have been very familiar with the approximation 22/7 for the transcendental number π, obtained perhaps by comparing (as this author did) the measured circumferences of many laboriously drawn circles of different sizes with their diameters. Despite the advent of sophisticated calculating devices and facilities, such as computers and spreadsheets, the practice of making approximations is still much in evidence in theoretical work in fields associated with physical phenomena. Such approximations often result in formulae that are easy to use and remember, and moreover can produce theoretical results that support directly, or otherwise, results from measurements. In this respect, the practical mathematician does not have to seek results to many decimal places when measurement facilities allow for accuracy to only a few. The purpose of this Article is to illustrate this point by discussing an example drawn from the realms of antenna theory, relating to the performance of a dipole antenna. 
It is not the purpose here to delve into the derivation of dipole theory, but to extract the relevant information and show how useful mathematical approximations can be employed to simplify a relationship between parameters of interest to an antenna engineer. To this end, it will first be necessary to introduce some antenna concepts that might be new to the reader.
14

Schutz, Kristine M., Katie A. Danielson, and Julie Cohen. "Approximations in English language arts: Scaffolding a shared teaching practice." Teaching and Teacher Education 81 (May 2019): 100–111. http://dx.doi.org/10.1016/j.tate.2019.01.004.

15

Alkhalifah, Tariq. "Prestack traveltime approximations." GEOPHYSICS 77, no. 3 (May 1, 2012): U31–U37. http://dx.doi.org/10.1190/geo2011-0465.1.

Abstract:
Many of the explicit prestack traveltime relations used in practice are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multifocusing approach, based on the double square-root (DSR) equation, and the common reflection stack (CRS) approach. Using the DSR equation, I constructed the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I recast the eikonal in terms of the reflection angle and thus derived expansion-based solutions of this eikonal in terms of the difference between the source and receiver velocities in a generally inhomogeneous background medium. The zero-order term solution, corresponding to ignoring the lateral velocity variation in estimating the prestack part, is free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. The higher-order terms include limitations for horizontally traveling waves; however, we can readily enforce stability constraints to avoid such singularities. In fact, another expansion over reflection angle can help us avoid these singularities by requiring the source and receiver velocities to be different. On the other hand, expansions in terms of reflection angles result in singularity-free equations. For a homogeneous background medium, as a test, the solutions are reasonably accurate to large reflection and dip angles. A Marmousi example demonstrated the usefulness and versatility of the formulation.
16

Moss, Pamela A. "Analyzing the Teaching of Professional Practice." Teachers College Record: The Voice of Scholarship in Education 113, no. 12 (December 2011): 2878–96. http://dx.doi.org/10.1177/016146811111301202.

Abstract:
Background/Context: Based on their case studies of preparation for professional practice in the clergy, teaching, and clinical psychology, Grossman and colleagues (2009) identified three key concepts for analyzing and comparing practice in professional education—representations, decomposition, and approximations—to support professional educators in learning from each other's practice. In this special issue, two teams of teacher educators (Kucan & Palincsar, and Boerst, Sleep, Ball, & Bass) put these concepts to work in representing their practice of preparing novice teachers to lead discussions with their students.

Purpose/Objective/Research Questions/Focus of Study: This analytic essay presents an argument for the importance of (a) adding a fourth key concept to the Grossman et al. framework—conceptions of quality—and (b) using these four concepts to trace novices' learning opportunities as they unfold over time in order to serve the goal of facilitating instructive comparisons in professional education.

Research Design: In this analytic essay, I analyze the three articles to examine how conceptions of quality are already entailed in the characterizations of practice. My analysis focuses on the kinds of criteria or "qualities" that are foregrounded; the grain size of practice to which the conception of quality is applied; and the ways in which variations in criteria—what counts as more or less advanced—are represented. I then contrast the sequence of learning opportunities and assessments described in the articles on discussion leading in terms of these four concepts.

Conclusions/Recommendations: Even instructional practices that appeared quite similar when described through the lenses of approximations, decomposition, and representations looked quite different when conceptions of quality and learning opportunities and assessments were traced over time. Representing these "learning trajectories"—which entail an understanding of the evolving dialectical relationships between learning opportunities and (at least intended) learning outcomes—seems essential to understanding and learning from the teaching practice.
17

Casey, Stephanie, and Joel Amidon. "Do You See What I See? Formative Assessment of Preservice Teachers’ Noticing of Students’ Mathematical Thinking." Mathematics Teacher Educator 8, no. 3 (June 2020): 88–104. http://dx.doi.org/10.5951/mte.2020.0009.

Abstract:
Developing expertise in professional noticing of students’ mathematical thinking takes time and meaningful learning experiences. We used the LessonSketch platform to create a learning experience for secondary preservice teachers (PSTs) involving an approximation of teaching practice to formatively assess PSTs’ noticing skills of students’ mathematical thinking. Our study showed that approximations of teaching practice embedded within platforms like LessonSketch can enable mathematics teacher educators (MTEs) to carry out effective formative assessment of PSTs’ professional noticing of students’ mathematical thinking that is meaningful for both PSTs and MTEs. The experience itself as well as its design features and framework used with the assessment can be applied in the work of MTEs who develop teachers’ professional noticing skills of students’ mathematical thinking.
18

Ignjatovic, Aleksandar, Chamith Wijenayake, and Gabriele Keller. "Chromatic Derivatives and Approximations in Practice—Part I: A General Framework." IEEE Transactions on Signal Processing 66, no. 6 (March 15, 2018): 1498–512. http://dx.doi.org/10.1109/tsp.2017.2787127.

19

Peters, Gareth, Rodrigo Targino, and Pavel Shevchenko. "Understanding operational risk capital approximations: First and second orders." Journal of Governance and Regulation 2, no. 3 (2013): 58–78. http://dx.doi.org/10.22495/jgr_v2_i3_p6.

Abstract:
We set the context for capital approximation within the framework of the Basel II/III regulatory capital accords. This is particularly topical as the Basel III accord is shortly due to take effect. In this regard, we provide a summary of the role of capital adequacy in the new accord, highlighting along the way the significant loss events that have been attributed to the Operational Risk class that was introduced in the Basel II and III accords. Then we provide a semi-tutorial discussion on the modelling aspects of capital estimation under a Loss Distributional Approach (LDA). Our emphasis is on the important loss processes with regard to those that contribute most to capital, the so-called "high-consequence, low-frequency" loss processes. This leads us to provide a tutorial overview of heavy-tailed loss process modelling in OpRisk under Basel III, with discussion on the implications of such tail assumptions for the severity model in an LDA structure. This provides practitioners with a clear understanding of the features that they may wish to consider when developing OpRisk severity models in practice. From this discussion on heavy-tailed severity models, we then develop an understanding of the impact such models have on the right tail asymptotics of the compound loss process, and we provide a detailed presentation of what are known as first- and second-order tail approximations for the resulting heavy-tailed loss process. From this we develop a tutorial on three key families of risk measures and their equivalent second-order asymptotic approximations: Value-at-Risk (the Basel III industry standard), Expected Shortfall (ES), and the Spectral Risk Measure. These then form the capital approximations. We then provide a few example case studies to illustrate the accuracy of these asymptotic capital approximations, the rate of convergence of the asymptotic result as a function of the LDA frequency and severity model parameters, the sensitivity of the capital approximation to the model parameters, and the sensitivity to model misspecification.
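A first-order tail approximation of the kind discussed above takes a particularly simple closed form for a compound Poisson loss with subexponential (here, Pareto) severity: VaR at confidence q is approximated by the severity quantile F⁻¹(1 − (1 − q)/λ). The sketch below uses purely hypothetical parameters and is an illustration of the single-loss idea, not the paper's full first- and second-order analysis.

```python
def pareto_quantile(u, x_min, tail_index):
    # Inverse CDF of a Pareto law: F(x) = 1 - (x_min / x)**tail_index for x >= x_min
    return x_min * (1.0 - u) ** (-1.0 / tail_index)

def single_loss_var(confidence, freq, x_min, tail_index):
    # First-order ("single-loss") approximation for a compound Poisson loss
    # with heavy-tailed severity: VaR_q ~= F^{-1}(1 - (1 - q) / freq),
    # where freq is the expected annual loss count.
    return pareto_quantile(1.0 - (1.0 - confidence) / freq, x_min, tail_index)
```

With, say, ten expected losses per year and a Pareto(1, 2) severity, the 99.9% capital approximation is the 99.99% severity quantile, i.e. 100 in these units; a single large loss dominates the tail, which is exactly the heavy-tailed regime the tutorial emphasizes.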
20

Davis, R. S. "Optimization of the Spatial Mesh for Numerical Solution of the Neutron Transport Equation in a Cluster-Type Lattice Cell." AECL Nuclear Review 1, no. 1 (June 1, 2012): 35–43. http://dx.doi.org/10.12943/anr.2012.00006.

Abstract:
For programs that solve the neutron transport equation with an approximation that the neutron flux is constant in each space in a user-defined mesh, optimization of that mesh yields benefits in computing time and attainable precision. The previous best practice does not optimize the mesh thoroughly, because a large number of test runs of the solving software would be necessary. The method presented here optimizes the mesh for a flux that is based on conventional approximations but is more informative, so that a minimal number of parameters, one per type of material, must be adjusted by test runs to achieve thorough optimization. For a 37 element, natural-uranium, CANDU® lattice cell, the present optimization yields 7 to 12 times (depending on the criterion) better precision than the previous best practice in 37% less computing time.
21

Lubyshev, F. V., and M. E. Fairuzov. "Approximation of a mixed boundary value problem." Zhurnal Srednevolzhskogo Matematicheskogo Obshchestva 20, no. 4 (December 30, 2018): 429–38. http://dx.doi.org/10.15507/2079-6900.20.201804.429-438.

Abstract:
The mixed boundary value problem for the divergent-type elliptic equation with variable coefficients is considered. It is assumed that the integration domain has a sufficiently smooth boundary that is the union of two disjoint pieces. The Dirichlet boundary condition is given on the first piece, and the Neumann boundary condition is given on the other, so the problem has a discontinuous boundary condition. Such problems with mixed boundary conditions arise frequently in practice when modeling processes and are of considerable interest in the development of methods for their solution. In particular, a number of problems in the theory of elasticity, the theory of diffusion, filtration, geophysics, and optimization of electro-heat and mass transfer in complex multielectrode electrochemical systems reduce to boundary value problems of this type. In this paper, we propose an approximation of the original mixed boundary value problem by the third boundary value problem with a parameter. The convergence of the proposed approximations is investigated, and estimates of the approximations' convergence rate in Sobolev norms are established.
22

Rice, Michael, and Vassilis Tsotras. "Bidirectional A* Search with Additive Approximation Bounds." Proceedings of the International Symposium on Combinatorial Search 3, no. 1 (August 20, 2021): 80–87. http://dx.doi.org/10.1609/socs.v3i1.18235.

Abstract:
In this paper, we present new theoretical and experimental results for bidirectional A* search. Unlike most previous research on this topic, our results do not require assumptions of either consistent or balanced heuristic functions for the search. Our theoretical work examines new results on the worst-case number of node expansions for inconsistent heuristic functions with bounded estimation errors. Additionally, we consider several alternative termination criteria in order to more quickly terminate the bidirectional search, and we provide worst-case approximation bounds for our suggested criteria. We prove that our approximation bounds are purely additive in nature (a general improvement over previous multiplicative approximations). Experimental evidence on large-scale road networks suggests that the errors introduced are truly quite negligible in practice, while the performance gains are significant.
APA, Harvard, Vancouver, ISO, and other styles
23

Skipper, Max. "A Pólya Approximation to the Poisson-Binomial Law." Journal of Applied Probability 49, no. 3 (September 2012): 745–57. http://dx.doi.org/10.1239/jap/1346955331.

Full text
Abstract:
Using Stein's method, we derive explicit upper bounds on the total variation distance between a Poisson-binomial law (the distribution of a sum of independent but not necessarily identically distributed Bernoulli random variables) and a Pólya distribution with the same support, mean, and variance; a nonuniform bound on the pointwise distance between the probability mass functions is also given. A numerical comparison of alternative distributional approximations on a somewhat representative collection of case studies is also exhibited. The evidence shows that no single approximation is uniformly most accurate, though it suggests that the Pólya approximation may be preferred in several parameter domains encountered in practice.
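An illustrative sketch (not from the article): the Poisson-binomial pmf is cheap to compute exactly for small n, so comparisons of this kind are easy to reproduce. Here the comparison law is a mean-matched Poisson rather than the article's Pólya distribution:

```python
import math
import numpy as np

def poisson_binomial_pmf(ps):
    # Exact pmf of a sum of independent Bernoulli(p_i): repeated convolution.
    pmf = np.array([1.0])
    for p in ps:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf

def total_variation(p, q):
    n = max(len(p), len(q))
    p = np.pad(p, (0, n - len(p)))
    q = np.pad(q, (0, n - len(q)))
    return 0.5 * float(np.abs(p - q).sum())

ps = [0.1, 0.2, 0.05, 0.3, 0.15]
pb = poisson_binomial_pmf(ps)            # exact law of the sum
lam = sum(ps)                            # mean-matched Poisson
poisson = np.array([math.exp(-lam) * lam**k / math.factorial(k)
                    for k in range(len(pb) + 20)])
print(round(total_variation(pb, poisson), 4))
```

The computed distance sits below Le Cam's classical bound of Σ p_i²; the article's Pólya approximation additionally matches the variance.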
APA, Harvard, Vancouver, ISO, and other styles
25

Tyminski, Andrew M., V. Serbay Zambak, Corey Drake, and Tonia J. Land. "Using representations, decomposition, and approximations of practices to support prospective elementary mathematics teachers’ practice of organizing discussions." Journal of Mathematics Teacher Education 17, no. 5 (December 27, 2013): 463–87. http://dx.doi.org/10.1007/s10857-013-9261-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Armas Romero, Ana, Mark Kaminski, Bernardo Cuenca Grau, and Ian Horrocks. "Module Extraction in Expressive Ontology Languages via Datalog Reasoning." Journal of Artificial Intelligence Research 55 (February 29, 2016): 499–564. http://dx.doi.org/10.1613/jair.4898.

Full text
Abstract:
Module extraction is the task of computing a (preferably small) fragment M of an ontology T that preserves a class of entailments over a signature of interest S. Extracting modules of minimal size is well-known to be computationally hard, and often algorithmically infeasible, especially for highly expressive ontology languages. Thus, practical techniques typically rely on approximations, where M provably captures the relevant entailments, but is not guaranteed to be minimal. Existing approximations ensure that M preserves all second-order entailments of T w.r.t. S, which is a stronger condition than is required in many applications, and may lead to unnecessarily large modules in practice. In this paper we propose a novel approach in which module extraction is reduced to a reasoning problem in datalog. Our approach generalises existing approximations in an elegant way. More importantly, it allows extraction of modules that are tailored to preserve only specific kinds of entailments, and thus are often significantly smaller. Our evaluation on a wide range of ontologies confirms the feasibility and benefits of our approach in practice.
APA, Harvard, Vancouver, ISO, and other styles
27

Polnikov, V. G. "The choice of optimal Discrete Interaction Approximation to the kinetic integral for ocean waves." Nonlinear Processes in Geophysics 10, no. 4/5 (October 31, 2003): 425–34. http://dx.doi.org/10.5194/npg-10-425-2003.

Full text
Abstract:
A large number of discrete configurations for the four-wave nonlinear interaction processes have been calculated and tested by the method proposed earlier within the framework of the Fast Discrete Interaction Approximation to Hasselmann's kinetic integral (Polnikov and Farina, 2002). Several simple configurations were found that are more efficient than the one proposed originally in Hasselmann et al. (1985), and the optimal multiple Discrete Interaction Approximation (DIA) to the kinetic integral for deep-water waves was identified. Wave spectrum features were intercompared for a number of different DIA configurations applied to a long-time solution of the kinetic equation; this intercomparison confirmed the greater efficiency of the proposed configurations. Recommendations are given for implementing the new approximations in wave forecasting practice.
APA, Harvard, Vancouver, ISO, and other styles
28

Błażejczyk, Krzysztof, and Magdalena Kuchcik. "UTCI applications in practice (methodological questions)." Geographia Polonica 94, no. 2 (2021): 153–65. http://dx.doi.org/10.7163/gpol.0198.

Full text
Abstract:
Although UTCI was developed with the participation of scientists from 22 countries, it has shortcomings, and those who use it face various obstacles. The difficulties span a wide range of issues: the differing availability of meteorological data in individual countries, the kind of air temperature that should properly be used in calculations, and the need to recalculate wind speed. The biggest concern, however, is the algorithms for calculating mean radiant temperature (Mrt): different models and programs simplify the calculation of this complex index but introduce different approximations and, as a result, many false results. The paper also presents a wide range of UTCI applications in urban bioclimate studies and bioclimatic mapping, climate-human health research, and biometeorological forecasts, which were the primary purpose of the index's creation, as well as applications in tourism and recreation and even in bioclimate change analysis.
APA, Harvard, Vancouver, ISO, and other styles
29

Howell, Heather, and Jamie N. Mikeska. "Approximations of practice as a framework for understanding authenticity in simulations of teaching." Journal of Research on Technology in Education 53, no. 1 (January 2, 2021): 8–20. http://dx.doi.org/10.1080/15391523.2020.1809033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Singer-Gabella, Marcy, Barbara Stengel, Emily Shahan, and Min-Joung Kim. "Learning to Leverage Student Thinking: What Novice Approximations Teach Us About Ambitious Practice." Elementary School Journal 116, no. 3 (March 2016): 411–36. http://dx.doi.org/10.1086/684944.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Mohan, K. Aditya, Dilworth Y. Parkinson, and Jefferson A. Cuadra. "Constrained Non-Linear Phase Retrieval for Single Distance Xray Phase Contrast Tomography." Electronic Imaging 2020, no. 14 (January 26, 2020): 146–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.14.coimg-146.

Full text
Abstract:
X-ray phase contrast tomography (XPCT) is widely used for 3D imaging of objects with weak contrast in X-ray absorption index but strong contrast in refractive index decrement. To reconstruct an object imaged using XPCT, phase retrieval algorithms are first used to estimate the X-ray phase projections (the 2D projections of the refractive index decrement) at each view. Phase retrieval is followed by reconstruction of the refractive index decrement from the phase projections using an algorithm such as filtered back projection (FBP). In practice, phase retrieval is most commonly solved by approximating it as a linear inverse problem. However, this linear approximation often results in artifacts and blurring when the conditions for the approximation are violated. In this paper, we formulate phase retrieval as a non-linear inverse problem, in which we solve for the transmission function, the negative exponential of the projections, from XPCT measurements. We use a constraint to enforce proportionality between phase and absorption projections. We do not use constraints such as a large Fresnel number, slowly varying phase, or the Born/Rytov approximations. Our approach also does not require any regularization parameter tuning, since there is no explicit sparsity-enforcing regularization function. We validate the performance of our non-linear phase retrieval (NLPR) method using both simulated and real synchrotron datasets. We compare NLPR with a popular linear phase retrieval (LPR) approach and show that NLPR achieves sharper reconstructions with higher quantitative accuracy.
APA, Harvard, Vancouver, ISO, and other styles
32

Zhao, Zheng-Yu, Qing Fang, Wenqing Ouyang, Zheng Zhang, Ligang Liu, and Xiao-Ming Fu. "Developability-driven piecewise approximations for triangular meshes." ACM Transactions on Graphics 41, no. 4 (July 2022): 1–13. http://dx.doi.org/10.1145/3528223.3530117.

Full text
Abstract:
We propose a novel method to compute a piecewise mesh with a few developable patches and a small approximation error for an input triangular mesh. Our key observation is that a deformed mesh after enforcing discrete developability is easily partitioned into nearly developable patches. To obtain the nearly developable mesh, we present a new edge-oriented notion of discrete developability to define a developability-encouraged deformation energy, which is further optimized by the block nonlinear Gauss-Seidel method. The key to successfully applying this optimizer is three types of auxiliary variables. Then, a coarse-to-fine segmentation technique is developed to partition the deformed mesh into a small set of nearly discrete developable patches. Finally, we refine the segmented mesh to reduce the discrete Gaussian curvature while keeping the patches smooth and the approximation error small. In practice, our algorithm achieves a favorable tradeoff between the number of developable patches and the approximation error. We demonstrate the feasibility and practicability of our method over various examples, including seventeen physical manufacturing models with paper.
APA, Harvard, Vancouver, ISO, and other styles
33

De Oliveira, Samir Adamoglu, Kleber Cuissi Canuto, and Fabricio Baron Mussi. "Praxis and its mediators in ‘Strategy as Practice’: the role of technology use consolidating the strategizing." REBRAE 8, no. 2 (July 27, 2015): 138. http://dx.doi.org/10.7213/rebrae.08.002.ao02.

Full text
Abstract:
Despite calls for closer qualitative and quantitative approximation between fundamental areas of Organization Studies, so as to unlock their explanatory potential, some theoretical gaps still hold such integration back. One example concerns the themes of Strategy and Technology, when the following question is considered: what is the role of technology use in strategizing? Motivated by this issue, the essay develops an analysis focused on the strategic purposes of the empirical studies conducted and portrayed by Orlikowski (1992) and Schultze and Orlikowski (2004), attempting to bridge Strategy and Technology from a practice-centered approach and capitalizing on the epistemological, theoretical, and methodological convergence of the 'Strategy as Practice' and 'Technologies-in-Practice' approaches. The essay shows that technology use in organizations works as a mediator for the praxis of strategy practitioners in framing and enacting the practices that sustain organizational strategy, while this very use of the technology tool-kit stems from the practitioner's strategic thinking and acting.
APA, Harvard, Vancouver, ISO, and other styles
34

Pineau, J., G. Gordon, and S. Thrun. "Anytime Point-Based Approximations for Large POMDPs." Journal of Artificial Intelligence Research 27 (November 26, 2006): 335–80. http://dx.doi.org/10.1613/jair.2078.

Full text
Abstract:
The Partially Observable Markov Decision Process has long been recognized as a rich framework for real-world planning and control problems, especially in robotics. However exact solutions in this framework are typically computationally intractable for all but the smallest problems. A well-known technique for speeding up POMDP solving involves performing value backups at specific belief points, rather than over the entire belief simplex. The efficiency of this approach, however, depends greatly on the selection of points. This paper presents a set of novel techniques for selecting informative belief points which work well in practice. The point selection procedure is combined with point-based value backups to form an effective anytime POMDP algorithm called Point-Based Value Iteration (PBVI). The first aim of this paper is to introduce this algorithm and present a theoretical analysis justifying the choice of belief selection technique. The second aim of this paper is to provide a thorough empirical comparison between PBVI and other state-of-the-art POMDP methods, in particular the Perseus algorithm, in an effort to highlight their similarities and differences. Evaluation is performed using both standard POMDP domains and realistic robotic tasks.
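An illustrative point-based backup, the core operation PBVI builds on, for the classic tiger POMDP with a small fixed belief set (PBVI's belief-point expansion and anytime machinery are omitted; parameters are the usual textbook ones):

```python
import numpy as np

gamma = 0.95
# Tiger POMDP: states (tiger-left, tiger-right),
# actions (listen, open-left, open-right), observations (hear-left, hear-right).
S, A, O = 2, 3, 2
T = np.zeros((A, S, S)); Z = np.zeros((A, S, O)); R = np.zeros((A, S))
T[0] = np.eye(2)                    # listen: state unchanged
T[1] = T[2] = np.full((2, 2), 0.5)  # opening a door resets the problem
Z[0] = [[0.85, 0.15], [0.15, 0.85]] # listening is 85% accurate
Z[1] = Z[2] = np.full((2, 2), 0.5)  # opening yields no information
R[0] = [-1, -1]                     # listening cost
R[1] = [-100, 10]                   # open-left: -100 if tiger is left
R[2] = [10, -100]                   # open-right: -100 if tiger is right

def backup(b, Gamma):
    # Point-based Bellman backup at belief b over the alpha-vector set Gamma.
    best, best_val = None, -np.inf
    for a in range(A):
        g = R[a].copy()
        for o in range(O):
            # g(s) = sum_{s'} T[a,s,s'] * Z[a,s',o] * alpha[s']
            cands = [T[a] @ (Z[a][:, o] * alpha) for alpha in Gamma]
            g = g + gamma * max(cands, key=lambda c: c @ b)
        if g @ b > best_val:
            best, best_val = g, g @ b
    return best

B = [np.array([p, 1 - p]) for p in (0.0, 0.25, 0.5, 0.75, 1.0)]  # fixed beliefs
Gamma = [np.full(S, R.min() / (1 - gamma))]  # pessimistic initial lower bound
for _ in range(200):
    Gamma = [backup(b, Gamma) for b in B]

value = lambda b: max(alpha @ b for alpha in Gamma)
print(round(float(value(np.array([0.5, 0.5]))), 2))
```

Repeated backups at the belief points yield a lower-bound value function; the article's contribution is in how the point set B is grown and analyzed, which this sketch does not attempt.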
APA, Harvard, Vancouver, ISO, and other styles
35

Asprion, Jonas, Oscar Chinellato, and Lino Guzzella. "Partitioned Quasi-Newton Approximation for Direct Collocation Methods and Its Application to the Fuel-Optimal Control of a Diesel Engine." Journal of Applied Mathematics 2014 (2014): 1–6. http://dx.doi.org/10.1155/2014/341716.

Full text
Abstract:
The numerical solution of optimal control problems by direct collocation is a widely used approach. Quasi-Newton approximations of the Hessian of the Lagrangian of the resulting nonlinear program are also common practice. We illustrate that the transcribed problem is separable with respect to the primal variables and propose the application of dense quasi-Newton updates to the small diagonal blocks of the Hessian. This approach resolves memory limitations, preserves the correct sparsity pattern, and generates more accurate curvature information. The effectiveness of this improvement when applied to engineering problems is demonstrated. As an example, the fuel-optimal and emission-constrained control of a turbocharged diesel engine is considered. First results indicate a significantly faster convergence of the nonlinear program solver when the method proposed is used instead of the standard quasi-Newton approximation.
APA, Harvard, Vancouver, ISO, and other styles
36

Borodin, Dmitriy, Nils Hertl, G. Barratt Park, Michael Schwarzer, Jan Fingerhut, Yingqi Wang, Junxiang Zuo, et al. "Quantum effects in thermal reaction rates at metal surfaces." Science 377, no. 6604 (July 22, 2022): 394–98. http://dx.doi.org/10.1126/science.abq1414.

Full text
Abstract:
There is wide interest in developing accurate theories for predicting rates of chemical reactions that occur at metal surfaces, especially for applications in industrial catalysis. Conventional methods contain many approximations that lack experimental validation. In practice, there are few reactions where sufficiently accurate experimental data exist to even allow meaningful comparisons to theory. Here, we present experimentally derived thermal rate constants for hydrogen atom recombination on platinum single-crystal surfaces, which are accurate enough to test established theoretical approximations. A quantum rate model is also presented, making possible a direct evaluation of the accuracy of commonly used approximations to adsorbate entropy. We find that neglecting the wave nature of adsorbed hydrogen atoms and their electronic spin degeneracy leads to a 10× to 1000× overestimation of the rate constant for temperatures relevant to heterogeneous catalysis. These quantum effects are also found to be important for nanoparticle catalysts.
APA, Harvard, Vancouver, ISO, and other styles
37

AL BAYATI, Sahib Jasim Hasan. "THE PHILOSOPHICAL APPROXIMATIONS IN INTERNATIONAL CONTEMPORARY FORMATION." International Journal of Humanities and Educational Research 03, no. 04 (August 1, 2021): 352–64. http://dx.doi.org/10.47832/2757-5403.4-3.30.

Full text
Abstract:
The research deals with the philosophical approximations in contemporary international formation. The introduction presents the problem and significance of the study, namely the view of art as a language involving many complex intellectual processes, a language ascertained through phenomenological interpretation by constructing an artwork from the same perception and approximating the space that expresses a visual language. The research aims at identifying the philosophical approximations in contemporary international formation, and its scope is restricted to philosophical approximations in contemporary formation, especially conceptual art from 1965 to 1970. The research has four sections. The first deals with the meaning of concept in philosophical thinking, tracing the notion of the concept in philosophy and presenting possibilities concerning concept, imagination, and thought. The second treats the concept as an interpretation, stating that art prospers through its quality, not its quantity. The third deals with the philosophy of conceptual art (applications and interpretations): the artistic phenomenon cannot be achieved without practice, a feature of ontology that requires discovering any phenomenon through its ontological function, that is, the merging of an artwork with its outer atmosphere and horizons. The fourth section presents the researcher's conclusions: (1) the concept of text in the Minimal Edge Art is regarded as a meeting of texts; the text, in its performative and formative types, depends on outer codes that transform meaning, a result of ambiguity and of connotations connected with hypothetical spaces, and old texts are renewed in every period; (2) the discourse level of the performing arts (body language) has declined to the degree of civilizational decline, and dealing with mixed references spreads through participations, a result of inadequate translation of synonyms that have mysterious and symbolic spatial limits. The research includes a list of references and an English abstract.
APA, Harvard, Vancouver, ISO, and other styles
38

Filippovich, G. A., and M. A. Yantsevich. "Flexible Approximation Functions for Broadband Matching." Journal of the Russian Universities. Radioelectronics 25, no. 2 (April 27, 2022): 6–15. http://dx.doi.org/10.32603/1993-8985-2022-25-2-6-15.

Full text
Abstract:
Introduction. The intensive use of broadband signals in RF devices for various purposes creates a need to develop broadband elements of RF systems. Iterative methods for designing such elements are frequently uninformative and ineffective, while analytical methods give solutions only for simple models. The problem is the small set of classical approximations, which impedes dealing with complex models of elements. Aim. Development of a wideband matching technique based on generalized Darlington synthesis using flexible approximating functions (AFs) for load models with transmission zeros at infinity. Materials and methods. The paper is based on the generalized Darlington synthesis method. To extend the capabilities of the method, approximating functions with increased variational properties are used. For use in engineering practice, a synthesis algorithm was developed that includes three stages: formation of the frequency response, control of the analyticity of the functions used, and the limits of matching. The method is analytical and does not use iterative procedures. Its mathematical apparatus is based on the analysis of residues at the zeros of the transfer function of the load resistance. Results. Flexible approximating functions proved to be an effective tool for designing matching circuits with multiple transfer zeros at infinity. The variational properties of the functions facilitate the realization of both smooth and wavy frequency characteristics; a combination of the two is also possible, securing the best properties of both. The proposed approximating functions allow a smooth change in the frequency response while preserving the normalization characteristic of classical approximations. Applying such functions made it possible to virtually remove the limitations inherent in classical AFs on the minimum values of load capacitance and on more than 30% of the limiting values of inductance in the examples above. Conclusion. The developed methodology makes the process of wideband matching physically transparent and can be applied to other classes of loads.
APA, Harvard, Vancouver, ISO, and other styles
40

Caetano, Aurea Afonso M., and Teresa Cristina Machado. "Archetype and epigenetics – approximations: contributions of epigenetics to the clinical practice of analytical psychology." Journal of Analytical Psychology 67, no. 2 (April 2022): 501–17. http://dx.doi.org/10.1111/1468-5922.12799.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Trent, John. "From campus to classroom: a critical perspective on approximations of practice in teacher education." Research Papers in Education 28, no. 5 (November 2013): 571–94. http://dx.doi.org/10.1080/02671522.2012.710246.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Kabán, Ata. "Sufficient ensemble size for random matrix theory-based handling of singular covariance matrices." Analysis and Applications 18, no. 05 (July 17, 2020): 929–50. http://dx.doi.org/10.1142/s0219530520400072.

Full text
Abstract:
Singular covariance matrices are frequently encountered in both machine learning and optimization problems, most commonly due to high dimensionality of data and insufficient sample sizes. Among many methods of regularization, here we focus on a relatively recent random matrix-theoretic approach, the idea of which is to create well-conditioned approximations of a singular covariance matrix and its inverse by taking the expectation of its random projections. We are interested in the error of a Monte Carlo implementation of this approach, which allows subsequent parallel processing in low dimensions in practice. We find that [Formula: see text] random projections, where [Formula: see text] is the size of the original matrix, are sufficient for the Monte Carlo error to become negligible, in the sense of expected spectral norm difference, for both covariance and inverse covariance approximation, in the latter case under mild assumptions.
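An illustrative sketch of the construction described (the estimator form R^T (R S R^T)^{-1} R and all parameters here are assumptions for illustration, not the article's exact definitions): averaging over random k-dimensional projections R yields a well-conditioned stand-in for the inverse of a singular covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 20, 5, 3            # dimension, sample size (m < n), projection dim
X = rng.standard_normal((m, n))
S = X.T @ X / m               # rank-deficient sample covariance (rank <= 5)

def mc_projected_inverse(S, k, n_proj, rng):
    # Monte Carlo average of R^T (R S R^T)^{-1} R over random projections R.
    n = S.shape[0]
    acc = np.zeros_like(S)
    for _ in range(n_proj):
        R = rng.standard_normal((k, n)) / np.sqrt(k)
        acc += R.T @ np.linalg.inv(R @ S @ R.T) @ R
    return acc / n_proj

approx_inv = mc_projected_inverse(S, k, 500, rng)
eigs = np.linalg.eigvalsh(approx_inv)
print(eigs.min() > 0, np.linalg.cond(approx_inv))
```

Although S itself is singular, the averaged matrix is symmetric positive definite; the article's question of how many such projections suffice is exactly about the Monte Carlo error of this average.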
APA, Harvard, Vancouver, ISO, and other styles
43

JOSHI, MARK, and OH KANG KWON. "LEAST SQUARES MONTE CARLO CREDIT VALUE ADJUSTMENT WITH SMALL AND UNIDIRECTIONAL BIAS." International Journal of Theoretical and Applied Finance 19, no. 08 (December 2016): 1650048. http://dx.doi.org/10.1142/s0219024916500485.

Full text
Abstract:
Credit value adjustment (CVA) and related charges have emerged as important risk factors following the Global Financial Crisis. These charges depend on uncertain future values of underlying products, and are usually computed by Monte Carlo simulation. For products that cannot be valued analytically at each simulation step, the standard market practice is to use the regression functions from least squares Monte Carlo method to approximate their values. However, these functions do not necessarily provide accurate approximations to product values over all simulated paths and can result in biases that are difficult to control. Motivated by a novel characterization of the CVA as the value of an option with an early exercise opportunity at a stochastic time, we provide an approximation for CVA and other credit charges that rely only on the sign of the regression functions. The values are determined, instead, by pathwise deflated cash flows. A comparison of CVA for Bermudan swaptions and cancellable swaps shows that the proposed approximation results in much smaller errors than the standard approach of using the regression function values.
APA, Harvard, Vancouver, ISO, and other styles
44

Hauskrecht, M. "Value-Function Approximations for Partially Observable Markov Decision Processes." Journal of Artificial Intelligence Research 13 (August 1, 2000): 33–94. http://dx.doi.org/10.1613/jair.678.

Full text
Abstract:
Partially observable Markov decision processes (POMDPs) provide an elegant mathematical framework for modeling complex decision and planning problems in stochastic domains in which states of the system are observable only indirectly, via a set of imperfect or noisy observations. The modeling advantage of POMDPs, however, comes at a price -- exact methods for solving them are computationally very expensive and thus applicable in practice only to very simple problems. We focus on efficient approximation (heuristic) methods that attempt to alleviate the computational problem and trade off accuracy for speed. We have two objectives here. First, we survey various approximation methods, analyze their properties and relations and provide some new insights into their differences. Second, we present a number of new approximation methods and novel refinements of existing techniques. The theoretical results are supported by experiments on a problem from the agent navigation domain.
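One MDP-based approximation of the kind surveyed here, QMDP, is simple to sketch: solve the underlying fully observable MDP, then score each belief by the expected Q-values. A minimal illustration on the classic tiger problem (textbook parameters), which also exhibits the characteristic over-optimism of ignoring partial observability:

```python
import numpy as np

gamma = 0.95
# Tiger POMDP dynamics: states (tiger-left, tiger-right),
# actions (listen, open-left, open-right).
S, A = 2, 3
T = np.zeros((A, S, S)); R = np.zeros((A, S))
T[0] = np.eye(2)                    # listen keeps the state
T[1] = T[2] = np.full((2, 2), 0.5)  # opening a door resets the problem
R[0] = [-1, -1]                     # listening cost
R[1] = [-100, 10]                   # open-left: -100 if tiger is left
R[2] = [10, -100]                   # open-right: -100 if tiger is right

# Step 1: value iteration on the fully observable MDP.
Q = np.zeros((S, A))
for _ in range(500):
    V = Q.max(axis=1)
    Q = R.T + gamma * np.array([T[a] @ V for a in range(A)]).T

# Step 2: QMDP scores a belief b by max_a sum_s b(s) Q(s, a).
def qmdp_action(b):
    return int(np.argmax(b @ Q))

b = np.array([0.5, 0.5])
print(b @ Q, qmdp_action(b))  # listen is chosen at the uniform belief
```

The resulting belief value (about 189 at the uniform belief) vastly exceeds the true POMDP value, because QMDP assumes the state becomes fully observable after one step: exactly the accuracy-for-speed trade-off the survey analyzes.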
APA, Harvard, Vancouver, ISO, and other styles
45

Bendinelli, O., G. Parmeggiani, and F. Zavatti. "Analytical Approximations of Long-Exposure Point Spread Functions and their Use." Highlights of Astronomy 8 (1989): 657–61. http://dx.doi.org/10.1017/s1539299600008625.

Full text
Abstract:
The observed light distribution in long-exposure star images (the PSF) may be fitted equally well by a variety of models. But when dealing with undersampled star images, only the multi-Gaussian model allows correct estimation of the model parameters, taking into account integration over the pixel surface, image off-centering, and background behaviour. It is also shown that the convolution of a spherical source with the multi-Gaussian and Moffat models gives in practice the same result.
APA, Harvard, Vancouver, ISO, and other styles
46

Nargesian, Fatemeh, Abolfazl Asudeh, and H. V. Jagadish. "Tailoring data source distributions for fairness-aware data integration." Proceedings of the VLDB Endowment 14, no. 11 (July 2021): 2519–32. http://dx.doi.org/10.14778/3476249.3476299.

Full text
Abstract:
Data scientists often develop data sets for analysis by drawing upon sources of data available to them. A major challenge is to ensure that the data set used for analysis has an appropriate representation of relevant (demographic) groups: it meets desired distribution requirements. Whether data is collected through some experiment or obtained from some data provider, the data from any single source may not meet the desired distribution requirements. Therefore, a union of data from multiple sources is often required. In this paper, we study how to acquire such data in the most cost effective manner, for typical cost functions observed in practice. We present an optimal solution for binary groups when the underlying distributions of data sources are known and all data sources have equal costs. For the generic case with unequal costs, we design an approximation algorithm that performs well in practice. When the underlying distributions are unknown, we develop an exploration-exploitation based strategy with a reward function that captures the cost and approximations of group distributions in each data source. Besides theoretical analysis, we conduct comprehensive experiments that confirm the effectiveness of our algorithms.
APA, Harvard, Vancouver, ISO, and other styles
47

Roijers, Diederik, Joris Scharpff, Matthijs Spaan, Frans Oliehoek, Mathijs De Weerdt, and Shimon Whiteson. "Bounded Approximations for Linear Multi-Objective Planning Under Uncertainty." Proceedings of the International Conference on Automated Planning and Scheduling 24 (May 11, 2014): 262–70. http://dx.doi.org/10.1609/icaps.v24i1.13641.

Full text
Abstract:
Planning under uncertainty poses a complex problem in which multiple objectives often need to be balanced. When dealing with multiple objectives, it is often assumed that the relative importance of the objectives is known a priori. However, in practice human decision makers often find it hard to specify such preferences, and would prefer a decision support system that presents a range of possible alternatives. We propose two algorithms for computing these alternatives for the case of linearly weighted objectives. First, we propose an anytime method, approximate optimistic linear support (AOLS), that incrementally builds up a complete set of ε-optimal plans, exploiting the piecewise linear and convex shape of the value function. Second, we propose an approximate anytime method, scalarised sample incremental improvement (SSII), that employs weight sampling to focus on the most interesting regions in weight space, as suggested by a prior over preferences. We show empirically that our methods are able to produce (near-)optimal alternative sets orders of magnitude faster than existing techniques.
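For two objectives, the piecewise linear and convex value function means only value vectors that are optimal for some weight need to be kept. An illustrative brute-force sketch of that pruning step (a weight grid, not the article's AOLS or SSII algorithms):

```python
import numpy as np

def prune_ch(vectors):
    # Keep only value vectors optimal for some weight w in [0, 1]
    # (two objectives; the scalarised value is w*v0 + (1-w)*v1).
    # Brute force over a fine weight grid is enough for a sketch.
    vectors = np.asarray(vectors, dtype=float)
    keep = set()
    for w in np.linspace(0.0, 1.0, 1001):
        scores = w * vectors[:, 0] + (1 - w) * vectors[:, 1]
        keep.add(int(np.argmax(scores)))
    return sorted(keep)

vecs = [(0.0, 10.0), (10.0, 0.0), (6.0, 6.0), (2.0, 2.0)]
print(prune_ch(vecs))  # (2, 2) is never optimal and is pruned
```

The surviving vectors trace out the piecewise linear convex surface over weight space that AOLS builds up incrementally with ε-optimality guarantees.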
48

Xiong, Weili, Mingchen Xue, and Baoguo Xu. "Constrained Dynamic Systems Estimation Based on Adaptive Particle Filter." Mathematical Problems in Engineering 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/589347.

Full text
Abstract:
For the state estimation problem, the Bayesian approach provides the most general formulation. However, most existing Bayesian estimators for dynamic systems either do not take constraints into account or rely on specific approximations. Such approximations, and the neglect of constraints, may reduce the accuracy of estimation. In this paper, a new methodology is proposed, within the particle filter framework, for state estimation of constrained systems with nonlinear models and non-Gaussian uncertainty, which are commonly encountered in practice. The main feature of this method is that constrained problems are handled well by a sample size test and two particle-handling strategies. Simulation results show that the proposed method can outperform the standard particle filter and two other existing algorithms in terms of accuracy and computational time.
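The constrained-filtering idea in this abstract can be illustrated with a minimal bootstrap particle filter that enforces an interval constraint by projecting (clipping) proposed particles back into the feasible region. The paper's own strategies (the sample size test and the two particle-handling schemes) are more involved; everything here, including the random-walk dynamics and the parameter names `lo`, `hi`, `proc_std`, `obs_std`, is an assumption for illustration.

```python
import math
import random

def constrained_particle_filter(observations, n_particles=500,
                                lo=0.0, hi=10.0, proc_std=0.5, obs_std=1.0):
    """Minimal sketch: bootstrap particle filter with the constraint
    lo <= x <= hi enforced by clipping each propagated particle."""
    particles = [random.uniform(lo, hi) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Propagate with random-walk dynamics, then project onto [lo, hi].
        particles = [min(hi, max(lo, x + random.gauss(0, proc_std)))
                     for x in particles]
        # Weight each particle by the Gaussian likelihood of the observation.
        weights = [math.exp(-0.5 * ((y - x) / obs_std) ** 2) for x in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Posterior-mean estimate, then multinomial resampling.
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates
```

Every estimate stays inside the feasible interval by construction, which is the point of the projection step; an unconstrained filter offers no such guarantee.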
49

Reyna, Carlos P. "Film and Ritual: epistemological dialogs between filmic anthropology and anthropological practice." Vibrant: Virtual Brazilian Anthropology 9, no. 2 (December 2012): 431–68. http://dx.doi.org/10.1590/s1809-43412012000200016.

Full text
Abstract:
This article discusses the dialog between filmic anthropology's procedures and methods and anthropological practice, with a focus on ritual, which is captured by the moving image in a more direct and fluid way. For this, the ritual of reproduction and preservation of cattle in Santiago, a peasant village in the Peruvian Andes, is used as an empirical base. As an anthropologist-filmmaker I will try to make explicit the relationship between the filmed process being observed and the informant, combining two important epistemological grids: Claudine de France's deferred observation, and Clifford Geertz's interpretation from the native's point of view. Finally, based on this experience, I will make some observations about the use of methodological approximations from filmic anthropology in anthropological practice, between film and anthropology.
50

Snowsill, G. D., and C. Young. "Toward Defining Objective Criteria for Assessing the Adequacy of Assumed Axisymmetry and Steadiness of Flows in Rotating Cavities." Journal of Turbomachinery 128, no. 4 (February 1, 2005): 708–16. http://dx.doi.org/10.1115/1.2185124.

Full text
Abstract:
The need to make a priori decisions about the level of approximation that can be accepted—and subsequently justified—in flows of industrial complexity is a perennial problem for computational fluid dynamics (CFD) analysts. This problem is particularly acute in the simulation of rotating cavity flows, where the stiffness of the equation set results in protracted convergence times, making any simplification extremely attractive. For example, it is common practice, in applications where the geometry and boundary conditions are axisymmetric, to assume that the flow solution will also be axisymmetric. It is known, however, that inappropriate imposition of this assumption can lead to significant errors. Similarly, where the geometry or boundary conditions exhibit cyclic symmetry, it is quite common for analysts to constrain the solutions to satisfy this symmetry through boundary condition definition. Examples of inappropriate use of these approximating assumptions are frequently encountered in rotating machinery applications, such as the ventilation of rotating cavities within aero-engines. Objective criteria are required to provide guidance regarding the level of approximation that is appropriate in such applications. In the present work, a study has been carried out into: (i) the extent to which local three-dimensional features influence solutions in a generally two-dimensional (2D) problem, with criteria proposed to aid in decisions about when a 2D axisymmetric model is likely to deliver an acceptable solution; (ii) the influence of flow features which may have a cyclic symmetry that differs from the bounding geometry or imposed boundary conditions (or indeed have no cyclic symmetry); and (iii) the influence of unsteady flow features and the extent to which their effects can be represented by mixing plane or multiple reference frame approximations.
