Academic literature on the topic 'Tennis – Mathematical models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Tennis – Mathematical models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Tennis – Mathematical models"

1

Tlustý, P. "The influence of changes of tennis rules on the course of a tennis match - mathematical model." Studia Kinanthropologica 17, no. 3 (September 30, 2016): 473–76. http://dx.doi.org/10.32725/sk.2016.102.

2

Martínez-García, Héctor, Luis R. Román-Fernández, María G. Sáenz-Romo, Ignacio Pérez-Moreno, and Vicente S. Marco-Mancebón. "Optimizing Nesidiocoris tenuis (Hemiptera: Miridae) as a biological control agent: mathematical models for predicting its development as a function of temperature." Bulletin of Entomological Research 106, no. 2 (December 23, 2015): 215–24. http://dx.doi.org/10.1017/s0007485315000978.

Abstract:
For optimal application of Nesidiocoris tenuis as a biological control agent, adequate field management and programmed mass rearing are essential. Mathematical models are useful tools for predicting the temperature-dependent developmental rate of the predator. In this study, the linear model and nonlinear models Logan type III, Lactin and Brière were estimated at constant temperatures and validated at alternating temperatures and under field conditions. N. tenuis achieved complete development from egg to adult at constant temperatures between 15 and 35°C with high survivorship (>80%) in the range 18–32°C. The total developmental time decreased from a maximum at 15°C (76.74 d) to a minimum at 33°C (12.67 d) and after that, increased to 35°C (13.98 d). Linear and nonlinear developmental models all had high accuracy (Ra² > 0.86). The maximum developmental rate was obtained between 31.9°C (Logan type III and Brière model for N1) and 35.6°C (for the egg stage in the Brière model). Optimal survival and the highest developmental rate fell within the range 27–30°C. The field validation revealed that the Logan type III and Lactin models offered the best predictions (95.0 and 94.5%, respectively). The data obtained on developmental time and mortality at different temperatures are useful for mass rearing this predator, and the developmental models are valuable for using N. tenuis as a biological control agent.
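
The abstract above compares linear and nonlinear temperature-dependent development-rate models. As a minimal illustration of that kind of fit (not the study's code or data), the sketch below fits one common nonlinear form, the Brière-1 model r(T) = a·T·(T − T0)·√(Tmax − T), to made-up temperature/rate observations with SciPy; the parameter values and data points are assumptions for demonstration only.

```python
# Illustrative only: fit a Briere-1 development-rate curve to hypothetical
# temperature/rate observations (values below are not the study's data).
import numpy as np
from scipy.optimize import curve_fit

def briere1(T, a, T0, Tmax):
    """Briere-1 model: rate = a*T*(T - T0)*sqrt(Tmax - T), zero outside (T0, Tmax)."""
    rate = a * T * (T - T0) * np.sqrt(np.clip(Tmax - T, 0.0, None))
    return np.where((T > T0) & (T < Tmax), rate, 0.0)

# Hypothetical mean development rates (1/days) at constant temperatures (deg C)
T_obs = np.array([15, 18, 21, 24, 27, 30, 33, 35], dtype=float)
r_obs = np.array([0.013, 0.025, 0.038, 0.052, 0.066, 0.075, 0.079, 0.072])

popt, _ = curve_fit(briere1, T_obs, r_obs, p0=[2e-5, 10.0, 38.0])
a, T0, Tmax = popt
T_grid = np.linspace(T0 + 0.1, Tmax - 0.1, 500)
T_opt = T_grid[np.argmax(briere1(T_grid, *popt))]
print(f"a = {a:.2e}, T0 = {T0:.1f} C, Tmax = {Tmax:.1f} C, fastest development near {T_opt:.1f} C")
```
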
3

Tabrizi, Sahar S., Saeid Pashazadeh, and Vajiheh Javani. "A Deep Learning Approach for Table Tennis Forehand Stroke Evaluation System Using an IMU Sensor." Computational Intelligence and Neuroscience 2021 (April 9, 2021): 1–15. http://dx.doi.org/10.1155/2021/5584756.

Abstract:
Psychological and behavioral evidence suggests that home sports activity reduces negative moods and anxiety during lockdown days of COVID-19. Low-cost, nonintrusive, and privacy-preserving smart virtual-coach Table Tennis training assistance could help to stay active and healthy at home. In this paper, a study was performed to develop a Forehand stroke performance evaluation system as the second principal component of the virtual-coach Table Tennis shadow-play training system. This study was conducted to show the effectiveness of the proposed LSTM model, compared with 2DCNN and RBF-SVR time-series analysis and machine learning methods, in evaluating the Table Tennis Forehand shadow-play sensory data provided by the authors. The data was generated from 16 players' Forehand strokes, comprising the racket's movement and orientation measurements; in addition, the strokes' evaluation scores were assigned by the three coaches. The authors investigated how the ML models' behavior changed with the hyperparameter values. The experimental results of the weighted average of RMSE revealed that the modified LSTM models achieved estimation errors 33.79% and 4.24% lower than 2DCNN and RBF-SVR, respectively. However, the R̄² results show that all nonlinear regression models fit the observed data well enough. The modified LSTM is the most powerful regression method among all the three Forehand types in the current study.
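
As a rough sketch of the modelling approach described above (not the authors' architecture, data, or hyperparameters), the following Keras snippet trains an LSTM regressor that maps a fixed-length window of IMU channels to a stroke score; the window length, channel count, layer sizes, and placeholder data are all assumptions.

```python
# Minimal sketch (not the authors' model or data): an LSTM regressor mapping a
# fixed-length window of IMU samples to a coach-style stroke score.
import numpy as np
import tensorflow as tf

TIMESTEPS, CHANNELS = 100, 9   # assumed window length and IMU channel count
X = np.random.randn(256, TIMESTEPS, CHANNELS).astype("float32")  # placeholder stroke windows
y = np.random.uniform(0, 10, size=(256, 1)).astype("float32")    # placeholder coach scores

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, CHANNELS)),
    tf.keras.layers.LSTM(64),                  # summarises the stroke's temporal dynamics
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                  # regression output: predicted stroke score
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2, verbose=0)
print("RMSE on the placeholder data:", model.evaluate(X, y, verbose=0)[1])
```
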
4

Fernandez, Francisco J., Manuel Gamez, Jozsef Garay, and Tomas Cabello. "Do Development and Diet Determine the Degree of Cannibalism in Insects? To Eat or Not to Eat Conspecifics." Insects 11, no. 4 (April 14, 2020): 242. http://dx.doi.org/10.3390/insects11040242.

Abstract:
Cannibalism in insects plays an important role in ecological relationships. Nonetheless, it has not been studied as extensively as in other arthropods groups (e.g., Arachnida). From a theoretical point of view, cannibalism has an impact on the development of more realistic stage-structure mathematical models. Additionally, it has a practical application for biological pest control, both in mass-rearing and out in the field through inoculative releases. In this paper, the cannibalistic behavior of two species of predatory bugs was studied under laboratory conditions—one of them a generalist predator (strictly carnivorous), Nabis pseudoferus, and the other a true omnivore (zoophytophagous), Nesidiocoris tenuis—and compared with the intraguild predation (IGP) behavior. The results showed that cannibalism in N. pseudoferus was prevalent in all the developmental stages studied, whereas in N. tenuis, cannibalism was rarely observed, and it was restricted mainly to the first three nymphal stages. Cannibalism and intraguild predation had no linear relationship with the different cannibal–prey size ratios, as evaluated by the mortality rates and survival times, although there were variations in cannibalism between stages, especially for N. pseudoferus. The mathematical model’s implications are presented and discussed.
5

Sychenko, Viktor G., and Dmytro V. Mironov. "Development of a mathematical model of the generalized diagnostic indicator on the basis of full factorial experiment." Archives of Transport 43, no. 3 (September 13, 2017): 125–33. http://dx.doi.org/10.5604/01.3001.0010.4230.

Abstract:
Purpose. The aim of this work is to develop a mathematical model of the generalized diagnostic indicator of the technical state of traction substations electrical equipment. Methodology. The main tenets of the experiment planning theory, methods of structural-functional and multi-factor analysis, methods of mathematical and numerical modeling have been used to solve the set tasks. Results. To obtain the mathematical model of the generalized diagnostic indicator, a full factorial experiment for DC circuit breaker have been conducted. The plan of the experiment and factors affecting the change of the unit technical condition have been selected. The regression equation in variables coded values and the polynomial mathematical model of the generalized diagnostic indicator of the circuit breaker technical condition have been obtained. On the basis of regression equation analysis the character of influence of circuit breaker diagnostic indicators values on generalized diagnostic indicator changes has been defined. As a result of repeated performances of the full factorial experiment the mathematical models for other types of traction substations power equipment have been obtained. Originality. An improved theoretical approach to the construction of generalized diagnostic indicators mathematical models for main types of traction substations electric equipment with using the methods of experiments planning theory has been suggested. Practical value. The obtained polynomial mathematical models of the generalized diagnostic indicator D can be used for constructing the automated system of monitoring and forecasting of the traction substations equipment technical condition, which allows improving the performance of processing the diagnostic information and ensuring the accuracy of the diagnosis. Analysing and forecasting the electrical equipment technical condition with the using of mathematical models of generalized diagnostic indicator changes process allows constructing the optimal strategy of maintenance and repair based on the actual technical condition of the electrical equipment. This will reduce material and financial costs of maintenance and repair work as well as the equipment downtime caused by planned inspections and repair improving reliability and uptime of electrical equipment.
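
As a generic illustration of the technique named in this abstract (a full factorial experiment feeding a regression equation in coded variable values), the sketch below builds a 2³ design and fits a polynomial model by least squares; the factors, responses, and coefficients are hypothetical and not taken from the paper.

```python
# Hypothetical 2^3 full factorial in coded levels (-1/+1) with a least-squares fit of a
# polynomial model D = b0 + b1*x1 + b2*x2 + b3*x3 + interaction terms (not the paper's data).
import itertools
import numpy as np

runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)  # 8 experimental runs
D = np.array([0.62, 0.70, 0.58, 0.66, 0.75, 0.88, 0.71, 0.84])            # hypothetical responses

x1, x2, x3 = runs.T
X = np.column_stack([np.ones(8), x1, x2, x3, x1*x2, x1*x3, x2*x3, x1*x2*x3])
coeffs, *_ = np.linalg.lstsq(X, D, rcond=None)

for name, b in zip(["1", "x1", "x2", "x3", "x1*x2", "x1*x3", "x2*x3", "x1*x2*x3"], coeffs):
    print(f"b[{name}] = {b:+.4f}")
```
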
6

Forrest, David, and Ian G. McHale. "Using statistics to detect match fixing in sport." IMA Journal of Management Mathematics 30, no. 4 (June 25, 2019): 431–49. http://dx.doi.org/10.1093/imaman/dpz008.

Abstract:
Match fixing is a growing threat to the integrity of sport, facilitated by new online in-play betting markets sufficiently liquid to allow substantial profits to be made from manipulating an event. Screens to detect a fix employ in-play forecasting models whose predictions are compared in real-time with observed betting odds on websites around the world. Suspicions arise where model odds and market odds diverge. We provide real examples of monitoring for football and tennis matches and describe how suspicious matches are investigated by analysts before a final assessment of how likely it was that a fix took place is made. Results from monitoring driven by this application of forensic statistics have been accepted as primary evidence at cases in the Court of Arbitration for Sport, leading more sports outside football and tennis to adopt this approach to detecting and preventing manipulation.
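
The screening idea described above, comparing model predictions with market odds, can be sketched in a few lines. The snippet below is purely illustrative (not the authors' monitoring system): it converts in-play decimal odds to implied probabilities, removes the bookmaker margin, and flags a match when the divergence from a model's win probability exceeds an arbitrary threshold; the odds, probability, and threshold are made up.

```python
# Illustrative screen (not the authors' system): convert in-play decimal odds to implied
# probabilities, strip the bookmaker margin, and flag large divergence from a model.
def implied_probs(decimal_odds):
    """Decimal odds -> probabilities, normalised to remove the overround."""
    raw = [1.0 / o for o in decimal_odds]
    total = sum(raw)
    return [p / total for p in raw]

def flag_suspicious(model_prob, market_odds, threshold=0.15):
    """Flag if the market-implied probability for outcome A diverges from the model."""
    market_prob = implied_probs(market_odds)[0]
    divergence = market_prob - model_prob
    return abs(divergence) > threshold, divergence

# Hypothetical snapshot: the model gives player A a 72% chance,
# but the in-play odds imply the market heavily backs player B.
suspicious, gap = flag_suspicious(model_prob=0.72, market_odds=[2.60, 1.48])
print(f"divergence = {gap:+.2f}, suspicious = {suspicious}")
```
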
7

Kumar, Arunachalam. "CHAOS SCIENCE & THE PARADOX IN ELECTRO CARDIOGRAPHIC INTERPRETATIONS." Journal of Health and Allied Sciences NU 01, no. 04 (December 2011): 35–37. http://dx.doi.org/10.1055/s-0040-1703537.

Abstract:
The application of the radically new non-linear science of 'chaos' has been slow in acceptance in the medical field; each and every diagnostic, investigative and pharmaco-therapeutic process involved in health science is dictated by direct Newtonian tenets of cause-effect and linear mathematical derivations. Exclusive and total dependence on scalar models has compartmentalized patient profiles into rigid cocoons such as age, sex, weight, height, physiological and biochemical evaluations, making interventional therapeutic regimens solely dictated by our over-riding choice to use scales of convenience for every single one of our evaluation, assessment and treatment criteria. The paradoxical and skewed results in seemingly normal processes, when viewed through 'chaos' systems, show how wrong our total reliance on scale and linear applications is! As an example of how terribly paradoxical inferences can be, I have electrocardiography, a common and widely used diagnostic process
8

Belkin, Mikhail, Daniel Hsu, Siyuan Ma, and Soumik Mandal. "Reconciling modern machine-learning practice and the classical bias–variance trade-off." Proceedings of the National Academy of Sciences 116, no. 32 (July 24, 2019): 15849–54. http://dx.doi.org/10.1073/pnas.1903070116.

Abstract:
Breakthroughs in machine learning are rapidly changing science and society, yet our fundamental understanding of this technology has lagged far behind. Indeed, one of the central tenets of the field, the bias–variance trade-off, appears to be at odds with the observed behavior of methods used in modern machine-learning practice. The bias–variance trade-off implies that a model should balance underfitting and overfitting: Rich enough to express underlying structure in data and simple enough to avoid fitting spurious patterns. However, in modern practice, very rich models such as neural networks are trained to exactly fit (i.e., interpolate) the data. Classically, such models would be considered overfitted, and yet they often obtain high accuracy on test data. This apparent contradiction has raised questions about the mathematical foundations of machine learning and their relevance to practitioners. In this paper, we reconcile the classical understanding and the modern practice within a unified performance curve. This “double-descent” curve subsumes the textbook U-shaped bias–variance trade-off curve by showing how increasing model capacity beyond the point of interpolation results in improved performance. We provide evidence for the existence and ubiquity of double descent for a wide spectrum of models and datasets, and we posit a mechanism for its emergence. This connection between the performance and the structure of machine-learning models delineates the limits of classical analyses and has implications for both the theory and the practice of machine learning.
9

Fraschetti, Federico. "On the acceleration of ultra-high-energy cosmic rays." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 366, no. 1884 (September 23, 2008): 4417–28. http://dx.doi.org/10.1098/rsta.2008.0204.

Abstract:
Ultra-high-energy cosmic rays (UHECRs) hit the Earth's atmosphere with energies exceeding 10¹⁸ eV. This is the same energy as carried by a tennis ball moving at 100 km h⁻¹, but concentrated on a subatomic particle. UHECRs are so rare (the flux of particles with E > 10²⁰ eV is 0.5 km⁻² per century) that only a few such particles have been detected over the past 50 years. Recently, the HiRes and Auger experiments have reported the discovery of a high-energy cut-off in the UHECR spectrum, and Auger has found an apparent clustering of the highest energy events towards nearby active galactic nuclei. Consensus is building that the highest energy particles are accelerated within the radio-bright lobes of these objects, but it remains unclear how this actually happens, and whether the cut-off is due to propagation effects or reflects an intrinsically physical limitation of the acceleration process. The low event statistics presently allows for many different plausible models; nevertheless observations are beginning to impose strong constraints on them. These observations have also motivated suggestions that new physics may be implicated. We present a review of the key theoretical and observational issues related to the processes of propagation and acceleration of UHECRs and proposed solutions.
10

SOURADEEP, TARUN. "COSMOLOGICAL QUESTS IN THE CMB SKY." International Journal of Modern Physics D 15, no. 10 (October 2006): 1725–42. http://dx.doi.org/10.1142/s0218271806009066.

Abstract:
Observational cosmology has indeed made a very rapid progress in recent years. The ability to quantify the universe has largely improved due to observational constraints coming from structure formation measurements of CMB anisotropy and, more recently, polarization has played a very important role. Besides precise determination of various parameters of the "standard" cosmological model, observations have also established some important basic tenets that underlie models of cosmology and structure formation in the universe — "acausally" correlated initial perturbations in a flat, statistically isotropic universe, adiabatic nature of primordial density perturbations. These are consistent with the expectation of the paradigm of inflation and the generic prediction of the simplest realization of inflationary scenario in the early universe. Further, gravitational instability is the established mechanism for structure formation from these initial perturbations. In the next decade, future experiments promise to strengthen these deductions and uncover the remaining crucial signature of inflation — the primordial gravitational wave background.

Dissertations / Theses on the topic "Tennis – Mathematical models"

1

Barnett, Tristan J. "Mathematical modelling in hierarchical games with specific reference to tennis." Swinburne University of Technology, 2006. http://adt.lib.swin.edu.au./public/adt-VSWT20060504.151842.

Abstract:
This thesis investigates problems in hierarchical games. Mathematical models are used in tennis to determine when players should alter their effort in a game, set or match to optimize their available energy resources. By representing warfare, as a hierarchical scoring system, the results obtained in tennis are used to solve defence strategy problems. Forecasting in tennis is also considered in this thesis. A computer program is written in Visual Basic for Applications (VBA), to estimate the probabilities of players winning for a match in progress. A Bayesian updating rule is formulated to update the initial estimates with the actual match statistics as the match is progressing. It is shown how the whole process can be implemented in real-time. The estimates would provide commentators and spectators with an objective view on who is likely to win the match. Forecasting in tennis has applications to gambling and it is demonstrated how mathematical models can assist both punters and bookmakers. Investigation is carried out on how the court surface affects a player's performance. Results indicate that each player is best suited to a particular surface, and how a player performs on a surface is directly related to the court speed of the surfaces. Recursion formulas and generating functions are used for the modelling techniques. Backward recursion formulas are used to calculate conditional probabilities and mean lengths remaining with the associated variance for points within a game, games within a set and sets within a match. Forward recursion formulas are used to calculate the probabilities of reaching score lines for points within a game, games within a set and sets within a match. Generating functions are used to calculate the parameters of distributions of the number of points, games and sets in a match.
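
The backward-recursion approach mentioned in the abstract can be illustrated with the simplest case, a single service game under the standard assumption that the server wins each point independently with probability p. The sketch below is a minimal, generic version of that calculation (not code from the thesis), with deuce handled in closed form; the example values of p are arbitrary.

```python
# Minimal backward-recursion sketch in the spirit of the thesis (not its code): probability
# that the server wins a game from score (a, b) points, given point-win probability p.
from functools import lru_cache

def p_game(p, a=0, b=0):
    """P(server wins the game) from server score a and receiver score b (in points, not 15/30/40)."""
    @lru_cache(maxsize=None)
    def rec(a, b):
        if a >= 4 and a - b >= 2:
            return 1.0
        if b >= 4 and b - a >= 2:
            return 0.0
        if a >= 3 and a == b:                       # deuce: closed-form result
            return p * p / (p * p + (1 - p) * (1 - p))
        return p * rec(a + 1, b) + (1 - p) * rec(a, b + 1)
    return rec(a, b)

print(round(p_game(0.62), 3))            # probability of holding serve from 0-0
print(round(p_game(0.62, a=0, b=3), 3))  # probability of holding serve from 0-40
```
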
2

Bělohlávek, Jiří. "Agent pro kurzové sázení" [Agent for Odds Betting]. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235980.

Abstract:
This master thesis deals with the design and implementation of a betting agent. It covers issues such as the theoretical background of online betting, probability and statistics. In its first part it is focused on data mining and explains the principle of knowledge mining from data warehouses and certain methods suitable for different types of tasks. Second, it is concerned with neural networks and the algorithm of back-propagation. All the findings are demonstrated on and supported by graphs and histograms of data analysis, made via the SAS Enterprise Miner program. In conclusion, the thesis summarizes all the results and offers specific methods of extension of the agent.
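
To make the back-propagation component concrete, here is a tiny, self-contained sketch (not the thesis's agent): a one-hidden-layer network trained by hand-written back-propagation on placeholder match features to output a win probability. The feature construction, layer sizes, and learning rate are all assumptions.

```python
# Tiny back-propagation sketch (illustrative, not the thesis's agent): a one-hidden-layer
# network trained by gradient descent to output a win probability from match features.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 3))                                    # placeholder match features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)   # placeholder outcomes

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(500):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    p = sigmoid(h @ W2 + b2)
    dz2 = (p - y) / len(X)                   # backward pass: mean cross-entropy gradients
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dh = dz2 @ W2.T * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1           # gradient-descent update
    W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", round(float(((p > 0.5) == (y > 0.5)).mean()), 3))
```
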

Book chapters on the topic "Tennis – Mathematical models"

1

Lucas, Pablo, and Diane Payne. "Usefulness of Agent-Based Simulation in Testing Collective Decision-Making Models." In Interdisciplinary Applications of Agent-Based Social Simulation and Modeling, 72–87. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-5954-4.ch005.

Abstract:
Political scientists seek to build more realistic Collective Decision-Making Models (henceforth CDMM) which are implemented as computer simulations. The starting point for this present chapter is the observation that efficient progress in this field may be being hampered by the fact that the implementation of these models as computer simulations may vary considerably and the code for these computer simulations is not usually made available. CDMM are mathematically deterministic formulations (i.e. without probabilistic inputs or outputs) and are aimed at explaining the behaviour of individuals involved in dynamic, collective negotiations with any number of policy decision-related issues. These CDMM differ from each other regarding the particular bargaining strategies implemented and tested in each model for how the individuals reach a collective binding policy agreement. The CDMM computer simulations are used to analyse the data and generate predictions of a collective decision. While the formal mathematical treatment of the models and empirical findings of CDMM are usually presented and discussed through peer-review journal publications, access to these CDMM implementations as computer simulations are often unavailable online nor easily accessed offline and this tends to dissuade cross fertilisation and learning in the field.
2

Achlioptas, Dimitris. "Chapter 10. Random Satisfiability." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2021. http://dx.doi.org/10.3233/faia200993.

Abstract:
In the last twenty years a significant amount of effort has been devoted to the study of randomly generated satisfiability instances. While a number of generative models have been proposed, uniformly random k-CNF formulas are by now the dominant and most studied model. One reason for this is that such formulas enjoy a number of intriguing mathematical properties, including the following: for each k ≥ 3, there is a critical value, r_k, of the clauses-to-variables ratio, r, such that for r < r_k a random k-CNF formula is satisfiable with probability that tends to 1 as n → ∞, while for r > r_k it is unsatisfiable with probability that tends to 1 as n → ∞. Algorithmically, even at densities much below r_k, no polynomial-time algorithm is known that can find any solution even with constant probability, while for all densities greater than r_k, the length of every resolution proof of unsatisfiability is exponential (and, thus, so is the running time of every DPLL-type algorithm). By now, the study of random k-CNF formulas has also attracted attention in areas such as mathematics and statistical physics and is at the center of an area of intense research activity. At the same time, random k-SAT instances are a popular benchmark for testing and tuning satisfiability algorithms. Indeed, some of the better practical ideas in use today come from insights gained by studying the performance of algorithms on them. We review old and recent mathematical results about random k-CNF formulas, demonstrating that the connection between computational complexity and phase transitions is both deep and highly nuanced.
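
The threshold behaviour described above is easy to probe empirically at toy scale. The sketch below is illustrative only (the sharp threshold near r ≈ 4.27 for k = 3 is an asymptotic statement, so tiny n only hints at it): it generates uniformly random 3-CNF formulas and brute-forces their satisfiability at a few clause densities.

```python
# Toy experiment: estimated satisfiability probability of uniformly random 3-CNF formulas
# as the clause density r = m/n crosses the (asymptotic) threshold near 4.27.
import itertools
import random

def random_3cnf(n, m, rng):
    return [tuple(rng.choice([v, -v]) for v in rng.sample(range(1, n + 1), 3)) for _ in range(m)]

def satisfiable(n, clauses):
    # brute force over all assignments: feasible only for tiny n
    for bits in itertools.product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause) for clause in clauses):
            return True
    return False

rng = random.Random(1)
n, trials = 12, 30
for r in (3.0, 4.0, 4.3, 5.0):
    m = int(r * n)
    sat = sum(satisfiable(n, random_3cnf(n, m, rng)) for _ in range(trials))
    print(f"r = {r:.1f}: estimated P(satisfiable) = {sat / trials:.2f}")
```
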
3

Chu, C. Y. Cyrus. "Demographic Models and Branching Processes." In Population Dynamics. Oxford University Press, 1998. http://dx.doi.org/10.1093/oso/9780195121582.003.0006.

Abstract:
All models describing the dynamic pattern of human population have two common features. First, the human population is usually divided into several types, and second, each type has a type-specific stochastic reproduction rate. The traditional literature of demography has been dominated by the age-specific models of Lotka (1939) and Leslie (1945, 1948), where the type refers to the age of an individual and the type-specific reproduction rates refer to the age-specific vital rates in a life table. It has been shown that, mathematically, these age-specific models can be analyzed in a more general framework, namely, the multitype branching process. Most demography researchers, however, do not bother to pursue properties of the general branching process. They prefer to follow Lotka’s (1939) age-specific renewal equation approach in proceeding with their analysis because that renewal equation is technically convenient, whereas the steady-state and dynamic properties of a general branching process are usually much more difficult to derive. Although the analytical convenience of the age-specific models has facilitated the research on age-related topics, it also tends to obscure the fact that the age-specific model is merely a special kind of branching process. When female fertility becomes a decision variable of the family and the fertility-related family decision problems expand, these age-specific models are often unworkable. Despite the difficulties inherent in applying the traditional age-specific models to these decision dimensions, researchers still hesitate to go back to the general, but more difficult, branching process for solutions. This is perhaps why, as we mentioned in chapter 1, the demand-side theory of demography has not made much progress in describing the macro aggregate pattern of the population. In this chapter, I separate the discussion into the age-specific branching process and general branching processes. I show that the steady states and ergodic properties of these models can both be established under some regularity conditions. Although the material in this chapter is mostly a reorganization of previously established mathematical results, I believe that my summary is systematic and will be helpful to most readers. All the results summarized will be used in later chapters, but aspects of branching processes that are irrelevant to our purposes will not be discussed.
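
Since the chapter builds on the Lotka/Leslie age-specific framework, a minimal Leslie-matrix example may help fix ideas: project an age-structured population forward and read the asymptotic growth rate off the dominant eigenvalue. The vital rates below are invented for illustration and are not from the chapter.

```python
# Minimal Leslie-matrix sketch (invented vital rates, not the chapter's): project an
# age-structured population and read the asymptotic growth rate off the dominant eigenvalue.
import numpy as np

fertility = [0.0, 1.2, 1.5, 0.4]   # age-specific fertility rates
survival = [0.7, 0.8, 0.6]         # probability of surviving to the next age class

L = np.zeros((4, 4))
L[0, :] = fertility
for i, s in enumerate(survival):
    L[i + 1, i] = s

eigvals, eigvecs = np.linalg.eig(L)
lead = np.argmax(eigvals.real)                      # dominant (Perron) eigenvalue is real
stable_age = np.abs(eigvecs[:, lead].real)
stable_age /= stable_age.sum()

n0 = np.array([100.0, 50.0, 25.0, 10.0])
print("population after 10 projection steps:", np.rint(np.linalg.matrix_power(L, 10) @ n0))
print("asymptotic growth rate:", round(eigvals.real[lead], 3))
print("stable age distribution:", np.round(stable_age, 3))
```
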
4

Chemin, Jean-Yves, Benoit Desjardins, Isabelle Gallagher, and Emmanuel Grenier. "Ekman Boundary Layers for Rotating Fluids." In Mathematical Geophysics. Oxford University Press, 2006. http://dx.doi.org/10.1093/oso/9780198571339.003.0013.

Abstract:
In this chapter, we investigate the problem of rapidly rotating viscous fluids between two horizontal plates with Dirichlet boundary conditions. We present the model with so-called “turbulent” viscosity. More precisely, we shall study the limit when ε tends to 0 of the system, where Ω = Ω_h × ]0, 1[; here Ω_h will be the torus T² or the whole plane R². We shall use, as in the previous chapters, the following notation: if u is a vector field on Ω we state u = (u^h, u^3). In all that follows, we shall assume that on the boundary ∂Ω, u_0^ε · n = u_0^{ε,3} = 0, and that div u_0^ε = 0. The condition u_0^3 = 0 on the boundary implies the following fact: for any vector field u ∈ H(Ω), the function ∂_3 u^3 is L²(]0, 1[) with respect to the variable x_3 with values in H^{-1}(Ω_h), due to the divergence-free condition.
5

Chemin, Jean-Yves, Benoit Desjardins, Isabelle Gallagher, and Emmanuel Grenier. "Other Systems." In Mathematical Geophysics. Oxford University Press, 2006. http://dx.doi.org/10.1093/oso/9780198571339.003.0017.

Abstract:
The methods developed in this book can be applied to various physical systems. We will not detail all the possible applications and will only quote three systems arising in magnetohydrodynamics (MHD) and meteorology, namely conducting fluids in a strong external “large scale” magnetic field, a classical MHD system with high rotation, and the quasigeostrophic limit. The main theorems of this book can be extended to these situations. The theory of rotating fluids is very close to the theory of conducting fluids in a strong magnetic field. Namely the Lorentz force and the Coriolis force have almost the same form, up to Ohm’s law. The common feature is that these phenomena appear as singular perturbation skew-symmetric operators. The simplest equations in MHD are Navier–Stokes equations coupled with Ohm’s law and the Lorentz force, where ∇φ is the electric field, j the current, and e the direction of the imposed magnetic field. In this case ε is called the Hartmann number. In physical situations, like the geodynamo (study of the magnetic field of the Earth), it is really small, of order 10⁻⁵–10⁻¹⁰, much smaller than the Rossby number. These equations are the simplest model in geomagnetism and in particular in the geodynamo. As ε → 0 the flow tends to become independent of x_3. This is not valid near boundaries. For horizontal boundaries, Hartmann layers play the role of Ekman layers and in the layer the velocity is given by the Hartmann profile (10.1.2). The critical Reynolds number for linear instability is very high, of order Re_c ∼ 10⁴. The main reason is that there is no inflexion point in the boundary layer profile (10.1.2), therefore it is harder to destabilize than the Ekman layer since the Hartmann profile is linearly stable for the inviscid model associated with (10.1.1). As for Ekman layers, Hartmann layers are stable for Re < Re_c and unstable for Re > Re_c. There is also something similar to Ekman pumping, which is responsible for friction and energy dissipation. Vertical layers are simpler than for rotating fluids since there is only one layer, of size (εν)^{1/4}. We refer to for physical studies.
6

Zangwill, Andrew. "Son of the Heartland." In A Mind Over Matter, 8–24. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198869108.003.0002.

Abstract:
Anderson’s parents come from academic families in Indiana. Phil and his sister Grace grew up in Urbana, Illinois because their father was a plant pathologist at the University of Illinois (UI). Mother Elsie demanded academic excellence and respect for others. Father Harry was a model of integrity, a fact displayed during the so-called Krebiozen affair. The Depression affected the family relatively little and Phil acquired his lifelong liberal politics from a UI social group called the Saturday Hikers. At age twelve, he accompanies his family to Europe (a sabbatical for his father) where they observe the rise of Nazism. Phil attends and excels at the University High School where he enjoys math, tennis, and speed skating, but not physics. He wins a National Scholarship to attend Harvard University with a plan to major in mathematics.
7

Doveton, John H. "Permeability Estimation." In Principles of Mathematical Petrophysics. Oxford University Press, 2014. http://dx.doi.org/10.1093/oso/9780199978045.003.0008.

Abstract:
Because it is a measure of flow, permeability is a vector quantity, as contrasted with conventional petrophysical log data, which are responses to static properties of the rock. In the absence of a direct measurement of permeability, predictions must be inferred from the rock framework characteristics that control the ability of fluids to move through the rock. In this chapter, we consider methods that predict absolute permeability, that is, permeability with respect to a single fluid. This is the most widely used meaning of the term and would be immediately applicable to aquifers. In engineering applications to reservoirs, a relative permeability is assigned to each fluid phase, so that relative fluid rates and volumes can be characterized explicitly. Although the fundamental physics of permeability in tubes has been understood for many years, reliable estimations are difficult to make in all but the simplest rock types. As we shall see, one approach attempts to adapt modifications to a tube model to accommodate the complexity of pore-system geometry. This model-driven methodology tends to be favored by engineers and contrasts with a data-driven geological approach that applies empirical relationships from core data from mercury porosimetry measurements. The most fundamental property used to predict permeability is that of pore volume. Both porosity and permeability are routine measurements from core analysis. If a useable relationship can be developed to predict permeability from porosity, then predictions of permeability can be made in wells that were logged with conventional measurements but not cored. The simplest quantitative methods used to predict permeability from logs have been keyed to empirical equations of the type log k = P + Q·Φ or log k = P + Q·log Φ, where P and Q are constants determined from core measurements and applied to log measurements of porosity (Φ) to generate predictions of permeability (k). These equations are the basis for statistical predictions of permeability in regression analysis, where porosity is the independent variable and logarithmically scaled permeability is the dependent variable. The fitted function minimizes the sum of the squared deviations of the permeability about the trend line.
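
The two porosity–permeability transforms quoted above are ordinary least-squares fits once permeability is log-scaled. The snippet below shows that fit on invented core-analysis values (not data from the book), comparing the linear-porosity and log-porosity forms by R².

```python
# Invented core-analysis values (not from the book): fit log10(k) = P + Q*phi and
# log10(k) = P + Q*log10(phi) by least squares and compare the R^2 of each form.
import numpy as np

phi = np.array([0.08, 0.11, 0.14, 0.17, 0.20, 0.23, 0.26])   # fractional porosity
k = np.array([0.5, 2.0, 8.0, 30.0, 90.0, 250.0, 700.0])      # permeability, millidarcies

def fit(x, y):
    Q, P = np.polyfit(x, y, 1)          # slope Q, intercept P
    resid = y - (P + Q * x)
    return P, Q, 1 - resid.var() / y.var()

logk = np.log10(k)
for label, x in [("log k = P + Q*phi", phi), ("log k = P + Q*log(phi)", np.log10(phi))]:
    P, Q, r2 = fit(x, logk)
    print(f"{label:24s} P = {P:+.2f}, Q = {Q:+.2f}, R^2 = {r2:.3f}")
```
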
8

Debowski, Lukasz. "Entropic Subextensivity in Language and Learning." In Nonextensive Entropy. Oxford University Press, 2004. http://dx.doi.org/10.1093/oso/9780195159769.003.0024.

Abstract:
In this chapter, we identify possible links between theoretical computer science, coding theory, and statistics reinforced by subextensivity of Shannon entropy. Our specific intention is to address these links in a way that may arise from a rudimentary theory of human learning from language communication. The semi-infinite stream of language production that a human being experiences during his or her life will be called simply parole (= "speech," [7]). Although modern computational linguistics tries to explain human language competence in terms of explicit mathematical models in order to enable its machine simulation [17, 20], modeling parole itself (widely known as "language modeling") is not trivial in a very obscure way. When a behavior of parole that improves its prediction is newly observed in a finite portion of the empirical data, it often suggests only minor improvements to the current model. When we use larger portions of parole to test the freshly improved model, this model always fails seriously, but in a different way. How we can provide necessary updates to, with- out harming the integrity of, the model is an important problem that experts must continually solve. Is there any sufficiently good definition of parole that is ready-made for industrial applications? Although not all readers of human texts learn continuously, parole is a product of those who can and often do learn throughout their lives. Thus, we assume that the amount of knowledge generalizable from a finite window of parole should diverge to infinity when the length of the window also tends to infinity. Many linguists assume that a very distinct part of the generalizable knowledge is "linguistic knowledge," which can be finite in principle. Nevertheless, for the sake of good modeling of parole in practical applications, it is useless to restrict ourselves solely to "finite linguistic knowledge" [6, 22]. Inspired by Crutchfield and Feldman [5], we will call any processes (distributions of infinite linear data) "finitary" when the amount of knowledge generalizable from them is finite, and "infinitary" when it is infinite. The crucial point is to accurately define the notion of knowledge generalized from a data sample. According to the principle of minimum description length (MDL), generalizable knowledge is the definition of such representation for the data which yields the shortest total description. In this case, we will define infinitarity as computational infinitarity (CIF).
9

Farmer, Lesley S. J., and Shuhua An. "Pre-Service Teacher Preparation to Integrate Computational Thinking." In Advances in Educational Technologies and Instructional Design, 282–300. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-1479-5.ch015.

Abstract:
United States education has experienced a big push for students to learn coding as part of computer science and more explicitly address computational thinking (CT). However, CT remains a challenging subject for many students, including pre-service teachers. CT, which overlaps mathematics and computer science, tends to be offered as an elective course, at best, in P-16 education. Pre-service teaching profession students usually do not have foundational knowledge to guide them in integrating computational thinking into the curriculum that they will eventually teach as instructors themselves. This chapter explains computational thinking in light of K-8 education, discusses issues and needs in integrating CT into K-8 curriculum, identifies relevant theories and models for teaching CT, describes current practice for integrating computational thinking into K-8 curriculum, and discusses pre-service teachers' preparation that can lead to their successful incorporation of CT into the curriculum.
10

Farmer, Lesley S. J., and Shuhua An. "Pre-Service Teacher Preparation to Integrate Computational Thinking." In Research Anthology on Computational Thinking, Programming, and Robotics in the Classroom, 408–26. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-2411-7.ch020.

Abstract:
United States education has experienced a big push for students to learn coding as part of computer science and more explicitly address computational thinking (CT). However, CT remains a challenging subject for many students, including pre-service teachers. CT, which overlaps mathematics and computer science, tends to be offered as an elective course, at best, in P-16 education. Pre-service teaching profession students usually do not have foundational knowledge to guide them in integrating computational thinking into the curriculum that they will eventually teach as instructors themselves. This chapter explains computational thinking in light of K-8 education, discusses issues and needs in integrating CT into K-8 curriculum, identifies relevant theories and models for teaching CT, describes current practice for integrating computational thinking into K-8 curriculum, and discusses pre-service teachers' preparation that can lead to their successful incorporation of CT into the curriculum.

Conference papers on the topic "Tennis – Mathematical models"

1

Cheng, B., and X. Deng. "Mathematical Modelling of Near-Hover Insect Flight Dynamics." In ASME 2010 Dynamic Systems and Control Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/dscc2010-4234.

Abstract:
Using a dynamically scaled robotic wing, we studied the aerodynamic torque generation of flapping wings during roll, pitch, and yaw rotations of the stroke plane. The total torque generated by a wing pair with symmetrical motions was previously known as flapping counter-torques (FCTs). For all three types of rotation, stroke-averaged FCTs act opposite to the directions of rotation and are collinear with the rotational axes. Experimental results indicate that the magnitude of FCTs is linearly dependent on both the flapping frequency and the angular velocity. We also compared the results with predictions by a mathematical model based on quasi-steady analyses, where we show that FCTs can be described through consideration of the asymmetries of wing velocity and the effective angle of attack caused by each type of rotation. For roll and yaw rotations, our model provided close estimations of the measured values. However, for pitch rotation the model tends to underestimate the magnitude of FCT, which might result from the effect of the neglected aerodynamics, especially the wake capture. Similar to the FCT, which is induced by body rotation, we further provide a mathematical model for the counter force induced by body translation, which is termed as flapping counter-force (FCF). Based on the FCT and FCF models, we are able to provide analytical estimations of stability derivatives and to study the flight dynamics at hovering. Using fruit fly (Drosophila) morphological data, we calculated the system matrix of the linearized flight dynamics. Similar to previous studies, the longitudinal dynamics consist of two stable subsidence modes with fast and slow time constants, as well as an unstable oscillatory mode. The longitudinal instability is mainly caused by the FCF induced by an initial forward/backward velocity, which imparts a pitch torque to the same direction of initial pitch velocity. Similarly, the lateral dynamics also consist of two stable subsidence modes and an unstable oscillatory mode. The lateral instability is mainly caused by the FCF induced by an initial lateral velocity, which imparts a roll torque to the same direction of initial roll velocity. In summary, our models provide the first analytical approximation of the six-degree-of-freedom flight dynamics, which is important in both studying the control strategies of the flying insects and designing the controller of the future flapping-wing micro air vehicles (MAVs).
2

LaViolette, Marc, and Michael Strawson. "On the Prediction of Nitrogen Oxides From Gas Turbine Combustion Chambers Using Neural Networks." In ASME Turbo Expo 2008: Power for Land, Sea, and Air. ASMEDC, 2008. http://dx.doi.org/10.1115/gt2008-50566.

Abstract:
This paper describes a method of predicting the oxides of nitrogen emissions from gas turbine combustion chambers using neural networks. A short review of existing empirical models is undertaken and the reasoning behind the choice of correlation variables and mathematical formulations is presented. This review showed that the mathematical functions obtained from the underlying theory used to develop the semi-empirical model ultimately limit their general applicability. Under these conditions, obtaining a semi-empirical model with a large domain and good accuracy is difficult. An overview of the use of neural networks as a modelling tool is given. Using over 2000 data points, a neural network that can predict NOx emissions with greater accuracy than published correlations was developed. The coefficients of determination of the prediction for the previous published semi-empirical models are 0.8048 and 0.7885. However one tends to grossly overpredict and the other underpredict. The coefficient of determination is 0.8697 for the model using a neural network. Because of the nature of neural networks, this more accurate model does not allow better insight into the physical and chemical phenomena. It is however, a useful tool for the initial design of combustion chambers.
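
As a generic sketch of the approach described above (not the paper's network, inputs, or data), the snippet below trains a small feed-forward regressor to map a few combustor operating variables to a synthetic NOx-like target; the variable ranges and the target formula are assumptions made up purely for illustration.

```python
# Generic sketch (placeholder data, not the paper's): a small feed-forward regressor mapping
# combustor operating variables to a synthetic NOx-like target.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
P3 = rng.uniform(5, 30, 500)           # combustor inlet pressure [bar] (assumed range)
T3 = rng.uniform(550, 900, 500)        # combustor inlet temperature [K] (assumed range)
far = rng.uniform(0.015, 0.035, 500)   # fuel-air ratio (assumed range)
X = np.column_stack([P3, T3, far])
# Synthetic target with a made-up pressure/temperature dependence plus noise:
y = 0.05 * np.sqrt(P3) * np.exp((T3 - 700) / 250) * (far / 0.025) + rng.normal(0, 0.2, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
```
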
3

Okabe, Akira, Takeshi Kudo, Hideo Yoda, Shigeo Sakurai, Osami Matsushita, and Koki Shiohata. "Rotor-Blade Coupled Vibration Analysis by Measuring Modal Parameters of Actual Rotor." In ASME Turbo Expo 2009: Power for Land, Sea, and Air. ASMEDC, 2009. http://dx.doi.org/10.1115/gt2009-59471.

Abstract:
The designers of rotor shafts and blades for a traditional turbine-generator set typically employed their own models and process by neglecting the coupled torsional effect. The torsional coupled umbrella mode of recent longer blades systems designed for higher output and efficiency tends to have nearly doubled the frequency of electric disturbance (i.e., 100 or 120 Hz). In order to precisely estimate the rotor-blade coupled vibration of rotating shafts, the analysis must include a process to identify the parameters of a mathematical model by using a real model. In this paper we propose the use of a unique quasi-modal technique based on a concept similar to that of the modal synthesis method, but which represents a unique method to provide a visually reduced model. An equivalent mass-spring system is produced for uncoupled umbrella mode and modal parameters are measured in an actual turbine rotor system. These parameters are used to estimate the rotor-blade coupled torsional frequencies of a 700-MW turbine-generator set, with the accuracy of estimation being verified through field testing.
4

van Elsas, P. A., and J. S. M. Vergeest. "Creation and Manipulation of Complex Displacement Features During the Conceptual Phase of Design." In ASME 1996 Design Engineering Technical Conferences and Computers in Engineering Conference. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/96-detc/cie-1332.

Abstract:
Surface feature design is not well supported by contemporary free form surface modelers. For one type of surface feature, the displacement feature, it is shown that intuitive controls can be defined for its design. A method is described that, given a surface model, allows a designer to create and manipulate displacement features. The method uses numerically stable calculations, and feedback can be obtained within tenths of a second, allowing the designer to employ the different controls with unprecedented flexibility. The algorithm does not use refinement techniques, that generally lead to data explosion. The transition geometry, connecting a base surface to a displaced region, is found explicitly. Cross-boundary smoothness is dealt with automatically, leaving the designer to concentrate on the design, instead of having to deal with mathematical boundary conditions. Early test results indicate that interactive support is possible, thus making this a useful tool for conceptual shape design.
5

Esin, Nikolay, Nikolay Esin, Vladimir Ocherednik, and Vladimir Ocherednik. "CORRELATION OF THE BLACK, MARMARA AND AEGEAN SEAS DURING THE HOLOCENE." In Managing risks to coastal regions and communities in a changing world. Academus Publishing, 2017. http://dx.doi.org/10.21610/conferencearticle_58b4315c686d6.

Abstract:
A mathematical model describing the change in the Black Sea level depending on the Aegean Sea level changes is presented in the article. Calculations have shown that the level of the Black Sea has been repeating the course of the Aegean Sea level for at least the last 6,000 years, and that the level of the Black Sea has been tens of centimeters above the Aegean Sea level over this period of time.
6

Esin, Nikolay, Nikolay Esin, Vladimir Ocherednik, and Vladimir Ocherednik. "CORRELATION OF THE BLACK, MARMARA AND AEGEAN SEAS DURING THE HOLOCENE." In Managing risks to coastal regions and communities in a changing world. Academus Publishing, 2017. http://dx.doi.org/10.31519/conferencearticle_5b1b93726527e0.96178996.

Abstract:
A mathematical model describing the change in the Black Sea level depending on the Aegean Sea level changes is presented in the article. Calculations have shown that the level of the Black Sea has been repeating the course of the Aegean Sea level for at least the last 6,000 years, and that the level of the Black Sea has been tens of centimeters above the Aegean Sea level over this period of time.
7

Sainte-Marie, Nina, Philippe Velex, Guillaume Roulois, and Franck Marrot. "On the Correlation Between Dynamic Transmission Error and Dynamic Tooth Loads in Three-Dimensional Gear Systems." In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-46063.

Abstract:
A three-dimensional dynamic model is presented to simulate the dynamic behavior of single stage gears by using a combination of classic shaft, lumped parameter and specific 2-node gear elements. The mesh excitation formulation is based on transmission errors whose mathematical grounding is briefly described. The validity of the proposed methodology is assessed by comparison with experimental evidence from a test rig. The model is then employed to analyze the relationship between dynamic transmission errors and dynamic tooth loads or root stresses. It is shown that a linear dependency can be observed between the time variations of dynamic transmission error and tooth loading as long as the system can be assimilated to a torsional system but that this linear relationship tends to disappear when the influence of bending cannot be neglected.
8

Murakami, Takahiro, Yasumi Ukida, Masami Fujii, Michiyasu Suzuki, and Takashi Saito. "Study on Detection of Epileptic Discharges Based on a Duffing Oscillator Model." In ASME 2014 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/imece2014-38107.

Abstract:
In order to establish a quantitative detection method for appearance in epileptic discharges (EDs), we propose using the model parameters in a Duffing oscillator, which is a nonlinear mathematical model. Extracting four frequency bands of delta, theta, alpha and beta waves from the time history of the electrocorticogram (ECoG) obtained from rats with induced EDs, we applied a sweep window to the time history for each band. So as to fit the equation for the Duffing oscillator to the time history of the ECoG, we used the least square method to determine the model parameters expressing characteristics of ECoG. The Duffing oscillator has three kinds of vibrational parameters and four kinds of parameters about the amplitude for the driving force with two predominant frequencies contained in ECoG. In order to examine the appearance time of the EDs and the change of ECoG characteristics, we determined the model parameters for each sweep window. When epilepsy occurs, we found that the amount of the parameters related to “conservation”, “dissipation” and “input quantities” increases. On the other hand, the parameter value corresponding to nonlinearity tends to decrease. It is found that the proposed method by the model parameters of the Duffing oscillator can be used in quantitative detection for EDs.
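
The abstract above fits Duffing-oscillator parameters to measured signals by least squares. The sketch below reproduces that general idea on synthetic data rather than ECoG (and is not the authors' procedure): it simulates a Duffing response x″ + d·x′ + a·x + b·x³ = g·cos(ωt) with known parameters, then recovers (d, a, b, g) by linear least squares on the equation residual, assuming the driving frequency ω is known; all parameter values are made up.

```python
# Synthetic-data sketch (not the authors' ECoG procedure): simulate a Duffing response
#   x'' + d*x' + a*x + b*x**3 = g*cos(w*t)
# with known parameters, then recover (d, a, b, g) by linear least squares on the residual.
import numpy as np
from scipy.integrate import solve_ivp

d_true, a_true, b_true, g_true, w = 0.3, -1.0, 1.0, 0.5, 1.2

def duffing(t, state):
    x, v = state
    return [v, -d_true * v - a_true * x - b_true * x**3 + g_true * np.cos(w * t)]

t = np.linspace(0, 60, 6000)
x = solve_ivp(duffing, (t[0], t[-1]), [0.1, 0.0], t_eval=t, rtol=1e-8).y[0]

dt = t[1] - t[0]
v = np.gradient(x, dt)      # x'
acc = np.gradient(v, dt)    # x''

# acc = -d*v - a*x - b*x**3 + g*cos(w*t), which is linear in (d, a, b, g):
A = np.column_stack([-v, -x, -x**3, np.cos(w * t)])
d_est, a_est, b_est, g_est = np.linalg.lstsq(A, acc, rcond=None)[0]
print("true parameters:     ", [d_true, a_true, b_true, g_true])
print("estimated parameters:", list(np.round([d_est, a_est, b_est, g_est], 3)))
```
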
9

Salwa, Tomasz, Onno Bokhove, and Mark A. Kelmanson. "Variational Modelling of Wave-Structure Interactions for Offshore Wind Turbines." In ASME 2016 35th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/omae2016-54897.

Abstract:
We consider the development of a mathematical model of water waves interacting with the mast of an offshore wind turbine. A variational approach is used for which the starting point is an action functional describing a dual system comprising a potential-flow fluid, a solid structure modelled with (linear) elasticity, and the coupling between them. The variational principle is applied and discretized directly using Galerkin finite elements that are continuous in space and dis/continuous in time. We develop a linearized model of the fluid-structure or wave-mast coupling, which is a linearization of the variational principle for the fully coupled nonlinear model. Our numerical results indicate that our variational approach yields a stable numerical discretization of a fully coupled model of water waves and a linear elastic beam. The energy exchange between the subsystems is seen to be in balance, yielding a total energy that shows only small and bounded oscillations whose amplitude tends to zero as the timestep goes to zero.
10

Kotzalas, Michael N. "Statistical Distribution of Tapered Roller Bearing Fatigue Lives at High Levels of Reliability." In World Tribology Congress III. ASMEDC, 2005. http://dx.doi.org/10.1115/wtc2005-63052.

Abstract:
The original two-parameter Weibull distribution used for rolling element bearing fatigue tends to greatly underestimate life at high levels of reliability. This fact has been proven for through hardened ball, cylindrical and spherical roller bearings, as well as linear ball bearings, by other researchers. However, to date this has not been done with tapered roller bearings (TRB) or case carburized materials, and as such this study was conducted. First, the three-parameter Weibull distribution was utilized to create a mathematical model, and statistical data analysis methods were put into place. This algorithm was then investigated as to its ability to discern the shape of the reliability distribution using known, numerically generated, data sets for two- and three-parameter Weibull distributions. After validation, an experimental data set of 9702 TRB's, 98% of which were case carburized, was collected. Using the developed algorithm on this data set, the overall RMS error was reduced from 26.0% for the standard two-parameter distribution to 12.2% for the three-parameter Weibull distribution. Also, the error at 99.9% reliability was reduced from 95.8% to 37%. However, as the results varied from previously published values at high reliabilities, there is likely a difference in the underlying population and/or dependency on the statistical and mathematical methods utilized. Therefore, more investigation should be conducted in this area to identify the underlying variables and their effects on the results.
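
The gap between two- and three-parameter Weibull fits at high reliability is easy to demonstrate on synthetic lives. The sketch below is illustrative only (not the study's 9702-bearing data set or its statistical procedure): it uses scipy.stats.weibull_min, whose location parameter supplies the third, minimum-life parameter, and compares the life each fit implies at 99.9% reliability; the shape, scale, and minimum-life values are assumptions.

```python
# Synthetic lives (not the 9702-bearing data set): compare two- and three-parameter Weibull
# fits and the life each implies at 99.9% reliability (0.1% failure fraction).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
lives = 40 + stats.weibull_min.rvs(c=1.5, scale=200, size=2000, random_state=rng)  # minimum life ~40

c2, loc2, scale2 = stats.weibull_min.fit(lives, floc=0)   # two-parameter fit (location fixed at 0)
c3, loc3, scale3 = stats.weibull_min.fit(lives)           # three-parameter fit (location free)

L2 = stats.weibull_min.ppf(0.001, c2, loc=loc2, scale=scale2)
L3 = stats.weibull_min.ppf(0.001, c3, loc=loc3, scale=scale3)
print(f"2-parameter: shape={c2:.2f}, scale={scale2:.0f}, life at 99.9% reliability = {L2:.1f}")
print(f"3-parameter: shape={c3:.2f}, loc={loc3:.1f}, scale={scale3:.0f}, life at 99.9% reliability = {L3:.1f}")
```
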
