Academic literature on the topic 'Other mathematical sciences not elsewhere classified'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Other mathematical sciences not elsewhere classified.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Other mathematical sciences not elsewhere classified"

1

Walton, John K., and David Tidswell. "‘Classified at random by veritable illiterates’: the taking of the Spanish census of 1920 in Guipúzcoa province." Continuity and Change 20, no. 2 (August 2005): 287–313. http://dx.doi.org/10.1017/s0268416005005503.

Full text
Abstract:
This article offers an approach through administrative and cultural history to the problems associated with gathering and processing data for the Spanish national census of 1920, and by implication for earlier Spanish censuses. It focuses on the Basque province of Guipúzcoa, making use of correspondence between the central statistical office in Madrid, the provincial jefe de estadística and the localities, and of reports on three problematic towns within the province. The issues that emerge regarding ‘undercounting’, the definition of administrative boundaries and the classification of demographic characteristics are set in the wider context of census-taking practices and problems elsewhere in Spain and in other cultures.
APA, Harvard, Vancouver, ISO, and other styles
2

Priyanto, Mia, and Dian Permatasari. "Students’ Worksheets Based on Problem Based Learning In Composition and Inverse Functions to Enhance Conceptual Understanding." JRPM (Jurnal Review Pembelajaran Matematika) 7, no. 1 (June 29, 2022): 73–88. http://dx.doi.org/10.15642/jrpm.2022.7.1.73-88.

Full text
Abstract:
This research aims to design valid student worksheets based on problem-based learning in composition and inverse functions to facilitate conceptual understanding. The research uses the PPE model of Richey and Klein, which has three stages: planning, production, and evaluation. The planning stage is the process of planning the product that will be developed for a specific purpose. The production stage is the process of making the product according to the previously made design. The evaluation stage is the process of testing and assessing the product that has been developed. The data obtained from the expert validators were then processed using a Likert scale and calculated using Aiken's formula to establish the validity of the developed product. The student worksheets based on problem-based learning to aid students' understanding of mathematical ideas on composition and inverse functions were valid, with an average validity value of 0.84, classified as high. The PBL model familiarizes students with understanding concepts through problem-solving processes. Moreover, learning mathematics using PBL effectively improves students' understanding and abilities because students apply mathematical concepts in everyday life.
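As a quick illustration of the Aiken's formula mentioned in the abstract above: the V coefficient is Σ(r − lo) / (n(c − 1)) for n raters on a c-category scale. A minimal sketch, assuming a hypothetical set of five expert ratings on a 1-to-5 Likert scale (not the study's actual data):

```python
def aikens_v(ratings, lo=1, hi=5):
    """Aiken's V: sum of (rating - lo), divided by n * (c - 1),
    where c is the number of scale categories and n the number of raters."""
    c = hi - lo + 1
    n = len(ratings)
    s = sum(r - lo for r in ratings)
    return s / (n * (c - 1))

# Hypothetical ratings from five expert validators on a 1-5 scale
print(aikens_v([5, 4, 4, 5, 4]))  # values near 1 indicate high agreement
```

Values above roughly 0.8 are conventionally read as high content validity, which matches how the 0.84 reported above is interpreted.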
3

Adnani, Hinde, Mohammed Cherraj, and Hamid Bouabid. "Similarity indexes for scientometric research: A comparative analysis." Malaysian Journal of Library & Information Science 25, no. 3 (December 27, 2020): 31–48. http://dx.doi.org/10.22452/mjlis.vol25no3.3.

Full text
Abstract:
A significant number of papers in the field of scientometrics have addressed comparisons of various similarity indexes. However, there is still debate on the appropriateness of one index compared to others, because of the assessment differences reported in the literature. The objective of this paper is to make a comparative analysis of the five most used similarity indexes for the three scientometric analysis types: co-word, co-citation and co-authorship. A total of 388 papers addressing similarity indexes in scientometric analysis over three decades were retrieved from the Web of Science and examined, of which 49 were retained as the most relevant according to selective criteria. The approach consisted of building cross matrices for the five indexes (Jaccard, Dice-Sørensen, Salton, Pearson, and Association Strength) for the three types of scientometric analysis. For each of these analyses, a distinction is made between papers according to their theoretical or empirical results. Furthermore, papers are classified according to the mathematical formula of the similarity index being used (vector vs. non-vector). Across the 49 relevant papers selected, the comparative analysis showed that there is still no consensus on the appropriateness of an index for co-word and co-authorship analyses, while for co-citation, Salton is the widely preferred one. The Association Strength is the least covered and compared of the indexes for the three analysis types. An open-source computer program was developed as a tool to facilitate empirical comparative studies of indexes. It allows generating a normalized matrix of any chosen index for the two mathematical variants.
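For readers comparing the indexes discussed above, the four count-based indexes reduce to one-line formulas over occurrence counts (Pearson is omitted here, since it needs the full co-occurrence vectors). A minimal sketch with hypothetical co-word counts, not data from the paper:

```python
from math import sqrt

# a = occurrences of term X, b = occurrences of term Y,
# ab = co-occurrences of X and Y

def jaccard(a, b, ab):
    return ab / (a + b - ab)

def dice_sorensen(a, b, ab):
    return 2 * ab / (a + b)

def salton(a, b, ab):
    # Salton's cosine index
    return ab / sqrt(a * b)

def association_strength(a, b, ab):
    # proportional variant: co-occurrence over expected co-occurrence
    return ab / (a * b)

# Hypothetical counts: X in 30 papers, Y in 20, both together in 10
a, b, ab = 30, 20, 10
print(jaccard(a, b, ab), dice_sorensen(a, b, ab), salton(a, b, ab))
```

The four functions rank pairs differently precisely because of their different normalisations, which is the source of the disagreement the paper surveys.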
4

Bannister, Frank, and Dan Remenyi. "Acts of Faith: Instinct, Value and it Investment Decisions." Journal of Information Technology 15, no. 3 (September 2000): 231–41. http://dx.doi.org/10.1177/026839620001500305.

Full text
Abstract:
Although well over 1000 journal articles, conference papers, books, technical notes and theses have been written on the subject of information technology (IT) evaluation, only a relatively small subset of this literature has been concerned with the core issues of what precisely is meant by the term ‘value’ and with the process of making (specifically) IT investment decisions. All too often, the highly complex issue of value is either simplified, ignored or assumed away. Instead, the focus of much of the research to date has been on evaluation methodologies and, within this literature, there are different strands of thought which can be classified as partisan, composite and meta approaches to evaluation. Research shows that a small number of partisan techniques are used by most decision makers, with a minority using a single technique and a majority using a mixture of such techniques, of whom a substantial minority use a formal composite approach. It is argued that, in mapping the set of evaluation methodologies onto what is termed the investment opportunity space, there is a limit to what can be achieved by formal rational evaluation methods. This limit becomes evident when decision makers fall back on ‘gut feel’ and other non-formal/rigorous ways of making decisions. It is suggested that an understanding of these more complex processes of decision making, in IT as elsewhere, needs tools drawn from philosophy and psychology.
5

Roberts, Kevin C., John B. Lindsay, and Aaron A. Berg. "An Analysis of Ground-Point Classifiers for Terrestrial LiDAR." Remote Sensing 11, no. 16 (August 16, 2019): 1915. http://dx.doi.org/10.3390/rs11161915.

Full text
Abstract:
Previous literature has compared the performance of existing ground point classification (GPC) techniques on airborne LiDAR (ALS) data (LiDAR—light detection and ranging); however, their performance when applied to terrestrial LiDAR (TLS) data has not yet been addressed. This research tested the classification accuracy of five openly-available GPC algorithms on seven TLS datasets: Zhang et al.’s inverted cloth simulation (CSF), Kraus and Pfeiffer’s hierarchical weighted robust interpolation classifier (HWRI), Axelsson’s progressive TIN densification filter (TIN), Evans and Hudak’s multiscale curvature classification (MCC), and Vosselman’s modified slope-based filter (MSBF). Classification performance was analyzed using the kappa index of agreement (KIA) and rasterized spatial distribution of classification accuracy datasets generated through comparisons with manually classified reference datasets. The results identified a decrease in classification accuracy for the CSF and HWRI classification of low vegetation, for the HWRI and MCC classifications of variably sloped terrain, for the HWRI and TIN classifications of low outlier points, and for the TIN and MSBF classifications of off-terrain (OT) points without any ground points beneath. Additionally, the results show that while no single algorithm was suitable for use on all datasets containing varying terrain characteristics and OT object types, in general, a mathematical-morphology/slope-based method outperformed other methods, reporting a kappa score of 0.902.
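The kappa index of agreement (KIA) used above to score the classifiers is Cohen's kappa, computable directly from a 2x2 confusion matrix of ground vs. non-ground points. A minimal sketch with hypothetical counts, not the paper's data:

```python
def kappa(tp, fp, fn, tn):
    """Kappa index of agreement from a 2x2 confusion matrix:
    observed agreement corrected for agreement expected by chance."""
    n = tp + fp + fn + tn
    p_observed = (tp + tn) / n
    p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical counts from comparing a ground-point classifier
# against a manually classified reference dataset
print(kappa(tp=450, fp=30, fn=20, tn=500))
```

A kappa of 0 means chance-level agreement and 1 means perfect agreement, so scores around 0.9, like the one reported above, indicate very strong agreement with the reference classification.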
6

Semin, M. A. "SIMPLIFICATION POSSIBILITIES FOR COUPLED THM MODELS OF ARTIFICIAL GROUND FREEZING IN THE CONSTRUCTION OF MINE SHAFTS." News of the Tula state university. Sciences of Earth 4, no. 1 (2021): 453–63. http://dx.doi.org/10.46689/2218-5194-2021-4-1-453-463.

Full text
Abstract:
An important stage in the design of artificial ground freezing during the construction of mine shafts (and other underground structures) is the simulation of deformation and heat transfer in the media to be frozen. This is necessary to calculate the required thickness of the frozen wall, the time of its formation and the parameters of the freezing stations. The choice of an adequate mathematical model is impossible without analyzing the significance and coupling of the various physical processes occurring during the freezing of soil. Such an analysis allows selecting a reasonable degree of detail for the physical processes in the model: taking into account all important factors and neglecting the rest. This article proposes a methodology for analyzing the significance and coupling of such physical processes. To this end, a general thermo-hydro-mechanical model of soil freezing has been formulated, and a set of dimensionless complexes that determine the relationships between the various physical processes has been identified and classified. The transition from the general thermo-hydro-mechanical model to simpler models is possible only if the corresponding dimensionless complexes are small.
7

Rezaei, Abbas, Salah I. Yahya, and Leila Nouri. "A Comprehensive Review on Microstrip Couplers." ARO-THE SCIENTIFIC JOURNAL OF KOYA UNIVERSITY 11, no. 1 (January 15, 2023): 22–31. http://dx.doi.org/10.14500/aro.11108.

Full text
Abstract:
In this work, several types of microstrip couplers are investigated in terms of structure, performance and design methods. These planar four-port passive devices transmit a signal through two different channels. Designers have always competed to miniaturize couplers and improve their performance. Some couplers have been offered with a novel structure, which is a special feature. A high-performance coupler should have high isolation and low losses at both channels. The channels usually overlap, so the common-port return loss in these channels should be low. Among couplers, those with balanced amplitude and phase are the most popular. The popular mathematical analysis methods are even/odd-mode analysis, extracting the information from the ABCD matrix, and analyzing the equivalent LC circuit of a simple resonator. According to the phase-shift value, couplers are classified as 90º or integer multiples of 90º, where a microstrip 0º coupler can be used as a power divider. Some couplers have filtering and harmonic-elimination features that make them superior to other couplers. However, few designers have paid attention to suppressing the harmonics. If the operating frequency is set according to the type of application, the coupler becomes particularly valuable.
8

Castillo, Rolando Torres. "Mathematics in the foreground and teacher knowledge." Green World Journal 5, no. 1 (February 4, 2022): 003. http://dx.doi.org/10.53313/gwj51003.

Full text
Abstract:
Mathematics is an extremely necessary skill for everyone, as it is the main tool with which human beings have been able to understand the world around them. When we are students, it is common for us to ask ourselves, why should I study mathematics? We could start by saying that there are many activities of daily life that are related to this science, for example, managing money, preparing a recipe, or calculating the distance we have to travel to get somewhere, but the answer goes further. It is difficult to find a completely comprehensive definition of the mathematical concept. Currently, mathematics is classified as one of the formal sciences (along with logic), since, using logical reasoning as a tool, it focuses on the analysis of relationships and properties between numbers and geometric figures. The objective of this study was to analyze the trends and challenges in mathematics teacher education and to establish their challenges. The main results indicate a widespread bias towards thinking that mathematics is difficult to study.
9

Xu, Feng, Zhaofu Li, Shuyu Zhang, Naitao Huang, Zongyao Quan, Wenmin Zhang, Xiaojun Liu, Xiaosan Jiang, Jianjun Pan, and Alexander V. Prishchepov. "Mapping Winter Wheat with Combinations of Temporally Aggregated Sentinel-2 and Landsat-8 Data in Shandong Province, China." Remote Sensing 12, no. 12 (June 26, 2020): 2065. http://dx.doi.org/10.3390/rs12122065.

Full text
Abstract:
Winter wheat is one of the major cereal crops in China. The spatial distribution of winter wheat planting areas is closely related to food security; however, mapping winter wheat with time series of finer-spatial-resolution satellite images across large areas is challenging. This paper explores the potential of combining temporally aggregated Landsat-8 OLI and Sentinel-2 MSI data available via the Google Earth Engine (GEE) platform for mapping winter wheat in Shandong Province, China. First, six phenological median composites of Landsat-8 OLI and Sentinel-2 MSI reflectance measures were generated by a temporal aggregation technique according to the winter wheat phenological calendar, covering the seedling, tillering, over-wintering, reviving, jointing-heading and maturing phases, respectively. Then, a Random Forest (RF) classifier was used to classify not only the multi-temporal composites but also mono-temporal winter wheat development phases and mono-sensor data. The results showed that winter wheat could be classified with an overall accuracy of 93.4% and an F1 measure (the harmonic mean of producer's and user's accuracy) of 0.97 when temporally aggregated Landsat-8 and Sentinel-2 data were combined. The results also revealed that classifying multi-temporal imagery was always better than classifying mono-temporal imagery (the overall accuracy dropped from 93.4% to as low as 76.4%), and that classifying Landsat-8 OLI and Sentinel-2 MSI imagery combined was better than classifying either individually. Among the mono-temporal winter wheat development phases, data from the maturing and reviving phases were more important than data from the other phases. In sum, this study confirmed the importance of combining temporally aggregated Landsat-8 OLI and Sentinel-2 MSI data and identified key winter wheat development phases for accurate winter wheat classification.
These results can help exploit freely available optical satellite data (Landsat-8 OLI and Sentinel-2 MSI) and prioritize key winter wheat development phases for accurately mapping winter wheat planting areas across China and elsewhere.
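The temporal aggregation step described above, which takes a per-pixel median of the observations falling inside each phenological window, can be sketched in plain Python. The pixel values and the cloud mask below are hypothetical, not the study's data:

```python
from statistics import median

def median_composite(observations):
    """observations: a list of per-date images, each a flat list of pixel
    values, with None marking a cloudy/masked pixel. Returns the per-pixel
    median over the valid observations (None where no valid data exist)."""
    n_pixels = len(observations[0])
    composite = []
    for i in range(n_pixels):
        valid = [img[i] for img in observations if img[i] is not None]
        composite.append(median(valid) if valid else None)
    return composite

# Three hypothetical acquisitions within one phenological window
imgs = [[0.30, 0.12, None],
        [0.32, 0.80, 0.25],   # 0.80 might be an undetected cloud
        [0.28, 0.14, 0.27]]
print(median_composite(imgs))
```

The median makes the composite robust to the occasional undetected cloud or shadow, which is why it is a common choice for this kind of phenology-window aggregation.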
10

Venyo, Anthony Kodzo-Grey. "Signet Ring Cell Carcinoma of the Prostate Gland: A Review and Update." Cancer Research and Cellular Therapeutics 5, no. 3 (July 26, 2021): 01–14. http://dx.doi.org/10.31579/2640-1053/082.

Full text
Abstract:
Signet-ring cell carcinoma of the prostate gland (SRCCP) is an uncommon and aggressive malignant tumour of the prostate gland, characterized on histopathology examination by compression of the nucleus into the form of a crescent by a large cytoplasmic vacuole. The SRCCPs reported so far have been either (a) primary tumours, or (b) metastatic tumours with the primary tumour elsewhere, most often within the gastro-intestinal tract although the primary tumour can originate elsewhere; additionally, some reported SRCCPs have been classified as carcinoma of unknown primary. SRCCP can be a pure tumour or a tumour contemporaneously associated with other types of tumour, including various variants of adenocarcinoma. SRCCP can manifest in various ways, including: an incidental finding following prostatectomy undertaken for presumed benign prostatic hyperplasia; lower urinary tract symptoms; visible and non-visible haematuria; raised levels of serum PSA (although some SRCCPs have been diagnosed with normal or low levels of serum PSA); a history of dyspepsia in cases of metastatic signet-ring cell carcinoma associated with a contemporaneous primary signet-ring cell carcinoma of the stomach, or a past history of surgical treatment for signet-ring cell carcinoma of the gastrointestinal tract; upper gastrointestinal or rectal bleeding, as well as a change in bowel habit, in primary tumours of the anorectal region; retention of urine; and, rarely, a rectal mass in the case of SRCCP with an anorectal primary tumour.
In order to exclude a primary signet-ring cell carcinoma elsewhere, a detailed past medical history is required, as well as radiology imaging including contrast-enhanced computed tomography (CECT) and contrast-enhanced magnetic resonance imaging (CEMRI) scans, together with upper gastrointestinal endoscopy and colonoscopy to exclude a primary lesion within the gastrointestinal tract. Diagnosis of SRCCP requires histopathology and immunohistochemistry examination of a prostate biopsy, prostatic chips obtained from a trans-urethral resection of prostate specimen, or a radical prostatectomy specimen. Upon immunohistochemistry staining studies, SRCCPs tend to exhibit positive staining for the following tumour markers: PSA (positive staining for PSA has been variable in some studies), AE1/AE3, CAM 5.2, Ki-67 (with a mean of 8%), PAS-diastase, Mucicarmine (50%), Alcian blue (60%), alpha-methyl-acyl coenzyme A racemase (P504S), and Cytokeratin 5/6. SRCCPs also tend to exhibit negative staining for Bcl2 (rarely positive) and CEA (80%). Traditionally, the treatment of primary SRCCP has tended to be similar to the treatment of traditional adenocarcinoma of the prostate gland, which includes hormonal treatment, radiotherapy, and surgery.
Nevertheless, considering that the primary and metastatic SRCCPs reported in the literature have generally been associated with aggressive biological behaviour, rapid disease progression and poor survival, and even though there is no consensus opinion on the treatment of the disease, it is strongly recommended that these tumours be treated with aggressive surgery, including radical prostatectomy, plus adjuvant therapies including radical radiotherapy, combination chemotherapy, selective prostatic angiography with super-selective embolization of the artery feeding the tumour (including intra-arterial infusion of chemotherapy agents directly into the tumour), radiofrequency ablation of the tumour, and irreversible electroporation of the tumour, all of which should form part of a global multicentre study of the various treatment options. With regard to metastatic signet-ring cell carcinomas of the prostate gland with a contemporaneous primary tumour elsewhere, the primary tumour should also be treated by radical and complete excision, plus radical surgery and aggressive adjuvant therapy. Considering that SRCCPs have tended not to respond well to available chemotherapy agents, urologists, oncologists, and pharmacotherapy researchers need to identify new chemotherapy medicaments that would more effectively and safely destroy signet-ring cell tumours in order to improve the prognosis.

Dissertations / Theses on the topic "Other mathematical sciences not elsewhere classified"

1

Zhu, Huaiyu. "Neural networks and adaptive computers : theory and methods of stochastic adaptive computation." Thesis, University of Liverpool, 1993. http://eprints.aston.ac.uk/365/.

Full text
Abstract:
This thesis studies the theory of stochastic adaptive computation based on neural networks. A mathematical theory of computation is developed in the framework of information geometry, which generalises Turing machine (TM) computation in three aspects: it can be continuous, stochastic and adaptive, and it retains TM computation as a subclass called "data processing". The concepts of Boltzmann distribution, Gibbs sampler and simulated annealing are formally defined and their interrelationships are studied. The concept of a "trainable information processor" (TIP), a parameterised stochastic mapping with a rule to change the parameters, is introduced as an abstraction of neural network models. A mathematical theory of the class of homogeneous semilinear neural networks is developed, which includes most of the commonly studied NN models such as the back-propagation NN, the Boltzmann machine and the Hopfield net, and a general scheme is developed to classify the structures, dynamics and learning rules. All the previously known general learning rules are based on gradient following (GF), which is susceptible to local optima in weight space. Contrary to the widely held belief that this is rarely a problem in practice, numerical experiments show that for most non-trivial learning tasks GF learning never converges to a global optimum. To overcome the local optima, simulated annealing is introduced into the learning rule, so that the network retains an adequate amount of "global search" in the learning process. Extensive numerical experiments confirm that the network always converges to a global optimum in the weight space. The resulting learning rule is also easier to implement and more biologically plausible than the back-propagation and Boltzmann machine learning rules: only a scalar needs to be back-propagated for the whole network.
Various connectionist models have been proposed in the literature for solving various instances of problems, without a general method by which their merits can be combined. Instead of proposing yet another model, we try to build a modular structure in which each module is basically a TIP. As an extension of simulated annealing to temporal problems, we generalise the theory of dynamic programming and Markov decision process to allow adaptive learning, resulting in a computational system called a "basic adaptive computer", which has the advantage over earlier reinforcement learning systems, such as Sutton's "Dyna", in that it can adapt in a combinatorial environment and still converge to a global optimum. The theories are developed with a universal normalisation scheme for all the learning parameters so that the learning system can be built without prior knowledge of the problems it is to solve.
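The thesis's central move of adding simulated annealing to escape the local optima of gradient-following can be illustrated in miniature on a generic multimodal objective. The objective, proposal distribution and cooling schedule below are illustrative assumptions, not the author's network model:

```python
import math
import random

def anneal(f, x0, t0=2.0, cooling=0.999, steps=20000, seed=0):
    """Minimise f by simulated annealing: always accept downhill moves,
    accept uphill moves with probability exp(-delta / T), and lower T
    geometrically so the search gradually becomes greedy."""
    rng = random.Random(seed)
    x, t = x0, t0
    best = x
    for _ in range(steps):
        cand = x + rng.gauss(0, 0.5)          # random local proposal
        delta = f(cand) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if f(x) < f(best):
            best = x
        t *= cooling
    return best

# Rastrigin-like 1-D objective: global minimum at x = 0, many local minima
f = lambda x: x * x + 10 * (1 - math.cos(2 * math.pi * x))
x_star = anneal(f, x0=4.3)
print(x_star, f(x_star))
```

Pure gradient descent started at x0 = 4.3 would be trapped in the nearest local basin; the occasional accepted uphill move is what lets the annealed search keep exploring, which mirrors the "global search" behaviour the abstract describes.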
2

Rattray, Magnus. "Modelling the dynamics of genetic algorithms using statistical mechanics." Thesis, University of Manchester, 1996. http://publications.aston.ac.uk/598/.

Full text
Abstract:
A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified.
In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
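One of the macroscopics named above, the mean correlation (Hamming distance) within the population, is straightforward to compute for a population of bit-string genotypes. A minimal sketch with a hypothetical population (not data from the thesis):

```python
def mean_hamming(population):
    """Mean pairwise Hamming distance over all distinct pairs of
    equal-length bit-string genotypes: one 'macroscopic' summarising
    the diversity of a GA population."""
    n = len(population)
    total, pairs = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += sum(a != b for a, b in zip(population[i], population[j]))
            pairs += 1
    return total / pairs

# Hypothetical population of three 4-bit genotypes
print(mean_hamming(["1100", "1010", "1111"]))
```

As the GA converges, this statistic shrinks towards zero, so tracking its averaged trajectory is one way the formalism summarises whole-population dynamics without following individual genotypes.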
3

Svénsen, Johan F. M. "GTM: the generative topographic mapping." Thesis, Aston University, 1998. http://publications.aston.ac.uk/1245/.

Full text
Abstract:
This thesis describes the Generative Topographic Mapping (GTM) --- a non-linear latent variable model, intended for modelling continuous, intrinsically low-dimensional probability distributions, embedded in high-dimensional spaces. It can be seen as a non-linear form of principal component analysis or factor analysis. It also provides a principled alternative to the self-organizing map --- a widely established neural network model for unsupervised learning --- resolving many of its associated theoretical problems. An important, potential application of the GTM is visualization of high-dimensional data. Since the GTM is non-linear, the relationship between data and its visual representation may be far from trivial, but a better understanding of this relationship can be gained by computing the so-called magnification factor. In essence, the magnification factor relates the distances between data points, as they appear when visualized, to the actual distances between those data points. There are two principal limitations of the basic GTM model. The computational effort required will grow exponentially with the intrinsic dimensionality of the density model. However, if the intended application is visualization, this will typically not be a problem. The other limitation is the inherent structure of the GTM, which makes it most suitable for modelling moderately curved probability distributions of approximately rectangular shape. When the target distribution is very different to that, the aim of maintaining an `interpretable' structure, suitable for visualizing data, may come in conflict with the aim of providing a good density model. The fact that the GTM is a probabilistic model means that results from probability theory and statistics can be used to address problems such as model complexity. Furthermore, this framework provides solid ground for extending the GTM to wider contexts than that of this thesis.
4

Csató, Lehel. "Gaussian processes : iterative sparse approximations." Thesis, Aston University, 2002. http://publications.aston.ac.uk/1327/.

Full text
Abstract:
In recent years there has been an increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation of the posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation uses a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector (BV) set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on the KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input, thus approximating a batch solution.
The resulting sparse learning algorithm is a generic one: for different problems we only change the likelihood. The algorithm is applied to a variety of problems, and we examine its performance both on classical regression and classification tasks and on data assimilation and a simple density estimation problem.
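The KL-divergence that drives the sparse approximation above is defined between full Gaussian processes; for intuition, it reduces to a closed form when restricted to two univariate Gaussian marginals. An illustrative sketch of that special case (not the thesis's algorithm):

```python
import math

def kl_gauss(mu1, s1, mu2, s2):
    """KL( N(mu1, s1^2) || N(mu2, s2^2) ) in closed form:
    ln(s2/s1) + (s1^2 + (mu1 - mu2)^2) / (2 s2^2) - 1/2."""
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

# Identical distributions give zero divergence; it grows as the means separate
print(kl_gauss(0.0, 1.0, 0.0, 1.0))
print(kl_gauss(0.0, 1.0, 1.0, 1.0))
```

The divergence is asymmetric in its two arguments, which is why the direction of the KL-minimisation matters when choosing which sparse GP best approximates the true posterior.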
5

(9786986), Sze San Chong. "Adaptive sliding mode control for robotic manipulators by hybridisation." Thesis, 1997. https://figshare.com/articles/thesis/Adaptive_sliding_mode_control_for_robotic_manipulators_by_hybridisation/19326698.

Full text
Abstract:
In this thesis, the trajectory control of robotic manipulators is studied. The thesis consists of two parts.
The first part is a comparative study of the trajectory control of a two-link robotic manipulator using various control laws. Simulations are presented to show the merits of each control law, and discussions are given.
The second part is the development of new control laws. A new control law is formulated by combining two existing control laws, and the performance of the hybrid control law is assessed by simulating the action of the two-link robotic manipulator controlled by the new hybrid law. The effect of regressor matrix modification on the performance of the robotic manipulator is then studied using the control law of Slotine and Li. To find out whether regressor matrix modification and composite adaptation can further improve the performance of a control law, these two modifications are then incorporated into the new hybrid control law and the performance of the manipulator is assessed. Simulations are presented to verify the effectiveness of the proposed approach.
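The Slotine-Li adaptive scheme mentioned in the abstract can be sketched for the simplest possible case. This is a hypothetical one-link, unknown-inertia plant, not the thesis's two-link manipulator or its hybrid law; all gains and the trajectory are illustrative assumptions.

```python
import math

# Hypothetical plant: m * qdd = u, with inertia m unknown to the controller.
m_true = 2.0                   # true (unknown) inertia
lam, k, gamma = 2.0, 5.0, 0.5  # surface slope, feedback gain, adaptation gain
dt, steps = 1e-3, 20000        # 20 s of simulated time (Euler integration)

q, dq, m_hat = 0.0, 0.0, 0.5   # state and initial parameter estimate
for i in range(steps):
    t = i * dt
    qd, dqd, ddqd = math.sin(t), math.cos(t), -math.sin(t)  # desired trajectory
    e, de = q - qd, dq - dqd
    s = de + lam * e           # sliding variable
    ddq_r = ddqd - lam * de    # reference acceleration (the scalar "regressor")
    u = m_hat * ddq_r - k * s  # Slotine-Li control law
    m_hat += -gamma * ddq_r * s * dt  # parameter adaptation law
    ddq = u / m_true           # plant dynamics
    dq += ddq * dt
    q += dq * dt

e_final = q - math.sin(steps * dt)  # tracking error at the end of the run
```

With the Lyapunov function V = (m s^2)/2 + (m_hat - m)^2 / (2 gamma), this law gives dV/dt = -k s^2, so the sliding variable and tracking error decay despite the wrong initial inertia estimate.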
APA, Harvard, Vancouver, ISO, and other styles
6

(9796349), Margaret Flanders. "An investigation of the existing mathematics component in tertiary business courses in Australia, and how it relates to the current needs of employer and professional groups in Australia." Thesis, 1997. https://figshare.com/articles/thesis/An_investigation_of_the_existing_mathematics_component_in_tertiary_business_courses_in_Australia_and_how_it_relates_to_the_current_needs_of_employer_and_professional_groups_in_Australia/19352636.

Full text
Abstract:
This thesis investigates the level of association between the users of business mathematics and those who design core service mathematics courses for undergraduate business students. It explores the supportive role of mathematics components in the business degree and makes the connection between this role and the role of mathematics in business. The thesis adds to a small but growing body of literature which raises questions on issues such as the use of mathematics in the workplace by undergraduate business students, reasons for course content and presentation, and current issues in teaching and learning mathematics.
APA, Harvard, Vancouver, ISO, and other styles
7

(12246618), Robert N. Ellis. "Methodologies in material balance and statistical data adjustment." Thesis, 2022. https://figshare.com/articles/thesis/Methodologies_in_material_balance_and_statistical_data_adjustment/19365386.

Full text
Abstract:
A material balance model has been developed that allows for the assignment of statistical weightings to the raw data of a two-product separation.
The statistical weighting factors are derived by applying the concepts of variography based on Gy's sampling theory (1982). The issue of randomness when estimating the V(0) intercept value of the variogram at lag = 0 has been investigated. The effect of sampling duration on the magnitude of V(0) has also been quantified.

The importance of sampling correctness in relation to the automatic sampling of slurry streams has been highlighted.

The operational details of a hydrocyclone (two-product separator) have been outlined. The material balance model has been applied to a case study involving a typical separation in a hydrocyclone. Good agreement between several material balance methods has been found.

The contributions to this area of research can be listed as follows:

(1) The estimation of statistical weightings using variography.
(2) The development and application of a material balance model that incorporates statistical weighting factors.
(3) The definition and examination of the requirements of sampling correctness.
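The weighted material balance of a two-product separation described above can be sketched as a small weighted least-squares estimate of the mass split. The assay values and inverse-variance weights below are hypothetical stand-ins for the variographic weighting factors the thesis derives.

```python
import numpy as np

# Hypothetical assays for three components (feed f, concentrate c, tailings t)
# of a two-product separation, with inverse-variance weights w.
f = np.array([5.0, 12.0, 0.8])
c = np.array([30.0, 2.0, 4.0])
t = np.array([1.0, 13.5, 0.3])
w = np.array([1.0, 0.5, 2.0])

# Weighted least-squares estimate of the mass split y to concentrate,
# minimising sum_i w_i * (f_i - y*c_i - (1 - y)*t_i)^2:
y_hat = np.sum(w * (f - t) * (c - t)) / np.sum(w * (c - t) ** 2)

# Classical single-component two-product formula for comparison
# (first component only): y = (f - t) / (c - t).
y_two_product = (f[0] - t[0]) / (c[0] - t[0])
```

Using all components with weights pools the redundant assay information, so the weighted estimate need not coincide with any single-component two-product value.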
APA, Harvard, Vancouver, ISO, and other styles
8

(9788021), Colin Cole. "Fluidized bed combustion of waste material." Thesis, 1994. https://figshare.com/articles/thesis/Fluidized_bed_combustion_of_waste_material/13459283.

Full text
Abstract:
The behaviour of waste biomass materials, specifically bagasse* and sawdust, in deep fluidized beds was investigated. The bagasse used was dry (less than 1% moisture by mass). Sawdust was from mixed eucalyptus hardwoods with moisture contents up to 25% by mass. A series of cold flow visualisation tests was completed in a bed of 190 mm diameter using graded river sand of surface mean particle diameters of 180 and 490 microns. Bagasse was added to the bed in various quantities and the ingestion and mixing phenomena observed. The influence of distributor design, cones, and draft tubes on mixing rates was investigated for use in the combustor design. Combustion tests using sawdust and bagasse were completed in a combustor of 489 mm diameter with graded river sands of surface mean particle diameters of 300, 490 and 530 microns. Various configurations were tested, including a shallow bed of depth 130 mm, deep beds of depths up to 460 mm, a reverse circulation bed, and modified spouted beds of depths up to 740 mm. Fuel feeding systems included above-bed chutes, an ingestor tube, a direct bed wall screw feeder, and a pressurised screw feeder fitted to the air supply of a modified spouted bed. Bagasse was not successfully fed through the screw feeder systems used. Sawdust, which has similar fluidized bed combustion characteristics to bagasse, was used in screw feeders to indicate the possible results that could be obtained from bagasse using below-bed feed systems. Configurations utilising direct below-bed surface screw feed, ingestor tube feed, and pressurised screw feed to the fluidizing air were all successful in increasing the percentage of combustion occurring below the bed surface. The best results were obtained from pre-mixed air and fuel particles entering the modified spouted bed, giving combustion efficiencies of up to 60%, comparable to coal. Higher efficiencies would be possible with further optimisation of the design.
The results of the investigation open several avenues of development, including partial gasification/combustion systems and further development of the ingestor tube, reverse circulation bed and modified spouted bed concepts. The problems encountered with the combustion of lightweight, particulate biomass fuels are now reduced to finding practical methods of fuel feeding and rate control. *Bagasse is the cellulose residue from sugar cane stalks which remains after crushing. It is particulate, fibrous, tangled and irregular in size, length and aspect ratio.
APA, Harvard, Vancouver, ISO, and other styles
9

(9850352), E. Grigg. "A critical discourse analysis of literature pertaining to the historical "management" of the sexual and/or intimacy needs of people labelled as having a learning disability in Australia and the United Kingdom." Thesis, 2014. https://figshare.com/articles/thesis/A_critical_discourse_analysis_of_literature_pertaining_to_the_historical_management_of_the_sexual_and_or_intimacy_needs_of_people_labelled_as_having_a_learning_disability_in_Australia_and_the_United_Kingdom/13387214.

Full text
Abstract:
The research presented here analyses and compares textual narratives generated within policy, scholarly and popular media to discern how the sexuality or intimacy needs of people categorised as ‘learning disabled’ have been historically and more recently managed in Australia and the United Kingdom. The research uses a modified critical discourse analytical approach which, in order to clarify the distinct role of power in the construction of discourses of sexuality, is mediated by the more recent ideas of progressive phronesis offered by Flyvbjerg (1998a; 2001). The analysis identifies three broad stages in the historical development of the discourses about the sexuality of learning disabled people. The first phase was prior to the 1800s, when these people were labelled non-derogatorily as ‘idiots’, and perceived as childlike, innocent and asexual. The second stage was in the 1800s when, with the emergence of scientific rationality and medicalisation, so-called idiots became medicalised and categorised as ‘feeble-minded’ or ‘moral imbeciles’. This thesis demonstrates that, during this period, an emphasis on sexual self-denial, anxieties about venereal disease and non-procreative erotic pleasure helped to inform discourses of eugenics, and learning disabled people became perceived as a sexual threat to society. This underpinned policies of sexual control through institutionalisation, gender segregation and sterilisation. The third period in the development of discourses relating to the sexuality of learning disabled people paralleled the ‘sexual revolution’ of the late 1900s and the move towards deinstitutionalisation and human rights.
This analysis shows that, although the principle of ‘sexual freedom’ was ostensibly incorporated in modern policy discourse, the sexuality of learning disabled people continues to be influenced by significant barriers of sexual intolerance, demonstrated by continuing practices of sexual segregation, sterilisation, criminal labelling and imprisonment. The analysis indicates that a discourse of sexuality, which has legitimised the control and management of learning disabled people in varying forms since the Enlightenment, continues to be encountered in policy and popular narratives. Robust sexuality awareness and education programs for carers of these people, and society in general, are necessary so that intimacy and/or sexual desires are accepted as a normal need for all human beings.
APA, Harvard, Vancouver, ISO, and other styles
10

(9789053), Lynette Costigan. "An ordinary man, an extraordinary life: Eric Zillman, naturalist." Thesis, 1993. https://figshare.com/articles/thesis/An_ordinary_man_an_extraordinary_life_Eric_Zillman_naturalist/13464272.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Other mathematical sciences not elsewhere classified"

1

Keller, Morton, and Phyllis Keller. "The Faculty of Arts and Sciences." In Making Harvard Modern. Oxford University Press, 2001. http://dx.doi.org/10.1093/oso/9780195144574.003.0009.

Full text
Abstract:
It was in his dealings with the Faculty of Arts and Sciences (FAS) that Conant’s attempt to create a more meritocratic Harvard met its severest test. Out of this often tumultuous relationship came one of Harvard’s most influential academic innovations: a system for the appointment of tenured faculty that became standard practice in American universities. Conant inherited a faculty that was not necessarily the nation’s best. Because of Lowell’s stress on undergraduate instruction, the number and proportion of tutors and instructors steadily increased during the 1920s. At the same time, many of the best known Harvard professors during the Lowell years—Charles Townsend “Copey” Copeland and LeBaron Russell Briggs of the English Department, Roger B. “Frisky” Merriman in History—were not world-class scholars but charismatic classroom performers. Harvard had only one Nobelist, Conant’s chemist father-in-law, Theodore W. Richards, before 1934; Chicago had three. Nor did its social scientists compare to those at Chicago or Columbia. The rather small stable of Harvard’s scholarly stars included historian Frederick Jackson Turner and philosopher Alfred North Whitehead, whose major accomplishments, done elsewhere, were long behind them. Carnegie Corporation president Frederick Keppel reported the prevailing view in 1934: “Harvard is still princeps but no longer facile princeps; and the story is current that at one of America’s great universities [no doubt Chicago] it is considered the height of academic distinction to receive an invitation from Harvard and to decline it.” Conant warned early on that the growing appeal of other universities and Harvard’s standardized salary, teaching, and research scales made it “increasingly difficult to attract from other universities and research institutes the outstanding men whom we desire.” The dean of the Faculty of Arts and Sciences was English professor Kenneth Murdock.
Though he resented Conant for having gotten the Harvard presidency, Murdock was “quite willing” to continue to be dean if Conant wanted him. Conant did not. He appointed the less assertive George D. Birkhoff (among his qualities were exceptional mathematical ability and a keen anti-Semitism), who stayed in the job until 1939, when he was succeeded by the even more unassertive historian William S. Ferguson. Weak deans meant that Conant was in effect his own dean, deeply engaged in curriculum, student recruitment, and above all the selection of faculty.
APA, Harvard, Vancouver, ISO, and other styles
