Dissertations / Theses on the topic 'School: School of Mathematics, Statistics and Computer Science'

To see the other types of publications on this topic, follow the link: School: School of Mathematics, Statistics and Computer Science.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'School: School of Mathematics, Statistics and Computer Science.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Webb, Derek, Glen Richgels, Marty J. Wolf, Todd Frauenholtz, and Ann Hougen. "Improving Student Interest, Mathematical Skills, and Future Success through Implementation of Novel Mathematics Bridge Course for High School Seniors and Post-secondary Students." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-81097.

Full text
Abstract:
We present a new course titled “Introduction to the Mathematical Sciences.” The course content is 1/3 algebra, 1/3 statistics, and 1/3 computer science and is taught in a laboratory environment on computers. The course pedagogy departs radically from traditional mathematics courses taught in the U.S. and makes extensive use of spreadsheet software to teach algebraic and statistical concepts. The course is currently offered in area high schools and two-year postsecondary institutions with financial support from a Blandin Foundation grant (referenced under BFG). We will present empirical evidence that indicates students in this course learn more algebra than students in a traditional semester-long algebra course. Additionally, we present empirical evidence that students learn statistical and computer science topics in addition to algebra. We will also present the model used to develop this course, which was driven by the goal of increasing future student success in a variety of disciplines at the post-secondary level of study.
APA, Harvard, Vancouver, ISO, and other styles
2

Webb, Derek, Glen Richgels, Marty J. Wolf, Todd Frauenholtz, and Ann Hougen. "Improving Student Interest, Mathematical Skills, and Future Success through Implementation of Novel Mathematics Bridge Course for High School Seniors and Post-secondary Students." Proceedings of the tenth International Conference Models in Developing Mathematics Education. - Dresden : Hochschule für Technik und Wirtschaft, 2009. - S. 575 - 578, 2012. https://slub.qucosa.de/id/qucosa%3A1823.

Full text
Abstract:
We present a new course titled “Introduction to the Mathematical Sciences.” The course content is 1/3 algebra, 1/3 statistics, and 1/3 computer science and is taught in a laboratory environment on computers. The course pedagogy departs radically from traditional mathematics courses taught in the U.S. and makes extensive use of spreadsheet software to teach algebraic and statistical concepts. The course is currently offered in area high schools and two-year postsecondary institutions with financial support from a Blandin Foundation grant (referenced under BFG). We will present empirical evidence that indicates students in this course learn more algebra than students in a traditional semester-long algebra course. Additionally, we present empirical evidence that students learn statistical and computer science topics in addition to algebra. We will also present the model used to develop this course, which was driven by the goal of increasing future student success in a variety of disciplines at the post-secondary level of study.
APA, Harvard, Vancouver, ISO, and other styles
3

Sahama, Tony. "Some practical issues in the design and analysis of computer experiments." Thesis, Victoria University, Melbourne, 2003. https://eprints.qut.edu.au/60715/1/Sahama_2003compressed.pdf.

Full text
Abstract:
Deterministic computer simulations of physical experiments are now common techniques in science and engineering. Often, physical experiments are too time consuming, expensive or impossible to conduct. Complex computer models or codes, rather than physical experiments lead to the study of computer experiments, which are used to investigate many scientific phenomena of this nature. A computer experiment consists of a number of runs of the computer code with different input choices. The Design and Analysis of Computer Experiments is a rapidly growing technique in statistical experimental design. This thesis investigates some practical issues in the design and analysis of computer experiments and attempts to answer some of the questions faced by experimenters using computer experiments. In particular, the question of the number of computer experiments and how they should be augmented is studied and attention is given to when the response is a function over time.
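The central design question sketched in this abstract, how many runs of a deterministic code to make and at which input settings, can be illustrated with a space-filling design. The following is a minimal sketch assuming a toy two-input simulator (black_box), an initial 10-run Latin hypercube design, and a 5-run augmentation; none of these choices come from the thesis itself.

```python
# Sketch: choosing and augmenting input points for a deterministic computer experiment.
# Assumes scipy >= 1.7 (scipy.stats.qmc); the "simulator" below is a stand-in for a real code.
import numpy as np
from scipy.stats import qmc

def black_box(x):
    """Toy deterministic computer code with two inputs in [0, 1]^2."""
    return np.sin(2 * np.pi * x[0]) + x[1] ** 2

sampler = qmc.LatinHypercube(d=2, seed=0)
design = sampler.random(n=10)          # initial 10-run space-filling design
responses = np.array([black_box(x) for x in design])

# Augment the design with further runs (e.g. where more resolution is wanted).
extra = sampler.random(n=5)
design = np.vstack([design, extra])
responses = np.concatenate([responses, [black_box(x) for x in extra]])

print(design.shape, responses.shape)   # (15, 2) (15,)
```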
APA, Harvard, Vancouver, ISO, and other styles
4

Goble, Terence Melvin. "The development of a computer based modelling environment for upper secondary school geography classes." Thesis, University College London (University of London), 1994. http://discovery.ucl.ac.uk/10021561/.

Full text
Abstract:
This thesis describes the development of a specification for a computer based modelling system in geography. The modelling system is intended for use in upper secondary school geography classes. The classroom approach to geography reflects the developments within the broader academic discipline. By adopting a systems analysis approach, it is possible to represent models on the computer from the full range of geographical approaches. The essence of geographical modelling is to be able to use a computer based environment to manipulate, and create, the inter-relationships of the components of a geographical system. The development of the specification for the modelling system follows an eleven-step methodology. This has been adapted and modified from the Research and Development Methodology. It includes a formative evaluation of the prototypes in classroom trials. The possible forms of representation of geographical ideas on the computer are considered. Procedural and declarative models are developed, as prototypes, on a range of software tools. The software tools used for the initial developments are the Dynamic Modelling System, spreadsheets and the language Prolog. The final prototype is developed in a Smalltalk environment. Consideration is also given to the use of both quantitative and qualitative methods of modelling. Model templates are identified which give an underlying structure to a range of geographical models. These templates allow the students to build new models for different geographical areas. Proposals are made for a staged approach which addresses the introduction and use of modelling in the geography classroom. These stages move from the use of simulation, through the modification of the underlying model, to the transfer of the model template to different areas and, finally, the building of new models.
APA, Harvard, Vancouver, ISO, and other styles
5

Barkley, Cynthia Vanderwilt. "Math lessons for Fontana High School software." CSUSB ScholarWorks, 1994. https://scholarworks.lib.csusb.edu/etd-project/935.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Akpinar, Yavuz. "Computer based interactive environments for learning school mathematics : the implementation and validation of design principles." Thesis, University of Leeds, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Deragisch, Patricia Amelia. "Electronic portfolio for mathematical problem solving in the elementary school." CSUSB ScholarWorks, 1997. https://scholarworks.lib.csusb.edu/etd-project/1299.

Full text
Abstract:
Electronic portfolio for mathematical problem solving in the elementary school is an authentic assessment tool for teachers and students to utilize in evaluating mathematical skills. It is a computer-based interactive software program to allow teachers to easily access student work in the problem solving area for assessment purposes, and to store multimedia work samples over time.
APA, Harvard, Vancouver, ISO, and other styles
8

Medina, Eliana C. "Motivating high school students and teachers to create interactive software : can summer workshops affect participants' interest for developing games and animations? /." Thesis, Connect to this title online; UW restricted, 2008. http://hdl.handle.net/1773/7731.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yum, Kim-hung, and 任劍熊. "Within the IEA Third International Mathematics and Science Study (TIMSS): the relationship between family background and mathematics achievement of Hong Kong students." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1996. http://hub.hku.hk/bib/B31959192.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gillispie, Lucas B. "Effects of a 3-D video game on middle school student achievement and attitude in mathematics." View electronic thesis, 2008. http://dl.uncw.edu/etd/2008-3/gillispiel/lucasgillispie.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Holifield, Steven Lee. "Mathematics, technology, and gender: Closing gender differences with a high school web site." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/1871.

Full text
Abstract:
This project focuses on using technology to help motivate young females to make use of a high school web site to lessen anxieties and increase interest in mathematics and the use of technology. Additionally, it acts as a model to create an educational web site that brings about better communication within a community.
APA, Harvard, Vancouver, ISO, and other styles
12

Mullins, Sherry Lynn. "Statistics: Raising the Bar for the Seventh Grade Classroom." Digital Commons @ East Tennessee State University, 2006. https://dc.etsu.edu/etd/2221.

Full text
Abstract:
After recognizing the need for a more thorough concentration of statistics at the seventh grade level, the author concluded that it would be a good idea to include statistics that cover both seventh and eighth grade Virginia Standards of Learning. Many years of administering the SOL mathematics test at the eighth grade level led the author to the understanding that some of the more advanced seventh graders would be missing some key concepts taught in eighth grade because those advanced students would be taking algebra in the eighth grade. In this thesis, the author has developed four units that she feels are appropriate for this level and will fill the gap.
APA, Harvard, Vancouver, ISO, and other styles
13

O'Prey, Evelyn A. "Effects of CAI on the achievement and attitudes of high school geometry students." CSUSB ScholarWorks, 1991. https://scholarworks.lib.csusb.edu/etd-project/735.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Coulombe, Steven Louis. "Using Blackboard technologies as an instructional supplement for teaching high school chemistry." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/1907.

Full text
Abstract:
This project attempts to use an on-line telecommunication supplement to extend the boundary of the classroom beyond the limits of time and space in order to improve communication and extend the reach of the classroom.
APA, Harvard, Vancouver, ISO, and other styles
15

Poole, Kimberly S. "An Investigation of the Dayton Regional STEM School Public-Private Partnerships." Thesis, Nova Southeastern University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3645218.

Full text
Abstract:

This dissertation study documents in-depth the exploration of the Public Private Partnerships (PPPs) between the Dayton Regional STEM School (DRSS) and their industry partners as well as the establishment of a framework for evaluating and assessing PPPs. The public-private partnership agreements were studied in order to answer the over-arching research question: How is an effective public-private partnership established, assessed, and evaluated in education? A descriptive case study methodology was used to study DRSS' public-private partnership agreements to determine if goals and objectives were established and whether or not the partnerships met those goals and objectives. This case study also included the development and testing of a proposed evaluation framework that will allow for consistent, systematic inquiry that can produce defensible assertions regarding the assessment and evaluation of public-private partnerships in education.

Results of the case study support the findings that utilization of an evaluation framework can serve to make public-private partnerships more successful. Results also indicated that establishment of goals and objectives enable effective evaluation for informal partnerships but could not be definitively stated for formal partnerships due to the lack of data points. The data from this case study revealed many emergent themes that should be considered in the development of future public-private partnerships. Overall this study contributes to the growing body of knowledge for public-private partnerships in education.

APA, Harvard, Vancouver, ISO, and other styles
16

Pickle, Maria Consuelo (suzie) Capiral. "Statistical Content in Middle Grades Mathematics Textbooks." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4203.

Full text
Abstract:
This study analyzed the treatment and scope of statistical concepts in four, widely-used, contemporary, middle grades mathematics textbook series: Glencoe Math Connects, Prentice Hall Mathematics, Connected Mathematics Project, and University of Chicago School Mathematics Project. There were three phases for the data analysis. Phase 1 addressed the location and sequence of the statistical concepts. Phase 2 focused upon an examination of the lesson narrative, its components and scope. Phase 3 analyzed the level of cognitive demand required of the students to complete the exercises, and the total number of exercises per statistical concept. These three phases taken together provided insight into students' potential opportunity to learn statistical topics found in middle grades mathematics textbooks. Results showed that concepts, such as measures of central tendency, were repeated in several grades while other topics such as circle graphs were presented earlier than the recommendations in documents such as the National Council of Teachers of Mathematics Principles and Standards (2000) and the Common Core State Standards (2010). Further results showed that most of the statistical content was found in a chapter near the end of the book that would likely not be covered should time run short. Also, each textbook had a particular lesson narrative style. Moreover, most of the statistical exercises required low level cognitive demand of the students to complete the exercises, potentially hindering the development of deep understanding of the concepts.
APA, Harvard, Vancouver, ISO, and other styles
17

Nagisetty, Vytas. "Using Music-Related Concepts to Teach High School Math." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1958.

Full text
Abstract:
The purpose of this research was to test a strategy which uses music-related concepts to teach math. A quasi-experimental study of two high school remedial geometry sections was conducted during a review lesson of ratio, proportion, and cross multiplication. A pretest was given to both groups. Then, Group A received normal textbook instruction while Group B received the treatment, Get the Math in Music, which is an online activity involving proportional reasoning in a music-related context. Afterwards, a posttest was given to both groups. Pretest and posttest scores were used to compare gains in subject knowledge between the groups. Then a second evaluation of the treatment was conducted. Group A received the treatment and took a post-posttest. Score gains for Group A before and after receiving the treatment were compared. After these tests, all participants took a survey to determine if their appreciation of math grew as a result of the treatment. Finally, interviews were conducted to provide better understanding of the results. The research questions of this study were: to what extent does the integration of Get the Math in Music improve students' academic performance in a remedial geometry review of ratio, proportion, and cross multiplication, and to what extent does participation in the Get the Math activity improve students' attitudes towards math? My hypotheses were that students would perform significantly better on a subject knowledge test after receiving the treatment, and that all students would have a more positive attitude towards math after receiving the treatment. Quantitative results did not triangulate to support or refute these hypotheses. Greater improvement from pretest to posttest was statistically correlated with Group B, which was the group first receiving the treatment. But later, between the posttest and post-posttest, Group A did not show significantly greater gains after receiving the treatment. Survey results showed that students did not necessarily like math any more after the treatment. Interviews revealed that several of these students were apathetic to geometry in particular, if not to math in general. The case of one student's improvement suggested that positive teacher-student relationships are more effective than any particular method to increase academic performance and student engagement. Survey results were consistent with earlier psychological studies claiming teenagers care about music. Additional studies in the future on the merits of using music to teach high school math would be useful. Claims that proportional reasoning is challenging were supported. It would be beneficial to evaluate the treatment in an Algebra or Pre-Algebra setting when students first study proportions.
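As a purely illustrative companion to the quantitative comparison described above, the sketch below shows one conventional way to compare pretest-to-posttest gains between two groups with a Welch t-test; the score arrays and group labels are invented assumptions, not the study's data.

```python
# Sketch: comparing knowledge gains (posttest - pretest) between two groups.
# The score arrays are invented for illustration; they are not the study's data.
import numpy as np
from scipy import stats

pre_a = np.array([4, 5, 3, 6, 4, 5])
post_a = np.array([5, 6, 4, 7, 5, 6])      # Group A: textbook instruction first
pre_b = np.array([4, 3, 5, 4, 6, 5])
post_b = np.array([7, 5, 8, 6, 8, 7])      # Group B: treatment first

gain_a = post_a - pre_a
gain_b = post_b - pre_b

t, p = stats.ttest_ind(gain_b, gain_a, equal_var=False)  # Welch's t-test on the gains
print(f"mean gain A = {gain_a.mean():.2f}, mean gain B = {gain_b.mean():.2f}, p = {p:.3f}")
```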
APA, Harvard, Vancouver, ISO, and other styles
18

Holland, Earl Joseph. "Using technology to reinforce the elementary science framework." CSUSB ScholarWorks, 1996. https://scholarworks.lib.csusb.edu/etd-project/1288.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Basham, Jennifer Elizabeth. "The Effects of an Overnight Environmental Science Education Program on Students' Attendance Rate Change for Middle School Years." PDXScholar, 2015. http://pdxscholar.library.pdx.edu/open_access_etds/2730.

Full text
Abstract:
Programs that engage middle school students in participatory, real-world, and hands-on field based instruction can be a powerful asset to students' educational experiences, motivating and inspiring some to appreciate and value school in a different way. Overnight environmental science programs have a unique opportunity to support students by creating experiences where students can participate in learning in vastly different ways from what they may engage with in the traditional 4-walled classroom, while concurrently developing a relationship with the natural world. Decreasing educational budgets and increased need to substantiate educational programs in terms of their impact on students have added pressure for overnight environmental science programs to validate their impact through quantitative means. Utilizing overnight environmental science education program attendance records and merging them with school district data relating to attendance, this study investigates the impact of one such overnight environmental science program on students' attendance rate change. Analyzing the secondary data using multiple linear regression modeling, researchers explored how the overnight environmental program impacted student attendance rate change and how it varied by demographic characteristics to understand if and how the program addresses school district and educational policy reform targets.
APA, Harvard, Vancouver, ISO, and other styles
20

Casey, Cheryl. "Computer-Based Instruction as a Form of Differentiated Instruction in a Traditional, Teacher-led, Low-Income, High School Biology Classroom." PDXScholar, 2018. https://pdxscholar.library.pdx.edu/open_access_etds/4437.

Full text
Abstract:
In 2015 the U.S. continues to struggle with academic achievement in public schools. Average test scores from 15-year-olds taking the Program for International Student Assessment placed the U.S. as 38th out of 71 countries (Drew Devlin, 2017). It is common to discuss elimination of the achievement gap as the single most effective way to improve the U.S.'s mediocre standing among the highest scoring countries in the world in primary and secondary student test scores (McGhee, 2004; Flemming, 2012). In the broadest sense of the term the "achievement gap" refers to the difference in academic success between different groups of students. It is often used to describe the lower performance of underprivileged student populations (National Education Association, 2004). In attempts to understand why this gap exists and how educators may narrow it, researchers have identified both large class size and lack of personalized instruction as two conditions that commonly accompany lower academic achieving student populations (Lee and Buxton, 2008). Although there is a wealth of literature attempting to assess the effect of class size, few studies have defined small and large class sizes. In her research, Sarah Leahy (2006) defines a small class as one containing between 13 and 17 students and a regular class as one containing between 22 and 25. For the purposes of this research, a large classroom is defined as one with over 25 students. In theory, computer-based instruction (CBI) offers great potential to expand on the concept of personalized instruction. However, there is very little research available that describes how this tool can be used to effectively enhance the classroom learning process. This study examines the impact of providing computer-based instruction (CBI) or teacher-led instruction on students of various achievement levels enrolled in a traditional, high school biology classroom. The high school in which this research was conducted is a Title One (low income) identified school. 111 students from four sections of freshman high school biology were randomly divided into two learning groups per section. Both groups in each section were taught one 50-minute lesson on cellular biology. One group received the lesson through CBI while the other received it through teacher-led instruction. The impact on learning was measured by the change in pre- and post-test scores. All students in each section received the same lesson content which was provided in the same classroom concurrently. Data from 82 students that returned signed parental consent forms and took the pre-test on day one, the lesson on day two, and the post-test on day three, were analyzed in this study. Results: The twenty students ranked as high academic achievers scored the highest number of correct answers on pre- and post-tests (mean 7.1 and 9.4 respectively). Improvement in test scores, measured as mean number of additional correct answers on the post-test, for the high achievers was equal whether they received CBI or teacher-led instruction (+1.72 and +1.75 respectively). Twenty-seven middle ranked academic achieving students also showed a statistically equal degree of improvement from each instructional platform. However, the middle-ranked students who scored the highest pre-test scores also produced the highest improvement from CBI. The thirty-five low academic achieving students produced the highest improvement in test scores overall from teacher-led instruction and produced a mean negative change in post-test scores from CBI (mean +2.13 and -0.68 respectively).
Findings from this study suggest that in a classroom setting, higher academic achieving students will learn equally well from CBI or from a teacher while lower achievers benefit more from small group, teacher-led instruction.
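To make the reported kind of breakdown concrete, here is a minimal sketch of tabulating mean pre-to-post score change by achievement level and instruction type; every value in the data frame is a fabricated placeholder, and the column names are assumptions for illustration only.

```python
# Sketch: mean pre-to-post score change by achievement level and instruction type.
# All rows are invented placeholders, not data from the study.
import pandas as pd

df = pd.DataFrame({
    "achievement": ["high", "high", "middle", "middle", "low", "low"],
    "instruction": ["CBI", "teacher", "CBI", "teacher", "CBI", "teacher"],
    "pretest":     [7, 7, 5, 5, 3, 3],
    "posttest":    [9, 9, 6, 6, 2, 5],
})
df["gain"] = df["posttest"] - df["pretest"]
print(df.groupby(["achievement", "instruction"])["gain"].mean())
```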
APA, Harvard, Vancouver, ISO, and other styles
21

Korr, Arlene. "Use of Specific Web-Based Simulations to Support Inquiry-Based High School Science Instruction." UNF Digital Commons, 2013. http://digitalcommons.unf.edu/etd/474.

Full text
Abstract:
The primary goal of this study was to acquire an understanding of those practices that encourage the sustained use of simulations in support of inquiry-based science instruction. With the rapid distribution of Internet-related technologies in the field of education, it is most important to understand the function of these innovations. Technology, specifically the implementation of simulations to support inquiry-based instruction, provides new educational strategies for science teachers. Technology also influences the field of education by repeatedly making some teachers' best practices obsolete. The qualitative research design was selected to explore the nature of science leaders' and teachers' consideration or lack of consideration to incorporate simulations into their inquiry-based instruction. The method for collecting the data for this study included in-depth, semi-structured interviews. The analysis of this interview data was conducted in two phases. Phase I focused on the consensus views of the participants regarding the implementation of simulations. In order to gain a more in-depth understanding of the interview data, Phase II focused on the subtle differences among the participants regarding their execution of this instructional tool. The overall conclusion of this study was that the use of simulations requires a multi-faceted approach to ensure sustainability. As noted, science leaders must continue to encourage the high, medium and low users of simulations to implement the ongoing use of these instructional tools. Also, science teachers must do their part to ensure the success of these programs. By addressing the primary and secondary research questions, five major conclusions were reached. These conclusions include (a) the use of web-based simulations can have a positive influence on inquiry-based science instruction, (b) technology challenges have influenced the teachers' use of simulations, (c) time influences the use of simulations, (d) ongoing professional development strategies support the sustained use of simulations, and (e) student engagement in inquiry-based science instruction is positively influenced by the use of simulations. This study concludes with suggestions for educational leaders and teachers along with further considerations for future research.
APA, Harvard, Vancouver, ISO, and other styles
22

White, Richard Neal. "A high school physics instructor's website: Design, implementation, and evaluation." CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2062.

Full text
Abstract:
In order to test the ability of the Internet to supplement classroom instruction, an instructor-authored WWW site, crashwhite.com, was developed for two Berkeley High courses: an Advanced Placement (AP) physics class and a college-prep physics class. The website was intended to supplement classroom instruction by making classroom materials available to students and parents outside the classroom, and to facilitate increased teacher-parent, teacher-student, and student-student communication.
APA, Harvard, Vancouver, ISO, and other styles
23

Hanna, George T. "Cubature rules from a generalized Taylor perspective." full-text, 2009. http://eprints.vu.edu.au/1922/1/hanna.pdf.

Full text
Abstract:
The accuracy and efficiency of computing multiple integrals is a very important problem that arises in many scientific, financial and engineering applications. The research conducted in this thesis is designed to build on past work and develop and analyze new numerical methods to evaluate double integrals efficiently. The fundamental aim is to develop and assess techniques for (numerically) evaluating double integrals with high accuracy. The general approach presented in this thesis involves the development of new multivariate approximations from a generalised Taylor perspective in terms of Appell type polynomials and the study of their use in multi-dimensional integration. The expectation is that the new methods will provide polynomial and polynomial-like approximations that can be used for application in a straightforward manner with better accuracy. That is, we aim to devise and investigate new multiple integration formulae as well as provide information on a priori error bounds. A further major contribution of the work builds on the research conducted in the field of Grüss type inequalities and leads to a new approximation of the one and two dimensional finite Fourier transform. The approximations are in terms of the complex exponential mean, with estimates of the error of approximation for different classes of functions of bounded variation defined on finite intervals. It is believed that this work will also have an impact in the area of numerical multidimensional integral evaluation for other integral operators.
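For orientation, the sketch below evaluates a double integral two ways, with an adaptive library routine and with a crude product trapezoidal cubature rule; the integrand is an arbitrary test function and neither rule is the Appell-polynomial construction developed in the thesis.

```python
# Sketch: evaluating a double integral with an adaptive routine and a simple product rule.
# The integrand is an arbitrary test function, not one from the thesis.
import numpy as np
from scipy import integrate

f = lambda y, x: np.exp(-x * y) * np.sin(x + y)   # dblquad integrates f(y, x)

ref, _ = integrate.dblquad(f, 0.0, 1.0, lambda x: 0.0, lambda x: 1.0)  # adaptive reference

# Product trapezoidal cubature on an n x n grid of [0, 1] x [0, 1].
n = 33
x = np.linspace(0.0, 1.0, n)
y = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, y, indexing="ij")
Z = np.exp(-X * Y) * np.sin(X + Y)
approx = integrate.trapezoid(integrate.trapezoid(Z, y, axis=1), x)

print(f"adaptive = {ref:.8f}, product trapezoid = {approx:.8f}, error = {abs(ref - approx):.2e}")
```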
APA, Harvard, Vancouver, ISO, and other styles
24

Zhen, Yongjian. "Improving students' math problem-solving skills in a computer-assisted learning environment." CSUSB ScholarWorks, 1999. https://scholarworks.lib.csusb.edu/etd-project/1797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Van, Raden Stephanie Justine. "The Effect of Role Models on the Attitudes and Career Choices of Female Students Enrolled in High School Science." PDXScholar, 2011. https://pdxscholar.library.pdx.edu/open_access_etds/370.

Full text
Abstract:
Girls who have high aptitude in math are not entering careers related to science, technology, engineering, and math (STEM fields) at the same rate as boys. As a result, female students may have fewer employment opportunities. This study explores one potential way to reduce the gap between male and female career aspirations and choices. Specifically, it looks at the impact of bringing women with careers in math- and science-related fields into high school classrooms as role models. The study uses surveys to measure pre- and post-visit perceptions of science and scientific work as well as students' short-term interest in math and science courses. In addition to these surveys, student comments were collected about the role model visits. While the overall study yielded little statistical significance, it also indicated that the role model visits had some impact on student perceptions and choices and raised questions that warrant further study.
APA, Harvard, Vancouver, ISO, and other styles
26

Sigears, Kimberly Ann. "The effectiveness of integrating technology into science eduaction (sic) compared to the traditional science classroom." CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2142.

Full text
Abstract:
The goal of this project is to assist the teacher in integrating technology into a seventh grade science classroom, with an emphasis on the human body systems. Through the integration of technology into science education, this project aided in enhancing the learning environment, while motivating students to become more active participants in their learning experience.
APA, Harvard, Vancouver, ISO, and other styles
27

Hanna, George T. "Cubature rules from a generalized Taylor perspective." Thesis, full-text, 2009. https://vuir.vu.edu.au/1922/.

Full text
Abstract:
The accuracy and efficiency of computing multiple integrals is a very important problem that arises in many scientific, financial and engineering applications. The research conducted in this thesis is designed to build on past work and develop and analyze new numerical methods to evaluate double integrals efficiently. The fundamental aim is to develop and assess techniques for (numerically) evaluating double integrals with high accuracy. The general approach presented in this thesis involves the development of new multivariate approximations from a generalised Taylor perspective in terms of Appell type polynomials and the study of their use in multi-dimensional integration. The expectation is that the new methods will provide polynomial and polynomial-like approximations that can be used for application in a straightforward manner with better accuracy. That is, we aim to devise and investigate new multiple integration formulae as well as provide information on a priori error bounds. A further major contribution of the work builds on the research conducted in the field of Grüss type inequalities and leads to a new approximation of the one and two dimensional finite Fourier transform. The approximations are in terms of the complex exponential mean, with estimates of the error of approximation for different classes of functions of bounded variation defined on finite intervals. It is believed that this work will also have an impact in the area of numerical multidimensional integral evaluation for other integral operators.
APA, Harvard, Vancouver, ISO, and other styles
28

Lopez-Fernandez, Pedro A. "Routing Protocol Performance Evaluation for Mobile Ad-hoc Networks." UNF Digital Commons, 2008. http://digitalcommons.unf.edu/etd/293.

Full text
Abstract:
Currently, MANETs are a very active area of research, due to their great potential to provide networking capabilities when it is not feasible to have a fixed infrastructure in place, or to provide a complement to the existing infrastructure. Routing in this kind of network is much more challenging than in conventional networks, due to its mobile nature and limited power and hardware resources. The most practical way to conduct routing studies of MANETs is by means of simulators such as GloMoSim. GloMoSim was utilized in this research to investigate various performance statistics and draw comparisons among different MANET routing protocols, namely AODV, LAR (augmenting DSR), FSR (also known as Fisheye), WRP, and Bellman-Ford (algorithm). The network application used was FTP, and the network traffic was generated with tcplib [Danzig91]. The performance statistics investigated were application bytes received, normalized application bytes received, routing control packets transmitted, and application byte delivery ratio. The scenarios tested consisted of an airborne application at a high (26.8 m/s) and a low speed (2.7 m/s) on a 2000 m x 2000 m domain for nodal values of 36, 49, 64, 81, and 100 nodes, and radio transmit power levels of 7.005, 8.589, and 10.527 dBm. Nodes were paired up in fixed client-server couples involving 10% and 25% of the nodes being clients and the same quantity being servers. AODV and LAR showed a significant margin of performance advantage over the remaining protocols in the scenarios tested.
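For readers unfamiliar with the metrics named above, the following sketch shows how an application byte delivery ratio and a per-node bytes-received figure could be computed from per-node counters; the counter values, field names, and the choice of per-node normalization are invented assumptions, not GloMoSim output.

```python
# Sketch: computing delivery-ratio style metrics from per-node simulation counters.
# The counter values and field names are invented; they are not GloMoSim output.
records = [
    {"node": 1, "app_bytes_sent": 120_000, "app_bytes_received": 113_500, "routing_packets": 842},
    {"node": 2, "app_bytes_sent": 95_000,  "app_bytes_received": 90_100,  "routing_packets": 761},
    {"node": 3, "app_bytes_sent": 110_000, "app_bytes_received": 99_800,  "routing_packets": 905},
]

sent = sum(r["app_bytes_sent"] for r in records)
received = sum(r["app_bytes_received"] for r in records)
overhead = sum(r["routing_packets"] for r in records)

delivery_ratio = received / sent              # application byte delivery ratio
bytes_per_node = received / len(records)      # one possible normalization

print(f"delivery ratio = {delivery_ratio:.3f}, "
      f"bytes/node = {bytes_per_node:.0f}, routing packets = {overhead}")
```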
APA, Harvard, Vancouver, ISO, and other styles
29

Silva, Jean Carlo da. "Prática colaborativa na formação de professores:a informática nas aulas de matemática no cotidiano da escola." Universidade Federal de Uberlândia, 2005. https://repositorio.ufu.br/handle/123456789/13979.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
In this investigation we found that the challenge is to find ways to prepare and involve teacher educators in collaborative practices, in order to produce and improve moments in which the students of the Mathematics Licentiate course socialize and build the teaching knowledge related to working with new technologies in the everyday life of schools. We address the question of new technologies in the initial education of teachers in the Mathematics course and reflect on their implications for the professional practice of these future teachers. Grounded in ideas structured by the history and culture of the research subjects, we analyse the historical-cultural relations that permeate the initial education of these professionals during the teaching practicum and the supervised curricular internship. When working with the computers, we sought to identify the knowledge that the trainees (future teachers) held and/or constructed about the didactic use of this informational tool. In this work we seek to understand how knowledge about new methodologies of teaching work in computer-equipped school environments is formed and constructed. Finally, the process of educating Mathematics teachers is influenced by several social, political, cultural and technological factors, and by the many kinds of teaching knowledge involved in using computers in Mathematics lessons, such as designing tasks, working with the software, and the relationship with the students and the other participants in the teaching and learning of Mathematics. This motivated us to broaden our analysis of the practice developed, aiming to understand how the teaching knowledge of future Mathematics teachers is constituted in a computer-based environment.
Master's in Education (Mestre em Educação)
APA, Harvard, Vancouver, ISO, and other styles
30

Morais, Anuar Daian de. "O desenvolvimento do raciocínio condicional a partir do uso de teste no squeak etoys." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/164383.

Full text
Abstract:
This thesis presents an investigation into the development of conditional reasoning, considered a key component of logical-deductive thinking, in children and adolescents who took part in a programming experience with the software Squeak Etoys. The development of conditional reasoning is classified into stages related to the composition and reversal of the transformations that operate on implication, culminating in the full reversibility that corresponds, in Piaget's theory, to the construction and mobilization of the INRC group of transformations (Identity, Negation, Reciprocal, Correlative). These stages are identified through interviews conducted according to Piaget's clinical method, using three programming challenges of increasing complexity whose solution involved the use of the logical operation of implication. The interviews were conducted with eight children, aged 10 to 16, attending the final years of elementary school at two public schools. Based on the data, the analysis reveals the importance of combinatorial thinking, which allows the adolescents to test systematically all the possibilities for ordering and including the suggested commands and to reach the appropriate logical conclusions, whereas the younger children do not achieve the same success. In addition, the thesis discusses the inclusion of the school in a digital culture from a constructivist perspective on the construction of knowledge. In this context, the methodology of learning projects is presented as appropriate, and the software Squeak Etoys emerges as an interesting possibility for developing projects and promoting the learning of mathematics. Finally, the work also discusses the importance of learning to program at school.
APA, Harvard, Vancouver, ISO, and other styles
31

Venkatesan, Gopalachary. "A Statistical Approach to Automatic Process Control (regulation schemes)." Thesis, 1997. https://vuir.vu.edu.au/308/.

Full text
Abstract:
Automatic process control (APC) techniques have been applied to process variables such as feed rate, temperature, pressure, viscosity, and to product quality variables as well. Conventional practices of engineering control use the potential for step changes to justify an integral term in the controller algorithm to give (long-run) compensation for a shift in the mean of a product quality variable. Application of techniques from the fields of time series analysis and stochastic control to tackle product quality control problems is also common. The focus of this thesis is on the issues of process delay ('dead time') and dynamics ('inertia'), which provide an opportunity to utilise technologies from both statistical process control (SPC) and APC. A presentation of the application of techniques from both SPC and APC is made in an approach to control the quality of a product (product variability) at the output. The thesis considers the issues of process control in situations where some form of feedback control is necessary and yet where stability in the feedback control loop cannot be easily attained. 'Disturbances' afflict a process control system and, together with issues of dynamics and dead time (time delay), compound the control problem. An explanation of proportional, integral and derivative (PID) controllers, time series controllers, minimum variance (mean square error) control and MMSE (minimum mean square error) controllers is given after a literature review of stochastic process control and 'dead-time compensation' methods. The dynamic relationship between (output) controlled and (input) manipulative variables is described by a second-order dynamic model (transfer function), as is the process dead time. The use of an ARIMA (0,1,1) stochastic time series model characterizes and forecasts the drifting behaviour of process disturbances. A feedback control algorithm is developed which minimizes the variance of the output controlled variable by making an adjustment at every sample point that exactly compensates for the forecasted disturbance. An expression is derived for the input control adjustment required that will exactly cancel the output deviation, by imposing feedback control stability conditions. The (dead-time) simulation of the stochastic feedback control algorithm and EWMA process control are critiqued. The feedback control algorithm is simulated to find the CESTDDVN (control error standard deviation) or control error sigma (product variability) and the adjustment frequency of the time series controller. An analysis of the time series controller performance results and discussion follow the simulation. Time series controller performance is discussed and an outline of a process regulation scheme given. The thesis enhances some of the methodologies that have been recently suggested in the literature on integrating SPC and APC and concludes with details of some suggestions for further research. Solutions to the problems of statistical process monitoring and feedback control adjustment connected with feedback (closed loop) stability, controller limitations and adequate compensation of dead time in achieving minimum variance control are found by the application of both process control techniques. By considering the dynamic behaviour of the process and by manipulating the inputs during non-stationary conditions, dynamic optimization is achieved. The IMA parameter, suggested as an on-line tuning parameter to compensate dead time, leads to adaptive (self-tuning) control. 
It is demonstrated that the performance of the time series controller is superior to that of the EWMA and CUSUM controllers and provides minimum variance control even in the face of dead time and dynamics. Some articles/papers have appeared in Technometrics, Volume 34, No. 3, 1992, in relation to statistical process monitoring and feedback adjustment (251-267), ASPC (286-297), and discourse given on integrating SPC and APC (268-285). By exploiting the time series controller's one-step ahead forecasting feature and considering closed-loop (feedback) stability and dead-time compensation, this thesis adds further to these contributions.
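As a hedged sketch of the kind of scheme discussed, the code below simulates an ARIMA(0,1,1) (IMA(1,1)) disturbance and applies the standard minimum-variance feedback rule in which an EWMA forecast of the disturbance is cancelled at each sample point; the parameter values, unit process gain, and the absence of dead time are simplifying assumptions, so this is not the thesis's dead-time-compensating controller.

```python
# Sketch: minimum-variance feedback adjustment against an ARIMA(0,1,1) disturbance.
# Simplifying assumptions: unit process gain, no dead time, known theta.
import numpy as np

rng = np.random.default_rng(0)
n, theta, sigma = 2000, 0.6, 1.0
lam = 1.0 - theta                          # EWMA weight implied by the IMA(1,1) model

a = rng.normal(0.0, sigma, n)              # random shocks
d = np.zeros(n)                            # disturbance: d_t = d_{t-1} + a_t - theta*a_{t-1}
for t in range(1, n):
    d[t] = d[t - 1] + a[t] - theta * a[t - 1]

u = 0.0                                    # cumulative control action currently applied
e = np.zeros(n)                            # observed output deviation from target
for t in range(n):
    e[t] = d[t] + u                        # controlled output under the current adjustment
    u += -lam * e[t]                       # integral (EWMA-forecast-cancelling) adjustment

print(f"uncontrolled sigma = {d.std():.2f}, controlled sigma = {e.std():.2f}, "
      f"shock sigma = {sigma:.2f}")
```

Under these simplifying assumptions the controlled output variance approaches the shock variance, which is the minimum-variance benchmark referred to in the abstract.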
APA, Harvard, Vancouver, ISO, and other styles
32

Venkatesan, Gopalachary. "A Statistical Approach to Automatic Process Control (regulation schemes)." 1997. http://eprints.vu.edu.au/308/1/308contents.pdf.

Full text
Abstract:
Automatic process control (APC) techniques have been applied to process variables such as feed rate, temperature, pressure, viscosity, and to product quality variables as well. Conventional practices of engineering control use the potential for step changes to justify an integral term in the controller algorithm to give (long-run) compensation for a shift in the mean of a product quality variable. Application of techniques from the fields of time series analysis and stochastic control to tackle product quality control problems is also common. The focus of this thesis is on the issues of process delay ('dead time') and dynamics ('inertia') which provides opportunity to utilise technologies from both statistical process control (SPC) and APC. A presentation of the application of techniques from both SPC and APC is made in an approach to control the quality of a product (product variability) at the output. The thesis considers the issues of process control in situations where some form of feedback control is necessary and yet where stability in the feedback control loop cannot be easily attained. 'Disturbances' afflict a process control system which together with issues of dynamics and dead time (time delay), compound the control problem. An explanation of proportional, integral and derivative (PID) controllers, time series controllers, minimum variance (mean square error) control and MMSE (minimum mean square error) controllers is given after a literature review of stochastic process control and 'dead-time compensation' methods. The dynamic relationship between (output) controlled and (input) manipulative variables is described by a second-order dynamic model (transfer function) as also is the process dead time. The use of an ARIMA (0,l,l) stochastic time series model characterizes and forecasts the drifting behaviour of process disturbances. A feedback control algorithm is developed which minimizes the variance of the output controlled variable by making an adjustment at every sample point that exactly compensates for the forecasted disturbance. An expression is derived for the input control adjustment required that will exactly cancel the output deviation by imposing feed back control stability conditions. The (dead-time) simulation of the stochastic feedback control algorithm and EWMA process control are critiqued. The feedback control algorithm is simulated to find the CESTDDVN (control error standard deviation) or control error sigma (product variability) and the adjustment frequency of the time series controller. An analysis of the time series controller performance results and discussion follow the simulation. Time series controller performance is discussed and an outline of a process regulation scheme given. The thesis enhances some of the methodologies that have been recently suggested in the literature on integrating SPC and APC and concludes with details of some suggestions for further research. Solutions to the problems of statistical process monitoring and feedback control adjustment connected with feedback (closed loop) stability, controller limitations and adequate compensation of dead time in achieving minimum variance control. are found by the application of both process control techniques. By considering the dynamic behaviour of the process and by manipulating the inputs during non-stationary conditions, dynamic optimization is achieved. The IMA parameter, suggested as an on-line tuning parameter to compensate dead time, leads to adaptive (self-tuning) control. 
It is demonstrated that the performance of the time series controller is superior to that of the EWMA and CUSUM controllers and provides minimum variance control even in the face of dead time and dynamics. Some articles/papers have appeared in Technometrics, Volume 34, No.3, 1992, in relation to statistical process monitoring and feedback adjustment (25l-267), ASPC (286-297), and discourse given on integrating SPC and APC (268-285). By exploiting the time series controller's one-step ahead forecasting feature and considering closed-loop (feedback) stability and dead-time compensation, this thesis adds further to these contributions.
APA, Harvard, Vancouver, ISO, and other styles
33

Leggett, Nicholas R. "Computer use and achievement in high school mathematics and science." 1995. http://catalog.hathitrust.org/api/volumes/oclc/33409478.html.

Full text
Abstract:
Thesis (M.A.)--University of Wisconsin--Madison, 1995.
Typescript. Description based on print version record. Includes bibliographical references (leaves 35-37).
APA, Harvard, Vancouver, ISO, and other styles
34

Gebrekal, Zeslassie Melake. "The influence of the use of computers in the teaching and learning of functions in school mathematics." Diss., 2007. http://hdl.handle.net/10500/2084.

Full text
Abstract:
The aim of the study was to investigate what influence the use of computers, specifically MS Excel and RJS Graph software, has on grade 11 Eritrean students' understanding of functions in the learning of mathematics. An empirical investigation using quantitative and qualitative research methods was carried out. A pre-test (task 1) and a post-test (task 2), a questionnaire and an interview schedule were used to collect data. Two randomly selected sample groups (i.e. experimental and control groups) of students were involved in the study. The experimental group learned the concepts of functions, particularly quadratic functions, using computers. The control group learned the same concepts through the traditional paper-pencil method. The results indicated that the use of computers has a positive impact on students' understanding of functions as reflected in their achievement, problem-solving skills, motivation, attitude and the classroom environment.
Educational Studies
M. Ed. (Math Education)
APA, Harvard, Vancouver, ISO, and other styles
35

Moore, Robert. "Computer recognition of musical instruments : an examination of within class classification." 2007. http://eprints.vu.edu.au/1574/1/RobMoore_PhD_thesis.pdf.

Full text
Abstract:
This dissertation records the development of a process that enables within class classification of musical instruments. That is, a process that identifies a particular instrument of a given type - in this study four guitars and five violins. In recent years there have been numerous studies where between class classification has been attempted, but there have been no attempts at within class classification. Since timbre is the quality/quantity that enables one musical sound to be differentiated from another, before any classification can take place, a means to measure and describe it in physical terms needs to be devised. Towards this end, a review of musical timbre is presented which includes research into musical timbre from the work of Helmholtz through to the present. It also includes related work in speech recognition and musical instrument synthesis. The representation of timbre used in this study is influenced by the work of Hourdin and Charbonneau, who used an adaptation of multi-dimensional scaling, based on frequency analysis over time, to represent the evolution of each musical tone. A trajectory path, a plot of frequencies over time for each tone, was used to represent the evolution of each tone. This is achieved by taking a sequence of samples from the initial waveform and applying the discrete Fourier transform (DFT) or the constant Q transform (CQT) to achieve a frequency analysis of each data window. The classification technique used is based on statistical distance methods. Two sets of data were recorded for each of the guitars and violins in the study across the pitch range of each instrument type. In the classification trials, one set of data was used as reference tones, and the other set as test tones. To measure the similarity of timbre for a pair of tones, the closeness of the two trajectory paths was measured. This was achieved by summing the squared distances between corresponding points along the trajectory paths. With four guitars, a 97% correct classification rate was achieved for tones of the same pitch (fundamental frequency), and for five violins, a 94% correct classification rate was achieved for tones of the same pitch. The robustness of the classification system was tested by comparing a smaller portion of the whole tone, by comparing tones of differing pitch, and with a number of other variations. It was found that classification of both guitars and violins was highly sensitive to pitch. The classification rate fell away markedly when tones of different pitch were compared. Further investigation was done to examine the timbre of each instrument across the range of the instrument. This confirmed that the timbres of the guitar and violin are highly frequency dependent and suggested the presence of formants, that is, certain fixed frequencies that are boosted when the tone contains harmonics at or near those frequencies.
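The distance computation described, a frequency-analysis trajectory per tone compared by summing squared point-wise distances and then choosing the nearest reference tone, can be sketched roughly as follows; the frame and hop sizes, the use of a plain short-time DFT rather than the constant Q transform, and the random stand-in "tones" are all assumptions made for illustration.

```python
# Sketch: compare two tones by the summed squared distance between their
# short-time spectral trajectories, then classify by nearest reference tone.
# Frame/hop sizes and the random signals are illustrative assumptions.
import numpy as np

def trajectory(signal, frame=1024, hop=512):
    """Magnitude-spectrum trajectory: one spectral frame per time step."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame, hop)]
    return np.array([np.abs(np.fft.rfft(f)) for f in frames])

def trajectory_distance(sig_a, sig_b):
    ta, tb = trajectory(sig_a), trajectory(sig_b)
    m = min(len(ta), len(tb))                  # compare over the common number of frames
    return float(np.sum((ta[:m] - tb[:m]) ** 2))

def classify(test_tone, reference_tones):
    """Return the label of the reference tone with the smallest trajectory distance."""
    return min(reference_tones,
               key=lambda label: trajectory_distance(test_tone, reference_tones[label]))

# Toy usage with random signals standing in for recorded guitar tones.
rng = np.random.default_rng(1)
refs = {"guitar_1": rng.normal(size=22050), "guitar_2": rng.normal(size=22050)}
test = refs["guitar_1"] + 0.05 * rng.normal(size=22050)
print(classify(test, refs))                    # expected: guitar_1
```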
APA, Harvard, Vancouver, ISO, and other styles
36

Kaspi, Samuel. "Transaction Models and Algorithms for Improved Transaction Throughput." 2002. http://eprints.vu.edu.au/221/1/02whole.pdf.

Full text
Abstract:
Currently, e-commerce is in its infancy; however, its expansion is expected to be exponential and, as it grows, so too will the demands for very fast real time online transaction processing systems. One avenue for meeting the demand for increased transaction processing speed is conversion from disk-based to in-memory databases. However, while in-memory systems are very promising, there are many organizations whose data is too large to fit in in-memory systems or who are not willing to undertake the investment that an implementation of an in-memory system requires. For these organizations an improvement in the performance of disk-based systems is required. Accordingly, in this thesis, we introduce two mechanisms that substantially improve the performance of disk-based systems. The first mechanism, which we call a contention-based scheduler, is attached to a standard 2PL system. This scheduler determines each transaction's probability of conflict before it begins executing. Using this knowledge, the contention-based scheduler allows transactions into the system in both optimal numbers and an optimal mix. We present tests that show that the contention-based scheduler substantially outperforms standard 2PL concurrency control in a wide variety of disk-based hardware configurations. The improvement, though most pronounced in the throughput of low contention transactions, extends to all transaction types over an extended processing period. We call the second mechanism that we develop to improve the performance of disk-based database systems enhanced memory access (EMA). The purpose of EMA is to allow very high levels of concurrency in the pre-fetching of data, thus bringing the performance of disk-based systems close to that achieved by in-memory systems. The basis of our proposal for EMA is to ensure that even when conditions satisfying a transaction's predicate change between pre-fetch time and execution time, the data required for satisfying transactions' predicates are still found in memory. We present tests that show that the implementation of EMA allows the performance of disk-based systems to approach the performance achieved by in-memory systems. Further, the tests show that the performance of EMA is very robust to the imposition of additional costs associated with its implementation.
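A rough sketch of the admission idea described, estimating a transaction's conflict probability from the overlap between the items it will access and the items currently locked, and admitting it only when both the running count and the estimated conflict probability stay within limits, is given below; the probability model, thresholds, and data structures are illustrative assumptions and not the thesis's scheduler.

```python
# Sketch: admit a transaction only if its estimated probability of lock conflict
# with already-running transactions is low and the running mix is not saturated.
# The probability model and thresholds are illustrative assumptions.

MAX_RUNNING = 20          # cap on concurrently admitted transactions
MAX_CONFLICT_PROB = 0.25  # cap on estimated per-transaction conflict probability

running_locks: set[str] = set()   # items locked by currently running transactions
running_count = 0

def conflict_probability(read_set: set[str], write_set: set[str]) -> float:
    """Estimate P(conflict) as the fraction of requested items already locked,
    counting writes double because they conflict with both reads and writes."""
    requested = len(read_set) + 2 * len(write_set)
    if requested == 0:
        return 0.0
    overlapping = len(read_set & running_locks) + 2 * len(write_set & running_locks)
    return overlapping / requested

def try_admit(read_set: set[str], write_set: set[str]) -> bool:
    global running_count
    if running_count >= MAX_RUNNING:
        return False
    if conflict_probability(read_set, write_set) > MAX_CONFLICT_PROB:
        return False
    running_locks.update(read_set | write_set)
    running_count += 1
    return True

print(try_admit({"acct:17"}, {"acct:42"}))   # True: nothing is locked yet
print(try_admit({"acct:42"}, set()))         # False: requested item already locked
```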
APA, Harvard, Vancouver, ISO, and other styles
37

Moore, Robert. "Computer recognition of musical instruments : an examination of within class classification." Thesis, 2007. https://vuir.vu.edu.au/1574/.

Full text
Abstract:
This dissertation records the development of a process that enables within class classification of musical instruments, that is, a process that identifies a particular instrument of a given type - in this study, four guitars and five violins. In recent years there have been numerous studies in which between class classification has been attempted, but there have been no attempts at within class classification. Since timbre is the quality/quantity that enables one musical sound to be differentiated from another, before any classification can take place, a means to measure and describe it in physical terms needs to be devised. Towards this end, a review of musical timbre is presented which includes research into musical timbre from the work of Helmholtz through to the present. It also includes related work in speech recognition and musical instrument synthesis. The representation of timbre used in this study is influenced by the work of Hourdin and Charbonneau, who used an adaptation of multi-dimensional scaling, based on frequency analysis over time, to represent the evolution of each musical tone. A trajectory path, a plot of frequencies over time for each tone, was used to represent the evolution of each tone. This is achieved by taking a sequence of samples from the initial waveform and applying the discrete Fourier transform (DFT) or the constant Q transform (CQT) to obtain a frequency analysis of each data window. The classification technique used is based on statistical distance methods. Two sets of data were recorded for each of the guitars and violins in the study across the pitch range of each instrument type. In the classification trials, one set of data was used as reference tones and the other set as test tones. To measure the similarity of timbre for a pair of tones, the closeness of the two trajectory paths was measured. This was achieved by summing the squared distances between corresponding points along the trajectory paths. With four guitars, a 97% correct classification rate was achieved for tones of the same pitch (fundamental frequency), and for five violins, a 94% correct classification rate was achieved for tones of the same pitch. The robustness of the classification system was tested by comparing a smaller portion of the whole tone, by comparing tones of differing pitch, and through a number of other variations. It was found that classification of both guitars and violins was highly sensitive to pitch: the classification rate fell away markedly when tones of different pitch were compared. Further investigation examined the timbre of each instrument across its range. This confirmed that the timbres of the guitar and violin are highly frequency dependent and suggested the presence of formants, that is, certain fixed frequencies that are boosted when the tone contains harmonics at or near those frequencies.
APA, Harvard, Vancouver, ISO, and other styles
38

Cirstea, Florica-Corina. "Nonlinear Methods in the Study of Singular Partial Differential Equations." 2005. http://eprints.vu.edu.au/310/1/Cirstea_FloricaCorina.pdf.

Full text
Abstract:
Nonlinear singular partial differential equations arise naturally when studying models from such areas as Riemannian geometry, applied probability, mathematical physics and biology. The purpose of this thesis is to develop analytical methods to investigate a large class of nonlinear elliptic PDEs underlying models from the physical and biological sciences. These methods advance the knowledge of qualitative properties of the solutions to equations of the form Δu = f(x,u) in Ω, where Ω is a smooth domain in R^N (bounded or possibly unbounded) with compact (possibly empty) boundary ∂Ω. A non-negative solution of the above equation subject to the singular boundary condition u(x) → ∞ as dist(x,∂Ω) → 0 (if Ω ≠ R^N), or u(x) → ∞ as |x| → ∞ (if Ω = R^N), is called a blow-up or large solution; in the latter case the solution is called an entire large solution. Issues such as existence, uniqueness and asymptotic behavior of blow-up solutions are the main questions addressed and resolved in this dissertation. The study of similar equations with homogeneous Dirichlet boundary conditions, along with that of ODEs, supplies basic tools for the theory of blow-up. The treatment is based on devices used in Nonlinear Analysis such as the maximum principle and the method of sub- and super-solutions, which is one of the main tools for finding solutions to boundary value problems. The existence of blow-up solutions is examined not only for semilinear elliptic equations, but also for systems of elliptic equations in R^N and for singular mixed boundary value problems. Such a study is motivated by applications in various fields and stimulated by very recent trends in research at the international level. The influence of the nonlinear term f(x,u) on the uniqueness and asymptotics of the blow-up solution is very delicate and still eludes researchers, despite a very extensive literature on the subject. This challenge is met in a general setting capable of modelling competition near the boundary (that is, 0 · ∞ near ∂Ω), which is very suitable for applications in population dynamics. As a special feature, we develop innovative methods linking, for the first time, the topic of blow-up in PDEs with regular variation theory (or Karamata's theory) arising in applied probability. This interplay between PDEs and probability theory plays a crucial role in proving the uniqueness of the blow-up solution in a setting that removes previous restrictions imposed in the literature. Moreover, we unveil the intricate pattern of the blow-up solution near the boundary by establishing the two-term asymptotic expansion of the solution and its variation speed (in terms of Karamata's theory). The study of singular phenomena is significant because computer modelling is usually inefficient in the presence of singularities or fast oscillation of functions. Using the asymptotic methods developed by this thesis one can find the appropriate functions modelling the singular phenomenon. The research outcomes prove to be of significance through their potential applications in population dynamics, Riemannian geometry and mathematical physics.
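For readability, the boundary blow-up problem described in this abstract can be written out as follows (notation inferred from the abstract, not copied from the thesis):

```latex
\[
\begin{cases}
\Delta u = f(x,u), & x \in \Omega \subseteq \mathbb{R}^N, \quad u \ge 0,\\[4pt]
u(x) \to \infty & \text{as } \operatorname{dist}(x,\partial\Omega) \to 0 \quad (\text{if } \Omega \neq \mathbb{R}^N),\\[4pt]
u(x) \to \infty & \text{as } |x| \to \infty \quad (\text{if } \Omega = \mathbb{R}^N).
\end{cases}
\]
```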
APA, Harvard, Vancouver, ISO, and other styles
39

Azzam, Ibrahim Ahmed Aref. "Implicit Concept-based Image Indexing and Retrieval for Visual Information Systems." 2006. http://eprints.vu.edu.au/479/1/Implicit_Concept-based_Image.pdf.

Full text
Abstract:
This thesis focuses on Implicit Concept-based Image Indexing and Retrieval (ICIIR), and the development of a novel method for the indexing and retrieval of images. Image indexing and retrieval using a concept-based approach involves extraction, modelling and indexing of image content information. Computer vision offers a variety of techniques for searching images in large collections. We propose a method, which involves the development of techniques to enable components of an image to be categorised on the basis of their relative importance within the image in combination with filtered representations. Our method concentrates on matching subparts of images, defined in a variety of ways, in order to find particular objects. The storage of images involves an implicit, rather than an explicit, indexing scheme. Retrieval of images will then be achieved by application of an algorithm based on this categorisation, which will allow relevant images to be identified and retrieved accurately and efficiently. We focus on Implicit Concept-based Image Indexing and Retrieval, using fuzzy expert systems, density measure, supporting factors, weights and other attributes of image components to identify and retrieve images.
APA, Harvard, Vancouver, ISO, and other styles
40

Danda, Murali Krishna Reddy. "Design and analysis of hash functions." 2007. http://eprints.vu.edu.au/1514/1/Danda.pdf.

Full text
Abstract:
A function that compresses an arbitrarily large message into a fixed small size ‘message digest’ is known as a hash function. For the last two decades, many types of hash functions have been defined, but the most widely used in cryptographic applications currently are hash functions based on block ciphers and dedicated hash functions. Almost all of the dedicated hash functions are generated using the Merkle-Damgård construction, which was developed independently by Merkle and Damgård in 1989 [6, 7]. A hash function is said to be broken if an attacker is able to show that the design of the hash function violates at least one of its claimed security properties. There are various types of attack strategies against hash functions, such as attacks based on block ciphers, attacks depending on the algorithm, attacks independent of the algorithm, attacks based on signature schemes, and high-level attacks. Besides this, in recent years, many structural weaknesses have been found in the Merkle-Damgård construction [51-54], which indirectly affect the hash functions developed on the basis of this construction. MD5, SHA-0 and SHA-1 are currently the most widely deployed hash functions. However, they were all broken by Wang using a differential collision attack in 2004 [55-60], which increased the urgency of replacement for these widely used hash functions. Since then, many replacements and modifications have been proposed for the existing hash functions. The first alternative proposed is the replacement of the affected hash functions with the SHA-2 group of hash functions. This thesis presents a survey of different types of hash functions, different types of attacks on hash functions, and structural weaknesses of hash functions. In addition, a new type of classification based on the number of inputs to the hash function and on the streamability or non-streamability of the design is presented. This classification includes an explanation of the working process of the existing hash functions and their security analysis. Also, a comparison of the Merkle-Damgård construction with its related constructions is presented. Moreover, three major methods of strengthening hash functions so as to avoid the recent threats on hash functions are presented. The three methods dealt with are: 1) generating a collision-resistant hash function using a new message preprocessing method called reverse interleaving; 2) enhancement of hash functions such as MD5 and SHA-1 using a different message expansion coding; and 3) a proposal of a new hash function called 3-branch. The first two methods can be considered as modifications and the third method can be seen as a replacement for the existing hash functions affected by recent differential collision attacks. The security analysis of each proposal is also presented against the known generic attacks, along with some applications of the dedicated hash function.
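A toy sketch of the Merkle-Damgård construction referred to throughout this abstract: pad the message with its bit length (Merkle-Damgård strengthening), then iterate a compression function over the fixed-size blocks. The compression function below is purely illustrative; a real hash function would use a cryptographic one such as the SHA-2 compression function.

```python
import struct

BLOCK_SIZE = 64  # bytes per message block

def toy_compress(state: int, block: bytes) -> int:
    """Illustrative compression function: mixes a 64-bit state with one block.
    NOT cryptographically secure; it only shows where the compression step sits."""
    for i in range(0, BLOCK_SIZE, 8):
        (word,) = struct.unpack(">Q", block[i:i + 8])
        state = ((state ^ word) * 0x100000001B3) & 0xFFFFFFFFFFFFFFFF
    return state

def md_pad(message: bytes) -> bytes:
    """Merkle-Damgård strengthening: append 0x80, zero-pad, then the original
    bit length, so the padded message is a whole number of blocks."""
    bit_len = 8 * len(message)
    padded = message + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % BLOCK_SIZE)
    return padded + struct.pack(">Q", bit_len)

def md_hash(message: bytes, iv: int = 0xCBF29CE484222325) -> int:
    """Iterate the compression function over the padded blocks (the MD construction)."""
    state = iv
    padded = md_pad(message)
    for i in range(0, len(padded), BLOCK_SIZE):
        state = toy_compress(state, padded[i:i + BLOCK_SIZE])
    return state

print(hex(md_hash(b"abc")))
```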
APA, Harvard, Vancouver, ISO, and other styles
41

Danda, Murali Krishna Reddy. "Design and analysis of hash functions." Thesis, 2007. https://vuir.vu.edu.au/1514/.

Full text
Abstract:
A function that compresses an arbitrarily large message into a fixed small size ‘message digest’ is known as a hash function. For the last two decades, many types of hash functions have been defined, but the most widely used in cryptographic applications currently are hash functions based on block ciphers and dedicated hash functions. Almost all of the dedicated hash functions are generated using the Merkle-Damgård construction, which was developed independently by Merkle and Damgård in 1989 [6, 7]. A hash function is said to be broken if an attacker is able to show that the design of the hash function violates at least one of its claimed security properties. There are various types of attack strategies against hash functions, such as attacks based on block ciphers, attacks depending on the algorithm, attacks independent of the algorithm, attacks based on signature schemes, and high-level attacks. Besides this, in recent years, many structural weaknesses have been found in the Merkle-Damgård construction [51-54], which indirectly affect the hash functions developed on the basis of this construction. MD5, SHA-0 and SHA-1 are currently the most widely deployed hash functions. However, they were all broken by Wang using a differential collision attack in 2004 [55-60], which increased the urgency of replacement for these widely used hash functions. Since then, many replacements and modifications have been proposed for the existing hash functions. The first alternative proposed is the replacement of the affected hash functions with the SHA-2 group of hash functions. This thesis presents a survey of different types of hash functions, different types of attacks on hash functions, and structural weaknesses of hash functions. In addition, a new type of classification based on the number of inputs to the hash function and on the streamability or non-streamability of the design is presented. This classification includes an explanation of the working process of the existing hash functions and their security analysis. Also, a comparison of the Merkle-Damgård construction with its related constructions is presented. Moreover, three major methods of strengthening hash functions so as to avoid the recent threats on hash functions are presented. The three methods dealt with are: 1) generating a collision-resistant hash function using a new message preprocessing method called reverse interleaving; 2) enhancement of hash functions such as MD5 and SHA-1 using a different message expansion coding; and 3) a proposal of a new hash function called 3-branch. The first two methods can be considered as modifications and the third method can be seen as a replacement for the existing hash functions affected by recent differential collision attacks. The security analysis of each proposal is also presented against the known generic attacks, along with some applications of the dedicated hash function.
APA, Harvard, Vancouver, ISO, and other styles
42

Khomo, Thabo Garth. "The use of information and communication technology by mathematics and physical science teachers at secondary schools." Diss., 2018. http://hdl.handle.net/10500/25000.

Full text
Abstract:
Information and communication technology (ICT) advances have dramatically changed teaching and learning processes. This study investigates the use of ICT in teaching and learning, with the objective of establishing whether teachers are utilising the skills acquired through the Sci-Bono Discovery Centre training. The study sample comprised 30 secondary school teachers who were trained in 2012 and who were teaching mathematics and/or physical science. The participating teachers were from schools that fell within the Johannesburg North and Johannesburg East regions of the Gauteng Department of Education (GDE). An overall understanding of the reviewed literature on the use of ICT in teaching and learning contributed to the preparation of the research survey questionnaire and interview questions. A survey research design using a multi-methods approach allowed for both questionnaires and interviews. The questionnaires were analysed using a simple descriptive data analysis technique. The interviews were conducted with 12 of the initial 30 participants over a period of two weeks in a one-on-one setting. The recorded interviews were transcribed and analysed using a thematic content analysis technique. The results of both the quantitative and qualitative analyses are presented using charts and tables. The research findings identified issues such as the need for teachers to maintain a positive attitude towards the use of ICT in teaching, and for schools to create a teaching environment conducive to effective use of ICT in the classroom, including the availability of computer resources. The study provides recommendations, including the provision of ICT coordinators at schools and the provision of an ongoing teacher ICT training programme.
School of Computing
M. Tech. (Information Technology)
APA, Harvard, Vancouver, ISO, and other styles
43

Miliszewska, Iwona. "A Multidimensional Model for Transnational Computing Education Programs." 2006. http://eprints.vu.edu.au/579/1/Template.pdf.

Full text
Abstract:
As transnational education is becoming firmly embedded as a part of the distance education landscape, governments and universities are calling for meaningful research on transnational education. This study involved the development and validation of a model for effective transnational education programs. The study used student experience as a key indicator of program effectiveness and, following a holistic approach, took into consideration various dimensions of the transnational education context including student, instructor, curriculum and instruction design, interaction, evaluation and assessment, technology, and program management and organisational support. This selection of dimensions, together with their attributes, formed the proposed model for transnational education programs. The model was applied for validation against three transnational computing education programs currently offered by Australian universities in Hong Kong. Two methods of data collection - a survey, and group interviews with students - were used to validate the model; data was obtained from approximately three hundred subjects. The model was evaluated in terms of the perceived importance, to the students, of the various attributes of each program dimension on program effectiveness. The results of the validation indicated that the students in all the programs participating in the evaluation were in agreement as to the factors they consider most important to the effectiveness of transnational programs. The validation of the model led to its refinement; first, the least important attributes were removed from dimensions; second, a new dimension, pre-enrolment considerations, was introduced to the model; and finally, the attributes within each of the dimensions were ordered in terms of their perceived importance.
APA, Harvard, Vancouver, ISO, and other styles
44

Kaspi, Samuel. "Transaction Models and Algorithms for Improved Transaction Throughput." Thesis, 2002. https://vuir.vu.edu.au/221/.

Full text
Abstract:
Currently, e-commerce is in its infancy; however, its expansion is expected to be exponential, and as it grows, so too will the demand for very fast real-time online transaction processing systems. One avenue for meeting the demand for increased transaction processing speed is conversion from disk-based to in-memory databases. However, while in-memory systems are very promising, there are many organizations whose data is too large to fit in in-memory systems or who are not willing to undertake the investment that an implementation of an in-memory system requires. For these organizations an improvement in the performance of disk-based systems is required. Accordingly, in this thesis, we introduce two mechanisms that substantially improve the performance of disk-based systems. The first mechanism, which we call a contention-based scheduler, is attached to a standard 2PL system. This scheduler determines each transaction's probability of conflict before it begins executing. Using this knowledge, the contention-based scheduler allows transactions into the system in both optimal numbers and an optimal mix. We present tests that show that the contention-based scheduler substantially outperforms standard 2PL concurrency control in a wide variety of disk-based hardware configurations. The improvement, though most pronounced in the throughput of low-contention transactions, extends to all transaction types over an extended processing period. The second mechanism that we develop to improve the performance of disk-based database systems is called enhanced memory access (EMA). The purpose of EMA is to allow very high levels of concurrency in the pre-fetching of data, thus bringing the performance of disk-based systems close to that achieved by in-memory systems. The basis of our proposal for EMA is to ensure that even when conditions satisfying a transaction's predicate change between pre-fetch time and execution time, the data required for satisfying transactions' predicates are still found in memory. We present tests that show that the implementation of EMA allows the performance of disk-based systems to approach the performance achieved by in-memory systems. Further, the tests show that the performance of EMA is very robust to the imposition of additional costs associated with its implementation.
APA, Harvard, Vancouver, ISO, and other styles
45

Miliszewska, Iwona. "A Multidimensional Model for Transnational Computing Education Programs." Thesis, 2006. https://vuir.vu.edu.au/579/.

Full text
Abstract:
As transnational education is becoming firmly embedded as a part of the distance education landscape, governments and universities are calling for meaningful research on transnational education. This study involved the development and validation of a model for effective transnational education programs. The study used student experience as a key indicator of program effectiveness and, following a holistic approach, took into consideration various dimensions of the transnational education context including student, instructor, curriculum and instruction design, interaction, evaluation and assessment, technology, and program management and organisational support. This selection of dimensions, together with their attributes, formed the proposed model for transnational education programs. The model was applied for validation against three transnational computing education programs currently offered by Australian universities in Hong Kong. Two methods of data collection - a survey, and group interviews with students - were used to validate the model; data was obtained from approximately three hundred subjects. The model was evaluated in terms of the perceived importance, to the students, of the various attributes of each program dimension on program effectiveness. The results of the validation indicated that the students in all the programs participating in the evaluation were in agreement as to the factors they consider most important to the effectiveness of transnational programs. The validation of the model led to its refinement; first, the least important attributes were removed from dimensions; second, a new dimension, pre-enrolment considerations, was introduced to the model; and finally, the attributes within each of the dimensions were ordered in terms of their perceived importance.
APA, Harvard, Vancouver, ISO, and other styles
46

Zhang, Hao Lan. "Agent-based open connectivity for decision support systems." 2007. http://eprints.vu.edu.au/1453/1/zhang.pdf.

Full text
Abstract:
One of the major problems that discourages the development of Decision Support Systems (DSSs) is the un-standardised DSS environment. Computers that support modern business processes are no longer stand-alone systems, but have become tightly connected both with each other and with their users. Therefore, having a standardised environment that allows different DSS applications to communicate and cooperate is crucial. The integration difficulty is the most crucial problem that affects the development of DSSs; therefore, an open and standardised environment for integrating various DSSs is required. Despite the critical need for an open architecture in DSS designs, present DSS architectural designs are unable to provide a fundamental solution to enhance the flexibility, connectivity, compatibility, and intelligence of a DSS. The emergence of intelligent agent technology fulfils the requirements of developing innovative and efficient DSS applications, as intelligent agents offer various advantages, such as mobility, flexibility and intelligence, to tackle the major problems in existing DSSs. Although various agent-based DSS applications have been suggested, most of these applications are unable to balance manageability with flexibility. Moreover, most existing agent-based DSSs are based on agent-coordinated design mechanisms, and often overlook the living environment for agents. This could cause difficulties in cooperating and upgrading agents, because agent-based coordination mechanisms have limited capabilities to provide agents with relatively comprehensive information about global system objectives. This thesis proposes a novel multi-agent-based architecture for DSS, called Agent-based Open Connectivity for Decision support systems (AOCD). The AOCD architecture adopts a hybrid agent network topology that makes use of a unique feature called the Matrix-agent connection. The novel component, i.e. the Matrix, provides a living environment for agents; it allows agents to upgrade themselves through interacting with the Matrix. This architecture is able to overcome the difficulties in concurrency control and synchronous communication that plague many decentralised systems. Performance analysis has been carried out on this framework, and we find that it is able to provide a high degree of flexibility and efficiency compared with other frameworks. The thesis explores the detailed design of the AOCD framework and the major components employed in this framework, including the Matrix, agents, and the unified Matrices structure. The proposed framework is able to enhance system reusability and maximise system performance. By using a set of interoperable autonomous agents, more creative decision-making can be accomplished in comparison with a hard-coded programmed approach. In this research, we systematically classified agent network topologies and developed an experimental program to evaluate system performance based on three different agent network topologies. The experimental results present evidence that the hybrid topology is efficient in the AOCD framework design. Furthermore, a novel topological description language for agent networks (TDLA) has been introduced in this research work, which provides an efficient mechanism for agents to perceive information about their interconnected network. A new Agent-Rank algorithm is introduced in the thesis in order to provide an efficient matching mechanism for agent cooperation.
The computational results based on our recently developed program for agent matchmaking demonstrate the efficiency and effectiveness of the Agent-Rank algorithm in the agent-matching and re-matching processes.
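The Agent-Rank details are not given in the abstract, so the following is only a generic sketch of rank-based agent matchmaking in the spirit described (capability overlap plus a network-proximity bonus); the scoring rule, agent names and bonus value are assumptions for illustration, not the thesis algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    capabilities: set = field(default_factory=set)
    neighbours: set = field(default_factory=set)   # names of directly connected agents

def match_score(agent: Agent, task_needs: set, requester: str) -> float:
    """Illustrative score: capability overlap, with a small bonus for agents
    directly connected to the requesting agent in the network."""
    overlap = len(agent.capabilities & task_needs) / max(len(task_needs), 1)
    proximity = 0.1 if requester in agent.neighbours else 0.0
    return overlap + proximity

def rank_agents(agents: list, task_needs: set, requester: str) -> list:
    """Return agents ordered from best to worst match for the task."""
    return sorted(agents, key=lambda a: match_score(a, task_needs, requester), reverse=True)

# Hypothetical usage
agents = [
    Agent("planner", {"forecasting", "optimisation"}, {"broker"}),
    Agent("miner", {"data-mining", "forecasting"}, set()),
    Agent("broker", {"negotiation"}, {"planner"}),
]
print([a.name for a in rank_agents(agents, {"forecasting", "data-mining"}, "broker")])
# expected order: miner, planner, broker
```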
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Hao Lan. "Agent-based open connectivity for decision support systems." Thesis, 2007. https://vuir.vu.edu.au/1453/.

Full text
Abstract:
One of the major problems that discourages the development of Decision Support Systems (DSSs) is the un-standardised DSS environment. Computers that support modern business processes are no longer stand-alone systems, but have become tightly connected both with each other and with their users. Therefore, having a standardised environment that allows different DSS applications to communicate and cooperate is crucial. The integration difficulty is the most crucial problem that affects the development of DSSs; therefore, an open and standardised environment for integrating various DSSs is required. Despite the critical need for an open architecture in DSS designs, present DSS architectural designs are unable to provide a fundamental solution to enhance the flexibility, connectivity, compatibility, and intelligence of a DSS. The emergence of intelligent agent technology fulfils the requirements of developing innovative and efficient DSS applications, as intelligent agents offer various advantages, such as mobility, flexibility and intelligence, to tackle the major problems in existing DSSs. Although various agent-based DSS applications have been suggested, most of these applications are unable to balance manageability with flexibility. Moreover, most existing agent-based DSSs are based on agent-coordinated design mechanisms, and often overlook the living environment for agents. This could cause difficulties in cooperating and upgrading agents, because agent-based coordination mechanisms have limited capabilities to provide agents with relatively comprehensive information about global system objectives. This thesis proposes a novel multi-agent-based architecture for DSS, called Agent-based Open Connectivity for Decision support systems (AOCD). The AOCD architecture adopts a hybrid agent network topology that makes use of a unique feature called the Matrix-agent connection. The novel component, i.e. the Matrix, provides a living environment for agents; it allows agents to upgrade themselves through interacting with the Matrix. This architecture is able to overcome the difficulties in concurrency control and synchronous communication that plague many decentralised systems. Performance analysis has been carried out on this framework, and we find that it is able to provide a high degree of flexibility and efficiency compared with other frameworks. The thesis explores the detailed design of the AOCD framework and the major components employed in this framework, including the Matrix, agents, and the unified Matrices structure. The proposed framework is able to enhance system reusability and maximise system performance. By using a set of interoperable autonomous agents, more creative decision-making can be accomplished in comparison with a hard-coded programmed approach. In this research, we systematically classified agent network topologies and developed an experimental program to evaluate system performance based on three different agent network topologies. The experimental results present evidence that the hybrid topology is efficient in the AOCD framework design. Furthermore, a novel topological description language for agent networks (TDLA) has been introduced in this research work, which provides an efficient mechanism for agents to perceive information about their interconnected network. A new Agent-Rank algorithm is introduced in the thesis in order to provide an efficient matching mechanism for agent cooperation.
The computational results based on our recently developed program for agent matchmaking demonstrate the efficiency and effectiveness of the Agent-Rank algorithm in the agent-matching and re-matching processes.
APA, Harvard, Vancouver, ISO, and other styles
48

So, Wing Wah Simon. "Content-based image indexing and retrieval for visual information systems." Thesis, 2000. https://vuir.vu.edu.au/15318/.

Full text
Abstract:
The dominance of visual data in recent times has made a fundamental change to our everyday life. Less than five to ten years ago, the Internet and the World Wide Web were not part of the daily vocabulary of the general public; now, even a young child can use the Internet to search for information. This, however, does not mean that we have a mature technology to perform visual information search. On the contrary, visual information retrieval is still in its infancy. The problem lies in the semantic richness and complexity of visual information in comparison to alphanumeric information. In this thesis, we present new paradigms for content-based image indexing and retrieval for Visual Information Systems. The concept of Image Hashing and the development of Composite Bitplane Signatures with Inverted Image Indexing and Compression are the main contributions of this dissertation. These paradigms are analogous to signature-based indexing and inversion-based postings for text information retrieval. We formulate the problem of image retrieval as two-dimensional hashing, as opposed to the one-dimensional hash vector used in conventional hashing techniques. Wavelets are used to generate the bitplane signatures. The natural consequence of our bitplane signature scheme is superimposed bitplane signatures for efficient retrieval. Composite bitplanes can then be used as low-level feature information together with high-level semantic indexing to form a unified and integrated framework in our inverted model for content-based image retrieval.
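Superimposed signatures in the classical text-retrieval sense, to which the abstract draws an analogy, can be sketched as follows: per-feature bit signatures are OR-ed into one image signature, and a query is tested by bitwise containment. The wavelet-derived bitplane signatures of the thesis are not reproduced here; the feature hashing below is invented for illustration.

```python
import hashlib

SIG_BITS = 64

def feature_signature(feature: str) -> int:
    """Map one feature to a sparse bit signature (here: three hash-selected bits)."""
    sig = 0
    for salt in range(3):
        h = hashlib.sha256(f"{salt}:{feature}".encode()).digest()
        sig |= 1 << (int.from_bytes(h[:4], "big") % SIG_BITS)
    return sig

def image_signature(features: list) -> int:
    """Superimpose (bitwise OR) the signatures of all features of an image."""
    sig = 0
    for f in features:
        sig |= feature_signature(f)
    return sig

def may_contain(image_sig: int, query_features: list) -> bool:
    """A query matches if all of its signature bits are set in the image signature
    (false positives are possible, false negatives are not)."""
    q = image_signature(query_features)
    return image_sig & q == q

# Hypothetical usage
db = {"img1": image_signature(["sky", "tree"]), "img2": image_signature(["car", "road"])}
print([name for name, sig in db.items() if may_contain(sig, ["tree"])])   # typically ['img1']
```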
APA, Harvard, Vancouver, ISO, and other styles
49

Xu, Guandong. "Web mining techniques for recommendation and personalization." 2008. http://eprints.vu.edu.au/1422/1/xu.pdf.

Full text
Abstract:
Nowadays Web users face the problems of information overload and drowning due to the significant and rapid growth in the amount of information and the number of users. As a result, how to provide Web users with the information they actually need is becoming a critical issue in web-based information retrieval and Web applications. In this work, we aim to improve the performance of Web information retrieval and Web presentation by developing and employing Web data mining paradigms. Web data mining is a process that discovers the intrinsic relationships among Web data, which are expressed in the forms of textual, linkage or usage information, by analysing the features of the Web and web-based data using data mining techniques. In particular, we concentrate on discovering Web usage patterns via Web usage mining, and then utilise the discovered usage knowledge to present Web users with more personalised Web contents, i.e. Web recommendation. For analysing Web user behaviour, we first establish a mathematical framework, called the usage data analysis model, to characterise the observed co-occurrences in Web log files. In this mathematical model, the relationships between Web users and pages are expressed by a matrix-based usage data schema. On the basis of this data model, we aim to devise algorithms to discover mutual associations between Web pages and user sessions hidden in the collected Web log data and, in turn, to use this kind of knowledge to uncover user access patterns. To reveal the underlying relationships among Web objects, such as Web pages or user sessions, and to find the Web page categories and usage patterns from Web log files, we have proposed three kinds of latent semantic analytical techniques based on three statistical models, namely traditional Latent Semantic Indexing, Probabilistic Latent Semantic Analysis and the Latent Dirichlet Allocation model. In comparison to conventional Web usage mining approaches, the main strength of latent-semantic-based analysis is its capability not only to capture the mutual correlations hidden in the observed objects explicitly, but also to reveal the unseen latent factors/tasks associated with the discovered knowledge implicitly. In traditional Latent Semantic Indexing, a specific matrix operation, i.e. the Singular Value Decomposition algorithm, is employed on the usage data to discover the Web user behaviour pattern over a transformed latent Web page space, which contains the maximum approximation of the original Web page space. Then, a k-means clustering algorithm is applied to the transformed usage data to partition user sessions. Each discovered Web user session group is eventually treated as a user session aggregation, in which all users share a like-minded access task or intention. The centroids of the discovered user session clusters are then constructed as user profiles. In addition to intuitive latent semantic analysis, Probabilistic Latent Semantic Analysis and Latent Dirichlet Allocation approaches are also introduced into Web usage mining for Web page grouping and usage profiling via a probability inference approach. Meanwhile, the latent task space is captured by interpreting the contents of prominent Web pages, which contribute significantly to the user access preference.
In contrast to traditional latent semantic analysis, the latter two approaches are capable not only of revealing the underlying associations between Web pages and users, but also of capturing the latent task space, which corresponds to user navigational patterns and Web site functionality. Experiments are performed to discover user access patterns, reveal the latent task space, and evaluate the proposed techniques in terms of quality of clustering. The discovered user profiles, which are represented by the centroids of the Web user session clusters, are then used to make usage-based collaborative recommendations via a top-N weighted scoring scheme algorithm. In this scheme, the user profiles are learned from usage data in an offline stage using the methods described above and are treated as a usage pattern knowledge base. When a new active user session arrives, a matching operation is carried out to find the closest usage pattern/user profile by measuring the similarity between the active user session and the learned user profiles. The user profile with the largest similarity is selected as the best-matched usage profile, which reflects the access interest most similar to the active user session. Then, the pages in the best-matched usage profile are ranked in descending order by examining the normalised page weights, which correspond to how likely it is that the pages will be visited in the near future. Finally, the top-N pages in the ranked list are recommended to the user as pages that are very likely to be visited in the coming period. To evaluate the effectiveness and efficiency of the recommendation, experiments are conducted in terms of the proposed recommendation accuracy metric. The experimental results have demonstrated that the proposed latent semantic analysis models and related algorithms are able to efficiently extract the needed usage knowledge and to accurately make Web recommendations. Data mining techniques have recently been widely used in many other domains due to their powerful capability of non-linear learning from a wide range of data sources. In this study, we also extend the proposed methodologies and technologies to a biomechanical data mining application, namely gait pattern mining. As in the context of Web mining, various clustering-based learning approaches are performed on the constructed gait variable data model, which is expressed as a feature vector of kinematic variables, to discover the subject gait classes. The centroids of the partitioned gait clusters are used to represent different specific walking characteristics. Data analysis on two gait datasets corresponding to various specific populations is carried out to demonstrate the feasibility and applicability of gait pattern mining. The results have shown that the discovered gait pattern knowledge can be used as a useful means for human movement research and clinical applications.
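A compact sketch of the LSI-plus-clustering profiling and top-N weighted scoring recommendation pipeline described above, using a toy session-page matrix; the data, cluster count and cosine similarity measure are illustrative assumptions, and the PLSA/LDA variants of the thesis are not shown.

```python
import numpy as np
from numpy.linalg import svd
from sklearn.cluster import KMeans

# Toy session-page matrix: rows are user sessions, columns are pages,
# entries are visit weights. All values here are invented for illustration.
usage = np.array([
    [3, 2, 0, 0, 1],
    [2, 3, 1, 0, 0],
    [0, 0, 2, 3, 2],
    [0, 1, 3, 2, 3],
], dtype=float)

# Latent semantic step (LSI): project sessions onto the top-k singular directions.
k = 2
U, s, Vt = svd(usage, full_matrices=False)
sessions_latent = U[:, :k] * s[:k]

# Cluster the transformed sessions; each cluster groups like-minded users.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(sessions_latent)

# Usage profiles: cluster centroids in the original page space, normalised
# so that the entries act as page weights.
profiles = np.array([usage[labels == c].mean(axis=0) for c in range(2)])
profiles /= profiles.sum(axis=1, keepdims=True)

def recommend(active_session: np.ndarray, n: int = 2) -> np.ndarray:
    """Top-N weighted scoring: find the profile closest to the active session
    (cosine similarity), then rank the pages it has not visited by profile weight."""
    sims = profiles @ active_session / (
        np.linalg.norm(profiles, axis=1) * np.linalg.norm(active_session) + 1e-12)
    best = profiles[np.argmax(sims)]
    unvisited_scores = best * (active_session == 0)
    return np.argsort(-unvisited_scores)[:n]

print(recommend(np.array([2.0, 1.0, 0.0, 0.0, 0.0])))   # indices of recommended pages
```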
APA, Harvard, Vancouver, ISO, and other styles
50

Sztendur, Ewa M. "Precision of the path of steepest ascent in response surface methodology." Thesis, 2005. https://vuir.vu.edu.au/15792/.

Full text
Abstract:
This thesis provides some extensions to the existing method of determining the precision of the path of steepest ascent in response surface methodology to cover situations with correlated and heteroscedastic responses, including the important class of generalised linear models. It is shown how the eigenvalues of a certain matrix can be used to express the proportion of included directions in the confidence cone for the path of steepest ascent as an integral, which can then be computed using numerical integration. In addition, some tight inequalities for the proportion of included directions are derived for the two, three, and four dimensional cases. For generalised linear models, methods are developed using the Wald approach and profile likelihood confidence regions approach, and bootstrap methods are used to improve the accuracy of the calculations.
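For context, the classical path of steepest ascent is taken along the fitted first-order coefficients, and the precision question concerns what proportion of directions falls inside a confidence cone around that path. The sketch below shows the direction computation and a Monte Carlo estimate of the included proportion under the simple textbook assumption of independent, equal-variance coefficient estimates; the eigenvalue-based integral and the extensions to correlated, heteroscedastic and generalised-linear-model settings developed in the thesis are not reproduced here.

```python
import numpy as np
from scipy import stats

def steepest_ascent_direction(b: np.ndarray) -> np.ndarray:
    """Unit vector along the fitted first-order coefficients (b1, ..., bk)."""
    return b / np.linalg.norm(b)

def included_proportion(b: np.ndarray, s2_b: float, df: int, alpha: float = 0.05,
                        n_draws: int = 200_000, seed: int = 0) -> float:
    """Monte Carlo estimate of the proportion of unit directions d inside the
    classical confidence cone for the path of steepest ascent, assuming
    independent coefficient estimates with common variance s2_b.
    A direction d is included when b'b - (d'b)^2 <= (k-1) * s2_b * F_crit."""
    k = len(b)
    f_crit = stats.f.ppf(1 - alpha, k - 1, df)
    rng = np.random.default_rng(seed)
    d = rng.normal(size=(n_draws, k))
    d /= np.linalg.norm(d, axis=1, keepdims=True)      # uniform directions on the sphere
    residual = b @ b - (d @ b) ** 2                    # squared distance of b from its projection onto d
    return float(np.mean(residual <= (k - 1) * s2_b * f_crit))

# Hypothetical fitted coefficients and coefficient variance
b = np.array([1.5, -0.8, 0.6])
print(steepest_ascent_direction(b))
print(included_proportion(b, s2_b=0.04, df=10))
```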
APA, Harvard, Vancouver, ISO, and other styles
