Dissertations / Theses on the topic 'Tiling (Mathematics) Data processing'

To see the other types of publications on this topic, follow the link: Tiling (Mathematics) Data processing.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Tiling (Mathematics) Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Li, Yufei, and 李宇飛. "A study on surface and volume tiling for geometric modeling." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48329733.

Full text
Abstract:
Surface tiling, as well as its counterpart in 3D, i.e. volume tiling, is a fundamental research problem in computer graphics and geometric modeling, with applications in numerous areas such as computer-aided design (CAD), physical simulation, real-time rendering and architectural modeling. The objective of surface tiling is to compute discrete mesh representations of given surfaces, which are often required to possess certain desirable geometric properties. Likewise, volume tiling focuses on discretizing a given 3D volume with a complex boundary into a set of high-quality volumetric elements. This thesis starts with the computation of optimal sampling for parametric surfaces, that is, decomposing the surface into quad patches such that 1) each quad patch has sides of equal length; and 2) the shapes and sizes of all the quad patches are as uniform as possible. A similar idea is then applied to the discrete case, i.e. optimizing the face elements of a quad mesh surface so that it possesses, as far as possible, face elements of desired shapes and sizes. This thesis further studies the computation of hexagonal tilings on free-form surfaces, where the planarity of the faces is of primary concern. Free-form meshes with planar hexagonal faces, called P-Hex meshes, provide a useful surface representation in discrete differential geometry and are in demand in architectural design for representing surfaces built with planar glass/metal panels. We study the geometry of P-Hex meshes and present an algorithm for computing a free-form P-Hex mesh of a specified shape. Lastly, this thesis progresses to the 3D volume case and proposes an automatic method for generating boundary-aligned all-hexahedral meshes of high quality, which possess nice numerical properties, such as a reduced number of elements and high approximation accuracy, in physical simulation and mechanical engineering.
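The planarity requirement on P-Hex faces mentioned in this abstract can be made concrete with a small sketch (an illustration only, not the thesis algorithm): measure how far a face's vertices deviate from their best-fit plane, obtained from the least-variance direction of an SVD.

```python
import numpy as np

def face_planarity(points):
    """Max distance of a face's vertices from their best-fit plane (via SVD)."""
    P = np.asarray(points, dtype=float)
    centered = P - P.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # normal of the least-squares plane through the centroid.
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    return np.max(np.abs(centered @ normal))

# A planar hexagon in the z = 0 plane ...
hexagon = [(np.cos(t), np.sin(t), 0.0) for t in np.linspace(0, 2 * np.pi, 7)[:-1]]
assert face_planarity(hexagon) < 1e-9
# ... and the same hexagon with one vertex lifted off the plane.
bent = [list(p) for p in hexagon]
bent[0][2] = 0.2
assert face_planarity(bent) > 0.01
```

A P-Hex optimization would drive this deviation toward zero for every face while approximating the target shape.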
Computer Science
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
2

Farnham, Rodrigo Bouchardet. "Processing and inpainting of sparse data as applied to atomic force microscopy imaging." California State University, Long Beach, 2013.

Find full text
3

Cena, Bernard Maria. "Reconstruction for visualisation of discrete data fields using wavelet signal processing." University of Western Australia. Dept. of Computer Science, 2000. http://theses.library.uwa.edu.au/adt-WU2003.0014.

Full text
Abstract:
The reconstruction of a function and its derivative from a set of measured samples is a fundamental operation in visualisation. Multiresolution techniques, such as wavelet signal processing, are instrumental in improving the performance and algorithm design for data analysis, filtering and processing. This dissertation explores the possibilities of combining traditional multiresolution analysis and processing features of wavelets with the design of appropriate filters for reconstruction of sampled data. On the one hand, a multiresolution system allows data feature detection, analysis and filtering. Wavelets have already been proven successful in these tasks. On the other hand, a choice of discrete filter which converges to a continuous basis function under iteration permits efficient and accurate function representation by providing a “bridge” from the discrete to the continuous. A function representation method capable of both multiresolution analysis and accurate reconstruction of the underlying measured function would make a valuable tool for scientific visualisation. The aim of this dissertation is not to try to outperform existing filters designed specifically for reconstruction of sampled functions. The goal is to design a wavelet filter family which, while retaining properties necessary to perform multiresolution analysis, possesses features to enable the wavelets to be used as efficient and accurate “building blocks” for function representation. The application to visualisation is used as a means of practical demonstration of the results. Wavelet and visualisation filter design is analysed in the first part of this dissertation and a list of wavelet filter design criteria for visualisation is collated. Candidate wavelet filters are constructed based on a parameter space search of the BC-spline family and direct solution of equations describing filter properties.
Further, a biorthogonal wavelet filter family is constructed based on point and average interpolating subdivision and using the lifting scheme. The main feature of these filters is their ability to reconstruct arbitrary degree piecewise polynomial functions and their derivatives using measured samples as direct input into a wavelet transform. The lifting scheme provides an intuitive, interval-adapted, time-domain filter and transform construction method. A generalised factorisation for arbitrary primal and dual order point and average interpolating filters is a result of the lifting construction. The proposed visualisation filter family is analysed quantitatively and qualitatively in the final part of the dissertation. Results from wavelet theory are used in the analysis which allow comparisons among wavelet filter families and between wavelets and filters designed specifically for reconstruction for visualisation. Lastly, the performance of the constructed wavelet filters is demonstrated in the visualisation context. One-dimensional signals are used to illustrate reconstruction performance of the wavelet filter family from noiseless and noisy samples in comparison to other wavelet filters and dedicated visualisation filters. The proposed wavelet filters converge to basis functions capable of reproducing functions that can be represented locally by arbitrary order piecewise polynomials. They are interpolating, smooth and provide asymptotically optimal reconstruction in the case when samples are used directly as wavelet coefficients. The reconstruction performance of the proposed wavelet filter family approaches that of continuous spatial domain filters designed specifically for reconstruction for visualisation. This is achieved in addition to retaining multiresolution analysis and processing properties of wavelets.
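The predict/update structure of the lifting scheme mentioned above can be sketched in a few lines. The example below uses the simplest possible case (one Haar lifting step) rather than the point/average interpolating filters constructed in the dissertation, and the function names are our own.

```python
# Minimal sketch of one lifting step (Haar wavelet), illustrating the
# split/predict/update structure used in lifting-scheme constructions.
# This is an illustration, not the biorthogonal filters from the thesis.

def haar_lifting_forward(signal):
    """Split, predict, update: returns (approximation, detail) coefficients."""
    evens = signal[0::2]
    odds = signal[1::2]
    # Predict: each odd sample is predicted by its even neighbour;
    # the detail coefficient is the prediction error.
    detail = [o - e for o, e in zip(odds, evens)]
    # Update: adjust the even samples so the coarse signal keeps the mean.
    approx = [e + d / 2 for e, d in zip(evens, detail)]
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Undo update, undo predict, merge: exact reconstruction."""
    evens = [a - d / 2 for a, d in zip(approx, detail)]
    odds = [d + e for d, e in zip(detail, evens)]
    out = []
    for e, o in zip(evens, odds):
        out.extend([e, o])
    return out

signal = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0]
approx, detail = haar_lifting_forward(signal)
assert haar_lifting_inverse(approx, detail) == signal  # perfect reconstruction
```

Inverting each lifting step by simply reversing its sign is what makes the transform trivially invertible and easy to adapt to intervals, the property the dissertation exploits.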
4

Turkmen, Muserref. "Digital Image Processing Of Remotely Sensed Oceanographic Data." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609948/index.pdf.

Full text
Abstract:
Developing remote sensing instrumentation allows obtaining information about an area rapidly and at low cost. This fact poses a challenge to remote sensing algorithms aimed at extracting information about an area from the available remote sensing data. A very typical and important problem is the interpretation of satellite images. A very efficient approach to remote sensing is employing discriminant functions to distinguish different landscape classes from satellite images. Various methods in this direction have already been studied. However, the efficiency of the studied methods is still not very high. In this thesis, we will improve the efficiency of remote sensing algorithms. Besides, we will investigate improving boundary detection methods on satellite images.
5

Van Schaik, Sebastiaan Johannes. "A framework for processing correlated probabilistic data." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:91aa418d-536e-472d-9089-39bef5f62e62.

Full text
Abstract:
The amount of digitally-born data has surged in recent years. In many scenarios, this data is inherently uncertain (or: probabilistic), such as data originating from sensor networks, image and voice recognition, location detection, and automated web data extraction. Probabilistic data requires novel and different approaches to data mining and analysis, which explicitly account for the uncertainty and the correlations therein. This thesis introduces ENFrame: a framework for processing and mining correlated probabilistic data. Using this framework, it is possible to express both traditional and novel algorithms for data analysis in a special user language, without having to explicitly address the uncertainty of the data on which the algorithms operate. The framework will subsequently execute the algorithm on the probabilistic input, and perform exact or approximate parallel probability computation. During the probability computation, correlations and provenance are succinctly encoded using probabilistic events. This thesis contains novel contributions in several directions. An expressive user language – a subset of Python – is introduced, which allows a programmer to implement algorithms for probabilistic data without requiring knowledge of the underlying probabilistic model. Furthermore, an event language is presented, which is used for the probabilistic interpretation of the user program. The event language can succinctly encode arbitrary correlations using events, which are the probabilistic counterparts of deterministic user program variables. These highly interconnected events are stored in an event network, a probabilistic interpretation of the original user program. Multiple techniques for exact and approximate probability computation (with error guarantees) of such event networks are presented, as well as techniques for parallel computation. 
Adaptations of multiple existing data mining algorithms are shown to work in the framework, and are subsequently subjected to an extensive experimental evaluation. Additionally, a use-case is presented in which a probabilistic adaptation of a clustering algorithm is used to predict faults in energy distribution networks. Lastly, this thesis presents techniques for integrating a number of different probabilistic data formalisms for use in this framework and in other applications.
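The correlated-event probability computation described above can be illustrated, in a much simplified form, by exact inference over possible worlds: events are Boolean functions of shared random variables, and sharing variables is precisely what creates correlations. This toy sketch is ours, not ENFrame's event-network encoding (which also supports approximation with error guarantees and parallel computation).

```python
from itertools import product

def prob(event, priors):
    """Exact probability of an event by enumerating possible worlds.

    event  : function mapping {var: bool} to bool
    priors : dict mapping each independent Boolean variable to P(var=True)
    """
    vars_ = list(priors)
    total = 0.0
    for values in product([False, True], repeat=len(vars_)):
        world = dict(zip(vars_, values))
        # Probability of this world under independent priors.
        p = 1.0
        for v in vars_:
            p *= priors[v] if world[v] else 1 - priors[v]
        if event(world):
            total += p
    return total

priors = {"x": 0.5, "y": 0.2}
# Two events that both mention x: they are correlated, not independent.
e1 = lambda w: w["x"] or w["y"]
e2 = lambda w: w["x"] and not w["y"]
print(prob(e1, priors), prob(e2, priors))
```

Enumeration is exponential in the number of variables, which is why a practical system needs the succinct event encodings and approximation techniques the thesis develops.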
6

Smith, Sydney. "Approaches to Natural Language Processing." Scholarship @ Claremont, 2018. http://scholarship.claremont.edu/cmc_theses/1817.

Full text
Abstract:
This paper explores topic modeling through the example text of Alice in Wonderland. It explores both singular value decomposition and non-negative matrix factorization as methods for feature extraction. The paper goes on to explore methods for partially supervised implementation of topic modeling through introducing themes. A large portion of the paper also focuses on the implementation of these techniques in Python, as well as visualizations of the results, which use a combination of Python, HTML and JavaScript along with the d3 framework. The paper concludes by presenting a mixture of SVD, NMF and partially-supervised NMF as a possible way to improve topic modeling.
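Both factorizations named in this abstract are standard. As a hedged illustration (our own toy example, not the paper's code), here is non-negative matrix factorization via the classic multiplicative updates on a tiny term-document matrix, where the two recovered factors play the role of topics.

```python
import numpy as np

# Sketch of NMF via multiplicative updates (Lee & Seung style), the kind of
# factorization the abstract applies to topic extraction. The toy matrix and
# names are hypothetical.
rng = np.random.default_rng(0)

def nmf(V, k, iters=1000, eps=1e-9):
    """Factor V ~= W @ H with non-negative W (doc-topic) and H (topic-term)."""
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update topic-term weights
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update doc-topic weights
    return W, H

# Four "documents" over four "terms", built from two obvious topics:
# terms 0-1 belong to one topic, terms 2-3 to the other.
V = np.array([[2, 1, 0, 0],
              [4, 2, 0, 0],
              [0, 0, 1, 2],
              [0, 0, 3, 6]], dtype=float)
W, H = nmf(V, k=2)
print(np.round(W @ H, 1))  # reconstruction, should be close to V
```

Each row of H can be read as a topic (a non-negative weighting over terms), and each row of W as a document's mixture of topics; the non-negativity is what makes the factors interpretable, unlike the signed components SVD produces.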
7

Akleman, Ergun. "Pseudo-affine functions : a non-polynomial implicit function family to describe curves and surfaces." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/15409.

Full text
8

Paterson, Judith Evelyn. "A study of nine girls' learning before, during and after their introduction to some of the basics of LOGO." Thesis, University of Cape Town, 1986. http://hdl.handle.net/11427/23348.

Full text
9

Palmer, David Donald. "Modeling uncertainty for information extraction from speech data /." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/5834.

Full text
10

Tang, Cham-wing, and 鄧湛榮. "The attitudes of secondary school mathematics teachers towards the teaching of mathematics by using computers." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1996. http://hub.hku.hk/bib/B31958886.

Full text
11

Liao, Haiyong. "Computational methods for bioinformatics and image restoration." HKBU Institutional Repository, 2010. http://repository.hkbu.edu.hk/etd_ra/1103.

Full text
12

Ng, Yui-kin, and 吳銳堅. "Computers, Gödel's incompleteness theorems and mathematics education: a study of the implications of artificial intelligence for secondary school mathematics." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B31957419.

Full text
13

Chen, Shuo. "MALDI-TOF MS data processing using wavelets, splines and clustering techniques." [Johnson City, Tenn. : East Tennessee State University], 2004. http://etd-submit.etsu.edu/etd/theses/available/etd-1112104-113123/unrestricted/ChenS121404f.pdf.

Full text
Abstract:
Thesis (M.S.)--East Tennessee State University, 2004.
Title from electronic submission form. ETSU ETD database URN: etd-1112104-113123 Includes bibliographical references. Also available via Internet at the UMI web site.
14

Roussos, Evangelos. "Bayesian methods for sparse data decomposition and blind source separation." Thesis, University of Oxford, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.589766.

Full text
Abstract:
In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or 'sources' via a generally unknown mapping. Reconstructing sources from their mixtures is an extremely ill-posed problem in general. However, solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. This Thesis proposes the use of sparse statistical decomposition methods for exploratory analysis of datasets. We make use of the fact that many natural signals have a sparse representation in appropriate signal dictionaries. The work described in this Thesis is mainly driven by problems in the analysis of large datasets, such as those from functional magnetic resonance imaging of the brain for the neuro-scientific goal of extracting relevant 'maps' from the data. We first propose Bayesian Iterative Thresholding, a general method for solving blind linear inverse problems under sparsity constraints, and we apply it to the problem of blind source separation. The algorithm is derived by maximizing a variational lower-bound on the likelihood. The algorithm generalizes the recently proposed method of Iterative Thresholding. The probabilistic view enables us to automatically estimate various hyperparameters, such as those that control the shape of the prior and the threshold, in a principled manner. We then derive an efficient fully Bayesian sparse matrix factorization model for exploratory analysis and modelling of spatio-temporal data such as fMRI. We view sparse representation as a problem in Bayesian inference, following a machine learning approach, and construct a structured generative latent-variable model employing adaptive sparsity-inducing priors.
The construction allows for automatic complexity control and regularization as well as denoising. The performance and utility of the proposed algorithms is demonstrated on a variety of experiments using both simulated and real datasets. Experimental results with benchmark datasets show that the proposed algorithms outperform state-of-the-art tools for model-free decompositions such as independent component analysis.
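The (non-Bayesian) Iterative Thresholding that the thesis generalizes can be sketched as follows: a gradient step on the quadratic data-fit term followed by soft thresholding. This is the standard fixed-threshold iteration on a synthetic sparse-recovery problem of our own construction, not the variational algorithm derived in the thesis (which, among other things, estimates the threshold automatically).

```python
import numpy as np

# Sketch of plain iterative soft thresholding for the sparse linear inverse
# problem  min ||Ax - b||^2 + lam * ||x||_1.  Fixed threshold, toy data.
def ista(A, b, lam, iters=3000):
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - b) / L          # gradient step on the data-fit term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60)) / np.sqrt(30)  # random dictionary, unit-scale columns
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -1.5, 3.0]           # a 3-sparse source
b = A @ x_true                                   # noiseless measurements
x_hat = ista(A, b, lam=0.05)
print(np.sort(np.argsort(np.abs(x_hat))[-3:]))   # indices of the largest coefficients
```

The soft-thresholding step is what induces sparsity in the estimate; the Bayesian view in the thesis replaces the hand-picked `lam` with hyperparameters inferred from the data.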
15

Jung, Uk. "Wavelet-based Data Reduction and Mining for Multiple Functional Data." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5084.

Full text
Abstract:
Advanced technology such as various types of automatic data acquisition, management, and networking systems has created a tremendous capability for managers to access valuable production information to improve their operation quality and efficiency. Signal processing and data mining techniques are more popular than ever in many fields including intelligent manufacturing. As data sets increase in size, their exploration, manipulation, and analysis become more complicated and resource consuming. Timely synthesized information such as functional data is needed for product design, process trouble-shooting, quality/efficiency improvement and resource allocation decisions. A major obstacle in these intelligent manufacturing systems is that tools for processing a large volume of information coming from numerous stages of manufacturing operations are not available. Thus, the underlying theme of this thesis is to reduce the size of data in a mathematically rigorous framework, and apply existing or new procedures to the reduced-size data for various decision-making purposes. This thesis first proposes the Wavelet-based Random-effect Model, which can generate multiple functional data signals which have wide fluctuations (between-signal variations) in the time domain. The random-effect wavelet atom position in the model has a locally focused impact, which distinguishes it from other traditional random-effect models in the biological field. For the data-size reduction, in order to deal with heterogeneously selected wavelet coefficients for different single curves, this thesis introduces the newly-defined Wavelet Vertical Energy metric of multiple curves and utilizes it for the efficient data reduction method. The newly proposed method in this thesis will select important positions for the whole set of multiple curves by comparison between every vertical energy metric and a threshold (the Vertical Energy Threshold, VET), which will be optimally decided based on an objective function. The objective function balances the reconstruction error against a data reduction ratio. Based on the class membership information of each signal obtained, this thesis proposes the Vertical Group-Wise Threshold method to increase the discriminative capability of the reduced-size data so that the reduced data set retains salient differences between classes as much as possible. A real-life example (Tonnage data) shows our proposed method is promising.
16

Grunden, Beverly K. "On the Characteristics of a Data-driven Multi-scale Frame Convergence Algorithm." Wright State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=wright1622208959661057.

Full text
17

Shaalan, Hesham Ezzat. "An interval mathematics approach to economic evaluation of power distribution systems." Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/40081.

Full text
18

Akpinar, Yavuz. "Computer based interactive environments for learning school mathematics : the implementation and validation of design principles." Thesis, University of Leeds, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343445.

Full text
19

Ngaye, Zonke. "User experience metrics for Dr Math." Thesis, Nelson Mandela Metropolitan University, 2012. http://hdl.handle.net/10948/d1012036.

Full text
Abstract:
The purpose of this research study is to propose guidelines for providing a positive user experience for pupils using Dr Math®. User experience was found to have a positive impact on the acceptance and adoption of a product. Thus the proposed guidelines contribute to maximizing the adoption and acceptance of Dr Math® among pupils. This study begins with an introductory chapter that describes the problem that forms the basis for this research. The chapter defines the objectives that this study is intended to achieve in order to accomplish its ultimate goal. The methodology followed to conduct this research study, as well as its scope, are also defined here. The results from a preliminary survey revealed that despite its potential accessibility, Dr Math® has a low adoption rate. However, when compared to other mobile learning (m-learning) applications for mathematics learning, Dr Math® is more popular. Thus Dr Math® was selected as a case for study. Chapter 2 of this study provides a detailed description of Dr Math® as a local mobile application for mathematics learning. It was found that the affordability and accessibility of Dr Math® did not necessarily imply a high adoption rate. There are various possible barriers to its low adoption. User experience (UX), which is the focus of this study, is one of them. Thus, a subsequent chapter deals with UX. Chapter 3 discusses UX, its scope, components and definition, and places particular emphasis on its significance in the success of any product. The chapter also highlights the characteristics of a positive UX and the importance of designing for this outcome. In Chapter 4, the methodology used to conduct this research is discussed and justified. This study primarily employs a qualitative inductive approach within an interpretivism paradigm. An exploratory single case study was used to obtain an in-depth analysis of the case. Data was collected using Dr Math® log files as a documentary source. The gathered data was then analysed and organized into themes and categories using qualitative content analysis, as outlined in Chapter 5. The findings, mainly the factors found to have an impact on user interaction with Dr Math®, are also presented there. The identified factors served as a basis from which the guidelines presented in Chapter 6 were developed. Chapter 7 presents the conclusions and recommendations of the research. From both the theoretical and empirical work, it was concluded that Dr Math® has the potential to improve mathematics learning in South Africa. Its adoption rate, however, is not satisfactory; hence the investigation of the factors impacting on user interaction with Dr Math®, on which the proposed guidelines are based.
20

Chau, Teresa C. "IFE, an interactive formula editor." FIU Digital Commons, 1988. http://digitalcommons.fiu.edu/etd/2117.

Full text
Abstract:
IFE, the Interactive Formula Editor, is an experimental system designed, developed and implemented to propose a different approach to handling mathematical formulae. Its main characteristics are: (1) interactive creation and editing of a mathematical formula, (2) TeX-form output for a printed version of a formula, and (3) the ability to generate a new set of characters by means of a character editor. Mathematical symbols are provided and adjusted by the system automatically. The system also guides the user during the formula description because it knows the syntax of the graphical representation. The system appears to be complete and to perform well. The listing of the program is included, as are suggestions for further development.
21

Xue, Feng, and 薛峰. "Evolutionary computation of geodesic paths in CAD/CAM." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31226978.

Full text
22

Chan, Ho Yin. "Graph-theoretic approach to the non-binary index assignment problem /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?ECED%202008%20CHAN.

Full text
23

Norman, Maxie. "Making pupils think: the development of a microcomputer-inspired adaptation of the Standard 7 mathematics curriculum." Thesis, Rhodes University, 1992. http://hdl.handle.net/10962/d1003550.

Full text
Abstract:
This half-thesis gives an overview of the influence of the microcomputer on the way in which mathematics is done, taught and learnt. The nature of mathematics and the nature of the tutor, tool and tutee modes of microcomputer usage are discussed as background. A case is made for the use of action research methods and a classroom-based curriculum development model to facilitate innovation and the integration of the microcomputer into the mathematics classroom. A curriculum development cycle of situation analysis, planning, trial and evaluation is advocated. This approach is used to develop a microcomputer-based course aimed at enhancing the reasoning skills of standard 7 pupils. Pupils, working in groups of three, interact with the PROLOG system to build up databases of facts and rules. The microcomputer is used in tutee mode. In "teaching" this tutee, pupils discover the need for formal language and logical reasoning. Active learning is promoted by pupils' interaction with the PROLOG system and by discussions within groups. In this environment the teacher becomes a consultant and constructive critic rather than a lecturer. Findings suggest that the microcomputer plays an important role in terms of pupil motivation and that the microcomputer-based course enables pupils to experience formal language usage and logical reasoning as relevant activities. Pupil databases provide evidence of the pupils' ability to make appropriate use of rules and to distinguish between and-conditions and or-conditions. The objective of making pupils think was largely achieved. It is recommended that the course be incorporated in the standard 7 or standard 8 curriculum to complement or replace parts of the Euclidean geometry sections as a vehicle for developing logical reasoning skills. Suggestions for the further use of the microcomputer as an investigative tool in mathematics classes and for further microcomputer-inspired courses are also made.
The provision of appropriate training to enable teachers to make effective and innovative use of the microcomputer in mathematics lessons is advocated.
24

Powell, David Richard 1973. "Algorithms for sequence alignment." Monash University, School of Computer Science and Software Engineering, 2001. http://arrow.monash.edu.au/hdl/1959.1/8051.

Full text
25

Siu, Ha-ping Angel, and 蕭霞萍. "Using web-based assessment for learning and teaching primary mathematics." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30424574.

Full text
26

Khoury, Imad. "Mathematical and computational tools for the manipulation of musical cyclic rhythms." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=101858.

Full text
Abstract:
This thesis presents and analyzes tools and experiments that aim at achieving multiple yet related goals in the exploration and manipulation of musical cyclic rhythms. The work presented in this thesis may be viewed as a preliminary study for the ultimate future goal of developing a general computational theory of rhythm. Given a family of rhythms, how does one reconstruct its ancestral rhythms? How should one change a rhythm's cycle length while preserving its musicologically salient properties, and hence be able to confirm or disprove popular or historical beliefs regarding its origins and evolution? How should one compare musical rhythms? How should one automatically generate rhythmic patterns? All these questions are addressed and, to a certain extent, solved in our study, and serve as a basis for the development of novel general tools, implemented in Matlab, for the manipulation of rhythms.
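Some of the questions above (comparing rhythms, changing a rhythm's cycle length) can be made concrete with a minimal sketch, assuming the common binary onset-pattern representation of cyclic rhythms. The clave example and helper names below are ours; the thesis's Matlab tools are considerably more sophisticated.

```python
# A rhythm is a binary onset pattern on a cycle; two rhythms are compared
# up to rotation, and cycle length is changed by proportionally rescaling
# onset positions. Illustrative sketch only.

def rotations(pattern):
    """All cyclic rotations of a binary onset pattern."""
    n = len(pattern)
    return [pattern[i:] + pattern[:i] for i in range(n)]

def equivalent_up_to_rotation(a, b):
    """True if rhythm b is a rotated version of rhythm a."""
    return len(a) == len(b) and b in rotations(a)

def rescale_cycle(pattern, new_len):
    """Map each onset to the nearest position in a cycle of new_len pulses."""
    old_len = len(pattern)
    out = [0] * new_len
    for i, onset in enumerate(pattern):
        if onset:
            out[round(i * new_len / old_len) % new_len] = 1
    return out

# The son clave pattern on a 16-pulse cycle.
son = [1,0,0,1,0,0,1,0,0,0,1,0,1,0,0,0]
shifted = son[3:] + son[:3]
assert equivalent_up_to_rotation(son, shifted)
print(rescale_cycle(son, 12))  # the same rhythm forced onto a 12-pulse cycle
```

Rescaling like this is one crude way to test whether a rhythm on one cycle length could plausibly be the ancestor of a rhythm on another, which is the spirit of the reconstruction questions posed in the abstract.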
27

Vanderhyde, James. "Topology Control of Volumetric Data." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16215.

Full text
Abstract:
Three-dimensional scans and other volumetric data sources often result in representations that are more complex topologically than the original model. The extraneous critical points, handles, and components are called topological noise. Many algorithms in computer graphics require simple topology in order to work optimally, including texture mapping, surface parameterization, flows on surfaces, and conformal mappings. The topological noise disrupts these procedures by requiring each small handle to be dealt with individually. Furthermore, topological descriptions of volumetric data are useful for visualization and data queries. One such description is the contour tree (or Reeb graph), which depicts when the isosurfaces split and merge as the isovalue changes. In the presence of topological noise, the contour tree can be too large to be useful. For these reasons, an important goal in computer graphics is simplification of the topology of volumetric data. The key to this thesis is that the global topology of volumetric data sets is determined by local changes at individual points. Therefore, we march through the data one grid cell at a time, and for each cell, we use a local check to determine if the topology of an isosurface is changing. If so, we change the value of the cell so that the topology change is prevented. In this thesis we describe variations on the local topology check for use in different settings. We use the topology simplification procedure to extract a single component with controlled topology from an isosurface in volume data sets and partially-defined volume data sets. We also use it to remove critical points from three-dimensional volumes, as well as time-varying volumes. We have applied the technique to two-dimensional (plus time) data sets and three-dimensional (plus time) data sets.
28

Brown, Roger George, and rogergbrown@mac.com. "The impact of the introduction of the graphics calculator on system wide 'high stakes' end of secondary school mathematics examinations." Swinburne University of Technology, 2005. http://adt.lib.swin.edu.au./public/adt-VSWT20051117.121210.

Full text
Abstract:
There has been widespread interest in the potential impact of the graphics calculator on system wide 'high stakes' end of secondary school mathematics examinations. This thesis has focused on one aspect, the way in which examiners have gone about writing examination questions in a graphics calculator assumed environment. Two aspects of this issue have been investigated. The first concerns the types of questions that can be asked in a graphics calculator assumed environment and their frequency of use. The second addresses the level of skills assessed and whether the introduction of the graphics calculator has been associated with an increase in difficulty, as has frequently been suggested. A descriptive case study methodology was used with three examination boards, the Danish Ministry of Education, the Victorian Curriculum and Assessment Authority and the International Baccalaureate Organization. Four distinct categories of questions were identified, which differed according to the potential for the graphics calculator to contribute to the solution of the question and the freedom the student was then given to make use of this potential. While all examination boards made use of the full range of questions, the tendency was to underuse questions which required the use of the calculator for their solution. With respect to the level of skills assessed, it was found that both prior to and after the introduction of the graphics calculator, all three examination boards used question types that primarily tested the use of lower-level mathematical skills. With exceptions, where graphics calculator active questions have been used, the tendency has been to continue to ask routine mechanistic questions. In this regard, there is no evidence of the introduction of the graphics calculator being associated with either a lowering or a raising of the level of the mathematical skills assessed.
For all cases studied, the graphics calculator was introduced with minimal change to the curriculum and examination policies. The role of the graphics calculator in the enacted curriculum was left implicit. The resulting examinations were consistent with the stated policies. However, the inexperience of some examiners and a general policy of containment or minimal change enabled examiners to minimise the impact of the introduction of the graphics calculators on assessment.
APA, Harvard, Vancouver, ISO, and other styles
29

Parshakov, Ilia. "Automatic class labeling of classified imagery using a hyperspectral library." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Geography, c2012, 2012. http://hdl.handle.net/10133/3372.

Full text
Abstract:
Image classification is a fundamental information extraction procedure in remote sensing that is used in land-cover and land-use mapping. Despite being considered a replacement for manual mapping, it still requires some degree of analyst intervention. This makes the process of image classification time-consuming, subjective, and error-prone. For example, in unsupervised classification, pixels are automatically grouped into classes, but the user has to manually label the classes as one land-cover type or another. As a general rule, the larger the number of classes, the more difficult it is to assign meaningful class labels. A fully automated post-classification procedure for class labeling was developed in an attempt to alleviate this problem. It labels spectral classes by matching their spectral characteristics with reference spectra. A Landsat TM image of an agricultural area was used for performance assessment. The algorithm was used to label a 20-class and a 100-class image generated by the ISODATA classifier. The 20-class image was used to compare the technique with traditional manual labeling of classes, and the 100-class image was used to compare it with the Spectral Angle Mapper and Maximum Likelihood classifiers. The proposed technique produced a map with an overall accuracy of 51%, outperforming manual labeling (40% to 45% accuracy, depending on the analyst performing the labeling) and the Spectral Angle Mapper classifier (39%), but underperforming compared with the Maximum Likelihood technique (53% to 63%). The newly developed class-labeling algorithm provided better results for alfalfa, beans, corn, grass and sugar beet, whereas canola, corn, fallow, flax, potato, and wheat were identified with similar or lower accuracy, depending on the classifier it was compared with.
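The core step described in this abstract, matching each unsupervised class's mean spectrum against library reference spectra, can be sketched as follows. This is a minimal illustration only: the band values and crop names are hypothetical, and the spectral angle is used here as the similarity measure, which may differ from the thesis's exact matching procedure.

```python
import numpy as np

def label_classes(class_means, library, names):
    """Assign each unsupervised class the library label whose
    reference spectrum has the smallest spectral angle to the
    class mean spectrum."""
    labels = []
    for mean in class_means:
        # spectral angle: arccos of the cosine similarity between spectra
        cos = library @ mean / (np.linalg.norm(library, axis=1) * np.linalg.norm(mean))
        angles = np.arccos(np.clip(cos, -1.0, 1.0))
        labels.append(names[int(np.argmin(angles))])
    return labels

# toy 3-band example: two class means, two hypothetical reference spectra
library = np.array([[0.1, 0.4, 0.5],   # "alfalfa" (hypothetical values)
                    [0.6, 0.3, 0.1]])  # "fallow"  (hypothetical values)
names = ["alfalfa", "fallow"]
means = np.array([[0.12, 0.38, 0.52], [0.55, 0.35, 0.12]])
print(label_classes(means, library, names))  # → ['alfalfa', 'fallow']
```

Because the spectral angle ignores overall brightness, this kind of matching is insensitive to illumination differences between the image and the library, which is one reason it is a common choice for labeling against reference spectra.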
vii, 93 leaves : ill., maps (some col.) ; 29 cm
APA, Harvard, Vancouver, ISO, and other styles
30

Van, Hille Gilles Ernst Willem. "A preliminary investigation into the use of computers in the teaching of mathematics." Thesis, Rhodes University, 1986. http://hdl.handle.net/10962/d1004382.

Full text
Abstract:
Like many South African high school mathematics teachers, I have followed the development of computers with interest and have tried wherever possible to gain some experience with them. Thus when microcomputers became more readily available, the mathematics department at our school, Graeme College in Grahamstown, motivated for the school to acquire this powerful new tool. The eventual outcome was that the Old Boys' Association donated to the school 3 BBC B microcomputers with monitors, a disc drive, a printer and two tape recorders. These have now been in the school for three years. The acquisition prompted this research project, which takes the following form: 1) An investigation into some of the uses of microcomputers in schools and, in particular, in the mathematics classroom. 2) A statement on the present position adopted by the Cape Education Department on the use of computers in schools. 3) A study of what the experience has been in other countries, particularly in Britain and the United States of America. 4) A description of an investigation which was undertaken at our school using the method of Action Research and Triangulation. Its aim was to investigate the feasibility of using a microcomputer to aid the teaching of mathematics and the reaction of the pupils to this innovation. Three different approaches were implemented. a) The algorithmic approach: In this investigation a class of standard eight pupils was required, with the help of the teacher, to write, enter and test a short computer program which would solve any pair of simultaneous linear equations of the form ax + by = c. Their reaction to this form of instruction was noted by myself and a non-participant observer. The pupils themselves were also asked to express their reactions, both verbally and by filling in a prepared questionnaire. Examples of worksheets, exam questions and analysed questionnaires are given in the appendix.
Short programs which examine various other mathematical concepts are also listed and discussed. b) The audio-visual approach: In this case use was made of a graphs software package in which the computer would draw either a straight line, circle, parabola or hyperbola when the appropriate variables were entered. This package also includes a graph game facility where participants are required to find the equation of the graph which will pass through three given points. Points are awarded if the correct type of graph is chosen and the variables are entered within a certain time interval. The pupils involved in this investigation were standard eight higher grade mathematics pupils and their reaction to this form of instruction was again noted using the methods described in (a) above. c) Computer Aided Instruction: Here I was most fortunate to be able to make use of the Rhodes University PLATO Centre. This allowed me to take a class of eighteen standard eight higher grade mathematics pupils to the Centre. Here during four sessions, each of just over an hour, the pupils interacted with the software on the computer terminal. The software used was a set of five lessons written by Barbara Lederman of the Community College Maths Group, of the University of Illinois in 1976. The lessons give instruction and require the pupils to transform, plot and draw the graphs of linear equations of the form, ax + by + c = 0, x = c and y = b. They are also taught and required to find the equations of given straight lines. Their reactions to this form of instruction are discussed after each session. 5) In conclusion some thoughts are given on how computers can best be utilised in the school situation, with particular reference to the teaching of mathematics.
APA, Harvard, Vancouver, ISO, and other styles
31

Lee, Cheuk-hing, and 李卓興. "Using GeoGebra to enhance learning and teaching of basic properties ofcircles for a secondary 5 class." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B48367667.

Full text
Abstract:
With advancements in information technology, people can now access enormous amounts of information with ease. The education system, which plays a vital role in shaping our future, has undergone important changes. In the past decades, nearly every part of the education process, from curriculum design and assessment methodologies to teaching methodologies, has been scrutinized. As a result, a series of reforms or changes have been implemented. The purposes of the present study are to investigate the perceptions and attitudes of secondary school mathematics teachers towards the use of computers in their teaching. The factors affecting teachers' attitudes towards computer application in teaching are also analyzed. Finally, the study explores the effectiveness of students' learning through cooperative learning. One class of 36 students from 5C, aged 14-16, was invited to participate in this research by using the DMS of GeoGebra for teaching the topics of 'Basic Properties of Circles'. The teaching outcome of 5C would then be compared with two other classes, 5A and 5B, which were taught by my peer teachers using conventional teaching methods (i.e. the control group). Five student worksheets covering the basic operation of GeoGebra and each sub-topic of 'Basic Properties of Circles' were devised (see Appendices I to V). The students were asked to fill in Questionnaire I. In addition, 12 mathematics teachers were asked to fill in Questionnaire I (see Appendix VIII) of the Mathematics with Technology Perceptions Survey (MTPS) in order to investigate their perceptions of using information technology (IT) in teaching mathematics. The purpose of the MTPS items was to ascertain the prevalence of key attitudes and perceptions creating barriers to, or enabling, teachers' intentions to alter their practice and to teach mathematics with technology. 
Demographic data for the MTPS items were collected on gender, age group, years of teaching, education level, teacher training, teaching level and subject taught. The whole study was video-recorded. In addition, my two peer teachers from classes 5A and 5B were invited to an interview and then asked to fill in Questionnaire II (see Appendix XIII) in order to investigate their intention to use the DMS of GeoGebra for teaching and learning mathematics in Secondary 5 classes. Besides, 14 students were randomly selected in order to find out the effects on their learning of using the DMS of GeoGebra through peer groups, and these 14 students were invited to complete an extended version of Questionnaire II (see Appendix X). Finally, students were asked to complete a test (see Appendix VI) in order to compare the learning outcomes of students learning 'Basic Properties of Circles' with the DMS of GeoGebra with those learning in its absence. It is suggested that computers should be integrated into the learning of mathematics. Schools need to make full use of technology to guide students to learn as much as possible. Also, leadership needs to be available to teachers, and in-service education in technology use should be provided, so that technology is used as effectively as possible.
published_or_final_version
Education
Master
Master of Education
APA, Harvard, Vancouver, ISO, and other styles
32

Coetzer, Johannes. "Off-line signature verification." Thesis, Stellenbosch : University of Stellenbosch, 2005. http://hdl.handle.net/10019.1/1355.

Full text
Abstract:
Thesis (PhD (Mathematical Sciences))--University of Stellenbosch, 2005.
A great deal of work has been done in the area of off-line signature verification over the past two decades. Off-line systems are of interest in scenarios where only hard copies of signatures are available, especially where a large number of documents need to be authenticated. This dissertation is inspired by, amongst other things, the potential financial benefits that the automatic clearing of cheques will have for the banking industry.
APA, Harvard, Vancouver, ISO, and other styles
33

Wegner, Alexander. "The construction of finite soluble factor groups of finitely presented groups and its application." Thesis, University of St Andrews, 1992. http://hdl.handle.net/10023/12600.

Full text
Abstract:
Computational group theory deals with the design, analysis and computer implementation of algorithms for solving computational problems involving groups, and with the applications of the programs produced to interesting questions in group theory, in other branches of mathematics, and in other areas of science. This thesis describes an implementation of a proposal for a Soluble Quotient Algorithm, i.e. a description of the algorithms used and a report on the findings of an empirical study of the behaviour of the programs, and gives an account of an application of the programs. The programs were used for the construction of soluble groups with interesting properties, e.g. for the construction of soluble groups of large derived length which seem to be candidates for groups having efficient presentations. New finite soluble groups of derived length six with trivial Schur multiplier and efficient presentations are described. The methods for finding efficient presentations proved to be only practicable for groups of moderate order. Therefore, for a given derived length soluble groups of small order are of interest. The minimal soluble groups of derived length less than or equal to six are classified.
APA, Harvard, Vancouver, ISO, and other styles
34

Ali, Shirook M., and Natalia K. Nikolova. "Efficient sensitivity analysis and optimization with full-wave EM solvers." *McMaster only, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
35

Boswell, Benny Edward, and Henrietta Gale Boswell. "The use of a computer assisted learning program for teaching and reinforcing the basic mathematical skills." CSUSB ScholarWorks, 1999. https://scholarworks.lib.csusb.edu/etd-project/1947.

Full text
Abstract:
The purpose of this project is to provide an instructional computer program that will be an alternative way to teach and reinforce basic mathematics skills for any student who is having difficulty in a given area and for students who are falling behind in the regular math class.
APA, Harvard, Vancouver, ISO, and other styles
36

Li, Yelei. "Heartbeat detection, classification and coupling analysis using Electrocardiography data." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1405084050.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Haskins, Michael Sean. "A hypercard stack on exploring single variable equations." CSUSB ScholarWorks, 1996. https://scholarworks.lib.csusb.edu/etd-project/1194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Mbangeni, Litha. "Development of methods for parallel computation of the solution of the problem for optimal control." Thesis, Cape Peninsula University of Technology, 2010. http://hdl.handle.net/20.500.11838/1110.

Full text
Abstract:
Thesis (MTech(Electrical Engineering))--Cape Peninsula University of Technology, 2010
Optimal control of fermentation processes is necessary for better behaviour of the process in order to achieve maximum production of product and biomass. The problem for optimal control is a very complex nonlinear, dynamic problem requiring a long time for calculation. Application of decomposition-coordinating methods to the solution of this type of problem simplifies the solution if it is implemented in a parallel way on a cluster of computers. Parallel computing can reduce the time of calculation tremendously through distribution and parallelization of the computation algorithm. These processes can be achieved in different ways using the characteristics of the problem for optimal control. Problems for optimal control of fed-batch, batch and continuous fermentation processes for production of biomass and product are formulated. The problems are based on a criterion of maximum production of biomass at the end of the fermentation for the fed-batch process, maximum production of metabolite at the end of the fermentation for the batch process, and minimum time for achieving steady-state fermentor behaviour for the continuous process, and on unstructured mass-balance biological models incorporating, in the kinetic coefficients, the physicochemical variables considered as control inputs. An augmented Lagrange functional is applied, and its decomposition in the time domain is used with a new coordinating vector. Parallel computing in a Matlab cluster is used to solve the above optimal control problems. The calculations and the allocation of tasks to the cluster workers are based on a shared-memory architecture. Real-time implementation of the calculation algorithms on a cluster of computers allows quicker and simpler solutions to the optimal control problems.
APA, Harvard, Vancouver, ISO, and other styles
39

Deragisch, Patricia Amelia. "Electronic portfolio for mathematical problem solving in the elementary school." CSUSB ScholarWorks, 1997. https://scholarworks.lib.csusb.edu/etd-project/1299.

Full text
Abstract:
Electronic portfolio for mathematical problem solving in the elementary school is an authentic assessment tool for teachers and students to use in evaluating mathematical skills. It is a computer-based interactive software program that allows teachers to easily access student work in the problem-solving area for assessment purposes, and to store multimedia work samples over time.
APA, Harvard, Vancouver, ISO, and other styles
40

Schulze, Walter. "A formal language theory approach to music generation." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/4157.

Full text
Abstract:
Thesis (MSc (Mathematical Sciences))-- University of Stellenbosch, 2010.
ENGLISH ABSTRACT: We investigate the suitability of applying some of the probabilistic and automata-theoretic ideas that have been extremely successful in the areas of speech and natural language processing to the area of musical style imitation. By using music written in a certain style as training data, parameters are calculated for (visible and hidden) Markov models (of mixed, higher or first order), in order to capture the musical style of the training data in terms of mathematical models. These models are then used to imitate two-instrument music in the trained style.
AFRIKAANSE OPSOMMING: Hierdie tesis ondersoek die toepasbaarheid van probabilitiese en outomaatteoretiese konsepte, wat uiters suksesvol toegepas word in die gebied van spraak en natuurlike taal-verwerking, op die gebied van musiekstyl nabootsing. Deur gebruik te maak van musiek wat geskryf is in ’n gegewe styl as aanleer data, word parameters vir (sigbare en onsigbare) Markov modelle (van gemengde, hoër- of eerste- orde) bereken, ten einde die musiekstyl van die data waarvan geleer is, in terme van wiskundige modelle te beskryf. Hierdie modelle word gebruik om musiek vir twee instrumente te genereer, wat die musiek waaruit geleer is, naboots.
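The train-then-generate loop the abstract describes can be illustrated with the simplest member of that model family, a first-order visible Markov model over a toy note sequence. This is a hedged sketch: the note names and functions are invented for illustration, and the thesis itself works with richer mixed- and higher-order, visible and hidden models.

```python
import random
from collections import defaultdict

def train(sequence):
    """Collect first-order transition counts: for each note, the
    list of notes that followed it in the training data."""
    model = defaultdict(list)
    for a, b in zip(sequence, sequence[1:]):
        model[a].append(b)
    return model

def generate(model, start, length, rng):
    """Sample a new sequence by repeatedly drawing a successor of
    the current note, weighted by its frequency in training."""
    out = [start]
    while len(out) < length:
        out.append(rng.choice(model[out[-1]]))
    return out

melody = ["C", "E", "G", "E", "C", "E", "G", "C"]
model = train(melody)
print(generate(model, "C", 8, random.Random(0)))
```

Storing successors as a list makes each `rng.choice` automatically frequency-weighted, so notes that followed a given note more often in the training data are imitated more often in the output.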
APA, Harvard, Vancouver, ISO, and other styles
41

Li, Ling Feng. "An image encryption system based on two-dimensional quantum random walks." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3950660.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Carle, Marlene Lovelace. "A Qualitative Study Describing the Relationship and Mediating Factors Between Junior High School Mathematics Achievement and Computer Expenditures." Thesis, North Texas State University, 1985. https://digital.library.unt.edu/ark:/67531/metadc332018/.

Full text
Abstract:
Using a case study approach, this investigation focused on the nature of the relationship between computer-related expenditures and student achievement in mathematics, with consideration given to the mediating factors influencing the relationship. Some of these factors included the types of computers and software being used, the objectives of computer instruction, teacher preparation in the use of the computer as an instructional tool, the amount of time individual students had access to a computer during the school year, and the socioeconomic status of pupils. Two of the twenty-five largest school districts in Texas were selected as the subjects for this study. Numerical data were collected from existing documents including general ledgers, bid tabulations, test score tables, and records showing the numbers of students participating in the free and reduced-price lunch programs. Specific information regarding the implementation of the instructional programs was gathered through observations and interviews with principals, teachers, and students in fourteen junior high schools in each of the two school districts. The districts exhibited more differences than similarities in their approaches to using computers for instruction in mathematics. One district, for about two hundred dollars per student, purchased a prepared, copyrighted, and patented program consisting of mini-computers and sixteen-terminal remote labs used exclusively for the remediation of students two or more years behind in achievement in mathematics. The other district purchased microcomputers at a cost of about ten dollars per student and introduced a three- to six-week unit on computer programming into the eighth grade mathematics curriculum. Although neither district demonstrated clear patterns of increased achievement, tendencies did emerge which would suggest some linkage between the concentration of the program and achievement. 
Other factors emerging from the forty-three taped interviews indicated that achievement test scores of students should not be the only measure of the worth of the computer-assisted instructional programs used in these school districts.
APA, Harvard, Vancouver, ISO, and other styles
43

Muller, Rikus. "A study of image compression techniques, with specific focus on weighted finite automata." Thesis, Link to the online version, 2005. http://hdl.handle.net/10019/1128.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Mesquita, Luis Clemente. "Structural optimization for control of stiffened laminated composite plates using nonlinear mixed integer programming." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/52309.

Full text
Abstract:
The effect of structural optimization on control of stiffened laminated composite structures is considered. The structural optimization considered here is the maximization of the structural frequencies subject to frequency separation constraints and an upper bound on weight. The number of plies with a given orientation and the stiffener areas form the two sets of design variables. As the number of plies is restricted to integer values, the optimization problem considered belongs to the class of nonlinear mixed integer problems (NMIP). Several efficiency measures are proposed to reduce the computational cost of solving the optimization problem. Savings in computer time due to each of the measures are discussed. The control problem is solved using the independent modal space control technique. This technique greatly simplifies the evaluation of the sensitivity of the performance index with respect to the individual frequencies. The effect of different optimization schemes on control performance is considered. To reduce the probability that conclusions drawn from numerical results are purely coincidental, a large number of cases has been studied. It has been concluded that sufficient improvement in control performance can be achieved through structural optimization.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
45

Ranganathan, Sushilee. "Automated and accurate description of protein structure -- from secondary to tertiary structure." Scholarly Commons, 2008. https://scholarlycommons.pacific.edu/uop_etds/707.

Full text
Abstract:
The automated protein structure analysis (APSA) has been developed, which describes protein structure via its backbone in a novel way. APSA generates a smooth line for the backbone which is completely described using curvature κ and torsion τ as functions of arc length s. Diagrams of κ(s) and τ(s) reveal conformational features as typical patterns. In this way ideal and natural helices (α, 310 and π) and β-strands (left- and right-handed, parallel and antiparallel) can be rapidly distinguished, their distortions classified, and a detailed picture of secondary structure developed. These foundations make it possible to qualitatively and quantitatively compare domain structure utilizing calculated κ(s) and τ(s) patterns of proteins. Focusing on the torsion diagrams alone, 16 regions of τ(s) values that correspond to unique groups of conformations have been identified and encoded into 16 letters. The entire protein backbone is described, effectively projecting its three-dimensional (3D) conformation into a one-dimensional (1D) string of letters called the primary code (3D-1D projection), which is APSA's conformational equivalent of a protein's primary structure. The secondary structure is obtained from specific patterns of the primary code (resulting in the secondary code). The letter code is used to describe supersecondary structure, which involves a unique characterization of the turn. It contains sufficient information to reconstruct the overall shape of a protein in an unambiguous 1D→3D translation step. Therefore, it is possible to classify supersecondary structure with the help of the letter code in the form of a novel labeling system (F.#.M.X.O.L.N.R.U.S) that collects information on the relative orientations of turns and flanking structures (helices, strands). The overall shape of supersecondary structure is obtained by partitioning the surrounding space into octants and cones and assigning the parts of a supersecondary structure to these subspaces via its labels. 
This approach can be easily extended to tertiary structure.
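The κ(s)/τ(s) description rests on the standard differential-geometric formulas κ = |r′×r″|/|r′|³ and τ = (r′×r″)·r‴/|r′×r″|², which can be checked numerically on any densely sampled 3D curve. The sketch below does this with simple finite differences on an ideal helix; it is an illustration of the underlying mathematics only and does not reproduce APSA's actual spline construction of the backbone.

```python
import numpy as np

def curvature_torsion(points, t):
    """Finite-difference curvature and torsion along a sampled 3D curve
    given as an (n, 3) array of points and an (n,) parameter array t."""
    d1 = np.gradient(points, t, axis=0)       # r'
    d2 = np.gradient(d1, t, axis=0)           # r''
    d3 = np.gradient(d2, t, axis=0)           # r'''
    cross = np.cross(d1, d2)                  # r' x r''
    num = np.linalg.norm(cross, axis=1)
    kappa = num / np.linalg.norm(d1, axis=1) ** 3
    tau = np.einsum('ij,ij->i', cross, d3) / num ** 2
    return kappa, tau

# sanity check on an ideal helix, where kappa = a/(a^2+b^2), tau = b/(a^2+b^2)
a, b = 1.0, 0.5
t = np.linspace(0, 4 * np.pi, 2000)
helix = np.column_stack([a * np.cos(t), a * np.sin(t), b * t])
k, tau = curvature_torsion(helix, t)
print(round(k[1000], 3), round(tau[1000], 3))  # ≈ 0.8 and 0.4 for this helix
```

A constant κ with constant positive τ is exactly the "typical pattern" an ideal right-handed helix leaves in the κ(s) and τ(s) diagrams, which is what lets helices and strands be distinguished at a glance.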
APA, Harvard, Vancouver, ISO, and other styles
46

Harper, Robert T. "Determination of the proton affinities of gas phase peptides by mass spectrometry and computational chemistry." Scholarly Commons, 2007. https://scholarlycommons.pacific.edu/uop_etds/673.

Full text
Abstract:
Helices in proteins have substantial permanent dipole moments arising from the nearly perfect alignment of the individual dipole moments of each peptide bond. Interaction with this helix "macrodipole" is thought to perturb the pKa values of basic or acidic residues at the helix termini. The goal of this project is to investigate the effect of the helix conformation on the proton affinities of basic amino acids placed at the N- or C-terminus of helical model peptides in the gas phase. Several series of model peptides having a basic residue, lysine (K) or 2,3-diaminopropionic acid (Dap), located at either terminus were synthesized by solid-phase peptide synthesis using conventional techniques or the amino acid fluoride approach. Proton affinities were determined for several basic amino acids and peptides using mass spectrometry by applying the extended Cooks' kinetic method. Favorable conformations and theoretical proton affinities were probed using computational chemistry. The proton affinities determined for N-alpha-acetyl-(L)-lysine, Ac-AK, Ac-KA, and Ac-KAA are 236.8 ± 1.9 kcal/mol, 249.4 ± 2.0 kcal/mol, 241.5 ± 1.9 kcal/mol, and 244.4 ± 2.0 kcal/mol respectively. The large negative entropy changes for each of the peptides upon protonation (-11.2 to -21.7 cal/(mol K)) are consistent with globular conformations adopted by the protonated peptides due to extensive intramolecular hydrogen bonding. The measured proton affinities of the peptides increased with the size of the peptide as expected. However, the measured proton affinity of the peptide with C-terminal lysine, Ac-AK, is substantially higher than that of the corresponding peptide with N-terminal lysine, Ac-KA, contrary to expectations. Proton affinities determined for these compounds using computational chemistry are in reasonable agreement with experimental results. 
Additionally, proton affinities calculated for helical polyalanine and Aib (alpha-aminoisobutyric acid) modified polyalanine peptides with C-terminal basic residues (Ac-AnK and Ac-(AibA)n-Dap) are much larger than the proton affinities calculated for the corresponding peptides with N-terminal basic residues. These results indicate that the helix dipole has a substantial effect on the basicity of residues at the helix termini.
APA, Harvard, Vancouver, ISO, and other styles
47

Grift, Werner. "Visualizing Qos in networks." Thesis, Stellenbosch : University of Stellenbosch, 2006. http://hdl.handle.net/10019.1/17356.

Full text
Abstract:
Thesis (MSc)--University of Stellenbosch, 2006.
ENGLISH ABSTRACT: Network simulations generate large volumes of data. This thesis presents an animated visualization system that utilizes the latest affordable computer graphics (CG) hardware to simplify the task of visualizing and analyzing these large volumes of data. The use of modern CG hardware allows us to create an interactive system which allows the user to interact with the data sets and extract the relevant data in real time. We also present an alternative approach to the network layout problem, using Self-Organizing Maps to find an aesthetic layout for a network, which is fundamental to a successful network visualization. We finally discuss the design and implementation of such a network visualization tool.
AFRIKAANSE OPSOMMING: Netwerk simulasies genereer groot volumes data. Hierdie tesis stel voor ’n geanimeerde visualiseringwat gebruik maak van die nuutste bekostigbare rekenaar grafika hardeware om die visualisering van groot volumes data te vergemaklik. Die gebruik van moderne rekenaar grafika hardeware stel ons in staat om sagteware te skep wat n gebruiker in staat stel om met die data te werk. Ons stel voor ’n alternatiewe benadering om die netwerk se uitleg daar te stel, met die hulp van tegnieke wat gebruik word in die studie van neurale netwerke. Ons bespreek dan die ontwerp en implementering van so ’n netwerk visualisering program.
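The Self-Organizing Map technique the abstract invokes can be illustrated with the basic Kohonen update on toy 2-D data. This is a generic SOM sketch, not the thesis's layout algorithm; the grid size, learning-rate and neighbourhood schedules below are arbitrary choices for the example.

```python
import numpy as np

def som_step(weights, x, lr, sigma):
    """One Kohonen update: find the best-matching unit (BMU) and pull
    it, and its grid neighbours weighted by a Gaussian of grid
    distance, toward the input sample x (in place)."""
    rows, cols, dim = weights.shape
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)])
    flat = weights.reshape(-1, dim)               # view: updates hit `weights`
    bmu = np.argmin(np.linalg.norm(flat - x, axis=1))
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)  # squared grid distance to BMU
    h = np.exp(-d2 / (2 * sigma ** 2))[:, None]   # neighbourhood function
    flat += lr * h * (x - flat)

rng = np.random.default_rng(0)
weights = rng.random((5, 5, 2))   # 5x5 map of 2-D codebook vectors
data = rng.random((500, 2))
for step, x in enumerate(data):   # decaying learning rate and radius
    frac = 1 - step / len(data)
    som_step(weights, x, lr=0.5 * frac, sigma=2.0 * frac)
```

Each update is a convex combination of the old codebook vector and the sample, so the map stays inside the data's bounding box while neighbouring units are dragged together; that smooth, topology-preserving spread is what makes SOMs attractive for aesthetic network layout.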
APA, Harvard, Vancouver, ISO, and other styles
48

Hong, Changwan. "Code Optimization on GPUs." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1557123832601533.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Luc, Françoise. "Contribution à l'étude du raisonnement et des stratégies dans le diagnostic de dépannage des systèmes réels et simulés." Doctoral thesis, Universite Libre de Bruxelles, 1991. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/212983.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

"Fast and robust methods for missing data recovery in image processing." 2005. http://library.cuhk.edu.hk/record=b5892408.

Full text
Abstract:
by Wong Yin Shung.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005.
Includes bibliographical references (leaves 62-64).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.7
Chapter 2 --- Fundamentals --- p.9
Chapter 2.1 --- Representation of a digital image --- p.9
Chapter 2.2 --- Salt-and-pepper --- p.10
Chapter 2.3 --- Resolution of a gray digital image --- p.11
Chapter 3 --- Filters --- p.14
Chapter 3.1 --- Median filter --- p.15
Chapter 3.2 --- Adaptive median filter --- p.15
Chapter 3.3 --- Multi-state median filter --- p.16
Chapter 3.4 --- Directional difference-based switching median filter --- p.18
Chapter 3.5 --- Improved switching median filters --- p.20
Chapter 3.6 --- Variational method --- p.21
Chapter 3.7 --- Two-phase method --- p.22
Chapter 4 --- New Two Phase Methods --- p.25
Chapter 4.1 --- Triangle-based interpolation --- p.25
Chapter 4.1.1 --- Delaunay triangulation --- p.26
Chapter 4.1.2 --- Linear interpolation --- p.28
Chapter 4.1.3 --- Cubic interpolation --- p.29
Chapter 4.2 --- Gradient estimation --- p.32
Chapter 4.3 --- Regularization method --- p.33
Chapter 4.3.1 --- Least square method with Laplacian regularization --- p.33
Chapter 4.3.2 --- Lagrange multipliers --- p.35
Chapter 4.4 --- Fast transform for finding the inverse of Laplacian matrix --- p.38
Chapter 5 --- Inpainting and Zooming --- p.39
Chapter 5.1 --- Inpainting --- p.39
Chapter 5.2 --- Zooming --- p.40
Chapter 5.2.1 --- Bilinear interpolation --- p.40
Chapter 5.2.2 --- Bicubic interpolation --- p.41
Chapter 6 --- Results --- p.46
Chapter 6.1 --- Results of denoising --- p.47
Chapter 6.2 --- Results of inpainting --- p.47
Chapter 6.3 --- Results of zooming --- p.48
Chapter 6.4 --- Conclusions --- p.51
APA, Harvard, Vancouver, ISO, and other styles