Dissertations / Theses on the topic '080106 Image Processing'




Consult the top 40 dissertations / theses for your research on the topic '080106 Image Processing.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Kerwin, Matthew. "Comparison of Traditional Image Segmentation Techniques and Geostatistical Threshold." Thesis, James Cook University, 2006. https://eprints.qut.edu.au/99764/1/kerwin-honours-thesis.pdf.

Full text
Abstract:
A general introduction to image segmentation is provided, including a detailed description of common classic techniques: Otsu's threshold, k-means and fuzzy c-means clustering; and suggestions of ways in which these techniques have subsequently been modified for special situations. Additionally, a relatively new approach is described, which attempts to address certain failings of the classic techniques by incorporating a spatial statistical analysis technique commonly used in geological studies. Results of the different segmentation techniques are computed for various images, then evaluated and compared, with deficiencies explained and improvements suggested.
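For readers comparing these classic techniques, Otsu's threshold is compact enough to sketch directly. The following is a minimal illustrative Python implementation of the standard method; the synthetic bimodal image and the 256-level histogram are assumptions for demonstration, not data from the thesis:

```python
import numpy as np

def otsu_threshold(gray, levels=256):
    """Return the grey level maximising between-class variance (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()                    # grey-level probabilities
    omega = np.cumsum(p)                     # background class probability
    mu = np.cumsum(p * np.arange(levels))    # cumulative first moment
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b2))       # NaNs occur only at the histogram ends

# Synthetic bimodal image: two Gaussian-distributed grey-level populations.
img = np.concatenate([np.random.normal(60, 10, 5000),
                      np.random.normal(180, 12, 5000)]).clip(0, 255).astype(np.uint8)
mask = img > otsu_threshold(img)             # foreground/background split
```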
2

Peynot, Thierry. "Selection et controle de modes de deplacement pour un robot mobile autonome en environnements naturels." Thesis, Institut National Polytechnique de Toulouse, 2006. http://ethesis.inp-toulouse.fr/archive/00000395/.

Full text
Abstract:
Autonomous navigation and locomotion of a mobile robot in natural environments remain a rather open issue. Several functionalities are required to complete the usual perception/decision/action cycle. They can be divided into two main categories: navigation (perception and decision about the movement) and locomotion (movement execution). In order to face the large range of possible situations in natural environments, it is essential to make use of various complementary functionalities, defining various navigation and locomotion modes. Indeed, a number of navigation and locomotion approaches have been proposed in the literature in recent years, but none can claim to achieve autonomous navigation and locomotion in every situation. Thus, it seems relevant to endow an outdoor mobile robot with several complementary navigation and locomotion modes. Accordingly, the robot must also have the means to select the most appropriate mode to apply. This thesis proposes the development of such a navigation/locomotion mode selection system, based on two types of data: an observation of the context, to determine in what kind of situation the robot has to achieve its movement, and an evaluation of the behavior of the current mode, made by monitors which influence the transitions towards other modes when the behavior of the current one is considered unsatisfactory. Hence, this document introduces a probabilistic framework for the estimation of the mode to be applied, the navigation and locomotion modes used, a qualitative terrain representation method (based on the evaluation of a difficulty computed from the placement of the robot's structure on a digital elevation map), and monitors that check the behavior of the modes used (evaluation of rolling locomotion efficiency, monitoring of the robot's attitude and configuration, etc.). Experimental results obtained with these elements integrated on board two different outdoor robots are presented and discussed.
3

Leitner, Jürgen. "From vision to actions: Towards adaptive and autonomous humanoid robots." Thesis, Università della Svizzera Italiana, 2014. https://eprints.qut.edu.au/90178/2/2014INFO020.pdf.

Full text
Abstract:
Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together, helping and coexisting with humans in daily life. In all of these, a clear need arises to deal with a more unstructured, changing environment. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of the research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, a combined integration of computer vision and machine learning techniques is employed. From a robot vision point of view, combining domain knowledge from both image processing and machine learning techniques can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in the incoming camera streams and has been successfully demonstrated in many different problem domains. The approach requires only a few training images (it was tested with 5 to 10 images per experiment) and is fast, scalable and robust. Additionally, it can generate human-readable programs that can be further customized and tuned. While CGP-IP is a supervised-learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proof-of-concepts that integrate the motion and action sides. First, reactive reaching and grasping is shown. It allows the robot to avoid obstacles detected in the visual stream while reaching for the intended target object. Furthermore, the integration enables us to use the robot in non-static environments, i.e. the reaching is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving visual detection through object manipulation actions.
4

Chen, Fang. "Facial Feature Point Detection." Thesis, 2011. http://hdl.handle.net/1807/30546.

Full text
Abstract:
Facial feature point detection is a key issue in facial image processing. One main challenge of facial feature point detection is the variation of facial structures due to expressions. This thesis aims to explore more accurate and robust facial feature point detection algorithms, which can facilitate research on facial image processing, in particular facial expression analysis. This thesis introduces a facial feature point detection system in which Multilinear Principal Component Analysis is applied to extract highly descriptive features of facial feature points. In addition, to improve the accuracy and efficiency of the system, a skin-colour based face detection algorithm is studied. The experimental results indicate that this system is effective in detecting 20 facial feature points in frontal faces with different expressions. The system has also achieved higher accuracy in a comparison with the state-of-the-art method, BoRMaN.
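The abstract does not specify the colour space or thresholds used for skin detection, but a common literature formulation uses a fixed box in the YCbCr plane. A minimal sketch under that assumption; the Cb/Cr bounds below are widely cited defaults, not values taken from the thesis:

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Rough skin-pixel mask using a common YCbCr box rule.
    Thresholds (Cb in [77,127], Cr in [133,173]) are a standard
    literature choice, not the thesis' calibrated values."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # BT.601 RGB -> Cb/Cr conversion.
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```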
5

Lenc, Emil. "Digital image transformation and compression." Thesis, 1996. https://vuir.vu.edu.au/17915/.

Full text
Abstract:
Compression algorithms have tended to cater only for high compression ratios at reasonable levels of quality. Little work has been done to find optimal compression methods for high quality images where the absence of visual distortion is essential. The need for such algorithms is great, particularly for satellite, medical and motion picture imaging. In these situations any degradation in image quality is unacceptable, yet the resolutions of the images incur extremely high storage costs. Hence the need for a very low distortion image compression algorithm. An algorithm is developed to find a suitable compromise between hardware and software implementation. The hardware provides raw processing speed whereas the software provides algorithm flexibility. The algorithm is also optimised for the compression of high quality images with no visible distortion in the reconstructed image. The final algorithm consists of a Discrete Cosine Transform (DCT), quantiser, run-length coder and a statistical coder. The DCT is performed in hardware using the SGS-Thomson STV3200 Discrete Cosine Transform. The quantiser is specially optimised for use with high quality images. It utilises a non-uniform quantiser and is based on a series of lookup tables to increase the rate of computation. The run-length coder is also optimised for the characteristics exhibited by high quality images. The statistical coder is an adaptive version of the Huffman coder. The coder is fast and efficient, and produces results comparable to the much slower arithmetic coder. Test results of the new compression algorithm are compared with those using both the lossy and lossless Joint Photographic Experts Group (JPEG) techniques. The lossy JPEG algorithm is based on the DCT whereas the lossless algorithm is based on Differential Pulse Code Modulation (DPCM). The comparison shows that for most high quality images the new algorithm achieves greater compression than the two standard methods. It is also shown that, if execution speed is not critical, the final result can be improved further by using an arithmetic statistical coder rather than the Huffman coder.
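The transform and quantisation stages of such a pipeline are easy to illustrate. Below is a sketch of an orthonormal 8x8 DCT-II with a uniform quantisation step; note that the thesis performs the DCT in hardware (the STV3200) and uses a non-uniform, lookup-table quantiser, so both the software transform and the constant step size here are simplifying assumptions:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, as used on 8x8 image blocks."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)               # DC row normalisation
    return c

D = dct_matrix(8)
block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # one level-shifted block
coeffs = D @ block @ D.T                      # forward 2-D DCT
q = 16                                        # illustrative uniform step only
quantised = np.round(coeffs / q)
recon = D.T @ (quantised * q) @ D             # dequantise, then inverse DCT
```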
6

Ganin, Iaroslav. "Natural image processing and synthesis using deep learning." Thesis, 2019. http://hdl.handle.net/1866/23437.

Full text
Abstract:
In the present thesis, we study how deep neural networks can be applied to various tasks in computer vision. Computer vision is an interdisciplinary field that deals with the understanding of digital images and video. Traditionally, the problems arising in this domain were tackled using heavily hand-engineered ad-hoc methods. A typical computer vision system up until recently consisted of a sequence of independent modules which barely talked to each other. Such an approach is quite reasonable in the case of limited data, as it takes major advantage of the researcher's domain expertise. This strength turns into a weakness if some of the input scenarios are overlooked in the algorithm design process. With the rapidly increasing volumes and varieties of data and the advent of cheaper and faster computational resources, end-to-end deep neural networks have become an appealing alternative to the traditional computer vision pipelines. We demonstrate this in a series of research articles, each of which considers a particular task of either image analysis or synthesis and presents a solution based on a "deep" backbone. In the first article, we deal with a classic low-level vision problem of edge detection. Inspired by a top-performing non-neural approach, we take a step towards building an end-to-end system by combining feature extraction and description in a single convolutional network. The resulting fully data-driven method matches or surpasses the detection quality of the existing conventional approaches in the settings for which they were designed, while being significantly more usable in out-of-domain situations. In our second article, we introduce a custom architecture for image manipulation based on the idea that most of the pixels in the output image can be directly copied from the input. This technique bears several significant advantages over the naive black-box neural approach. It retains the level of detail of the original images, does not introduce artifacts due to insufficient capacity of the underlying neural network, and simplifies the training process, to name a few. We demonstrate the efficiency of the proposed architecture on the challenging gaze correction task, where our system achieves excellent results. In the third article, we slightly diverge from pure computer vision and study a more general problem of domain adaptation. There, we introduce a novel training-time algorithm (i.e., adaptation is attained by using an auxiliary objective in addition to the main one). We seek to extract features that maximally confuse a dedicated network called the domain classifier while being useful for the task at hand. The domain classifier is learned simultaneously with the features and attempts to tell whether those features are coming from the source or the target domain. The proposed technique is easy to implement, yet results in superior performance on all the standard benchmarks. Finally, the fourth article presents a new kind of generative model for image data. Unlike conventional neural network based approaches, our system, dubbed SPIRAL, describes images in terms of concise low-level programs executed by off-the-shelf rendering software used by humans to create visual content. Among other things, this allows SPIRAL not to waste its capacity on the minutiae of datasets and to focus more on the global structure. The latent space of our model is easily interpretable by design and provides means for predictable image manipulation. We test our approach on several popular datasets and demonstrate its power and flexibility.
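The domain-adaptation scheme in the third article (features optimised to fool a jointly trained domain classifier) is usually implemented with a gradient reversal layer. A minimal sketch in PyTorch, assuming PyTorch is available; the abstract itself does not prescribe a framework:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies gradients by -lambda on the
    backward pass, so the feature extractor learns to confuse the domain
    classifier while the classifier itself is trained normally."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage sketch: features -> task head (normal loss), and
# features -> grad_reverse -> domain head (domain loss).
```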
7

Tse, Kwok Chung. "Efficient storage and retrieval methods for multimedia information." Thesis, 1999. https://vuir.vu.edu.au/15370/.

Full text
Abstract:
Input/output performance has always been the bottleneck of computer systems, and multimedia applications have intensified the problem. Hierarchical storage systems provide extensive storage capacity for multimedia data at very economical cost, but the long access latency of tertiary storage devices makes them unattractive for multimedia systems. In this thesis, we present new storage and retrieval methods to handle multimedia data on hierarchical storage systems efficiently. First, we create a novel hierarchical storage organization to increase the storage system throughput. Second, we enhance the data migration method to reduce the multimedia stream response time. Third, we design a new bandwidth-based placement method to store heterogeneous objects. Fourth, we demonstrate that disk performance is significantly enhanced using constant density recording disks. We have quantitatively analysed and compared the performance of magnetic disks and hierarchical storage systems in serving multimedia streams of requests. We have also carried out empirical studies which confirm our findings. Our new storage and retrieval methods offer significant advantages and flexibility over existing methods.
8

Shen, Zhenliang. "Colour differentiation in digital images." Thesis, 2003. https://vuir.vu.edu.au/15529/.

Full text
Abstract:
To measure the quality of green vegetables in digital images, the colour appearance of the vegetable is one of the main factors. Empirically, green represents good quality and yellow represents poor quality for green vegetables. Colour appearance is mainly determined by hue; however, brightness and saturation affect the colour appearance under certain conditions. To measure the colour difference between green and yellow, a series of experiments was designed to measure the colour difference under varying conditions. Five people were asked to measure the colour differences in different experiments. First, colour differences are measured while two of the values of hue, brightness, and saturation are kept constant. Then, the previous results are applied to measure the colour difference while one of the values of hue, brightness, and saturation is kept constant. Lastly, we develop a colour difference model from the different values of hue, brightness, and saturation. This colour difference model classifies the colours between green and yellow. A Windows application is designed to measure the quality of leafy vegetables using the colour difference model. The colours of such vegetables are classified to represent different qualities. The measurement by computer analysis conforms to that produced by human inspection.
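As a rough illustration of a hue-based green/yellow decision with brightness and saturation gating, consider the sketch below; the 90-degree hue cut and the saturation/brightness floors are illustrative assumptions, not the fitted boundaries of the thesis' colour difference model:

```python
import colorsys

def leaf_quality(r, g, b, s_min=0.2, v_min=0.2):
    """Classify a pixel toward 'green' (good) or 'yellow' (poor) by hue.
    Pure yellow sits near 60 deg and pure green near 120 deg in HSV;
    the 90 deg midpoint used here is an illustrative cut only."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if s < s_min or v < v_min:
        return 'indeterminate'   # hue is unreliable at low saturation/brightness
    hue_deg = h * 360.0
    return 'green' if hue_deg >= 90.0 else 'yellow'

print(leaf_quality(80, 200, 60))   # clearly green pixel -> 'green'
print(leaf_quality(210, 200, 40))  # yellowish pixel -> 'yellow'
```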
9

Yan, Shuo. "Adaptive Image Quality Improvement with Bayesian Classification for In-line Monitoring." Thesis, 2008. http://hdl.handle.net/1807/11279.

Full text
Abstract:
Development of an automated method for classifying digital images using a combination of image quality modification and Bayesian classification is the subject of this thesis. The specific example is classification of images obtained by monitoring molten plastic in an extruder. These images were to be classified into two groups: the “with particle” (WP) group, which showed contaminant particles, and the “without particle” (WO) group, which did not. Previous work performed the classification using only an adaptive Bayesian model. This work combines adaptive image quality modification with the adaptive Bayesian model. The first objective was to develop an off-line automated method for determining how to modify each individual raw image to obtain the quality required for improved classification results. This was done in a novel way by defining image quality in terms of probability using a Bayesian classification model. The Nelder-Mead simplex method was then used to optimize the quality. The result was a “Reference Image Database”, which was used as a basis for accomplishing the second objective. The second objective was to develop an in-line method for modifying the quality of new images to improve classification over that which could be obtained previously. Case-Based Reasoning used the Reference Image Database to locate reference images similar to each new image. The database supplied instructions on how to modify the new image to obtain a better quality image. Experimental verification of the method used a variety of images from the extruder monitor, including images purposefully produced to be of wide diversity. Image quality modification was made adaptive by adding new images to the Reference Image Database. When combined with the adaptive classification previously employed, error rates decreased from about 10% to less than 1% for most images. For one unusually difficult set of images, which exhibited very low local contrast of particles against their background, it was necessary to split the Reference Image Database into two parts on the basis of a critical value for local contrast. The end result of this work is a very powerful, flexible and general method for improving classification of digital images that utilizes both image quality modification and classification modeling.
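The optimisation step can be sketched with SciPy's Nelder-Mead implementation: treat the image-modification parameters as the search variables and the negative Bayesian class probability as the objective. The brightness/contrast parameterisation and the `bayes_prob` callable below are hypothetical stand-ins for the thesis' model, shown only to make the idea concrete:

```python
import numpy as np
from scipy.optimize import minimize

def quality_objective(params, image, bayes_prob):
    """Negative posterior probability of the correct class after applying
    a brightness/contrast adjustment. Both the parameterisation and
    bayes_prob are illustrative assumptions, not the thesis' definitions."""
    brightness, contrast = params
    adjusted = np.clip(contrast * image + brightness, 0, 255)
    return -bayes_prob(adjusted)

# Usage sketch (raw_image and bayes_prob are hypothetical):
# result = minimize(quality_objective, x0=[0.0, 1.0],
#                   args=(raw_image, bayes_prob), method='Nelder-Mead')
```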
10

Azzam, Ibrahim Ahmed Aref. "Implicit Concept-based Image Indexing and Retrieval for Visual Information Systems." Thesis, 2006. https://vuir.vu.edu.au/479/.

Full text
Abstract:
This thesis focuses on Implicit Concept-based Image Indexing and Retrieval (ICIIR) and the development of a novel method for the indexing and retrieval of images. Image indexing and retrieval using a concept-based approach involves extraction, modelling and indexing of image content information. Computer vision offers a variety of techniques for searching images in large collections. We propose a method which involves the development of techniques to enable components of an image to be categorised on the basis of their relative importance within the image, in combination with filtered representations. Our method concentrates on matching subparts of images, defined in a variety of ways, in order to find particular objects. The storage of images involves an implicit, rather than an explicit, indexing scheme. Retrieval of images is then achieved by applying an algorithm based on this categorisation, which allows relevant images to be identified and retrieved accurately and efficiently. We focus on Implicit Concept-based Image Indexing and Retrieval, using fuzzy expert systems, density measures, supporting factors, weights and other attributes of image components to identify and retrieve images.
11

Pandey, Dinesh. "Multidimensional medical image analysis with automatic segmentation techniques." Thesis, 2019. https://vuir.vu.edu.au/40059/.

Full text
Abstract:
The advancement of medical imaging techniques such as fundus photography and breast magnetic resonance imaging (MRI) has shown tremendous improvement in the quality of the multidimensional images produced. Image segmentation technology is used to partition a medical image into different regions for accurate identification and segregation of the diseased area, making the medical image a vital entity for diagnosing several pathological conditions. However, these medical images have problems such as: 1. lack of inherent spatial resolution; 2. different forms of noise; 3. boundaries with similar colour intensity; and 4. non-uniform illumination across the image and other imaging ambiguities. In many clinical studies, the segmentation process can be carried out either manually or automatically. Manual segmentation for the identification of several landmarks in medical images has been popular, but is time consuming, tedious, error prone and observer-dependent. On the other hand, automatic segmentation techniques are highly desirable because of their robustness, improved efficiency, reliability and faster computation. Therefore, the development of automatic segmentation techniques for medical images has become an integral part of medical diagnosis systems that yield practical insight. However, achieving a desirable result from automatic segmentation is still challenging, because image features vary from case to case, even when produced with the same imaging technique. The broad aim of this thesis is to identify robust and automatic segmentation techniques that overcome the issues seen in medical images and hence can assist doctors in the evaluation and detection of several pathologies. The objective is fulfilled by developing automatic segmentation algorithms and providing solutions to the challenges associated with two different imaging modalities: fundus photography (2D) and breast MRI (3D). The result is a series of work covering problem identification, analysis and a desirable solution with qualitative and quantitative validation. Specifically, we have strengthened the state-of-the-art by making the following novel contributions: 1. The analysis of retinal blood vessels is crucial for finding several pathological disorders that manifest through the human eye. Therefore, blood vessel segmentation in fundus photography has great importance in medical image analysis. From the experiments, we observed that retinal images with lesions, exudates, non-uniform illumination and pathological artefacts have intrinsic problems such as the absence of thin vessels and the detection of false vessels. In our work, we developed an automatic blood vessel segmentation framework which is effective in analysing retinal blood vessels in noisy, pathological and abnormal retinal images. Initially, the noise is minimized with an image subtraction technique using morphological operations. Then, we investigated thin and thick blood vessels separately. Thin vessels are detected using local phase-preserving denoising, line detection, local normalization, and maximum entropy thresholding. Local phase-preserving denoising removes the additional noise while preserving the phase (detail) information of the image. Thick vessels are segmented using maximum entropy thresholding.
The performance of the proposed method is evaluated on four popular databases (DRIVE, STARE, CHASE_DB1, HRF). The results show that the proposed segmentation method is automatic, accurate and computationally efficient, and superior when compared with other methods in the state of the art. 2. Automatic optic disc (OD) segmentation is a challenging task for images under the influence of noise, uneven illumination and pathologies. The development of OD segmentation remains challenging for several reasons: 1) ophthalmic pathologies cause changes in the colour, shape or depth of the OD; 2) retinal pathologies (exudates, lesions) sometimes possess similar properties, causing false identification of the OD; 3) factors such as illumination and contrast irregularities, boundary artefacts and blurred image edges make segmentation complicated and require pixel-by-pixel analysis; 4) the texture features of the OD vary across images, adding further challenges and thus requiring a pre-processing step prior to segmentation; and 5) if dense vessels surround the OD, identification of the OD boundary becomes difficult. To solve the above-mentioned challenges, a new method for the accurate localization and detection of the optic disc is developed. The process utilizes k-means clustering over foreground and background estimated images to obtain the brightest cluster. The obtained results are merged together to estimate the OD centre. The OD boundary is then estimated using the circular Hough transform (CHT) with the radius and centre obtained in the initial step. A boundary estimate is also obtained from a superpixel method. Finally, the OD boundary pixels are identified with a geometrical model over the edge information obtained from the superpixels and the CHT. Experiments carried out on seven publicly available databases verify the efficiency of the proposed method, and its outstanding results compared with other methods in the current state of the art prove its superiority. 3. A novel and accurate segmentation method for the breast region of interest (BROI) and breast density (BD) in breast MRI is proposed. The precise segmentation of BROI and BD is challenging, especially in noisy magnetic resonance images, due to similar intensity levels and the closely connected boundaries between the BROI and other anatomical structures such as the heart, lung and pectoral muscle. The segmentation of the BROI is carried out in three major steps. Initially, we utilize adaptive Wiener filtering and k-means clustering to denoise the image while preserving edges and removing unwanted artefacts. Then, active contour based level sets are used to eliminate the heart area from the denoised image. Initial contour points for the active contour method are determined by maximum entropy thresholding and convolution. Finally, the pectoral muscle is removed to obtain the BROI segmentation by using morphological operations and local adaptive thresholding. The segmentation of BD is obtained with a 4-level fuzzy c-means (FCM) thresholding method on the image resulting from the BROI segmentation. The validation of the proposed methods is performed using 1350 breast images from 15 female subjects. The obtained results show that the proposed method is automatic, fast and efficient. 4.
The segmentation of breast lesions in breast MRI is considered an important and challenging task in medical image analysis. Noise, the intensity similarity between lesions and other tissues, and the variable shape and size of lesions are the primary challenges during lesion segmentation. Hence, a framework for the accurate segmentation of breast lesions from DCE-MRI images is proposed. The framework is built on max-flow and min-cut problems in the continuous domain over the denoised image. The proposed method proceeds in three steps. Firstly, in the pre-processing step, the post-contrast and pre-contrast images are subtracted, followed by image registration, which benefits the process by enhancing the tumour area. Secondly, phase-preserving denoising and pixel-wise adaptive Wiener filtering are applied, followed by max-flow and min-cut in the continuous domain. The denoising mechanism clears noise in the image while preserving useful and detailed features such as edges. Tumour detection is then performed using continuous max-flow. Finally, a morphological operation is used as a post-processing step to further delineate the obtained results. The efficiency of the proposed method is verified in a series of qualitative and quantitative experiments carried out on 21 cases with two different MR image resolutions. The obtained results, when compared with manually segmented results, demonstrate the quality of the segmentation achieved by the proposed method. The segmentation experiments for all four proposed algorithms were performed in Matlab R2013b running on an Intel(R) Core(TM) i5-4570S CPU @ 2.90 GHz with 8 GB of RAM. To test the performance of the proposed algorithms, both public and private datasets with manually drawn ground truth images were used. Moreover, qualitative and quantitative measurements were used to verify the robustness of the proposed algorithms, and the results were compared with the recent state of the art, demonstrating the enhanced performance and advancement of the proposed methods. Overall, the results show that the proposed algorithms are automatic, accurate and computationally efficient.
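Maximum entropy thresholding, used above for both thin- and thick-vessel detection, follows Kapur's formulation: choose the cut that maximises the combined entropy of the foreground and background histograms. A minimal sketch of the standard method (the 256-level histogram is an assumption):

```python
import numpy as np

def max_entropy_threshold(gray, levels=256):
    """Kapur's maximum-entropy threshold: pick the cut maximising the
    summed entropies of the two normalised sub-histograms."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, levels):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue                          # empty class: skip this cut
        p0, p1 = p[:t] / w0, p[t:] / w1       # class-conditional distributions
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t
```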
12

Liang, Sisi. "Modelling of diffusion-weighted MRI signals in non-neural tissue." Thesis, 2017. https://vuir.vu.edu.au/33749/.

Full text
Abstract:
The general aim of clinical diffusion-weighted MRI (DWI) is the inference of tissue structure properties, particularly pathology, from measurements of diffusion attenuation under conditions of varying diffusion times and b-values. Models of water diffusion in tissue have been proposed to serve this purpose. Diffusion models can be broadly split into two types, phenomenological and structural. Phenomenological models aim to provide reliable mathematical descriptions of DWI signals, but biophysical interpretation of their model parameters is limited. The recent trend is towards compartment models that are based on assumptions about tissue geometry. Compartment models have proven successful in brain imaging, where they predict the diffusion signal more accurately and provide estimates of specific neural tissue features, such as fiber orientation distribution and axon diameter. However, compartment models are generally lacking for non-neural tissue. This thesis investigates compartment models of diffusion in four types of non-neural tissue (prostate, breast, spheroids and lymph nodes).
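For orientation, the phenomenological baseline such work builds on is the mono-exponential (apparent diffusion coefficient) decay, while a generic two-compartment model sums exponentials weighted by volume fractions. These are the standard textbook forms, not the specific models fitted in the thesis:

```latex
% Mono-exponential (ADC) signal decay:
S(b) = S_0 \, e^{-b \cdot \mathrm{ADC}}

% Generic two-compartment model with volume fraction f:
S(b) = S_0 \left[ f \, e^{-b D_1} + (1 - f) \, e^{-b D_2} \right]
```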
13

Cui, Xiao. "Social Network Analysis Based on a Hierarchy of Communities." Thesis, 2016. https://vuir.vu.edu.au/31048/.

Full text
Abstract:
With the rapid growth of users in Social Networking Services (SNSs), thousands of terabytes of data are generated every day. This data contains a wealth of hidden information and patterns. The analysis of such data is not a trivial task, and a great deal of effort has been put into it. Analysing users' behaviour in social networks can help researchers better understand what happens in the real world and create huge commercial value for the social networks themselves.
14

Subramani, Sudha. "Extracting Actionable Knowledge from Domestic Violence Discourse on Social Media." Thesis, 2019. https://vuir.vu.edu.au/39603/.

Full text
Abstract:
Respect for human rights is the cornerstone of strong communities, based on the principles of dignity, equality, and recognition of the inherent value of each individual. Domestic Violence, ranging from physical abuse to emotional manipulation, is considered worldwide a violation of the elementary rights to which all human beings are entitled. As one might expect, the consequences for its victims are often severe, far-reaching and long-lasting, causing major health, welfare and economic burdens. Domestic Violence is also one of the most prevalent forms of violence and, due to the social stigma surrounding the issue, particularly challenging to address. With the emergence and expansion of Social Media, a substantial shift in support-seeking and support-provision patterns has been observed. The initial barriers to approaching healthcare professionals, i.e. personal reservations or safety concerns, have been effectively addressed by virtual environments. Social Media platforms have quickly become crucial networking hubs for violence survivors as well as at-risk individuals to share experiences, raise concerns, offer advice, or express sympathy. As a result, specialized support services groups have been established with the aim of pro-active reach-out to potential victims in time-critical situations. Given the high volume, high velocity and high variety of Social Media data, manual evaluation of posts has not only become inefficient, but also unfeasible in the long term. Conventional automated approaches reliant on pre-defined lexicons and hand-crafted feature engineering proved limited in their classification performance when exposed to the challenging nature of Social Media discourse. At the same time, Deep Learning, the state-of-the-art sub-field of Machine Learning, has shown remarkable results on text classification tasks. Given its relative recency and algorithmic complexity, the implementation of Deep Learning-based models has been vastly under-utilised in practical applications. In particular, no prior work has addressed the problem of fine-grained user-generated content classification with Deep Learning in the Domestic Violence domain. The study introduces a novel three-part framework aimed at (i) binary detection of critical situations; (ii) multi-class content categorization; and (iii) extraction of Abuse Types and Health Issues from Social Media discourse. The classification performance of state-of-the-art models is improved through the development of domain-specific word embeddings, capable of capturing precise relationships between words. The prevalent patterns of abuse and the associated health conditions are efficiently extracted to shed light on the scale and severity of violence as reported by directly affected individuals. The proposed approach marks a step forward towards effective prevention and mitigation of violence within society.
15

He, Jinyuan. "Automated Heart Arrhythmia Detection from Electrocardiographic Data." Thesis, 2020. https://vuir.vu.edu.au/41284/.

Full text
Abstract:
Heart arrhythmia is a severe heart problem, which threatens people's lives by preventing their hearts from pumping enough blood into vital organs. Arrhythmia has been a major worldwide health problem for years, accounting for nearly 12% of global deaths every year. Research on automated heartbeat classification is in high demand, as it provides cost-effective screening for heart arrhythmia and allows at-risk patients to receive timely treatment. To construct an effective automated heartbeat classification model from ECG recordings for arrhythmia detection, several key challenges must be addressed, including data quality, heartbeat segmentation range, the data imbalance problem, intra- and inter-patient variations, identification of supraventricular ectopic heartbeats among normal heartbeats, and model interpretability. This thesis comprehensively discusses these challenges and proposes four practical models that progressively tackle the heartbeat classification task. Specifically, in Chapter 3, a model named D-ECG is proposed to solve the problems suffered by previous methods, which applied a standalone classifier and used a static feature set to classify all heartbeat types. D-ECG introduces dynamic ensemble selection techniques to heartbeat classification for the first time and incorporates a result regulator to improve disease heartbeat detection performance. Although dynamic ensemble selection introduced visible improvements in the heartbeat classification task, it also brought some disadvantages. The dynamic selection nature, which determines the best classifiers according to the sample to be predicted, can delay the model prediction, making the model less practical in online detection scenarios. In Chapter 4, the author proposes a novel pyramid-like model to tackle this problem. The model adopts a dual-channel classification strategy and customizes a binary classification algorithm that takes neighbour-related information into account to assist disease heartbeat detection. Compared to the D-ECG framework, the pyramid-like model provides a more timely response to an unknown heartbeat while maintaining classification performance comparable to D-ECG, and it has the potential to be applied in online detection scenarios. In Chapter 5, the author examines the recent advances brought by deep neural networks and proposes a DNN-based solution named Multi-channels Convolution Neural Network (MCHCNN) to solve the problems of current deep-learning based heartbeat classification models. As an improvement, the proposed network accepts raw ECG heartbeats and heart rhythm (RR-intervals) as inputs and uses convolution filters of different sizes in parallel to capture temporal and frequency patterns from ECG signals. The experimental results show visible improvements brought by MCHCNN. However, there is still a long way to go before MCHCNN can make a practical impact, because its performance on S-type heartbeat detection is still relatively low. To tackle this problem, the author investigates the potential causes and proposes an advanced two-step DNN-based classification framework in Chapter 6. Due to the observed difficulty of distinguishing S-type heartbeats from N-type heartbeats, the proposed framework trains, in the first step, a deep dual-channel convolutional neural network (DDCNN), which accepts segmented heartbeats as input, to classify V-type, F-type and Q-type heartbeats.
At this stage, S-type and N-type heartbeats are not the targets, so they are put into one bundle to be studied in the next step. In the second step, a central-towards LSTM supportive model (CLSM) is specially designed to distinguish S-type heartbeats from N-type ones. The RR-intervals of a heartbeat and its neighbours are arranged in sequence form, serving as the input to the CLSM. In particular, the CLSM learns and extracts hidden temporal dependencies between heartbeats by processing the input RR-interval sequence in central-towards directions. Instead of using raw individual RR-intervals, this abstractive, mutually connected temporal information provides stronger and more stable support for identifying the problematic S-type heartbeats. In addition, as an improvement as well as a necessary driver for activating the CLSM, a rule-based data augmentation method is proposed to supply high-quality synthetic samples for the under-represented S-type RR-interval sequences. Extensive experiments are conducted to provide a comprehensive evaluation of each proposed model. The results prove that the research on heartbeat classification presented in this thesis brings practical ideas and solutions to the arrhythmia detection problem.
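The CLSM's input can be pictured as the RR-intervals surrounding a target beat, ordered so the sequence is processed toward the centre. A sketch of building such a context from R-peak times; the window size and exact ordering here are illustrative guesses, not the thesis' specification:

```python
import numpy as np

def rr_context(r_peaks, i, k=4):
    """Build an RR-interval context for heartbeat i: up to k intervals
    before and k after, with the trailing side reversed so both halves
    run toward the centre (an illustrative reading of 'central-towards')."""
    rr = np.diff(r_peaks)                  # RR-intervals from R-peak times
    left = rr[max(0, i - k):i]             # preceding intervals, in order
    right = rr[i:i + k][::-1]              # following intervals, reversed
    return left, right                     # one sequence per LSTM direction
```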
16

Petukhov, Boris. "Operational scheduling with business modelling and genetic algorithms." Thesis, 2020. https://vuir.vu.edu.au/42038/.

Full text
Abstract:
Development and maintenance of effective schedules is paramount to the overall success of project management. Scheduling in complex and large problem domains is resource consuming and challenging, and becomes especially difficult when project conditions change frequently within relatively short periods. This research contributes to knowledge in the program management area by putting forward a new approach that entails automation and optimisation of operational scheduling, enabling organisations to run their workstreams in a controlled and predictable fashion and to achieve the desired outcomes within expected timeframes and resource constraints. The approach combines theoretical knowledge, a technology-based scheduling implementation and genetic algorithm optimisation in a single framework to generate optimised schedules. It entailed the development of a new planning and scheduling method based on business modelling and genetic algorithms. This new method, called Operational Scheduling with Business Modelling and Genetic Algorithms, has been recognised with the award of an Australian Standard Patent, and offers an integrated operational scheduling approach that allows its users to follow a clear path and address their day-to-day problems at the level of complexity required. The method allows for artificial intelligence implementations based on genetic algorithms, which evolve the initially proposed scheduling solutions into the optimal schedules that can be generated for given problem scenarios. The method starts from essential planning and scheduling, where relatively simple scheduling is performed, and then moves into domain-specific scheduling, which requires unrestricted, customised and complex implementations. In doing so, it constructs business models of the problem domain, identifies hard and soft constraints, implements automatic scheduling procedures to generate initial schedule samples, and performs genetic algorithm crossover, mutation and fitness evaluation to produce optimal scheduling solutions. The method was applied in a number of case studies, where the optimisation was found to deliver efficiency gains of between 8 and 20 per cent of total operational costs, which in some cases resulted in significant monetary savings.
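The genetic-algorithm core (selection, crossover, mutation, fitness evaluation) can be sketched generically. The loop below is a minimal illustration; the schedule encoding, operators and fitness function of the patented method are necessarily more elaborate:

```python
import random

def evolve(population, fitness, generations=200, cx_rate=0.8, mut_rate=0.1):
    """Generic GA loop over fixed-length list chromosomes. The encoding
    and fitness function are problem-specific placeholders, not the
    patented method's definitions."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:len(ranked) // 2]           # truncation selection
        children = []
        while len(children) < len(population):
            a, b = random.sample(parents, 2)
            if random.random() < cx_rate:             # one-point crossover
                cut = random.randrange(1, len(a))
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            if random.random() < mut_rate:            # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = children
    return max(population, key=fitness)
```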
17

Dow, Malcolm James. "Disabled person's control, communication and entertainment aid: an investigation of the feasibility of using speech control and natural language understanding to control a manipulator and a software application and development environment." Thesis, 1994. https://vuir.vu.edu.au/17907/.

Full text
Abstract:
The work reported in this thesis is a feasibility study of the possibilities and practical problems of applying speech control and natural language understanding techniques to the use of a computer by a physically disabled person. Solutions are proposed for overcoming some of the difficulties and limitations of the available equipment, and guidance is given for the application of such systems to real tasks. The use of voice control with a low-cost industrial robot is described. The limitations introduced by the speech control hardware, such as restricted vocabulary size and an artificial manner of speaking, are partially overcome by software extensions to the operating system and the application of natural language understanding techniques. The application of voice control and audio response to common application packages and a programming environment is explored. Tools are developed to aid the construction of natural language understanding systems. These include an extension to an existing context-free parser generator to enable it to handle context-sensitive grammars, and an efficient parallel parser which is able to find all possible parses of a sentence simultaneously. Machine-readable dictionary construction is investigated, incorporating the analysis of complex words in terms of their root forms using affix transformations, and the incorporation of semantic information using a variety of techniques, such as semantic fields, the previously mentioned affix transforms, and object-oriented semantic trees. The software developed for the system is written in Borland Pascal on an IBM-compatible PC, and is produced in the form of library modules and a toolkit to facilitate its application to any desired task.
18

Darbyshire, Paul. "Modeling Group Communication in a Complex System for Achieving Group Goals." Thesis, 2013. https://vuir.vu.edu.au/25840/.

Full text
Abstract:
This thesis investigates the effect of communication as a function of time on a multi-agent simulation based on a military distillation utilizing reinforcement learning for a group of agents. The original contribution to knowledge is a new model of cooperative learning, developed as an enhanced Q-learning update function which also incorporates learning events communicated by other agents. Further contributions lie in the detailed analysis of simulation results, establishing evidence of a cause-effect relationship between communication and improved performance. The improvement in performance is visualized using surface plots of the agents' state-action matrices. These plots show how group communication reinforces effective actions for the agents at an early stage in the simulation.
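For context, the standard Q-learning update that the enhanced function builds on is shown below; the thesis' extension additionally applies updates for learning events communicated by other agents, with a weighting detailed in the thesis itself:

```latex
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]
```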
19

Browne, Peter. "The Application of Machine Learning to Enhance Performance Analysis in Australian Rules football." Thesis, 2020. https://vuir.vu.edu.au/42283/.

Full text
Abstract:
In this thesis, machine learning techniques are applied to enhance the development and implementation of methodologies in performance analysis, with ecological dynamics as the underpinning theoretical framework. Australian Rules football is used as an exemplar to understand the influence and interaction of constraints on player and team dynamics. There is extensive theoretical research on the interaction of constraints in sport; however, common analysis techniques have typically explored only one or two constraints and therefore do not fully reflect the complexity of the competition environment. To better understand the competition environment, the nexus of constraints must be considered in the analysis of sport, and this thesis aims to address that gap. Firstly, this thesis explores how the use of ecological dynamics may aid the implementation of an interdisciplinary approach to sports performance research. These considerations are applied to Australian Football field and goal kicking, exploring how multiple constraints interact and impact skilled performance, and how these differ between competition tiers. Furthermore, differences between analysis techniques are identified, and aspects such as feasibility and interpretability are highlighted to facilitate an improved translation of research to the applied setting. This analysis is then furthered by exploring event sequences, determining not only the influence of multiple constraints around a disposal but also that of the preceding events. This thesis aims to advance the application of methodologies that explore multiple constraints and sequences of events, in order to enhance knowledge of the competition and training environments.
20

Cai, Jing. "Minimization of number of gait trials for tripping probability tests using artificial neural networks." Thesis, 2001. https://vuir.vu.edu.au/17891/.

Full text
Abstract:
Minimum toe clearance (MTC) data has been used to quantify the probability of tripping (PT) during gait (Best, Begg and James, 1999). MTC data collection is very time consuming, and no research had been conducted to devise a methodology with the potential to predict the long-term histogram characteristics of MTC data (e.g. mean, standard deviation, skewness and kurtosis) based on the characteristics of MTC data collected from fewer gait trials. The aim of this study is to apply a novel technology, artificial neural networks (ANNs), to predict stabilized MTC characteristics (mean, M; standard deviation, SD; skewness, S; kurtosis, K) from relatively few gait trials. MTC data from 24 subjects (age range: 19-79 years) were collected during normal walking on a treadmill for 30 minutes. Thirty-one back-propagation neural networks (BPNs) were developed using various combinations of input variables to predict the 30-minute MTC characteristics. Network performance was evaluated using the percentage of error (POE) of the test results (i.e. the difference between the desired and predicted results divided by the desired result).
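Written out, the POE measure described above is:

```latex
\mathrm{POE} = \frac{x_{\text{desired}} - x_{\text{predicted}}}{x_{\text{desired}}} \times 100\%
```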
21

Margono, Hendro. "Analysis of the Indonesian Cyberbullying through Data Mining: The Effective Identification of Cyberbullying through Characteristics of Messages." Thesis, 2019. https://vuir.vu.edu.au/39499/.

Full text
Abstract:
The use of social network sites such as Facebook, Twitter, YouTube, Instagram, and LinkedIn has increased rapidly in the last decade. International data indicate that more than 83% of people between the ages of 18 and 29 have used social networking sites (Best et al., 2014). Social networks are a powerful medium that can be used for positive purposes, such as communication and information sharing, and can provide easy access to fresh news. On the other hand, social network sites can be used for negative purposes such as harassment and bullying. Bullying on social networks is usually called cyberbullying. Cyberbullying has emerged as a significant issue and become an important topic in social network analysis, as more than 10% of parents globally have stated that their child has been cyberbullied (Gottfried, 2012). Ipsos reported that in Indonesia, 91% of parents stated their children were bullied on social media in 2012 (Gottfried, 2012). Moreover, 58% of Indonesian adolescents aged 12 to 21 reported that they often suffered online harassment and humiliation (Dipa, 2016). Therefore, to understand this phenomenon, machine learning methods and data mining techniques can assist in analysing cyberbullying issues. However, several points must be taken into consideration: the rapidly changing vocabulary of cyberbullying, the patterns of harmful words used in cyberbullying messages, and the scale of the data. The purpose of this research is to identify the indicators of cyberbullying within written content, and to propose and develop effective analysis models for detecting cyberbullying activities on social networks. This research has therefore addressed concerns about the measurement of cyberbullying and aimed to develop a reliable and valid measurement tool. Through systematic measurement and techniques, this research has developed an effective analysis model to discover the patterns of insulting words, which can assist in accurately detecting cyberbullying messages. The research in this thesis develops the analysis model using association rules and classification techniques, used for the effective identification of cyberbullying messages on social networks. Furthermore, this research has discovered interesting patterns of insulting words which can assist in identifying cyberbullying messages. The experimental results also indicate that the proposed method can precisely classify messages as cyberbullying or non-cyberbullying. Moreover, 80.37% of the total data was detected as cyberbullying. Overall, this thesis makes a significant contribution in identifying new characteristics for cyberbullying recognition, in developing analysis methods for social issues, and in advancing the parameters used to determine the strength of relationships within data in data mining techniques. The research presented contributes to our understanding of various cyberbullying patterns, and the results can be developed further in future research.
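The association-rule side of such a model rests on support and confidence over word co-occurrences. A minimal pure-Python sketch; the example messages are invented for illustration and are not data from the study:

```python
def support(itemset, transactions):
    """Fraction of messages (word sets) containing every word in itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(lhs, rhs, transactions):
    """Estimated P(rhs | lhs): how often rhs co-occurs when lhs is present."""
    return support(lhs | rhs, transactions) / support(lhs, transactions)

# Illustrative tokenised messages (hypothetical, not study data):
msgs = [{'you', 'idiot', 'ugly'}, {'nice', 'day'}, {'idiot', 'loser'}]
print(support({'idiot'}, msgs))                 # 2/3
print(confidence({'idiot'}, {'loser'}, msgs))   # 0.5
```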
22

Zhang, Jingyuan. "Web geospatial visualisation for clustering analysis of epidemiological data." Thesis, 2014. https://vuir.vu.edu.au/25917/.

Full text
Abstract:
Public health is a major factor in reducing disease around the world. Today, most governments recognise the importance of public health surveillance in monitoring and clarifying the epidemiology of health problems. As part of public health surveillance, public health professionals utilise the results of epidemiological analysis to reform health care policy and health service plans. There are many health reports on epidemiological analysis within government departments, but the public are not authorised to access these reports because of commercial software restrictions. Although governments publish many reports of epidemiological analysis, the reports are couched in epidemiological terminology and are almost impossible for the public to fully understand. In order to improve public awareness, there is an urgent need for government to produce more easily understandable epidemiological analyses and to provide an open-access reporting system at minimum cost. Inevitably, this poses challenges for IT professionals to develop a simple, easily understandable and freely accessible system for public use. This requires not only identifying a data analysis algorithm that can make epidemiological analysis reports easily understood, but also choosing a platform that can facilitate the visualisation of epidemiological analysis reports at minimum cost. In this thesis, there were two major research objectives: the clustering analysis of epidemiological data, and the geospatial visualisation of the results of that clustering analysis. SOM, FCM and k-means, the three most commonly used clustering algorithms for health data analysis, were investigated. After a number of experiments, k-means was identified, based on Davies-Bouldin index validation, as the best clustering algorithm for epidemiological data. The geospatial visualisation requires a Geo-Mashups engine and geospatial layer customisation. Because of the capacity and many successful applications of free geospatial web services, Google Maps was chosen as the geospatial visualisation platform for epidemiological reporting.
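The k-means validation step can be sketched with scikit-learn, which provides the Davies-Bouldin index directly (lower values indicate better-separated clusters). A minimal sketch, assuming scikit-learn is available; the candidate range of k is an assumption:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def best_kmeans(X, k_range=range(2, 11)):
    """Fit k-means for each candidate k and keep the model with the
    lowest Davies-Bouldin index."""
    best = None
    for k in k_range:
        model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        score = davies_bouldin_score(X, model.labels_)
        if best is None or score < best[0]:
            best = (score, k, model)
    return best  # (db_index, k, fitted model)
```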
APA, Harvard, Vancouver, ISO, and other styles
23

Wang, Ye. "Robust Text Mining in Online Social Network Context." Thesis, 2018. https://vuir.vu.edu.au/38645/.

Full text
Abstract:
Text mining is involved in a broad scope of applications in diverse domains that mainly, but not exclusively, serve political, commercial, medical and academic needs. Along with the rapid development of Internet technology over the past thirty years and the advent of online social media and networks in the past decade, text data has come to exhibit the features of online social data streams: explosive growth, constantly changing content and huge volume. As a result, text mining is no longer oriented merely to textual content itself, but requires consideration of the surrounding context, combining theories and techniques of stream processing and social network analysis. This has given birth to a wide range of applications for understanding thoughts spread across the world, such as sentiment analysis, mass surveillance and market prediction. Topic detection, which automatically discovers sequences of words that represent appropriate themes in a collection of documents, is closely associated with document clustering and classification. These tasks play integral roles in revealing deep insight into text content within the whole text mining framework. However, most existing detection techniques cannot adapt to the dynamic social context, which exposes bottlenecks in detection performance and deficiencies in topic models. In this thesis, we take aim at text data streams, investigating novel techniques and solutions for robust text mining that tackle the challenges arising from the online social context by incorporating methodologies of stream processing, topic detection, and document clustering and classification. In particular, we have advanced the state-of-the-art by making the following contributions: 1. A Multi-Window based Ensemble Learning (MWEL) framework is proposed for imbalanced streaming data that comprehensively improves classification performance. MWEL ensures that the ensemble classifier is kept up to date and adaptive to the evolving data distribution by applying a multi-window monitoring mechanism and an efficient updating strategy. 2. A semi-supervised learning method is proposed to detect latent topics from news streams and the corresponding social context, with a constraint propagation scheme to adequately exploit the hidden geometric structure of the given data space as supervised information. A collective learning algorithm is proposed to integrate the textual content with the social context, and a locally weighted scheme is then proposed to improve the algorithm's stability. 3. A Robust Hierarchical Ensemble (RHE) framework is introduced to enhance the robustness of the topic model. It reduces, on the one hand, the repercussions caused by outliers and noise, and on the other overcomes inherent defects of text data. RHE adapts to the changing distribution of the text stream by constructing a flexible document hierarchy that can be dynamically adjusted. A discussion of how to extract the most valuable social context is conducted with experiments, for the purpose of removing noise from the surroundings and demonstrating the efficiency of the proposed framework.
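A minimal sketch of the multi-window idea follows. This is a simplified stand-in, not the MWEL algorithm itself: it keeps one classifier per recent window and averages their votes, with class weighting as a crude nod to imbalance.

```python
from collections import deque

import numpy as np
from sklearn.tree import DecisionTreeClassifier

class MultiWindowEnsemble:
    """Keep one classifier per recent data window; old windows fall out."""

    def __init__(self, max_windows=5):
        self.members = deque(maxlen=max_windows)

    def fit_window(self, X, y):
        # Each window (assumed to contain both classes) trains one member;
        # class_weight="balanced" partially compensates for imbalance.
        clf = DecisionTreeClassifier(class_weight="balanced").fit(X, y)
        self.members.append(clf)

    def predict(self, X):
        # Average the members' positive-class probabilities (binary labels 0/1).
        probs = np.mean([m.predict_proba(X)[:, 1] for m in self.members], axis=0)
        return (probs >= 0.5).astype(int)

# Usage: call fit_window for each successive window of the stream,
# then predict on newly arriving items.
```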
APA, Harvard, Vancouver, ISO, and other styles
24

Vo, Nguyen. "A New Conceptual Automated Property Valuation Model for Residential Housing Market." Thesis, 2014. https://vuir.vu.edu.au/25793/.

Full text
Abstract:
The property market not only plays a major role in the Australian real estate economy but also accounts for a large portion of the country's overall economic activity. In the state of Victoria alone, residential property values surpassed one trillion dollars in 2012, and a typical weekend of property auctions in Victoria can see tens of millions of dollars change hands. Residential property valuation is important to banks and mortgage lenders, real estate agencies, policy-makers, home buyers and those involved in the housing industry; a tool which can predict prices is essential to the housing market. Residential properties in Victoria are re-valued manually every two years by the Department of Sustainability and Environment, Victoria, Australia (DSE), with up to ±30% uncertainty against market values. Municipal councils use the values established by the DSE to determine property rates and land tax liabilities. According to rpdata.com, five types of Automated Valuation Models (AVMs) are currently used in residential property valuation in Australia: the sales comparison approach, the cost approach, hedonic models, the income capitalisation approach and price indexation. The calculation backbone of these AVMs is still based on traditional statistical approaches; at the time of writing this thesis, only a handful of researchers worldwide had used Artificial Neural Networks (ANNs) in AVMs to estimate residential property prices. In this research, a Conceptual Automated Property Valuation Model (CAPVM) using ANNs was proposed to evaluate residential property prices, with the ultimate goal of producing long-term house price forecasts for urban Victoria. The CAPVM was first optimised and then its forecasting capability was investigated. Optimisation was achieved by determining the best numbers of hidden layers, hidden neurons and input variables, and by finding the best value of the training error threshold. CAPVM predicted 86.39% of residential property prices within ±10% of the actual sale price, a better performance than DSE's manual valuations and the National Australia Bank's published figures. It successfully modelled the annual changes in residential property prices for the hard-to-predict periods of 2007-2008, during the global financial crisis, and 2010-2012, a residential property boom while interest rates were on a downward trend. CAPVM also outperformed the prediction performance of multiple regression analysis.
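The headline metric, the share of predictions falling within ±10% of the actual sale price, can be reproduced in spirit with a generic feed-forward network on synthetic data; CAPVM's actual architecture, inputs, and Victorian sales data are not reproduced here:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic property attributes (e.g. land size, rooms, distance to CBD).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = 300_000 + 50_000 * X[:, 0] - 20_000 * X[:, 1] + rng.normal(0, 15_000, 500)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X[:400], y[:400])

# Proportion of held-out predictions within ±10% of the "actual" price.
pred = model.predict(X[400:])
within_10pct = np.mean(np.abs(pred - y[400:]) / y[400:] <= 0.10)
print(f"{within_10pct:.2%} of test predictions within 10% of actual price")
```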
APA, Harvard, Vancouver, ISO, and other styles
25

Tang, Feiyi. "Link-Prediction and its Application in Online Social Networks." Thesis, 2017. https://vuir.vu.edu.au/35048/.

Full text
Abstract:
Alongside the continuous development of Internet technologies, traditional social networks have moved online to provide more services and unite their communities. In the meantime, conventional web-based information systems are trying hard to utilise social networking elements to develop virtual communities and increase their popularity. The combination of these two domains has become what people know as 'online social networks'. There is much to do to reveal the knowledge behind the screen, as massive amounts of user-generated data are created every second, and people from many disciplines are using their tools and techniques to analyse these networks and understand their evolution. Link Prediction, which in essence calculates the similarity of two nodes, is one of the most common techniques for analysing an online social network. When using Link Prediction to explain an online social network, we consider the network as a graph with nodes and edges, where nodes represent individuals and edges represent the relations between them. Link Prediction can be utilised in many ways in this domain; one of the most common is predicting the links/edges that may appear in the future of an evolving network, where links/edges represent connections. The meaning of these connections varies under different circumstances: in an academic social network, for example, they may represent co-author relationships among researchers. One of the most common applications of Link Prediction in an online social network is therefore the recommendation system. Much work has been done to analyse socially oriented online networks, and much of it has turned into applications with great success, such as Facebook and Twitter. This thesis, however, concentrates on a particular type of online social network where a large gap is still waiting to be filled: the online academic social network. The objective of this thesis is to provide a more sensible way for people to understand the evolution of this network, and to develop models and algorithms that address users' need to find valuable research partners in such a system. A further objective is to build up an environment for future researchers to share knowledge and carry on the work as a community. Specifically, this thesis contains four main chapters, connected so as to develop solutions for the issues that emerged during the research process.
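The core computation behind link prediction, scoring non-adjacent node pairs by a similarity measure and recommending the highest-scoring pairs, is easy to sketch with networkx; the Jaccard coefficient used below is one standard score, while the thesis's own models are more elaborate:

```python
import networkx as nx

# Stand-in graph; in an academic network, nodes would be researchers and
# edges co-authorships.
G = nx.karate_club_graph()

# Score every currently missing edge by the Jaccard coefficient of the two
# endpoints' neighbourhoods, then recommend the top-scoring pairs.
candidates = nx.jaccard_coefficient(G, nx.non_edges(G))
top5 = sorted(candidates, key=lambda t: t[2], reverse=True)[:5]
for u, v, score in top5:
    print(f"suggest link {u}-{v} (score {score:.3f})")
```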
APA, Harvard, Vancouver, ISO, and other styles
26

Teng, Luyao. "Research on Joint Sparse Representation Learning Approaches." Thesis, 2019. https://vuir.vu.edu.au/40024/.

Full text
Abstract:
Dimensionality reduction techniques such as feature extraction and feature selection are critical tools in artificial intelligence, machine learning and pattern recognition tasks. Previous studies of dimensionality reduction share three common problems: 1) Conventional techniques are disturbed by noisy data. In determining useful features, noise may have adverse effects on the result; given that noise is inevitable, it is essential for dimensionality reduction techniques to be robust to it. 2) Conventional techniques separate the graph learning system from informative feature determination. These techniques typically construct a data structure graph first and keep the graph unchanged while processing feature extraction or feature selection, so the result depends strongly on the graph constructed. 3) Conventional techniques determine the intrinsic structure of data with less systematic and only partial analysis, maintaining either the global structure or the local manifold structure of the data. As a result, it is difficult for one technique to achieve strong performance across different datasets. We propose three learning models that overcome the aforementioned problems for various tasks under different learning environments. Specifically, our research outcomes are as follows: 1) We propose a novel learning model that joins Sparse Representation (SR) and Locality Preserving Projection (LPP), named Joint Sparse Representation and Locality Preserving Projection for Feature Extraction (JSRLPP), to extract informative features in an unsupervised learning environment. JSRLPP processes feature extraction and data structure learning simultaneously, and is able to capture both the global and local structure of the data; the sparse matrix in the model operates directly to deal with different types of noise. We conduct comprehensive experiments and confirm that the proposed learning model performs impressively against state-of-the-art approaches. 2) We propose a novel learning model that joins SR and Data Residual Relationships (DRR), named Unsupervised Feature Selection with Adaptive Residual Preserving (UFSARP), to select informative features in an unsupervised learning environment. This model not only reduces the disturbance of different types of noise, but also effectively encourages similar samples to have similar reconstruction residuals; moreover, it carries out graph construction and feature determination simultaneously. Experimental results show that the proposed framework improves the effectiveness of feature selection. 3) We propose a novel learning model that joins SR and Low-rank Representation (LRR), named Sparse Representation based Classifier with Low-rank Constraint (SRCLC), to extract informative features in a supervised learning environment. In this model, the Low-rank Constraint (LRC) regularises both the within-class and between-class structure, while the sparse matrix handles noise and irrelevant features. With extensive experiments, we confirm that SRCLC achieves impressive improvements over other approaches. In summary, with the purpose of obtaining appropriate feature subsets, we propose three novel learning models in supervised and unsupervised learning contexts to complete the tasks of feature extraction and feature selection respectively. Comprehensive experimental results on public databases demonstrate that our models perform better than state-of-the-art approaches.
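The sparse representation ingredient shared by the three models can be illustrated in miniature: express a sample as a sparse linear combination of dictionary columns, with the l1 penalty absorbing noise. This toy sketch omits the locality-preserving, residual, and low-rank terms that distinguish JSRLPP, UFSARP, and SRCLC.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))        # dictionary: columns are training samples
x = D[:, :3] @ np.array([0.5, 0.3, 0.2]) + 0.01 * rng.normal(size=20)  # noisy query

# l1-penalised regression yields a sparse coefficient vector: most entries are
# zero, and the non-zeros pick out the few samples that explain x.
sr = Lasso(alpha=0.01, max_iter=10_000).fit(D, x)
support = np.flatnonzero(sr.coef_)
print("non-zero coefficients at columns:", support)
```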
APA, Harvard, Vancouver, ISO, and other styles
27

Talpur, Anmoila. "Mining Tourist Behavior: A study of Tourist Sequential Activity Pattern through Location Based Social Networks." Thesis, 2019. https://vuir.vu.edu.au/40555/.

Full text
Abstract:
Much current research in tourism has focused on analysing tourist behaviour in order to help management construct effective tourism policies and strategic planning that cater for a diverse range of tourists. Insight into tourist movement and activity patterns is deemed beneficial for the tourism sector in many ways, such as designing better travel packages, maximising tourist activity participation and meeting tourist demands. Existing work in this field has focused only on finding tourists' travel trajectories; it has not been able to provide comprehensive and complete information about the actual activities undertaken at visited locations, probably due to the limitations of traditional data collection and analysis approaches. This research proposes to adopt mobile social media data for effectively capturing tourist activity information, and utilises advanced data mining techniques to extract valuable insights into tourist behaviour. The proposed methods and findings of the study have the potential to support tourism managers and policy makers in making better decisions in tourism destination management.
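At its simplest, the sequential activity-pattern mining described here can start by counting frequent consecutive activity pairs in tourists' check-in sequences (a toy sketch with invented sequences; the thesis's location-based social network data and methods go further):

```python
from collections import Counter

# Hypothetical per-tourist activity sequences derived from geotagged posts.
sequences = [
    ["hotel", "museum", "restaurant", "park"],
    ["hotel", "park", "restaurant"],
    ["museum", "restaurant", "park"],
]

# Count consecutive activity pairs; frequent bigrams hint at common
# sequential behaviour (e.g. museum followed by restaurant).
bigrams = Counter()
for seq in sequences:
    bigrams.update(zip(seq, seq[1:]))

print(bigrams.most_common(3))
```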
APA, Harvard, Vancouver, ISO, and other styles
28

Du, Jiahua. "Advanced Review Helpfulness Modeling." Thesis, 2020. https://vuir.vu.edu.au/41279/.

Full text
Abstract:
In recent years, online shopping has gained immense popularity, due in part to its feedback mechanism. By composing online comments, previous buyers share opinions and experiences regarding the items they have purchased. These user-generated reviews, in turn, provide valuable information to potential customers when deciding which products to purchase; the reviews also help vendors understand customer needs and improve product quality. Yet despite these benefits, the unprecedentedly rapid growth of user-generated content has overwhelmed human ability to scrutinise online reviews, and the varying content of those reviews further impedes the distillation of useful knowledge. The large volume of online reviews of uneven quality puts growing pressure on automatic approaches for effective review utilisation and informative content prioritisation. Review helpfulness prediction leverages machine learning methods to identify and recommend helpful reviews to customers. In particular, review characteristics form the backbone of helpfulness information acquisition, and prior literature has observed and associated a large body of determinants with review helpfulness. However, these determinants rely heavily on the domain knowledge of experts, and the selection of, and interaction between, the determinants remain understudied, leaving ample room for exploration. The general lack of systematic experimental protocols among existing methods further harms the task's reproducibility, comparability and generalisability. This thesis aims to automatically model helpfulness information from online user-generated reviews. It proposes effective modelling techniques and novel solutions to tackle the aforementioned challenges, with particular emphasis on sophisticated feature learning and interaction, and makes the following contributions to standardise the research field and advance the accuracy of helpfulness prediction. 1. A comprehensive survey is conducted to identify frequently used content-based determinants for automatic helpfulness prediction. A computational framework is developed to empirically evaluate the identified features across domains, with three selection scenarios considered for feature behaviour analysis. Domain-specific and domain-independent feature selection guidelines are summarised to facilitate future research prototyping, and the implementation details of the study are discussed to standardise the task of automatic helpfulness prediction. 2. A deep neural framework is designed to enrich the interaction between review texts and star ratings during automatic helpfulness prediction. A gated convolutional component is introduced to learn content representations; a gated embedding method is proposed for encoding sophisticated yet adaptive rating information; and an element alignment mechanism is proposed to explicitly capture the text-rating interaction. Ablation studies and qualitative analysis are conducted to discover insights into the interactive behaviour of star ratings. 3. An end-to-end neural architecture is proposed to contextualise automatic helpfulness prediction using review neighbours. Four weighting schemes are designed to encode a review's surrounding neighbours as context information in content representation learning, and three types of review neighbours of varied length are considered during context construction. Finally, discussions of the experimental results and the trade-off between model complexity and performance are given, along with case studies, to understand the proposed architecture.
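A drastically simplified view of the text-rating interaction studied in the second contribution: concatenate text features, the star rating, and an explicit text-by-rating interaction term, then regress helpfulness. The thesis uses gated convolutions and element alignment instead; the reviews and helpfulness ratios below are invented.

```python
import numpy as np
import scipy.sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

reviews = ["works great, battery lasts days", "broke after a week",
           "decent value for the price"]
stars = np.array([[5.0], [1.0], [3.0]])
helpfulness = np.array([0.9, 0.7, 0.4])    # hypothetical helpful-vote ratios

Xt = TfidfVectorizer().fit_transform(reviews)
# Explicit interaction: every text feature scaled by the review's star rating.
X = sp.hstack([Xt, sp.csr_matrix(stars), Xt.multiply(stars)])

model = Ridge().fit(X, helpfulness)
```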
APA, Harvard, Vancouver, ISO, and other styles
29

Santhiranayagam, Braveena K. "Machine-Learning Applications to Gait Biomechanics using Inertial Sensor Signals." Thesis, 2016. https://vuir.vu.edu.au/34110/.

Full text
Abstract:
Minimum toe clearance (MTC) above the walking surface is a critical representation of toe-trajectory control related to tripping risk. Reliable and precise MTC measurements are obtained in the laboratory using 3D motion capture technology, but real-world gait monitoring using body-mounted sensors presents considerable data processing challenges when estimating kinematic parameters, including MTC. This thesis represents the first study to employ machine learning to estimate young and older adults' toe height at MTC using inertial data captured from a foot-mounted sensor. Age-group-specific Generalized Regression Neural Network (GRNN) models estimated MTC during treadmill walking with a root-mean-square error (RMSE) of 6.6 mm using the 9 optimal inertial-signal features for the young group, and 7.1 mm using 5 features for the older group. These RMSE values are approximately one third of those previously reported (Mariani et al., 2012; McGrath et al., 2011), and the GRNN modelling also performed well in that no significant difference was found between the 3D-measured reference and the model-estimated MTC height. The GRNN model specific to older adults showed good generalisability when applied to data from slower and dual-task walking.
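A GRNN is essentially Nadaraya-Watson kernel regression: each prediction is a distance-weighted average of the training targets, controlled by a single smoothing parameter. A minimal sketch of estimation and RMSE scoring on synthetic data (not the thesis's foot-mounted-sensor features):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """GRNN = kernel-weighted average of training targets (Nadaraya-Watson)."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # stand-in inertial features
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)  # stand-in MTC height

pred = grnn_predict(X[:150], y[:150], X[150:])
rmse = np.sqrt(np.mean((pred - y[150:]) ** 2))
print(f"RMSE: {rmse:.3f}")
```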
APA, Harvard, Vancouver, ISO, and other styles
30

Sweeting, Alice. "Discovering the Movement Sequences of Elite and Junior Elite Netball Athletes." Thesis, 2017. https://vuir.vu.edu.au/34111/.

Full text
Abstract:
This thesis investigated the movement sequences of elite and junior-elite female netball athletes using a local positioning system (LPS). Study one determined the indoor validity of an LPS, specifically the Wireless ad hoc System for Positioning (WASP), for measuring distance, velocity and angular velocity whilst sprinting and walking five non-linear courses. The criterion measure used to assess WASP validity was Vicon, a motion analysis system. During all sprinting and walking drills, WASP had acceptable accuracy for measuring total distance covered (coefficient of variation, CV < 5.2%). Similarly, WASP had acceptable accuracy across all sprinting and walking drills for measuring mean velocity (CV < 6.5%), and during all sprinting drills it had acceptable accuracy for measuring mean and peak angular velocity (CV < 3%). An increased bias was observed during walking drills compared with sprinting, likely due to radio-frequency (RF) interference from the metal-clad indoor stadium where the validation trials were conducted. Researchers and practitioners may use WASP to accurately quantify the non-linear movement of athletes during indoor court-based sports, although they should be aware of the increased bias during walking movement.
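One common way to express validity against a criterion system as a coefficient of variation is the typical error of the practical measure relative to the criterion mean, as a percentage. The sketch below uses invented numbers, and the thesis's exact CV definition may differ:

```python
import numpy as np

# Paired distance measurements (metres) from the two systems, invented here.
wasp  = np.array([20.4, 19.8, 20.9, 20.1, 20.6])
vicon = np.array([20.0, 20.0, 20.5, 20.0, 20.3])

typical_error = np.std(wasp - vicon, ddof=1)
cv_percent = 100 * typical_error / vicon.mean()
print(f"CV = {cv_percent:.1f}%")   # acceptable here if below roughly 5%
```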
APA, Harvard, Vancouver, ISO, and other styles
31

Lee, Daniel. "Analysis and Assessment on Effects of Different Therapies in Cancer Treatment Based on Fuzzy Cognitive Maps." Thesis, 2019. https://vuir.vu.edu.au/41295/.

Full text
Abstract:
Cancer is the second leading cause of death worldwide. Even though cancer death rates have decreased slightly in recent decades, the painful experience of cancer diagnosis and treatment still occurs every day around the world, and it is critically important to develop advanced computing technologies to better understand the effectiveness and management of cancer treatment. Nevertheless, most present tools for analysing and assessing therapy effects in cancer treatment are based on immediate relative factors and laboratory reports; the causal relationships between key factors are not recorded or modelled, and thus not analysed and communicated effectively. The fuzzy cognitive map (FCM), as a medical decision support tool, has been applied in medical practice and widely appreciated in recent decades. In this thesis, clinical cancer cases were analysed and assessed with the help of FCMs, applied in particular to visualise knowledge and experience about the effects of different types of therapy, including the alternative therapies of sonodynamic and photodynamic therapy and traditional Chinese medicine modalities. Through the case studies, the model clearly shows that the effects and outcomes of cancer treatments are critically influenced by causally related factors. The analysis and assessment results demonstrate that FCMs can visually represent cognitive knowledge, particularly the causal relationships among key factors in combinations of different cancer therapies, with individual causal influence factors showing some capability to drive different treatment outcomes. This modelling will enable further analysis and communication of the rationales behind different interventions and decisions made by different practitioners and specialists.
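The computational core of an FCM is a small iterated map: concept activations are repeatedly pushed through a causal weight matrix and a squashing function until they settle. A generic sketch follows; the concepts and weights are invented, not clinical values from the thesis:

```python
import numpy as np

def fcm_step(a, W, lam=1.0):
    """One FCM update: a_i <- sigmoid(a_i + sum_j w_ji * a_j)."""
    z = a + W.T @ a                    # W rows: source concept, cols: target
    return 1.0 / (1.0 + np.exp(-lam * z))

# Hypothetical concepts: [tumour burden, therapy intensity, side effects]
W = np.array([[ 0.0,  0.6, 0.0],
              [-0.5,  0.0, 0.4],
              [ 0.0, -0.2, 0.0]])
a = np.array([0.8, 0.5, 0.1])          # initial activations

for _ in range(30):                    # iterate towards a fixed point
    a = fcm_step(a, W)
print(a)
```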
APA, Harvard, Vancouver, ISO, and other styles
32

Rangarajan, Sarathkumar. "QOS-aware Web service discovery, selection, composition and application." Thesis, 2020. https://vuir.vu.edu.au/42153/.

Full text
Abstract:
Since the beginning of the 21st century, service-oriented architecture (SOA) has emerged as an advancement of distributed computing. SOA is a framework in which software modules are developed with straightforward interfaces, each module serving a specific array of functions; it delivers enterprise applications individually or integrated into larger composite Web services. However, SOA implementation faces several challenges that hinder its broader adoption, and this thesis addresses three significant ones. The abundance of functionally similar Web services, and the lack of integration with non-functional features such as Quality of Service (QoS), make QoS prediction difficult; thus, the first challenge is to find an efficient scheme for QoS prediction. The use of software source code metrics is a widely accepted alternative: source code metrics are measured at the micro level and aggregated at the macro level to represent the software adequately. However, the effect of aggregation schemes on QoS prediction using source code metrics has remained unexplored. The Theil index, an inequality distribution model, is proposed in this research to aggregate micro-level source code metrics for three different datasets and to compare the quality of the resulting QoS prediction. The experimental results show that the Theil index is a practical solution for effective QoS prediction. The second challenge is to search for and compose suitable Web services without requiring expertise in composition tools; existing approaches need system engineers with extensive knowledge of SOA techniques. A keyword-based search is a common approach to information retrieval that does not require understanding of a query language or the underlying data structure. The proposed framework uses a schema-based keyword search over a relational database for efficient Web service search and composition. Experiments are conducted with the WS-Dream dataset to evaluate the Web service search and composition framework using adequate performance parameters, and the results of quality-constraint experiments show that the schema-based keyword search can achieve a better success rate than existing approaches. Building an efficient data architecture for SOA applications is the third challenge, as real-world SOA applications must process vast quantities of data to produce a valuable service on demand, and contemporary SOA data processing systems such as the Enterprise Data Warehouse (EDW) lack scalability. A data lake, a productive data environment, is proposed to improve data ingestion for SOA systems; the data lake architecture stores both structured and unstructured data using the Hadoop Distributed File System (HDFS). Experimental results compare the data ingestion times of the data lake and the EDW. In the evaluation, the data lake-based architecture is implemented for a personalised medication suggestion system and shows that it can generate patient clusters more concisely than current EDW-based approaches. In summary, this research effectively addresses three significant challenges to the broader adoption of SOA. The Theil index-based data aggregation model supports QoS prediction without depending on the Web service registry; service engineers with limited knowledge of SOA techniques can exploit a schema-based keyword search for Web service search and composition; and the data lake shows its potential as a data architecture for SOA applications.
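The Theil index itself is compact: for positive values x1..xn with mean mu, T = (1/n) * sum((xi/mu) * ln(xi/mu)), which is zero when all values are equal and grows with inequality. A sketch of using it to aggregate method-level code metrics into a class-level feature (metric values invented):

```python
import numpy as np

def theil_index(x):
    """Theil's T for positive values: (1/n) * sum((x/mu) * ln(x/mu))."""
    x = np.asarray(x, dtype=float)
    r = x / x.mean()
    return float(np.mean(r * np.log(r)))

# Hypothetical cyclomatic complexities of the methods in one class; the
# aggregated value becomes a class-level feature for QoS prediction.
method_complexity = [1, 2, 2, 3, 15]
print(theil_index(method_complexity))   # skewed values give a larger index
```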
APA, Harvard, Vancouver, ISO, and other styles
33

Kittel, Aden. "Decision-making Assessment and Development in Australian Football Umpires: Evaluation of 360° VR." Thesis, 2020. https://vuir.vu.edu.au/42042/.

Full text
Abstract:
Many skills underpin the performance of sporting officials; however, decision-making is regarded as the most critical. Because there are finite on-field opportunities to develop officials' decision-making in training and competition, video-based approaches are typically used to assess and develop decision-making skill. Existing methods, such as the use of match broadcast video, may not be an ecologically valid way to present decision-making scenarios; with technological advancements, virtual reality may improve the ecological validity of video-based approaches. Study 1 systematically reviewed existing research utilising video-based testing to assess decision-making in officials, which often differentiates between skill levels to demonstrate construct validity. It identified several limitations, including the common use of match broadcast video, limited reporting of reliability, and studies often reporting only the number of decisions rather than decision accuracy; comparison between video-based and in-game decision-making performance was rarely conducted. This study provided the foundation for further examining the efficacy of video-based tests for sporting officials. Study 2 developed two valid and reliable video-based tests, based on the recommendations of Study 1. As match broadcast video is the most common video-based testing method for officials, it was compared with 360° VR for assessing decision-making accuracy. Both the 360° VR and match broadcast video-based tests demonstrated construct validity and high reliability (r = 0.89). Stronger ecological validity was evident in 360° VR than in match broadcast, as participants rated 360° VR as more representative of in-game decision-making processes. Study 3 aimed to determine the relationship between decision-making accuracy in both video-based tests (360° VR and match broadcast) and in-game performance of elite Australian football umpires, a gap identified in Study 1, using the validated video-based tests from Study 2. No significant relationships were observed between in-game and video-based decision-making accuracy. Studies 2 and 3 addressed testing, but it remained unclear whether 360° VR or match broadcast is more effective for developing decision-making. Study 4 therefore assessed the effectiveness of a video-based training program using 360° VR or match broadcast to develop decision-making in amateur Australian football umpires, using a randomised control design. Decision-making was assessed with the valid and reliable tests of Study 2 before training, immediately after it, and one month later (retention test). The 360° VR group exhibited significantly higher decision-making accuracy (p < 0.05) than the control group at retention testing, with no between-group differences observed for the match broadcast group, and participants rated 360° VR as more relevant and enjoyable than match broadcast. In summary, this thesis developed and evaluated 360° VR as a video-based testing and training tool for Australian football umpires. Although 360° VR and match broadcast appear to have strong construct validity and reliability, there is currently limited transfer to in-game performance, and based on these results it is not definitive whether 360° VR is a more effective training tool than match broadcast. The findings indicate that 360° VR may be more ecologically valid than match broadcast and warrants further investigation.
APA, Harvard, Vancouver, ISO, and other styles
34

Rasool, Raihan Ur. "CyberPulse: A Security Framework for Software-Defined Networks." Thesis, 2020. https://vuir.vu.edu.au/42172/.

Full text
Abstract:
Software-Defined Networking (SDN) technology provides a new perspective on traditional network management by separating the infrastructure plane from the control plane, which facilitates a higher level of programmability and management. While centralised control provides lucrative benefits, the control channel becomes a bottleneck and home to numerous attacks. We conduct a detailed study and find that Crossfire Link Flooding Attacks (LFAs) are among the most lethal attacks on SDN, owing to their use of low-rate traffic and their persistent attacking nature. LFAs can be launched by malicious adversaries to congest the control plane with low-rate traffic, obstructing flow rule installation and ultimately bringing down the whole network. Similarly, an adversary can employ bots to generate low-rate traffic to congest the control channel and sever the connection between the control and data planes, causing service disruption. We present a systematic and comparative study of the vulnerability of all SDN planes to LFAs, elaborating in detail the LFA types, techniques, and their behaviour in all variants of SDN. We then illustrate the importance of a defence mechanism employing a distributed strategy against LFAs and propose a Machine Learning (ML) based framework, CyberPulse. Its detailed design, components and their interaction, working principles, implementation, and in-depth evaluation are presented subsequently. This research presents a novel approach to writing anomaly patterns and makes a significant contribution by developing a pattern-matching engine as a first line of defence against known attacks at line speed. The second important contribution is the effective detection and mitigation of LFAs in SDN through deep learning techniques. We perform twofold experiments to classify and mitigate LFAs: in the initial experimental setup, we utilise the backward propagation technique for Artificial Neural Networks to effectively classify incoming traffic; in the second set of experiments, we employ a holistic approach in which CyberPulse demonstrates algorithm-agnostic behaviour and employs a pre-trained ML repository for precise classification. As an important scientific contribution, the CyberPulse framework has been developed from the ground up using modern software engineering principles and hence imposes very limited bandwidth and computational overhead. It has several useful features, such as large-scale network-level monitoring, real-time network status information, and support for a wide variety of ML algorithms. An extensive evaluation performed using the Floodlight open-source controller shows that CyberPulse imposes limited bandwidth and computational overhead and proactively detects and defends against LFAs in real time. This thesis contributes to the state of the art by presenting a novel framework for the defence, detection, and mitigation of LFAs in SDN utilising ML-based classification techniques. Existing solutions in the area mandate complex hardware for detection and defence, but the presented solution offers a unique advantage in that it operates on real-time traffic scenarios and utilises multiple ML classification algorithms for LFA traffic classification without necessitating complex and expensive hardware. In the future, we plan to implement it on a large testbed and extend it by training on multiple datasets for multiple types of attacks.
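The traffic-classification step can be caricatured with a generic neural classifier over per-flow features. The feature set and labels below are hypothetical; CyberPulse's real pipeline, pattern engine, and datasets are far richer:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical per-flow features: [packet rate, mean packet size,
# flow duration, distinct destinations]; label 1 marks LFA-like traffic.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = (0.8 * X[:, 0] + 0.5 * X[:, 2] > 1.0).astype(int)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
clf.fit(X[:800], y[:800])
print("held-out accuracy:", clf.score(X[800:], y[800:]))
```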
APA, Harvard, Vancouver, ISO, and other styles
35

Tao, Xuehong. "Argumentative Learning with Intelligent Agents." Thesis, 2014. https://vuir.vu.edu.au/25846/.

Full text
Abstract:
Argumentation plays an important role in information sharing, deep learning and knowledge construction. However, because of its high dependency on qualified arguing peers, argumentative learning has had only limited application in school contexts to date. Intelligent agents have been proposed as virtual peers in recent research and exhibit many benefits for learning. Argumentation support systems have also been developed to support learning through human-human argumentation; unfortunately, these systems cannot conduct automated argumentation with human learners, owing to the difficulties of modelling human cognition. A gap exists between the need for virtual arguing peers and the lack of computing systems able to conduct human-computer argumentation. This research aimed to fill that gap by designing computing models for automated argumentation, developing a learning system with virtual peers that can argue automatically, and studying argumentative learning with virtual peers.
APA, Harvard, Vancouver, ISO, and other styles
36

Gamage, Nilantha. "Daily streamflow estimation using remote sensing data." Thesis, 2015. https://vuir.vu.edu.au/34843/.

Full text
Abstract:
Streamflow data are critical for water resource investigations and development projects. However, the scarcity of such data, particularly streamflow measured through gauges, constitutes a serious impediment to the successful implementation of development projects. In the absence of measured streamflow data, streamflow estimation using measured meteorological data represents a viable alternative; nevertheless, this alternative is not always possible, owing to the unavailability of the required meteorological data. In the face of such data limitations, many have advocated the use of remote sensing (RS) data to estimate streamflow. The aim of this study was to generate daily streamflow time series data from remote sensing data through catchment process modelling and statistical modelling.
APA, Harvard, Vancouver, ISO, and other styles
37

Meany, Michael M. "The performance of comedy by artificial intelligence agents." Thesis, 2014. https://vuir.vu.edu.au/25825/.

Full text
Abstract:
The PhD project is composed of two parts: a creative project (thirty-five per cent of the total research project) and an exegesis (sixty-five per cent). The creative project employs a pair of chat-bots (natural language processing artificial intelligence agents) to act as comedian and straight-man in a comedy performance based on a topic supplied by the user through a web-based interface. This is an interdisciplinary project that draws on the domains of humour theory, creativity theory, creative writing, and human-computer interaction theory to illuminate the comedy scriptwriting process in a new-media environment.
APA, Harvard, Vancouver, ISO, and other styles
38

Supriya, Supriya. "Brain Signal Analysis and Classification by Developing New Complex Network Techniques." Thesis, 2020. https://vuir.vu.edu.au/40551/.

Full text
Abstract:
Brain signal analysis has a crucial role in the investigation of neuronal activity for the diagnosis of brain diseases and disorders. The electroencephalogram (EEG) is the most efficient biomarker for brain signal analysis: it assists in the diagnosis and medication of brain disorders and plays an essential role in all neurosurgery related to the brain. EEG findings illustrate the precise condition and clinical context of brain dysfunctions, and have an undisputed role in the detection of epilepsy, sleep disorders, and dysfunctions related to alcohol. Clinicians visually study EEG recordings to determine the manifestation of abnormalities in the brain, but visual EEG assessment is tiresome, fallible, and expensive. In this dissertation, a number of frameworks have been developed for the analysis and classification of EEG signals across three domains: epilepsy, sleep staging, and alcohol use disorder. Epilepsy is a non-contagious chronic disease of the brain that affects around 65 million people worldwide, and the sudden onset of epileptic attacks leaves sufferers vulnerable to injury. It is also challenging for clinical staff to detect epileptic seizure activity early enough to determine the semiology associated with seizure onset. For these reasons, automated techniques that can accurately detect epilepsy from EEG are of great importance to epileptic patients, especially those resistant to therapies and medications. In this dissertation, four different techniques (Weighted Visibility Network, Weighted Horizontal Visibility Network, Weighted Complex Network, and New Weighted Complex Network) have been developed for the automated identification of epileptic activity from EEG signals; most of the developed schemes attained 100% classification outcomes in experimental evaluations distinguishing seizure from non-seizure activity. A sleep disorder can increase the risk of seizure incidence or severity, impair cognitive tasks, alter mood, and diminish the functionality of the immune system, alongside other anomalies such as insomnia and sleep apnoea. Hence, sleep staging, which discriminates among distinct sleep stages, is essential for the diagnosis of sleep and its disorders. EEG provides vital and inimitable information regarding the sleeping brain, and EEG studies have documented abnormalities in sleep patterns. This research developed an innovative graph-theory based framework, a weighted visibility network, for sleep staging from EEG signals; the developed framework achieves 97.93% overall classification accuracy in categorising distinct sleep states. Alcoholism causes memory issues as well as motor skill defects by affecting different portions of the brain; excessive use of alcohol can cause sudden cardiac death and cardiomyopathy, and alcohol use disorder also leads to respiratory infections, vision impairment, liver damage, and cancer. Research demonstrates the use of EEG to diagnose patients at high risk of alcohol-related developmental impediments. In this PhD project, I developed a weighted graph-based technique that analyses EEG to distinguish between alcoholic and non-alcoholic subjects; the promising classification outcomes demonstrate the effectiveness of the proposed technique.
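The natural visibility graph that underlies weighted visibility network methods maps a time series to a graph: samples become nodes, and two samples are linked if every sample between them lies below the straight line joining them; weighted variants then attach edge weights, for example based on slope. A minimal brute-force sketch (not the thesis's optimised pipeline):

```python
from itertools import combinations

def natural_visibility_edges(x):
    """Edges (a, b) such that all intermediate samples lie strictly below
    the line joining (a, x[a]) and (b, x[b]) (Lacasa-style visibility)."""
    edges = []
    for a, b in combinations(range(len(x)), 2):
        if all(x[c] < x[b] + (x[a] - x[b]) * (b - c) / (b - a)
               for c in range(a + 1, b)):
            # A weighted variant might store abs((x[b]-x[a]) / (b-a)) here.
            edges.append((a, b))
    return edges

eeg_window = [0.2, 1.5, 0.9, 2.1, 0.4, 1.0]   # toy EEG samples
print(natural_visibility_edges(eeg_window))
```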
APA, Harvard, Vancouver, ISO, and other styles
39

Shields, Philip John. "Nurse-led ontology construction: A design science approach." Thesis, 2016. https://vuir.vu.edu.au/32620/.

Full text
Abstract:
Most nursing quality studies based on the structure-process-outcome paradigm have concentrated on structure-outcome associations and have not explained the nursing process domain. This thesis turns the spotlight on the process domain and visualises nursing processes or ‘what nurses do’ by using ‘semantics’ which underpin Linking Of Data (LOD) technologies such as ontologies. Ontology construction has considerable limitations that make direct input of nursing process semantics difficult. Consequently, nursing ontologies being constructed to date use nursing process semantics collected by non-clinicians. These ontologies may have undesirable clinical implications when they are used to map nurse processes to patient outcomes. To address this issue, this thesis places nurses at the centre of semantic collection and ontology construction.
APA, Harvard, Vancouver, ISO, and other styles
40

Sun, Le. "Data stream mining in medical sensor-cloud." Thesis, 2016. https://vuir.vu.edu.au/31032/.

Full text
Abstract:
Data stream mining has been studied in diverse application domains. In recent years, population ageing has been stressing national and international health care systems. With the advent of hundreds of thousands of health monitoring sensors, traditional wireless sensor networks and anomaly detection techniques cannot handle such huge amounts of information. Sensor-cloud makes the processing and storage of big sensor data much easier. Sensor-cloud is an extension of the Cloud that connects Wireless Sensor Networks (WSNs) and the cloud through sensor and cloud gateways, which consistently collect and process large amounts of data from sensors located in different areas. In this thesis, I focus on analysing the large volume of medical sensor data streams collected from Sensor-cloud. To analyse these medical data streams, I propose a medical data stream mining framework, which is targeted at tackling four main challenges ...
APA, Harvard, Vancouver, ISO, and other styles