Theses on the topic "Fields of Research – 280000 Information, Computing and Communication Sciences – 280100 Information Systems"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Consult the top 45 theses for your research on the topic "Fields of Research – 280000 Information, Computing and Communication Sciences – 280100 Information Systems".

You can also download the full text of each academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organise your bibliography correctly.

1

Yang, Chun Chieh. "Evaluating online support for mobile phone selection : using properties and performance criteria to reduce information overload : a thesis presented in partial fulfilment of the requirements for the degree of Master of Information Science in Information Systems at Massey University, Auckland, New Zealand". Massey University, 2008. http://hdl.handle.net/10179/844.

Full text
Abstract
The mobile phone has been regarded as one of the most significant inventions in the field of communications and information technology over the past decade. Due to the rapid growth of mobile phone subscribers, hundreds of phone models have been introduced. Therefore, customers may find it difficult to select the most appropriate mobile phone because of information overload. The aim of this study is to investigate web support for customers who are selecting a mobile phone. Firstly, all the models of mobile phones in the New Zealand market were identified by visiting shops and local websites. Secondly, a list of all the features of these mobile phones was collated from local shops, websites and magazines. This list was categorised into mobile phone properties and performance criteria. An experiment then compared three different selection support methods: A (mobile phone catalogue), B (mobile phone property selection) and C (mobile phone property and performance criteria selection). The results of the experiment revealed that selection support methods B and C had higher overall satisfaction ratings than selection support method A; both methods B and C had similar satisfaction ratings. The results also suggested that males and females select their mobile phones differently, though there was no gender preference in selection support methods.
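As a rough illustration of the difference between selection support methods B and C described in this abstract, the sketch below first filters handsets by required properties (method B) and then ranks the survivors by weighted performance criteria (method C). It is a hypothetical sketch only: the phone names, properties, criteria and weights are invented and are not taken from the thesis.

    # Hypothetical two-stage handset selection: property filter, then
    # weighted ranking on performance criteria. All data is illustrative.
    phones = [
        {"name": "A100", "camera": True,  "bluetooth": True,  "battery_h": 8,  "screen_in": 2.2},
        {"name": "B200", "camera": True,  "bluetooth": False, "battery_h": 12, "screen_in": 1.8},
        {"name": "C300", "camera": False, "bluetooth": True,  "battery_h": 10, "screen_in": 2.0},
    ]

    required = {"camera": True}                     # method B: property selection
    weights = {"battery_h": 0.7, "screen_in": 0.3}  # method C adds performance criteria

    candidates = [p for p in phones if all(p[k] == v for k, v in required.items())]

    def score(phone):
        # weighted sum of criteria, each normalised against the best on offer
        return sum(w * phone[k] / max(p[k] for p in phones) for k, w in weights.items())

    for p in sorted(candidates, key=score, reverse=True):
        print(p["name"], round(score(p), 3))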
2

Wang, Lei. "Effectiveness of text-based mobile learning applications: case studies in tertiary education : a thesis presented to the academic faculty, submitted in partial fulfilment of the requirements for the degree of Master of Information Sciences in Information Technology, Massey University". Massey University, 2009. http://hdl.handle.net/10179/1092.

Full text
Abstract
This research focuses on developing a series of mobile learning applications for future 'beyond' classroom learning environments. The thesis describes the general use pattern of the prototype and explores the key factors that could affect users' attitudes towards potential acceptance of the mobile learning applications. Finally, this thesis explores user acceptance of the mobile learning applications, and investigates the mobility issue and the comparison of learning activities delivered through mobile learning and e-learning.
3

Zhang, Yang. "An empirical study on the relationship between identity-checking steps and perceived trustworthiness in online banking system use : submitted in partial fulfilment of the requirements for the Degree of Master of Information Sciences in Information Technology". Massey University, 2009. http://hdl.handle.net/10179/982.

Full text
Abstract
Online banking systems have become more common and widely used in daily life, bringing huge changes in modern banking transaction activities and giving us a greater opportunity to access the banking system anytime and anywhere. At the same time, however, one of the key challenges that still remain is to fully resolve the security concerns associated with the online banking system. Many clients feel that online banking is not secure enough, and to increase its security levels, many banks simply add more identity-checking steps or put on more security measures to give users the impression of a secure online banking system. However, this is easier said than done, because we believe that more identity-checking steps could compromise the usability of the online banking system, which is an indispensable consideration in the design of usable and useful online banking systems. Banks can simply enhance their security level with more sophisticated technologies, but this does not seem to guarantee that the online banking system is in line with its key usability concern. Therefore, the research question raised in this thesis is to establish the relationships between usability, security and trustworthiness in the online banking system. To demonstrate these relationships, three experiments were carried out using a simulation of an online banking logon procedure to provide a similar online banking experience. Post-experiment questionnaires were used to measure the three concepts, i.e. usability, security and trustworthiness. The resulting analyses revealed that simply adding more identity-checking steps in the online banking system did not improve the customers' perceived security and trustworthiness, nor did the biometric security technique (i.e., fingerprints) enhance the subjective ratings of perceived security and trustworthiness. This showed that the systems designer needs to be aware that the customer's perception of the online banking system is not the same as that conceived from a technical standpoint.
4

Jonnavithula, Lalitha. "Improving the interfaces of online discussion forums to enhance learning support : a thesis presented in partial fulfilment of the requirements for the degree of Master of Information Science in Information Systems at Massey University, Palmerston North, New Zealand". Massey University, 2008. http://hdl.handle.net/10179/968.

Full text
Abstract
This thesis describes research aimed at improving the interfaces of online discussion forums (ODFs) in relation to their functional support for learning. These ODFs form part of almost all Learning Management Systems (LMSs), such as WebCT, Moodle and Blackboard, which are widely used in education nowadays. Although ODFs are identified as valuable sources for learning, their interfaces offer students limited support in areas such as managing their postings and quickly locating and obtaining specified information. In addition, these systems lack features to support inter-institutional cooperation that could potentially increase knowledge sharing between students and educators of different institutions. The interface design objective of this study therefore was to explore and overcome the limitations identified above, and to enhance the effectiveness and efficiency of ODFs' support for learning. Using a task-centred design approach, the required features were developed and implemented in a working prototype called eQuake (electronic Question answer knowledge environment). eQuake is a shared online discussion forum system developed as an add-on to a well-known open-source e-learning platform (Moodle). This system was intended for use by students across New Zealand tertiary institutions that teach similar courses. The improved interface functionalities of eQuake are expected to enhance learning support by widening communication among users, increasing the knowledge base, quickly providing existing matching answers to students, and exposing students to multiple perspectives. This study considers such improvements to ODF interfaces vital if users are to enjoy the benefits of a technology-mediated environment. The perceived usefulness and ease of use of the improved features in eQuake were evaluated using a quantitative experimental research method. The evaluation was conducted at three tertiary institutions in New Zealand, and the overall results indicated a positive response, although some suggestions for improvement were made in the evaluation. This thesis presents a review of the related literature, describes the design and development of a user interface followed by its implementation in eQuake, and describes the evaluation. The thesis concludes with recommendations for better interface design of ODFs and provides suggestions for future research in this area.
5

Engelbrecht, Judith Merrylyn. "Electronic clinical decision support (eCDS) in primary health care: a multiple case study of three New Zealand PHOs : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Palmerston North, New Zealand". Massey University, 2009. http://hdl.handle.net/10179/1107.

Full text
Abstract
Health care providers internationally are facing challenges surrounding the delivery of high-quality, cost-effective services. The use of integrated electronic information systems is seen by many people working in the health sector as a way to address some of the associated issues. In New Zealand the primary health care sector has been restructured to follow a population-based care model and provides services through not-for-profit Primary Health Organisations (PHOs). PHOs, together with their District Health Boards (DHBs), contributing service providers, and local communities, are responsible for the care of their enrolled populations. The Ministry of Health (MoH) is streamlining information sharing in this environment through improvements to computer-based information systems (IS). By providing health professionals with improved access to required information within an appropriate time frame, services can be targeted efficiently and effectively and patient health outcomes potentially improved. However, the adoption of IS in health care has been slower than in other industries. Therefore, a thorough knowledge of health care professionals' attitudes to, and use of, available IS is currently needed to contribute to the development of appropriate systems. This research employs a multiple case study strategy to establish the usage of IS by three New Zealand PHOs and their member primary health care providers (PHPs), with a focus on the role of IS in clinical decision support (CDS). A mixed-method approach including semi-structured interviews and postal surveys was used in the study. Firstly, the research develops and applies a survey tool, based on an adaptation of an existing framework, for the study of IT sophistication in the organisations. This provides the foundation for an in-depth study of the use of computerised CDS (eCDS) in the PHO environment. Secondly, a conceptual model of eCDS utilisation is presented, illustrating the variation of eCDS use by member general practitioner (GP) practices within individual organisations. Thirdly, five areas of importance for improving eCDS utilisation within PHOs are identified, contributing information of use to organisations, practitioners, planners, and systems developers. Lastly, the research provides a structure for the study of the domain of eCDS in PHOs by presenting a research approach and information specific to the area.
6

Blakey, Jeremy Peter. "Database training for novice end users : a design research approach : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Albany, New Zealand". Massey University, 2008. http://hdl.handle.net/10179/880.

Full text
Abstract
Of all of the desktop software available, that for the implementation of a database is some of the most complex. With the increasing number of computer users having access to this sophisticated software, but with no obvious way to learn the rudiments of data modelling for the implementation of a database, there is a need for a simple, convenient method to improve their understanding. The research described in this thesis represents the first steps in the development of a tool to accomplish this improvement. In a preliminary study using empirical research, a conceptual model was used to improve novice end users' understanding of the relational concepts of data organisation and the use of a database software package. The results showed that no conclusions could be drawn about either the artefact used or the method of evaluation. Following the lead of researchers in the fields of both education and information systems, a design research process was developed, consisting of the construction and evaluation of a training artefact. A combination of design research and a design experiment was used in the main study described in this thesis. New to research in information systems, design research is a methodology or set of analytical techniques and perspectives, and this was used to develop a process (development of an artefact) and a product (the artefact itself). The artefact, once developed, needed to be evaluated for its effectiveness, and this was done using a design experiment. The experiment involved exposing the artefact to a small group of end users in a realistic setting and defining a process for the evaluation of the artefact. The artefact was the tool that would facilitate the improvement of the understanding of data modelling, the vital precursor to the development of a database. The research was conducted among a group of novice end users who were exposed to the artefact, facilitated by an independent person. In order to assess whether there was any improvement in the novices' understanding of relational data modelling and database concepts, they then completed a post-test. Results confirmed that the artefact, trialled through one iteration, was successful in improving the understanding of these novice end users in the area of data modelling. The combination of design research and design experiment as described above gave rise to a new methodology, provisionally called experimental design research. The successful outcome of this research will lead to further iterations of the design research methodology, leading in turn to the further development of the artefact, which will be both useful and accessible to novice users of personal computers and database software. This research has made the following original contributions. Firstly, the use of the design research methodology for the development of the artefact, which proved successful in improving novice users' understanding of relational data structures. Secondly, the novel use of a design experiment in an information systems project, which was used to evaluate the success of the artefact. And finally, the combination of the developed artefact followed by its successful evaluation using a design experiment resulted in the hybrid experimental design research methodology. The success of the implementation of the experimental design research methodology in this information systems project shows much promise for its successful application to similar projects.
7

Mohanarajah, Selvarajah. "Designing CBL systems for complex domains using problem transformation and fuzzy logic : a thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Palmerston North, New Zealand". Massey University, 2007. http://hdl.handle.net/10179/743.

Full text
Abstract
Some disciplines are inherently complex and challenging to learn. This research attempts to design an instructional strategy for CBL systems to simplify the learning of certain complex domains. Firstly, problem transformation, a constructionist instructional technique, is used to promote active learning by encouraging students to construct more complex artefacts based on less complex ones. Scaffolding is used at the initial learning stages to alleviate the difficulty associated with complex transformation processes. The proposed instructional strategy brings various techniques together to enhance the learning experience. A functional prototype is implemented with Object-Z as the exemplar subject. Both objective and subjective evaluations using the prototype indicate that the proposed CBL system has a statistically significant impact on learning a complex domain. CBL systems include Learner models to provide adaptable support tailored to individual learners. Bayesian theory is generally used to manage uncertainty in Learner models. In this research, a fuzzy logic based, locally intelligent Learner model is utilized. The fuzzy model is simple to design and implement, easy to understand and explain, and efficient. Bayesian theory is used to complement the fuzzy model. Evaluation shows that the accuracy of the proposed Learner model is statistically significant. Further, opening the Learner model reduces uncertainty, and the fuzzy rules are simple and resemble human reasoning processes. Therefore, it is argued that opening a fuzzy Learner model is both easy and effective. Scaffolding requires formative assessments. In this research, a confidence-based multiple-test marking scheme is proposed, as traditional schemes are not suitable for measuring partial knowledge. Subjective evaluation confirms that the proposed scheme is effective. Finally, a step-by-step methodology to transform simple UML class diagrams to Object-Z schemas is designed in order to implement problem transformation. This methodology could be extended to implement a semi-automated translation system for UML to Object Models.
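A minimal sketch of the kind of fuzzy Learner model described here, using triangular membership functions and one max-min rule. The breakpoints, level names and rule are assumptions made for illustration; the thesis defines its own memberships and rules.

    # Toy fuzzy Learner model: memberships of a test score in three levels,
    # plus one illustrative rule for how much scaffolding to provide.
    def tri(x, a, b, c):
        """Triangular membership rising from a, peaking at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def knowledge_level(score):
        """Fuzzy memberships of a 0-100 score (breakpoints are assumed)."""
        return {
            "novice":       tri(score, -1, 0, 50),
            "intermediate": tri(score, 25, 50, 75),
            "advanced":     tri(score, 50, 100, 101),
        }

    # Rule (illustrative): scaffold to the degree the learner is a novice and
    # not advanced -- an easily inspected, human-readable rule, which is what
    # makes opening such a model to the learner straightforward.
    m = knowledge_level(40)
    scaffolding = min(m["novice"], 1.0 - m["advanced"])
    print(m, scaffolding)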
8

Feary, Mark S. "Statistical frameworks and contemporary Māori development". Lincoln University, 2008. http://hdl.handle.net/10182/664.

Full text
Abstract
Māori have entered a period of development that, more than ever before, requires them to explore complex options and make careful decisions about the way forward. This complexity stems from three particular areas. First, from having essentially two sets of rights, as New Zealanders and as Māori, and being active in the struggle to retain those rights. Second, from trying to define and determine development pathways that are consistent with their traditional Māori values, and which align with their desire to participate in and enjoy a modern New Zealand and a global society. Third, from attempting development within a political and societal environment that is governed by a different and dominant culture. Māori, historically and contemporarily, have a culture that leads them to very different views of the world and development pathways than pakeha New Zealanders (D. Marsden, 1994, p. 697). Despite concerted effort and misplaced belief, the Māori worldview has survived and is being adopted by Māori youth. The Māori worldview sometimes collides with the view of the governing pakeha culture of New Zealand, which values rights, assets and behaviours differently. Despite these differences and complexities, it remains important to measure progress and inform debate about best practice and future options. In this regard, statistical information is crucial, and is generally recognised as one of the currencies of development (World Summit of the Information Society, 2003). Māori increasingly desire to measure and be informed about the feasibility and progress of their development choices in a way that is relevant to their values and culture. Where a Māori view of reality is not present, there is a high risk that decisions and actions will reflect a different worldview, will fail to deal with cultural complexities, and ultimately will not deliver the intended development outcomes.
9

Zhao, Yue. "Modelling avian influenza in bird-human systems : this thesis is presented in the partial fulfillment of the requirement for the degree of Masters of Information Science in Mathematics at Massey University, Albany, New Zealand". Massey University, 2009. http://hdl.handle.net/10179/1145.

Full text
Abstract
In 1997, the first human case of avian influenza infection was reported in Hong Kong. Since then, avian influenza has become more and more hazardous for both animal and human health. Scientists believe it may not take long for the virus to mutate and become contagious from human to human. In this thesis, we construct models of avian influenza with possible mutation scenarios in bird-human systems. Possible control measures for humans are also introduced into the systems. We compare the analytical and numerical results and try to find the most efficient control measures to prevent the disease.
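Bird-human systems of the kind described here are usually written as coupled compartmental ODEs. The block below is a generic illustration only; the thesis's actual equations, compartments and parameters may differ.

    % Generic bird-human transmission model (illustrative reconstruction).
    % S/I = susceptible/infected, subscripts b = birds, h = humans.
    \begin{align*}
      \dot{S}_b &= \Lambda_b - \beta_b S_b I_b - \mu_b S_b, &
      \dot{I}_b &= \beta_b S_b I_b - (\mu_b + \delta_b) I_b,\\
      \dot{S}_h &= \Lambda_h - \beta_{bh} S_h I_b - \beta_h S_h I_h - \mu_h S_h, &
      \dot{I}_h &= \beta_{bh} S_h I_b + \beta_h S_h I_h - (\mu_h + \delta_h + \gamma) I_h.
    \end{align*}

Here β_bh is the bird-to-human transmission rate, β_h the human-to-human rate that a mutation would switch on, and control measures act by reducing these contact rates.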
10

Chetsumon, Sireerat. "Attitudes of extension agents towards expert systems as decision support tools in Thailand". Lincoln University, 2005. http://hdl.handle.net/10182/1371.

Full text
Abstract
It has been suggested that 'expert systems' might have a significant role in the future through enabling many more people to access human experts. It is, therefore, important to understand how potential users interact with these computer systems. This study investigates the effect of extension agents' attitudes towards the features and use of an example expert system for rice disease diagnosis and management (POSOP). It also considers the effect of extension agents' personality traits and intelligence on their attitudes towards its use, and the agents' perception of control over using it. Answers to these questions lead to developing better systems and to increasing their adoption. Using structural equation modelling, two models - the extension agents' perceived usefulness of POSOP, and their attitude towards the use of POSOP - were developed (Models ATU and ATP). Two of POSOP's features (its value as a decision support tool, and its user interface), two personality traits (Openness (O) and Extraversion (E)), and the agents' intelligence proved to be significant, and were evaluated. The agents' attitude towards POSOP's value had a substantial impact on their perceived usefulness and their attitude towards using it, and thus their intention to use POSOP. Their attitude towards POSOP's user interface also had an impact on their attitude towards its perceived usefulness, but had no impact on their attitude towards using it. However, the user interface did contribute to its value. In Model ATU, neither Openness (O) nor Extraversion (E) had an impact on the agents' perceived usefulness, indicating POSOP was considered useful regardless of the agents' personality background. However, Extraversion (E) had a negative impact on their intention to use POSOP in Model ATP, indicating that 'introverted' agents had a clearer intention to use POSOP than the 'extroverted' agents. Extension agents' intelligence, in terms of their GPA, had an impact on neither their attitude nor their subjective norm (expectation of 'others' beliefs) towards the use of POSOP. It also had no association with any of the variables in either model. Both models explain and predict that it is likely that the agents will use POSOP. However, the availability of computers, particularly their capacity, is likely to impede its use. Although the agents believed using POSOP would not be difficult, they still believed training would be beneficial. To be a useful decision support tool, the expert system's value and user interface, as well as its usefulness and ease of use, are all crucially important to the preliminary acceptance of a system. Most importantly, the users' problems and needs should be assessed and taken into account as a first priority in developing an expert system. Furthermore, the users should be involved in the system development. The results emphasise that the use of an expert system is not only determined by the system's value and its user interface, but also by the agents' perceived usefulness and their attitude towards using it. In addition, the agents' perception of control over using it is also a significant factor. The results suggest that improvements to the system's value and its user interface would increase its potential use, and that providing suitable computers, coupled with training, would encourage its use.
11

Johnston, Christopher Troy. "VERTIPH : a visual environment for real-time image processing on hardware : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Systems Engineering at Massey University, Palmerston North, New Zealand". Massey University, 2009. http://hdl.handle.net/10179/1219.

Full text
Abstract
This thesis presents VERTIPH, a visual programming language for the development of image processing algorithms on FPGA hardware. The research began with an examination of the whole design cycle, with a view to identifying requirements for implementing image processing on FPGAs. Based on this analysis, a design process was developed in which a selected software algorithm is matched to a hardware architecture tailor-made for its implementation. The algorithm and architecture are then transformed into an FPGA-suitable design. It was found that in most cases the most efficient mapping for image processing algorithms is a streamed processing approach. This constrains how data is presented and requires most existing algorithms to be extensively modified. Therefore, the resultant designs are heavily streamed and pipelined. A visual notation was developed to complement this design process, as both streaming and pipelining can be well represented by data-flow visual languages. The notation has three views, each of which represents and supports a different part of the design process. An architecture view gives an overview of the design's main blocks and their interconnections. A computational view captures lower-level details, representing each block by a set of computational expressions and low-level controls. This includes a novel visual representation of pipelining that simplifies latency analysis, multiphase design, priming, flushing and stalling, and the detection of sequencing errors. A scheduling view adds a state machine for high-level control of processing blocks; this extends state objects to allow for the priming and flushing of pipelined operations. User evaluations of an implementation of the key parts of this language (the architecture view and the computational view) found that both were generally good visualisations and aided design (especially the type interface, pipeline and control notations). The user evaluations provided several suggestions for the improvement of the language; in particular, the evaluators would have preferred to use the diagrams as a verification tool for a textual representation rather than as the primary data capture mechanism. A cognitive dimensions analysis showed that the language scores highly for thirteen of the twenty dimensions considered, particularly those related to making details of the design clearer to the developer.
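The priming and flushing behaviour that the computational view visualises can be mimicked with a toy software model of a streamed pipeline. VERTIPH itself is a visual notation for FPGA designs, not Python; the depth and per-pixel operation below are invented for illustration.

    # Toy model of a streamed, pipelined processing block. The first outputs
    # appear only after the pipeline is primed (3 cycles here), and extra
    # cycles at the end flush the remaining values through.
    PIPELINE_DEPTH = 3

    def pipeline(pixels):
        stages = [None] * PIPELINE_DEPTH                    # pipeline registers
        for p in list(pixels) + [None] * PIPELINE_DEPTH:    # trailing Nones flush
            stages = [p] + stages[:-1]                      # one shift per clock
            out = stages[-1]
            if out is not None:                             # None = priming bubble
                yield out * 2                               # stand-in per-pixel work

    print(list(pipeline(range(5))))   # first result emerges after 3 cycles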
12

Deng, Yanbo. "Using web services for customised data entry". Master's thesis, Lincoln University. Environment, Society and Design Division, 2007. http://theses.lincoln.ac.nz/public/adt-NZLIU20080313.185408/.

Full text
Abstract
Scientific databases often need to be accessed from a variety of different applications. There are usually many ways to retrieve and analyse data already in a database. However, it can be more difficult to enter data which has originally been stored in different sources and formats (e.g. spreadsheets, other databases, statistical packages). This project focuses on investigating a generic, platform-independent way to simplify the loading of databases. The proposed solution uses Web services as middleware to supply essential data management functionality such as inserting, updating, deleting and retrieval of data. These functions allow application developers to easily customise their own data entry applications according to local data sources, formats and user requirements. We implemented a Web service to support loading data into the Germinate database at the New Zealand Institute of Crop & Food Research (CFR). We also provided language-specific client toolkits to help developers invoke the Web service. The toolkits allow applications to be easily customised for different platforms. In addition, we developed sample applications to help end users load data from their project data sources via the Web service. The Web service approach was evaluated through user and developer trials. The feedback from the developer trial showed that using Web services as middleware is a useful approach to allow developers and competent end users to customise data entry with minimal effort. More importantly, the customised client applications enabled end users to load data directly from their project spreadsheets and databases. It significantly reduced the effort required for exporting or transforming the source data.
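A sketch of what a customised data-entry client built on such middleware might look like: it reads rows from a local spreadsheet export and pushes each one through an insert operation. The endpoint URL, payload shape and field names are invented for illustration; the actual Germinate service and its client toolkits are not reproduced here.

    # Hypothetical data-entry client: local CSV rows -> Web service inserts.
    import csv
    import requests

    SERVICE_URL = "http://example.org/germinate/insert"   # placeholder endpoint

    with open("accessions.csv", newline="") as f:
        for row in csv.DictReader(f):
            # map local column names onto the fields the service expects
            record = {"accession_id": row["id"], "species": row["species"]}
            resp = requests.post(SERVICE_URL, json=record, timeout=10)
            resp.raise_for_status()                       # fail fast on rejects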
13

Pacey, H. A. "The benefits and barriers to GIS for Māori". Lincoln University, 2005. http://hdl.handle.net/10182/655.

Full text
Abstract
A Geographic Information System visually communicates both spatial and temporal analyses and has been available for at least twenty years in New Zealand. Using a Kaupapa Māori Research framework, this research investigates the benefits and barriers for Māori if they were to adopt GIS to assist their development outcomes. Internationally, indigenous peoples who have adopted GIS have reported deriving significant cultural development benefits, including the preservation and continuity of traditional knowledge and culture. As Māori development continues to expand in an increasing array of corporate, scientific, management and cultural arenas, the level of intensity required to keep abreast of developments has also expanded. GIS has been used by some roopū to assist their contemporary Māori development opportunities; it has been suggested as a cost-effective method for spatial research for Waitangi Tribunal claims; it has supported and facilitated complex textual and oral evidence; and it has also been used to assist negotiation and empowerment at both central and local government level. While many successful uses are attributed to GIS projects, there are also precautionary calls from practitioners regarding the obstacles they have encountered. Overall, whilst traditional knowledge and contemporary technology have been beneficially fused together, in some instances hidden or unforeseen consequences have impeded or imperilled the seamless uptake of this new technology. Challenges to the establishment of a GIS range from the theoretical (mapping cultural heritage) to the practical (access to data) to the pragmatic (costs and resources). The multiple issues inherent in mapping cultural heritage, indigenous cartography and, in particular, the current lack of intellectual property rights protection measures are also potential barriers to successful, long-term integration of GIS into the tribal development matrix. The key impediments to GIS establishment identified by surveyed roopū were a lack of information and human resources, and the prioritisation of more critical factors affecting tangata whenua. Respondents also indicated they would utilise GIS if the infrastructure was in place and the cost of establishment decreased. Given the large amount of resources to be invested in GIS, and the opportunity to establish safe practices to ensure continuity of the GIS, it is prudent to make informed decisions prior to investment. As an applied piece of Kaupapa Māori research, a tangible outcome in the form of an establishment Guide is presented. Written in a deliberately novice-friendly manner, the Guide traverses fundamental issues surrounding the establishment of a GIS, including investment costs and establishment processes.
14

Rutherford, Paul. "Usability of navigation tools in software for browsing genetic sequences". Diss., Lincoln University, 2008. http://hdl.handle.net/10182/948.

Full text
Abstract
Software to display and analyse DNA sequences is a crucial tool for bioinformatics research. DNA sequence data has a relatively simple format, but its length and sheer volume can create difficulties in navigating it while maintaining overall context. This is one reason that current bioinformatics applications can be difficult to use. This research examines techniques for navigating through large single DNA sequences and their annotations. Navigation in DNA sequences is considered here in terms of three navigational activities: exploration, wayfinding and identifying objects. A process incorporating user-centred design was used to create prototypes involving panning and zooming of DNA sequences. This approach included a questionnaire to define the target users and their goals, an examination of existing bioinformatics applications to identify navigation designs, a heuristic evaluation of those designs, and a usability study of prototypes. Three designs for panning and five designs for zooming were selected for development. During usability testing, users were asked to perform common navigational activities using each of the designs. The “Connected View” design was found to be the most usable for panning, while the “Zoom Slider” design was best for zooming and was the most useful zooming tool for tasks involving browsing. For some tasks the ability to zoom was unnecessary. The research provides important insights into the expectations that researchers have of bioinformatics applications and suitable methods for designing for that audience. The outcomes of this type of research can be used to help improve bioinformatics applications so that they will be truly usable by researchers.
15

Liu, MingHui. "Navel orange blemish identification for quality grading system : a thesis submitted in partial fulfilment of the requirements for the degree of Master of Computer Science at Massey University, Albany, New Zealand". Massey University, 2009. http://hdl.handle.net/10179/1175.

Full text
Abstract
Each year, the world’s top orange producers output millions of oranges for human consumption. This production is projected to grow by as much as 64 million in 2010, and so the demand for fast, low-cost and precise automated orange fruit grading systems is deemed to become ever more important. There is, however, an underlying limit to most orange blemish detection algorithms. Most existing statistical-based, structural-based, model-based and transform-based orange blemish detection algorithms are plagued by the following problem: any pixels in an image of an orange having about the same magnitudes for the red, green and blue channels will almost always be classified as belonging to the same category (either a blemish or not). This presents a serious problem, as the RGB components of the pixels corresponding to blemishes are very similar to those of pixels near the boundary of an orange. In light of this problem, this research utilizes a priori knowledge of the local intensity variations observed on rounded convex objects to classify the ambiguous pixels correctly. The algorithm has the effect of peeling off layers of the orange skin according to gradations of the intensity. Therefore, any abrupt discontinuities detected along successive layers significantly help to identify skin blemishes more accurately. A commercial-grade fruit inspection and distribution system was used to collect 170 navel orange images. Of these images, 100 were manually classified as good oranges by human inspection and the rest as blemished. We demonstrate the efficacy of the algorithm using these images as the benchmarking test set. Our results show that the system correctly classified 96% of the good oranges and 97% of the blemished oranges. The proposed system is easily customizable as it does not require any training. The fruit quality bands can be adjusted to meet the requirements set by market standards by specifying an agreeable percentage of blemishes for each band.
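A rough sketch of the layer-peeling idea: bin pixel intensities into brightness layers and flag pixels whose layer index jumps abruptly relative to a neighbour, since smooth shading on a convex orange moves through layers gradually. The band count and jump threshold are illustrative assumptions, not the thesis's calibrated values.

    # Intensity "layer peeling" sketch: abrupt layer jumps = suspect blemish.
    import numpy as np

    def blemish_mask(gray, n_layers=16, jump=4):
        """gray: 2-D uint8 image of an orange; returns a boolean blemish mask."""
        edges = np.linspace(0, 255, n_layers + 1)
        layer = np.digitize(gray, edges)                 # per-pixel layer index
        dy = np.abs(np.diff(layer, axis=0, prepend=layer[:1]))
        dx = np.abs(np.diff(layer, axis=1, prepend=layer[:, :1]))
        return np.maximum(dx, dy) >= jump                # True where skin breaks

    # Synthetic demo: a smooth dome (orange-like shading) with a dark patch.
    demo = (np.outer(np.hanning(64), np.hanning(64)) * 255).astype(np.uint8)
    demo[30:34, 30:34] = 0
    print(blemish_mask(demo).sum(), "suspect pixels")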
16

Thornber, Michael John. "Square pegs and round holes: application of ISO 9000 in healthcare". 2002. http://hdl.handle.net/2292/2180.

Full text
Abstract
This research examines the application of the ISO 9000 model for quality management in healthcare. An exploratory case study is made of three healthcare provider organisations: a community health service; an independent practitioner association; and a Māori health network. Three research models are developed to examine identified gaps and areas of interest in the healthcare quality management literature. The first model relates to differences between generic standards and specification standards. The second model relates to the fit of healthcare service delivery systems and ISO 9000. The third model relates to exploration of the linkages and co-ordination of an integrated care delivery network. One proposition and two hypotheses are developed in relation to the models, and are closely associated with gaps in healthcare service quality knowledge. Strong support is found for the first hypothesis though not for the second, and there are also some unexpected results. There is strong support that the process of implementing the ISO 9000 model will enhance healthcare management performance, even though the outcomes are unpredictable. There are indications supporting the notion that implementation of the ISO 9000 model will increase effective linkages and co-ordination within integrated care delivery networks. The body of evidence accumulated during the study did not, however, permit a valid conclusion regarding the hypothesis. The findings of the study can be extended to other healthcare service areas, and through interpretation and extrapolation they add value to healthcare service quality research in general. In particular, the findings of the three case studies in this research suggest that future models for healthcare service quality should include a comprehensive generic model for quality management of individual and integrated healthcare service organisations.
17

Tan, Felix B. "Business-IT Alignment and Shared Understanding Between Business and IS Executives: A Cognitive Mapping Investigation". 2001. http://hdl.handle.net/2292/2228.

Full text
Abstract
Whole document restricted; see the repository's access instructions for how to obtain the print copy.
Achieving and sustaining business-IT alignment in organisations continues to be a management challenge into the new millennium. As organisations strive toward this end, researchers are attempting to better understand the alignment phenomenon. Empirical research into business-IT alignment is dominated by studies examining the relationship between business strategy, information technology and performance. Investigations into the factors enabling or inhibiting alignment are emerging. This research has traditionally taken a behavioural perspective. There is evidence of little research that examines the issue through a cognitive lens. This thesis builds on and extends the study of business-IT alignment by investigating the cognition of the key stakeholders of the alignment process - business and IS executives. Drawing on Personal Construct Theory (Kelly, 1955), this study uses a cognitive mapping methodology known as the repertory grid technique to investigate two questions: i) is there a positive relationship between business-IT alignment and shared understanding between business and IS executives?; and ii) are there differences in the cognitive maps of business and IS executives in companies that report high business-IT alignment and those that report low business-IT alignment? Shared understanding is defined as cognition that is held in common between, and that which is distributed amongst, business and IS executives. It is portrayed in the form of a cognitive map for each company. The study proposes that business-IT alignment is directly related to the shared understanding between business and IS executives and that the cognitive maps of these executive groups are less diverse in companies that report a high level of alignment. Eighty business and IS executives from six companies were interviewed. Cognitive maps were elicited from the research participants, from which the diversity between the cognitive maps of business and IS executives was measured. A collective cognitive map was produced to illustrate the quality of the shared understanding in each company. The state of business-IT alignment in each company was also measured. The results of the study suggest that there is a strong positive link between business-IT alignment and shared understanding between business and IS executives. As expected, companies with a high level of business-IT alignment demonstrate high-quality shared understanding between their business and IS executives, as measured and portrayed by their collective cognitive maps. The investigation further finds significant diversity in the structure and content of the cognitive maps of these executive groups in companies reporting a low level of alignment. This study concludes that shared understanding between business and IS executives is important to business-IT alignment. Reconciling the diversity in the cognitive maps of business and IS executives is a step toward achieving and sustaining alignment. Practical approaches to developing shared understanding are proposed. A methodology to aid organisations in assessing shared understanding between their business and IS executives is also outlined. Finally, research on business-IT alignment continues to be a fruitful and important field of IS research. This study suggests that the most interesting issues are at the interface between cognition and behaviour. The process of business-IT alignment in organisations is characterised by the individuality and commonality in the cognition of key stakeholders, its influence on the behaviour of these members, and hence the organisational action taken.
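As a toy illustration of comparing repertory grids: each executive rates the same elements (rows) against the same constructs (columns), and diversity can be summarised as a distance between the two rating matrices. The grids and the particular distance below are invented for illustration; the thesis defines its own measures.

    # Toy repertory-grid comparison: mean absolute rating difference.
    import numpy as np

    business_exec = np.array([[5, 4, 2],
                              [3, 5, 1],
                              [4, 4, 2]])   # elements x constructs, 1-5 scale
    is_exec = np.array([[2, 4, 5],
                        [3, 2, 4],
                        [1, 4, 5]])

    diversity = np.mean(np.abs(business_exec - is_exec))
    print("grid diversity:", diversity)     # 0 = identical maps, 4 = maximal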
18

Day, Karen Jean. "Supporting the emergence of a shared services organisation: Managing change in complex health ICT projects". 2008. http://hdl.handle.net/2292/2476.

Full text
Abstract
Although there is a high risk of failure in the implementation of ICT projects (which appears to extend to health ICT projects), we continue to implement health information systems in order to deliver quality, cost-effective healthcare. The purpose of the research was to participate in and study change management as a critical success factor in health ICT projects, and to examine people's responses to change so as to develop understanding and theory that could be used in future change management programmes. The research was conducted within the context of a large infrastructure project that resulted from the emergence of a shared services organisation (from two participating District Health Boards in Auckland, New Zealand). Action research (AR) formed the basis of the methodology used, and provided the foundation for a change management programme: the AR intervention. Grounded theory (GT) was used for some of the data analysis, the generation of themes by means of constant comparison, and the deeper examination of the change process using theoretical sampling. AR and GT together supported the development of theory regarding the change process associated with health ICT projects. The findings revealed health ICT projects to exhibit the properties of complex adaptive systems. This complexity highlighted the art of change management as a critical success factor for such projects. The fabric of change emerged as a composite of processes linked to project processes and organisational processes. The turning point in the change process from the before state to the after state is marked by a capability crisis, which requires effective patterns of leadership, sensitive targeting of communication, effective learning, and management of increased workload and diminishing resources during the course of health ICT projects. A well-managed capability crisis period, as a component of change management, can substantially contribute to health ICT project success.
19

Kirk, Diana Caroline. "Flexible software process model". 2007. http://hdl.handle.net/2292/4228.

Full text
Abstract
Many different kinds of process are used to develop software intensive products, but there is little agreement as to which processes give the best results under which circumstances. Practitioners and researchers believe that project outcomes would be improved if the development process was constructed according to project-specific factors. In order to achieve this goal, greater understanding of the factors that most affect outcomes is needed. To improve understanding, researchers build models of the process and carry out studies based on these models. However, current models contain many ambiguities and assumptions, and so it is not clear what the results of the studies mean. The statement of this thesis is that it is possible to create an abstraction of the software development process that will provide a mechanism for comparing software processes and software process models. The long term goal of the research is to provide planners with a means of tailoring the development process on a project by project basis, with the aim of reducing risk and improving outcomes.
20

Gutierrez, Jairo A. "Multi-Vendor System Network Management: A Roadmap for Coexistence". 1997. http://hdl.handle.net/2292/1970.

Full text
Abstract
Whole document restricted; see the repository's access instructions for how to obtain the print copy.
As computer networks become more complex and more heterogeneous (often involving systems from multiple vendors), the importance of integrated network management increases. This thesis summarises the efforts of research carried out 1) to identify the characteristics and requirements of an Integrated Network Management Environment (INME) and its individual components, 2) to propose a model to represent the INME, 3) to demonstrate the validity of the model, 4) to describe the steps needed to formally specify the model, and 5) to suggest an implementation plan for the INME. One of the key aspects of this thesis is the introduction of three different and complementary models used to integrate the emerging OSI management standards with the tried-and-proven network management solutions promoted by the Internet Activities Board. The Protocol-Oriented Network Management Model is used to represent the existing network management supported by the INME, i.e., OSI and Internet-based systems. The Element-Oriented Network Management Model represents the components that are used within individual network systems. It describes the managed objects and the platform application program interfaces (APIs). This model also includes the translation mechanisms needed to support the interaction between OSI managers and Internet agents. The Interoperability Model is used to represent the underlying communications infrastructure supporting network management. The communications between agents and managers are represented in this model by using the required protocol stacks (OSI or TCP/IP), and by depicting the interconnection between the entities using the network management functions. This three-pronged classification provides a richer level of abstraction, facilitating the coexistence of the standard network management systems, allowing different levels of modelling complexity, and improving access to managed objects. The ultimate goal of this thesis is to describe a framework that assists developers of network management applications in the process of integrating their solutions into an open systems network management platform. This framework will also help network managers to minimise the risks involved in the transition from first-generation network management systems to more integrated alternatives as they become available.
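On the Internet side of this integration, the manager-agent exchange the models describe is typified by an SNMP GET. A minimal sketch using the pysnmp library's high-level API is shown below; the agent address, community string and the use of pysnmp itself are illustrative assumptions (the thesis predates this library).

    # Minimal SNMP GET (manager -> agent) for sysDescr.0, as a concrete
    # instance of the Internet (TCP/IP) management stack discussed above.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    error_indication, error_status, error_index, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),                 # SNMPv2c
        UdpTransportTarget(('192.0.2.1', 161)),             # placeholder agent
        ContextData(),
        ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0'))))   # sysDescr.0

    if error_indication:
        print(error_indication)
    else:
        for name, value in var_binds:
            print(name, "=", value)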
21

Costain, Gay. "Cognitive Support during Object-Oriented Software Development: The Case of UML Diagrams". 2008. http://hdl.handle.net/2292/2603.

Full text
Abstract
The Object Management Group (OMG) accepted the Unified Modelling Language (UML) as a standard in 1997, yet there is sparse empirical evidence to justify its choice. This research aimed to address that lack by investigating the modification of programs for which external representations, drawn using the UML notations most commonly used in industry, were provided. The application of modelling to assist program modification was chosen as a result of interviews that were carried out in New Zealand and North America to discover whether workers in the software industry used modelling and, if so, whether UML notation satisfied their needs. The most preferred UML diagrams were identified from the interviews. A framework of modelling use in software development was derived. A longitudinal study at a Seattle-based company was the source that suggested that program modification should be investigated. The methodology chosen for the research required subjects to modify two non-trivial programs, one of which was supplied with UML documentation. There were two aspects to the methodology. First, the subjects' performances with and without the aid of UML documentation were compared. Modifying a program is an exercise in problem solving, which is a cognitive activity. If the use of UML improved subjects' performances, then it could be said that the UML had aided the subjects' cognition. Second, concurrent verbal protocols were collected whilst the subjects modified the programs. The protocols for the modification with UML documentation, for ten of the more successful subjects, were transcribed and analysed according to a framework derived from the literature. The framework listed the possible cognitive steps involved in problem solving where cognition could be distributed to and from external representations. The categories of evidence that would confirm cognitive support were also derived from the literature. The experiments confirmed that programmers from similar backgrounds varied widely in ability and style. Twenty programmers modified both an invoice application and a diary application. There was some indication that the UML diagrams aided performance. The analyses of all ten of the transcribed subjects showed evidence of UML cognitive support.
22

Berkowitz, Zeev. "A methodology for business processes identification: developing instruments for an effective enterprise system project". 2006. http://hdl.handle.net/2292/4346.

Full text
Abstract
Whole document restricted; see the repository's access instructions for how to obtain the print copy.
Since the mid-1990s, thousands of companies around the world have implemented Enterprise Systems (ES), which are considered to be the most important development in the corporate use of information technology. By providing computerized support to business processes spanning both the enterprise and the supply chain, these systems have become an indispensable tool utilized by organizations to accomplish and maintain efficient and effective operational performance. However, there are many cases in which ES implementation has failed in terms of the required time and budget, and more importantly, in terms of functionality and performance. One of the main causes of these failures is the misidentification and improper selection of the business processes to be implemented in the ES, which are a crucial element of the system's implementation life cycle. In order to achieve effective implementation, a ‘necessary and sufficient’ set of business processes must be designed and implemented. Implementing an excessive set of business processes is costly; yet implementing an insufficient set is ruinous. The heuristic identification of the set of business processes, based on requirement elicitation, is flawed; there is no guarantee that all the necessary processes have been captured (Type I error), or that no superfluous processes have been selected for implementation (Type II error). The existing implementation methods do not include a methodology to address this vital issue. This thesis aims to resolve this problem and to provide a methodology that will generate a necessary and sufficient set of business processes in a given organization, based on its specific characteristics, to be used as a baseline for implementing an ES. A proper definition of the business processes and their associated properties is proposed and detailed. The properties are then used as parameters to generate the complete set of all the possible business processes in the organization; from this set, the necessary and sufficient processes are selected. The methodology exposes the fundamental level of business processes, which are then used as a baseline for further phases in the implementation process. The proposed methodology has been tested through the analysis of companies that have implemented ES. In each of these cases, the identification of business processes utilizing the proposed methodology has proven to provide superior results to those obtained through all other implemented practices, producing a better approximation of their existing business processes.
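The generate-then-select step described here can be pictured as enumerating the Cartesian product of process property values and filtering it with a necessity predicate, as in the sketch below. The properties and the toy predicate are invented for illustration; the methodology's actual parameters are defined in the thesis.

    # Generate all candidate processes from property values, then select.
    from itertools import product

    properties = {
        "object":  ["order", "invoice", "shipment"],
        "action":  ["create", "approve", "archive"],
        "trigger": ["customer", "schedule"],
    }

    candidates = [dict(zip(properties, combo))
                  for combo in product(*properties.values())]

    def necessary(p):
        # Toy selection rule: shipments are never approved; archiving
        # only ever runs on a schedule.
        if p["object"] == "shipment" and p["action"] == "approve":
            return False
        return p["action"] != "archive" or p["trigger"] == "schedule"

    selected = [p for p in candidates if necessary(p)]
    print(len(candidates), "generated ->", len(selected), "kept")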
23

Ma, Hui. "Distribution design for complex value databases : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University". 2007. http://hdl.handle.net/10179/747.

Full text
Abstract
Distribution design for databases usually addresses the problems of fragmentation, allocation and replication. However, the main purposes of distribution are to improve performance and to increase system reliability. The former aspect is particularly relevant in cases where the desire to distribute data originates from the distributed nature of an organization, with many data needs only arising locally, i.e., some data are retrieved and processed at only one or at most very few locations. Therefore, query optimization should be treated as an intrinsic part of distribution design. Due to the interdependencies between fragmentation, allocation and distributed query optimization, it is not efficient to study each of the problems in isolation to obtain an overall optimal distribution design. However, the combined problem of fragmentation, allocation and distributed query optimization is NP-hard, and thus requires heuristics to generate efficient solutions. In this thesis the foundations of fragmentation and allocation in databases, and their effects on query processing, are investigated using a query cost model. The databases considered are defined on complex value data models, which capture complex value, object-oriented and XML-based databases. The emphasis on complex value databases enables a large variety of schema fragmentations, while at the same time it imposes restrictions on the way schemata can be fragmented. It is shown that the allocation of locations to the nodes of an optimized query tree is only marginally affected by the allocation of fragments. This implies that optimization of query processing and optimization of fragment allocation are largely orthogonal to each other, leading to several scenarios for fragment allocation. Therefore, it is reasonable to assume that optimized queries are given, with subqueries having selection and projection operations applied to leaves. With this assumption, some heuristic procedures can be developed to find an “optimal” fragmentation and allocation. In particular, cost-based algorithms for primary horizontal, derived horizontal and vertical fragmentation are presented.
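For reference, the textbook form of primary horizontal fragmentation that such cost-based algorithms build on can be stated as follows (standard definitions in generic notation, not the thesis's specific algorithms):

    % Horizontal fragmentation of a relation R by selection predicates p_i.
    \begin{align*}
      &F_i = \sigma_{p_i}(R), \quad i = 1,\dots,n
        && \text{fragments defined by predicates}\\
      &R = F_1 \cup F_2 \cup \dots \cup F_n
        && \text{completeness / reconstruction}\\
      &F_i \cap F_j = \emptyset \quad (i \neq j)
        && \text{disjointness}
    \end{align*}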
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

Giles, Jonathan Andrew. "Improving Centruflow using semantic web technologies : a thesis presented in partial fulfillment of the requirements for the degree of Master of Science in Computer Science at Massey University, Palmerston North, New Zealand". 2007. http://hdl.handle.net/10179/801.

Texto completo
Resumen
Centruflow is an application that can be used to visualise structured data. It does this by drawing graphs, allowing users to explore information relationships that may not be visible or easily understood otherwise. This helps users to gain a better understanding of their organisation and to communicate more effectively. In earlier versions of Centruflow, it was difficult to develop new functionality, as it was built using a relatively unsupported and proprietary visualisation toolkit. In addition, there were major issues surrounding information currency and trust. Addressing these issues was a sub-project of this thesis. The main purpose of this thesis, however, was to research and develop a set of mathematical algorithms to infer implicit relationships in Centruflow data sources. Once these implicit relationships were found, we could make them explicit by showing them within Centruflow. To enable this, users were given the ability to 'tag' resources with metadata, and relationships were calculated from this metadata. We believed that by using this tagging metadata, Centruflow could offer users far more insight into their own data. Implementing this was not a straightforward task, as it required a considerable amount of research and development to understand and appreciate technologies that could help us towards our goal. Our focus was primarily on technologies and approaches common in the semantic web and 'Web 2.0' areas. By pursuing semantic web technologies, we ensured that Centruflow would be considerably more standards-compliant than it was previously. At the conclusion of our development period, Centruflow had been substantially 'retrofitted', with all proprietary technologies replaced with equivalent semantic web technologies. As a result, Centruflow is now positioned at the forefront of the semantic web wave, allowing for far more comprehensive and rapid visualisation of a far larger set of readily available data than was previously possible. Having implemented all the necessary functionality, we validated our approach and were pleased to find that our improvements led to a considerably more intelligent and useful Centruflow application than was previously available. This functionality is now available as part of 'Centruflow 3.0', which will be publicly released in March 2008. Finally, we conclude this thesis with a discussion of the future work that should be undertaken to improve on the current release.
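The abstract does not spell out the inference algorithms; one simple way to infer implicit relationships from tagging metadata is tag co-occurrence via Jaccard similarity, with a threshold deciding when to draw an edge. A hypothetical sketch (resource names, tags and threshold are all assumptions):

    # Infer implicit resource relationships from shared tags (illustrative only).
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    tags = {  # resource -> user-assigned tags (hypothetical)
        "report.doc": {"finance", "2007", "quarterly"},
        "budget.xls": {"finance", "2007", "forecast"},
        "logo.png":   {"branding"},
    }
    edges = [(r1, r2) for r1 in tags for r2 in tags
             if r1 < r2 and jaccard(tags[r1], tags[r2]) >= 0.4]
    print(edges)  # [('budget.xls', 'report.doc')]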
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Ferrarotti, Flavio Antonio. "Expressibility of higher-order logics on relational databases : proper hierarchies : a dissertation presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Wellington, New Zealand". 2008. http://hdl.handle.net/10179/799.

Texto completo
Resumen
We investigate the expressive power of different fragments of higher-order logics over finite relational structures (or equivalently, relational databases), with special emphasis on higher-order logics of order greater than or equal to three. Our main results concern the effect on the expressive power of higher-order logics of simultaneously bounding the arity of the higher-order variables and the alternation of quantifiers. Let AAi(r,m) be the class of (i + 1)-th order logic formulae where all quantifiers are grouped together at the beginning of the formulae, forming m alternating blocks of consecutive existential and universal quantifiers, and such that the maximal-arity (a generalization of the concept of arity, not just the maximum of the arities of the quantified variables) of the higher-order variables is bounded by r. Note that the order of the quantifiers in the prefix may be mixed. We show that, for every i ≥ 1, the resulting AAi hierarchy of formulae of (i + 1)-th order logic is proper. This extends a result by Makowsky and Pnueli, who proved that the same hierarchy in second-order logic is proper. In both cases the strategy used to prove the results consists in considering the set AUTOSAT(F) of formulae in a given logic F which, represented as finite structures, satisfy themselves. We then use a similar strategy to prove that the classes of Σ^i_m ∪ Π^i_m formulae in which the higher-order variables of all orders up to i + 1 have maximal-arity at most r also induce a proper hierarchy in each higher-order logic of order i ≥ 3. It is not known whether the corresponding hierarchy in second-order logic is proper. Using the concept of finite model truth definitions introduced by M. Mostowski, we give a sufficient condition for that to be the case. We also study the complexity of the set AUTOSAT(F) and show that when F is one of the prenex fragments Σ^1_m of second-order logic, AUTOSAT(F) becomes a complete problem for the corresponding prenex fragment Σ^2_m of third-order logic. Finally, aiming to provide the background for a future line of research in higher-order logics, we take a closer look at the restricted second-order logic SO^w introduced by Dawar. We further investigate its connection with the concept of relational complexity studied by Abiteboul, Vardi and Vianu. Dawar showed that the existential fragment of SO^w is equivalent to the nondeterministic inflationary fixed-point logic NFP. Since NFP captures relational NP, it follows that the existential fragment of SO^w captures relational NP. We give a direct proof, in the style of the proof of Fagin's theorem, of this fact. We then define formally the concept of relational machine with relational oracle and prove the exact correspondence between the prenex fragments of SO^w and the levels of the relational polynomial-time hierarchy. This allows us to establish a direct connection between the relational polynomial hierarchy and SO without using the Abiteboul and Vianu normal form for relational machines.
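For reference, the self-satisfiability set used in the proofs above can be stated compactly in LaTeX notation; enc(φ) names the finite-structure encoding of a formula φ and is our notation for illustration, not the thesis's:

    \mathrm{AUTOSAT}(F) \;=\; \{\, \varphi \in F \;:\; \mathrm{enc}(\varphi) \models \varphi \,\}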
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Lin, Tai-Yu. "Cognitive trait model for adaptive learning environments : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information System [i.e. Systems], Massey University, Palmerston North, New Zealand". 2007. http://hdl.handle.net/10179/1451.

Texto completo
Resumen
In student modelling research, domain-independent student models have usually been a rarity. They are valued for their reusability and economy. The demand for domain-independent student models is further increased by the need to stay competitive in today's so-called knowledge economy and by the widespread practice of lifelong learning. On the other hand, the popularity of student-oriented pedagogy triggers the need to provide cognitive support in virtual learning environments, which in turn requires student models that create cognitive profiles of students. This study offers an innovative student modelling approach called the cognitive trait model (CTM) to address both of the needs mentioned above. CTM is a domain-independent and persistent student model that goes beyond the traditional concept of a student model. It is capable of taking the role of a learning companion that knows about the cognitive traits of the student and can supply this information when the student first starts using a new learning system. The behaviour of students in the learning systems can then be used to update the CTM. Three cognitive traits are included in the CTM in this study: working memory capacity, inductive reasoning ability and divergent associative learning. For these three cognitive traits, their domain-independence and persistence are studied and defined, their characteristics are examined, and behaviour patterns that can be used to indicate them are extracted. In this study, a learning system was developed to gather behaviour data from students. Several web-based psychometric tools were also developed to gather psychometric data about the three cognitive traits of students. In the evaluations, cognitive trait modelling was applied to the behaviour data and the results were compared with the psychometric data. The findings demonstrate the effectiveness of the CTM and reveal important insights about the three cognitive traits.
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Yang, Jingyu. "Improving effectiveness of dialogue in learning communities : a thesis presented in partial fulfilment of the requirements for the degree of Ph. D. in Information Systems at Massey University, Palmerston North, New Zealand". 2009. http://hdl.handle.net/10179/1410.

Texto completo
Resumen
In a learning community, conventional discussion forums are integral to web-based interventions in traditional classrooms as well as in online learning environments. Despite the popular belief that they are a great success in fostering deep and meaningful discussion and supporting active learning, research has found that the millions of messages users post to express their opinions are hard to deliver directly to all users, and that millions of postings end up stored away in databases across the country, never to be reused. This thesis presents the author's doctoral research. It proposes a distributed intelligent discussion forum system dedicated to supporting both students and teachers. The system is developed with the primary goal of reducing the number of problems associated with conventional discussion forum systems in web-based environments and improving the effectiveness of dialogue between students, and between students and teachers, so that it can enhance each individual student's ability to share and acquire knowledge.
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Todd, Elisabeth-Ann Gynn. "Learning about user interface design through the use of user interface pattern languages : a thesis dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, New Zealand". 2010. http://hdl.handle.net/10179/1708.

Texto completo
Resumen
The focus of this research is to investigate the potential of user interface (UI) pattern languages in assisting students of Human-Computer Interaction (HCI) to learn the principles of UI design. A graphical representation named a UI-pattern model was developed. It arose from the evaluation of four existing pattern languages. The UI-pattern model is an enhanced form of UI pattern list that represents a specific UI. It was recognised that the UI-pattern model has the potential to help students learn about pattern language structure. It was also realised that UI-pattern modelling can be used to incrementally improve pattern languages through the generative process proposed by Alexander (1979). A UI pattern language Maturity Model (UMM) was developed. This model can be used by educators when selecting and/or modifying existing UI pattern languages so that they are more appropriate for student use. A method for developing detailed UI designs that utilises a UI pattern language was developed with the aim of providing students with an ‘authentic’ real-world UI design experience, as envisaged by constructivist educational theory (Jonassen 1999). This UI design method (TUIPL) guides the students' development of user interface conceptual models. To establish the authenticity of TUIPL, three case studies were undertaken with developers who had differing levels of UI design experience. A series of studies investigated how HCI students used TUIPL to guide the development of UI-pattern models and canonical abstract prototypes. The studies also ascertained the students' views on using three different forms of UI pattern (illustrated, narrative and diagrammed). Data were collected by observation, questionnaires and completed exercises. The results indicate that the students developed an understanding of pattern language structure, were positive about their experience building UI-pattern models and canonical abstract prototypes, and found that patterns aided communication. The learning outcomes were encouraging, and students responded positively to using a UI pattern language.
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Goh, Tiong Thye. "A framework for multiplatform e-learning systems : a thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information System [sic] at Massey University, Palmerston North, New Zealand". 2007. http://hdl.handle.net/10179/1576.

Texto completo
Resumen
A multiplatform e-learning system is an e-learning system that can deliver learning content to different accessing devices such as PCs, PDAs and mobile phones. The main objective of the research is to formulate a framework for multiplatform e-learning systems. This thesis focuses on the formulation, competency and constitution of the multiplatform e-learning systems framework and the implementation of a multiplatform e-learning system. In conjunction with the main objective, the research also addresses the factors that influence learner satisfaction during engagement with a multiplatform e-learning system, and investigates the relationships between these factors in influencing learner satisfaction. The research also aims to validate the assertion that multiplatform e-learning systems are better than non-adaptive e-learning systems. A comparative evaluation between a traditional e-learning system and a multiplatform e-learning system, from the end-user (learner) perspective, was conducted. The evaluation instrument is based on the multiplatform e-learning system questionnaires (MELQ). A total of forty participants took part in the evaluation: four in the initial pilot evaluation and thirty-six in the final evaluation. Data analysis and statistical results indicate that there are potential gains in learner satisfaction scores for multiplatform e-learning systems over traditional e-learning systems. The results also show that the gain is more significant on mobile devices than on desktop PCs. Statistical analysis reveals that all the factors that influence learner satisfaction are significant and that they have different levels of influence; these factors can be further organized into primary and secondary factors. These findings and the evaluation methodology can play an important role in helping e-learning system designers to improve the adaptation process and to enhance the level of learner satisfaction in multiplatform e-learning systems.
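The abstract does not name the statistical tests actually used; one plausible way to compare per-participant satisfaction scores between the two systems is Welch's t-test. A sketch with made-up scores (a 1-5 satisfaction scale is assumed):

    # Hypothetical comparison of satisfaction scores; data are invented.
    from scipy import stats

    traditional   = [3.1, 3.4, 2.9, 3.6, 3.2, 3.0]
    multiplatform = [3.8, 4.1, 3.9, 4.4, 3.7, 4.0]
    t, p = stats.ttest_ind(multiplatform, traditional, equal_var=False)
    print(f"t = {t:.2f}, p = {p:.4f}")  # a small p suggests a real satisfaction gain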
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Koehler, Henning. "On fast and space-efficient database normalization : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Palmerston North, New Zealand". 2007. http://hdl.handle.net/10179/806.

Texto completo
Resumen
A common approach in designing relational databases is to start with a relation schema, which is then decomposed into multiple subschemas. A good choice of subschemas can often be determined using integrity constraints defined on the schema. Two central questions arise in this context. The first issue is what decompositions should be called "good", i.e., what normal form should be used. The second issue is how to find a decomposition into the desired form. These questions have been the subject of intensive research since relational databases came to life. A large number of normal forms have been proposed, and methods for their computation given. However, some of the most popular proposals still have problems:
- algorithms for finding decompositions are inefficient
- dependency-preserving decompositions do not always exist
- decompositions need not be optimal w.r.t. redundancy/space/update anomalies
We address these issues in this work by:
- designing efficient algorithms for finding dependency-preserving decompositions
- proposing a new normal form which minimizes overall storage space
This new normal form is then characterized syntactically, and shown to extend existing normal forms.
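Decomposition and dependency-preservation checks of the kind discussed above rest on the textbook attribute-closure computation; a small self-contained sketch of that standard building block (not the thesis's own algorithms):

    def closure(attrs, fds):
        """Attribute closure X+ of attrs under functional dependencies.
        fds: iterable of (lhs, rhs) pairs of attribute sets."""
        result = set(attrs)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in fds:
                # fire the FD lhs -> rhs if its left side is already covered
                if set(lhs) <= result and not set(rhs) <= result:
                    result |= set(rhs)
                    changed = True
        return result

    # Example: R(A, B, C) with A -> B and B -> C, so {A}+ = {A, B, C}
    fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
    assert closure({"A"}, fds) == {"A", "B", "C"}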
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Zhao, Fei. "The future of personal area networks in a ubiquitous computing world : a thesis presented in partial fulfillment of the requirements for the degree of Master of Information Sciences in Information Systems at Massey University, Auckland, New Zealand". 2008. http://hdl.handle.net/10179/819.

Texto completo
Resumen
In the future world of ubiquitous computing, wireless devices will be everywhere. Personal area networks (PANs), networks that facilitate communications between devices within a short range, will be used to send and receive data and commands that fulfill an individual's needs. This research determines the future prospects of PANs by examining success criteria, application areas and barriers/challenges. An initial set of issues in each of these three areas is identified from the literature. The Delphi Method is used to determine what experts believe are the most important success criteria, application areas and barriers/challenges. Critical success factors that will determine the future of personal area networks include reliability of connections, interoperability, and usability. Key application areas include monitoring, healthcare, and smart things. Important barriers and challenges facing the deployment of PANs are security, interference and coexistence, and regulation and standards.
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Moretti, Giovanni S. "A calculation of colours: towards the automatic creation of graphical user interface colour schemes : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Palmerston North, New Zealand". 2010. http://hdl.handle.net/10179/1492.

Texto completo
Resumen
Interface colour scheme design is complex, but important. Most software allows users to choose the colours of single items individually and out of context, but does not acknowledge colour schemes or aid in their design. Creating colour schemes by picking individual colours can be time-consuming, error-prone, and frustrating, and the results are often mediocre, especially for those without colour design skills. Further, as colour harmony arises from the interactions between all of the coloured elements, anticipating the overall effect of changing the colour of any single element can be difficult. This research explores the feasibility of extending artistic colour harmony models to include factors pertinent to user interface design. An extended colour harmony model is proposed and used as the basis for an objective function that can algorithmically assess the colour relationships in an interface colour scheme. Its assessments have been found to agree well with human evaluations and have been used as part of a process to automatically create harmonious and usable interface colour schemes. A three-stage process for the design of interface colour schemes is described. In the first stage, the designer specifies, in broad terms and without requiring colour design expertise, colouring constraints such as grouping and distinguishability that are needed to ensure that the colouring of interface elements reflects their semantics. The second stage is an optimisation process that chooses colour relationships to satisfy the competing requirements of harmonious colour usage, any designer-specified constraints, and readability. It produces sets of coordinates that constitute abstract colour schemes: they define only relationships between coloured items, not real colours. In the third and final stage, a user interactively maps an abstract scheme to one or more real colour schemes. The colours can be fine-tuned as a set (but not altered individually), to allow for such "soft" factors as personal, contextual and cultural considerations, while preserving the integrity of the design embodied in the abstract scheme. The colours in the displayed interface are updated continuously, so users can interactively explore a large number of colour schemes, all of which have readable text, distinguishable controls, and conform to the principles of colour harmony. Experimental trials using a proof-of-concept implementation called the Colour Harmoniser have been used to evaluate a method of holistic colour adjustment and the resulting colour schemes. The results indicate that the holistic controls are easy to understand and effective, and that the automatically produced colour schemes, prior to fine-tuning, are comparable in quality to many manually created schemes, and after fine-tuning, are generally better. By designing schemes that incorporate colouring constraints specified by the user prior to scheme creation, and enabling the user to interactively fine-tune the schemes after creation, there is no need to specify or incorporate the subtle and not well understood factors that determine whether any particular set of colours is "suitable". Instead, the approach used produces broadly harmonious schemes, and defers to the developer in the choice of the final colours.
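The shape of the second stage, an objective function over colour relationships plus an optimiser, can be sketched in miniature. The scoring terms below are assumptions chosen for illustration (reward near-complementary hues and adequate lightness contrast), not the thesis's actual model:

    import random

    def harmony_score(scheme):
        """Toy stand-in for the objective function; scheme is a list of
        (hue, lightness) pairs with both components in [0, 1]."""
        score = 0.0
        for (h1, l1), (h2, l2) in zip(scheme, scheme[1:]):
            dh = abs(h1 - h2) % 1.0
            dh = min(dh, 1.0 - dh)           # hue distance on the colour wheel
            score += 1.0 - abs(dh - 0.5)     # prefer complementary hue spacing
            score += min(abs(l1 - l2), 0.4)  # readable lightness contrast, capped
        return score

    def hill_climb(scheme, steps=2000):
        """Randomly perturb one colour at a time, keeping improvements."""
        best = list(scheme)
        for _ in range(steps):
            cand = list(best)
            i = random.randrange(len(cand))
            h, l = cand[i]
            cand[i] = ((h + random.uniform(-0.05, 0.05)) % 1.0,
                       min(1.0, max(0.0, l + random.uniform(-0.05, 0.05))))
            if harmony_score(cand) > harmony_score(best):
                best = cand
        return best

    scheme = [(0.0, 0.2), (0.1, 0.25), (0.2, 0.3)]
    print(hill_climb(scheme))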
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Albertyn, Erina Francina. "e-Process selection using decision making methods : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Palmerston North, New Zealand". 2010. http://hdl.handle.net/10179/1662.

Texto completo
Resumen
The key objective of this research is to develop a selection methodology that can be used to support and aid the selection of development processes for e-Commerce Information Systems (eCIS) effectively, using various decision methods. The selection methodology supports developers in their choice of an e-Commerce Information System Development Process (e-Process) by providing them with several decision-making methods for choosing between defined e-Processes, using a set of quality aspects to compare and evaluate the different options. The methodology also provides historical data of previous selections that can be used to further support a specific choice. The research was motivated by the fast-growing Information Technology environment, in which e-Commerce Information Systems are a relatively new development area; developers of these systems may be using new development methods and have difficulty deciding on the process best suited to developing a new eCIS. These developers also need documentary support for their choices, and this research helps them with these decision-making processes. The e-Process Selection Methodology allows for the comparison of existing development processes as well as the comparison of processes as defined by the developers. Four different decision-making methods, namely the Value-Benefit Method (Weighted Scoring), the Analytic Hierarchy Process, Case-Based Reasoning and a Social Choice method, are used to solve the problem of selecting among e-Commerce development methodologies. The Value-Benefit Method, when applied to the selection of an e-Process from a set of e-Processes, uses multiple quality aspects. Values are assigned to each aspect for each of the e-Processes by experts. The importance of each of the aspects to the eCIS is defined in terms of weights. The selected e-Process is the one with the highest score when the values and weights are multiplied and then summed. The Analytic Hierarchy Process is used to quantify a selection of quality aspects, which are then used to evaluate alternative e-Processes, thus determining the best-matching solution to the problem. This process provides for the ranking and determination of the relative worth of each of the quality aspects. Case-Based Reasoning requires capturing the knowledge of previously solved cases in a knowledge base in order to make a decision. The case database is built in such a way that the concrete factual knowledge of individual cases solved previously is stored and can be used in the decision process. Case-based reasoning is used to determine the best choices, allowing users to resolve their problems using the selection methodology, the case base, or both. Social Choice Methods are based on voting processes: individuals vote for their preferences from a set of e-Processes, and the results are aggregated to obtain a final result indicating which e-Process is preferred. The e-Process Selection Methodology is demonstrated and validated through the development of a prototype tool, which can be used to select the most suitable solution for a case at hand. The thesis includes the factors that motivated the research and the process that was followed. The e-Process Selection Methodology is summarised, and its strengths and weaknesses are discussed. The contribution to knowledge is explained and future developments are proposed. To conclude, the lessons learnt and reinforced are considered.
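The Value-Benefit calculation is described explicitly above (weights times values, summed, highest total wins); a concrete sketch in which the aspect names, weights and expert scores are invented:

    # Value-Benefit (weighted scoring) selection, as described above.
    weights = {"tool support": 0.5, "agility": 0.3, "documentation": 0.2}
    scores = {  # expert-assigned values per quality aspect (hypothetical)
        "e-Process A": {"tool support": 7, "agility": 5, "documentation": 9},
        "e-Process B": {"tool support": 6, "agility": 9, "documentation": 6},
    }

    def value_benefit(scores, weights):
        totals = {p: sum(weights[a] * v for a, v in aspects.items())
                  for p, aspects in scores.items()}
        return max(totals, key=totals.get), totals

    best, totals = value_benefit(scores, weights)
    print(best, totals)  # e-Process B wins (6.9 vs 6.8)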
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Mathrani, Sanjay. "A transformational model to understand the impact of enterprise systems for business benefits : a thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Technology at Massey University, Albany, New Zealand". 2010. http://hdl.handle.net/10179/1368.

Texto completo
Resumen
Over the years many organizations have implemented an enterprise system (ES), also called an enterprise resource planning (ERP) system, to streamline the flow of information and improve organizational effectiveness, producing business benefits that justify the ES investment. The effectiveness of these systems in achieving benefits is an area being actively researched by both practitioners and academics. However, most of these studies focus on ‘what ESs do’ rather than ‘how ESs do it’. The purpose of this study is to better understand how organizations derive benefits from the utilization of an ES and its data. This study utilizes a transformational model of how ES data are transformed into knowledge and results to evaluate the impact of ES information on organizational functions and processes, and how this can lead to business benefits. The linkage between expected outcomes, utilization of ES data in decision-making processes, and realized or unrealized benefits provides the rationale for this study. Findings reveal that the key benefits commercial firms seek from an ES include improving information flow and visibility, integration and automation of functions, cost reductions through reduced inventory, and process efficiencies for both internal and external operations. The tools and methods businesses use for transforming ES data into knowledge include data warehouses and business intelligence modules that assist in the extraction and manipulation of data and in reporting on particular data objects. Web portals are actively utilized to collaborate between stakeholders and to access real-time information. Business tools such as KPI reporting, balanced scorecards and dashboards are used to track progress towards realizing benefits and to establish analytical decision making. Findings emphasize that benefit realization from an ES implementation is a holistic process that includes not only the essential data and technology factors but also factors such as business strategy deployment, people and process management, and skills and competency development. Findings also reveal that business organizations generally fail to produce value assessments, which often leads to weak business cases and insufficient benefit models that cannot be used for benefit tracking. However, these organizations are now realizing that it is not enough to put in an ES and expect an automatic improvement, and they are establishing analytical and knowledge-leveraging processes to optimize and realize business value from their ES investment.
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Udumalagala, Gamage Wadduwage Vimani Eranda. "Perceptions of educators regarding the acceptance of multi-user virtual environments as an educational tool : presented in fulfilment of the requirements for the degree of Master of Business Studies at Massey University". 2010. http://hdl.handle.net/10179/1379.

Texto completo
Resumen
The concept of Multi-User Virtual Environments (MUVEs) has opened new avenues in the educational spectrum. Despite its popularity as an educational tool, the successful implementation of a virtual classroom is heavily reliant on the educator. This research focuses on the perceptions of educators regarding the acceptance of the MUVE as an educational tool. The Technology Acceptance Model (TAM) was used to identify and evaluate the potential benefits of the MUVE in the domain of education. A qualitative approach was considered the most suitable for this study. Semi-structured interviews were conducted with 22 educators; these interviews included the demonstration of a virtual class located on the Second Life island known as Jokaydia. The collected data were transcribed using NVivo software and analysed using constant comparison analysis. The transcribed interviews were provided to another researcher in order to obtain an independent analysis; this created the basis for triangulation of participants' perceptions. A summary of this analysis was then sent to all participants to confirm its credibility. The conclusions of the study suggest that the combination of MUVEs' features and strengths will eventually lead educators to accept the MUVE as an educational tool, although several areas of concern are identified. Future growth in the educational uses of MUVEs is examined, the implications and limitations of the study are discussed, and ideas for future research are elaborated. Keywords: MUVE, Second Life, education, TAM, ease of use, subjective norm, enjoyment, facilities, compatibility, security and trust, collaboration, awareness, media richness, discovery learning, situated learning, role playing, controlled environment, immersiveness.
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Diaz, Andrade Antonio. "Interaction between existing social networks and information and communication technology (ICT) tools : evidence from rural Andes". 2007. http://hdl.handle.net/2292/2357.

Texto completo
Resumen
This exploratory and interpretive research examines the anticipated consequences of information and communication technology (ICT) on six remote rural communities, located in the northern Peruvian Andes, which were provided with computers connected to the Internet. Instead of looking for economic impacts of the now-available technological tools, this research investigates how local individuals use (or do not use) computers, and analyses the mechanisms by which computer-mediated information, obtained by those who use computers, is disseminated through their customary face-to-face interactions with their compatriots. A holistic multiple-case study design was the basis for the data collection process. Data were collected during four and a half months of fieldwork. Grounded theory informed both the method of data analysis and the technique for theory building. As a result of an inductive thinking process, two intertwined core themes emerged. The first theme, individuals' exploitation of ICT, relates to how some individuals overcome difficulties and try to make the most of the now-available ICT tools. The second theme, complementing existing social networks through ICT, reflects the interaction between the new ICT-mediated information and virtual networks and the existing local social networks. However, these two themes were not evenly distributed across the communities studied. The evidence revealed that dissimilarities in social cohesion among the communities and, to some extent, disparities in physical infrastructure are contributing factors that explain this unevenness. But social actors, named 'activators of information', become the key triggers of the process that disseminates fresh and valuable ICT-mediated information throughout their communities. These findings were compared with the relevant literature to produce theoretical generalisations. In conclusion, it is suggested that any ICT intervention in a developing country requires at least three elements to be effective: a tolerable physical infrastructure, a strong degree of social texture, and an activator of information.
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Zhao, Jane Qiong. "Formal design of data warehouse and OLAP systems : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Palmerston North, New Zealand". 2007. http://hdl.handle.net/10179/718.

Texto completo
Resumen
A data warehouse is a single data store where data from multiple data sources are integrated for online analytical processing (OLAP) across an entire organisation. The rationale for being single and integrated is to ensure a consistent view of organisational business performance, independent of the different angles of business perspectives. Due to its wide coverage of subjects, data warehouse design is a highly complex, lengthy and error-prone process. Furthermore, business analytical tasks change over time, which results in changes in the requirements for the OLAP systems. Thus, data warehouse and OLAP systems are rather dynamic, and the design process is continuous. In this thesis, we propose a method that is integrated, formal and application-tailored, to overcome the complexity problem, deal with the system dynamics, and improve the quality of the system and the chance of success. Our method comprises three important parts: the general ASM method with types, the application-tailored design framework for data warehouses and OLAP, and the schema integration method with a set of provably correct refinement rules. By using the ASM method, we are able to model both data and operations in a uniform conceptual framework, which enables us to design an integrated approach for data warehouse and OLAP design. The freedom given by the ASM method allows us to model the system at an abstract level that is easy to understand for both users and designers. More specifically, the language allows us to use terms from the user domain, not biased by the terms used in computer systems. The pseudo-code-like transition rules, which give the simplest form of operational semantics in ASMs, are close to programming languages and thus easy for designers to understand. Furthermore, these rules are rooted in mathematics, which assists in improving the quality of the system design. By extending the ASMs with types, the modelling language is tailored for data warehousing, with terms that are well developed for data-intensive applications; this makes it easy to model schema evolution as refinements in dynamic data warehouse design. By providing the application-tailored design framework, we break down the design complexity by business processes (also called subjects in data warehousing) and design concerns. By designing the data warehouse by subjects, our method resembles Kimball's "bottom-up" approach; however, with the schema integration method, our method resolves the stovepipe issue of that approach. By building up a data warehouse iteratively in an integrated framework, our method not only results in an integrated data warehouse, but also resolves the issues of complexity and delayed ROI (Return On Investment) in Inmon's "top-down" approach. By dealing with user change requests in the same way as new subjects, and by modelling data and operations explicitly in a three-tier architecture, namely the data sources, the data warehouse and the OLAP (Online Analytical Processing) tier, our method facilitates dynamic design with system integrity. By introducing a notion of refinement specific to schema evolution, namely schema refinement, which captures the notion of schema dominance in schema integration, we are able to build a set of correctness-proven refinement rules.
By providing this set of refinement rules, we simplify the designer's work in verifying the correctness of a design. Nevertheless, we do not aim for a complete set, since there are many different ways to perform schema integration and no single prescribed way of integration, so as to allow designer-favoured designs. Furthermore, given its flexibility, our method can easily be extended to address newly emerging design issues.
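The ASM transition rules mentioned above are, in essence, guarded parallel state updates. A minimal illustrative sketch of that execution model (ignoring update-clash detection and the thesis's type system; all rule and state names are invented):

    # A toy ASM step: evaluate all rule guards against the current state,
    # collect the updates, then fire them together.
    def asm_step(state, rules):
        updates = {}
        for guard, update in rules:
            if guard(state):
                updates.update(update(state))
        new_state = dict(state)
        new_state.update(updates)
        return new_state

    # Example rule: when a source schema version moves ahead of the
    # warehouse, mark the warehouse as stale.
    rules = [
        (lambda s: s["source_version"] > s["dw_version"],
         lambda s: {"dw_stale": True}),
    ]
    state = {"source_version": 2, "dw_version": 1, "dw_stale": False}
    print(asm_step(state, rules))  # dw_stale becomes True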
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Li, Xin. "Development of a framework for evaluating the quality of instructional design ontologies : a thesis presented in partial fulfilment of the requirements for the degree of Master of Management in Information Systems at Massey University, Wellington, New Zealand". 2009. http://hdl.handle.net/10179/1288.

Texto completo
Resumen
An Instructional Design (ID) ontology can be used to formally represent knowledge about the teaching and learning process, which contributes to the automatic construction of personalised eLearning experiences. While ID ontologies have been continuously improved and developed over recent years, there are concerns regarding what makes a quality ID ontology. This study proposes a framework for evaluating the quality of an ID ontology by synthesising the crucial elements considered in the ID ontologies developed to date. The framework allows a more precise evaluation of different ID ontologies by demonstrating the quality of each ontology with respect to this set of crucial elements. This study also gives an overview of the literature on ID ontologies, as well as the implications for future research in this area.
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Jan, Zaid. "Intelligent medical device integration with real time operating system : a thesis submitted to the School of Engineering in partial fulfilment of the requirements for the degree of Master of Engineering, Department of Electronics and Computer Syetem [i.e. Systems] Engineering at Massey University, [Albany], New Zealand". 2009. http://hdl.handle.net/10179/1501.

Texto completo
Resumen
Many commercial devices now being produced have the ability to be remotely monitored and controlled. This thesis aims to develop a generic platform that can easily be extended to interface with many different kinds of devices for remote monitoring and control via a TCP/IP connection. The deployment concentrates on medical devices but can be extended to all serial device interfaces. The hardware used in the development of this platform is an ARM Cortex-M3 based micro-controller board, which had to be designed to meet the requirements set by Precept Health, the founder of this platform. The design was conducted at Massey University in collaboration with a senior engineer from the company. The main task in achieving the aim was the development of the necessary software layers to implement remote monitoring and control. The eCosCentric real-time embedded operating system was used to form a generic base for developing applications to monitor and control specific devices. The majority of the work involved in this project was the deployment of the operating system to the micro-controller. During the development process, several hardware issues were discovered with the Ethernet interface and corrected. Using the generic platform, an application was developed that passes a bi-directional communication protocol through from four isolated serial input channels to an Ethernet channel using the TCP protocol.
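The serial-to-TCP pass-through idea can be sketched in a few lines. This is a host-side Python illustration (requiring the pyserial package), not the eCos/ARM firmware the thesis describes; the port name, baud rate and TCP port are assumptions:

    import socket
    import serial  # pip install pyserial

    ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.1)  # one serial channel
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 5000))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.settimeout(0.1)

    while True:                        # shuttle bytes in both directions
        data = ser.read(256)           # serial -> TCP
        if data:
            conn.sendall(data)
        try:
            data = conn.recv(256)      # TCP -> serial
            if data:
                ser.write(data)
        except socket.timeout:
            pass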
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Li, Steven. "An investigation of system integrations and XML applications within a NZ government agency : a thesis submitted in partial fulfillment of the requirements for the degree of Master of Information Systems at Massey University, New Zealand". 2009. http://hdl.handle.net/10179/1627.

Texto completo
Resumen
With the evolution of Information Technology, especially the Internet, system integration is becoming a common way to expand IT systems within and beyond an enterprise network. Although system integration is becoming more and more common within large organizations, the literature review found that IS research in this area has not been sufficient, especially regarding the development of integration solutions within large organizations. This makes research like this study, conducted within a large NZ government agency, necessary. Four system integration projects were selected and studied using case study research methodology. The case study was designed and conducted using guidelines mainly from R. K. Yin's (2002) well-known "Case Study Research" book. The research set out to answer a series of research questions related to the requirements of system integration and the challenges of solution development. Special attention was given to XML applications, as the literature review found system integration and XML to be coupled in many system integrations and frameworks. Data were first gathered from all four projects one by one, and the bulk of the analysis was then done on the summarized data. Various analysis methods were adopted, including chain of evidence, root cause analysis and pattern matching. The principles of interpretive research proposed by Klein and Myers (1999), and triangulation, were observed. In conclusion, a set of models was derived from the research: a model for clarifying integration requirements, a model for integration solution architecture, a model for the integration development life cycle, and a model of critical success factors for integration projects. A development framework for small to medium-sized integration projects has also been proposed, based on these models. The research also found that XML applications indeed play an important role in system integration; the critical success factors for XML applications include suitable development tools, development skills and methodologies.
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Zhu, Jihai. "Low-complexity block dividing coding method for image compression using wavelets : a thesis presented in partial fulfillment of the requirements for the degree of Master of Engineering in Computer Systems Engineering at Massey University, Palmerston North, New Zealand". 2007. http://hdl.handle.net/10179/704.

Texto completo
Resumen
Image coding plays a key role in multimedia signal processing and communications. JPEG2000, the latest image coding standard, uses the EBCOT (Embedded Block Coding with Optimal Truncation) algorithm. EBCOT exhibits excellent compression performance, but with high complexity. The need to reduce this complexity while maintaining performance similar to EBCOT has inspired a significant amount of research activity in the image coding community. Within the development of image compression techniques based on wavelet transforms, EZW (Embedded Zerotree Wavelet) and SPIHT (Set Partitioning in Hierarchical Trees) have played an important role. The EZW algorithm was the first breakthrough in wavelet-based image coding. The SPIHT algorithm achieves performance similar to EBCOT, but with fewer features. Another very important algorithm is SBHP (Sub-band Block Hierarchical Partitioning), which attracted significant investigation during the JPEG2000 development process. In this thesis, the history of the development of the wavelet transform is reviewed, and a discussion is presented on the implementation issues for wavelet transforms. The four main coding methods mentioned above for image compression using wavelet transforms are studied in detail. More importantly, the factors that affect coding efficiency are identified. The main contribution of this research is the introduction of a new low-complexity coding algorithm for image compression based on wavelet transforms. The algorithm is based on block dividing coding (BDC) with an optimised packet assembly. Our extensive simulation results show that the proposed algorithm outperforms JPEG2000 in lossless coding, even though it still leaves a narrow gap in lossy coding situations.
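All of the coders named above operate on wavelet subbands. As background, here is one level of the standard 2D Haar transform, the simplest such decomposition (a generic building block, not the thesis's BDC coder, which, like JPEG2000, uses more elaborate filters):

    import numpy as np

    def haar2d(img):
        """One level of a 2D Haar wavelet transform.
        img: 2D float array with even dimensions."""
        # transform rows into low-pass and high-pass halves
        lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
        hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
        # transform columns, yielding the four subbands
        ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
        lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
        hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
        hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
        return ll, lh, hl, hh

    img = np.random.rand(8, 8)
    ll, lh, hl, hh = haar2d(img)  # ll is the coarse approximation subband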
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Barczak, Andre Luis Chautard. "Feature-based rapid object detection : from feature extraction to parallelisation : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Sciences at Massey University, Auckland, New Zealand". 2007. http://hdl.handle.net/10179/742.

Texto completo
Resumen
This thesis studies rapid object detection, focusing on feature-based methods. Firstly, modifications to the training and detection of the Viola-Jones method are made to improve performance and overcome some of its current limitations, such as rotation, occlusion and articulation. New classifiers produced by training, and by converting existing classifiers, are tested in face detection and hand detection. Secondly, the nature of invariant features is discussed in terms of computational complexity, discrimination power and invariance to rotation and scaling. A new feature extraction method called Concentric Discs Moment Invariants (CDMI) is developed, based on moment invariants and summed-area tables. The dimensionality of this set of features can be increased by using additional concentric discs, rather than higher-order moments. The CDMI set has useful properties, such as speed, rotation invariance and scaling invariance, and rapid contrast stretching can easily be implemented. The results of experiments with face detection show a clear improvement in the accuracy and performance of the CDMI method compared to the standard moment invariants method. Both the CDMI and its variant, using central moments from concentric squares, are used to assess the strength of the method applied to hand-written digit recognition. Finally, the parallelisation of the detection algorithm is discussed. A new model for the specific case of the Viola-Jones method is proposed and tested experimentally. This model takes advantage of the structure of classifiers and of the multi-resolution approach associated with the detection method. The model shows that high speedups can be achieved by broadcasting frames and carrying out the computation of one or more cascades in each node.
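The summed-area table mentioned above is the primitive behind both the Viola-Jones rectangle features and the fast moment computation in CDMI; a minimal sketch of the standard construction and its O(1) box-sum query:

    import numpy as np

    def integral_image(img):
        """Summed-area table: ii[r, c] = sum of img[:r+1, :c+1]."""
        return img.cumsum(axis=0).cumsum(axis=1)

    def box_sum(ii, r0, c0, r1, c1):
        """Sum of img[r0:r1+1, c0:c1+1] in O(1) via inclusion-exclusion."""
        total = ii[r1, c1]
        if r0 > 0: total -= ii[r0 - 1, c1]
        if c0 > 0: total -= ii[r1, c0 - 1]
        if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
        return total

    img = np.arange(16.0).reshape(4, 4)
    ii = integral_image(img)
    assert box_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()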
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Henderson, Sarah. "How do people manage their documents?: an empirical investigation into personal document management practices among knowledge workers". 2009. http://hdl.handle.net/2292/5230.

Texto completo
Resumen
Personal document management is the activity of managing a collection of digital documents, performed by the owner of the documents, and consists of creation/acquisition, organisation, finding and maintenance. Document management is a pervasive aspect of digital work, but has received relatively little attention from researchers. The hierarchical file system used by most people to manage their documents has not conceptually changed in decades. Although revolutionary prototypes have been developed, these have not been grounded in a thorough understanding of document management behaviour and therefore have not resulted in significant changes to document management interfaces. Improvements in understanding document management can result in productivity gains for knowledge workers, and since document management is such a common activity, small improvements can deliver large gains. The aim of this research was to understand how people manage their personal document collections and to develop guidelines for the development of tools to support personal document management. A field study was conducted that included interviews, a survey and a file system snapshot. The interviews were conducted with ten participants to investigate their document management strategies, structures and struggles. In addition to qualitative analysis of the semi-structured interviews, a novel investigation technique was developed in the form of a file system snapshot, which collects information about document structures and derives a number of metrics that describe them. A survey was also conducted, consisting of a questionnaire and a file system snapshot, which enabled the findings of the field study to be validated and information to be collected from a greater number of participants. The results of this research culminated in: (1) the development of a conceptual framework highlighting the key personal document management attitudes, behaviours and concerns; (2) a model of the basic operations that any document management system needs to provide; (3) the identification of piling, filing and structuring as three key document management strategies; and (4) guidelines for the development of user interfaces to support document management, including specific guidelines for each document management strategy. These contributions both improve knowledge of personal document management, on which future research can build, and provide practical advice to document management system designers, which should result in the development of more usable systems.
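A file-system snapshot of the kind described above can be sketched in a few lines: walk a folder tree and derive structure metrics. The specific metric names below are hypothetical, not necessarily those the thesis derived:

    import os

    def snapshot(root):
        """Walk a folder tree and compute simple structure metrics."""
        depths, files_per_folder = [], []
        for dirpath, dirnames, filenames in os.walk(root):
            rel = os.path.relpath(dirpath, root)
            depth = 0 if rel == "." else rel.count(os.sep) + 1
            depths.append(depth)
            files_per_folder.append(len(filenames))
        return {
            "folders": len(depths),
            "max_depth": max(depths),
            "mean_files_per_folder": sum(files_per_folder) / len(depths),
        }

    print(snapshot("."))  # e.g. run over the current directory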
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Verhaart, Michael Henry. "The virtualMe : a knowledge acquisition framework : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy (Ph.D.) in Information Systems at Massey University, Palmerston North, New Zealand". 2008. http://hdl.handle.net/10179/851.

Texto completo
Resumen
Throughout life, we continuously accumulate data, information and knowledge. The ability to recall much of this accumulated knowledge commonly deteriorates with time, though some forms part of what is referred to as tacit knowledge. In the context of education, students access and interact with a teacher's knowledge in order to create their own, and may have their own data, information and knowledge that could be added to the teacher's knowledge for everyone's benefit. The realization that students can contribute to enhancing personal knowledge is an important cornerstone in developing a mentor-focused (teacher, tutor and facilitator) knowledge system. The research presented in this thesis discusses an integrated framework that manages an individual's personal data, information and knowledge and enables it to be enhanced by others, in the context of a blended teaching and learning environment. Existing related models, structures, systems and current practices are discussed. The core outcomes of this thesis include:
• the virtualMe framework, which can be utilized when developing Web-based teaching and learning systems;
• the sniplet content model, which can be used as the basis for sharing information and knowledge;
• an annotation framework used to manage knowledge acquisition; and
• a multimedia object (MMO) model that:
o allows related media artefacts to be intuitively grouped in a logical collection;
o includes a metadata schema that encompasses other metadata structures, and manages context and referencing; and
o includes a model allowing component parts to be reaggregated if they are separated.
The virtualMe framework provides the ability to retain context while transferring content from one person to another and from one place to another. The framework retains the content's original context and then allows the receiver to customise the content and metadata so that the content becomes that person's knowledge. A mechanism has been created for such contextual transfer of content (context retained by the metadata).
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Wells, Linda Susan Mary. "Getting evidence to and from general practice consultations for cardiovascular risk management using computerised decision support". 2009. http://hdl.handle.net/2292/4959.

Texto completo
Resumen
Background: Cardiovascular disease (CVD) has an enormous impact on the lives and health of New Zealanders. There is substantial epidemiological evidence that supports identifying people at high risk of CVD and treating them with lifestyle and drug-based interventions. If fully implemented, this targeted high-risk approach could reduce future CVD events by over 50%. Recent studies have shown that a formal CVD risk assessment to systematically identify high-risk patients is rarely done in routine New Zealand general practice, and audits of CVD risk management have shown large evidence-practice gaps. The CVD risk prediction score recommended by New Zealand guidelines for identifying high-risk patients was derived from the US Framingham Heart Study using data collected between the 1960s and 1980s. This score has only modest prediction accuracy, and there are particular concerns about its validity for New Zealand sub-populations such as high-risk ethnic groups or people with diabetes.
Aims: The overall aims of this thesis were to investigate the potential of a computerised decision support system (CDSS) to improve the assessment and management of CVD risk in New Zealand general practice, while simultaneously developing a sustainable cohort study that could be used for validating and improving CVD risk prediction scores and for related research.
Methods: An environmental scan of the New Zealand health care setting's readiness to support a CDSS was conducted. The epidemiological evidence was reviewed to assess the effect of decision support systems on the quality of health care and the types and functionality of systems most likely to be successful. This was followed by a focused systematic review of randomised trials evaluating the impact of CDSS on CVD risk assessment and management practices and patient CVD outcomes in primary care. A web-based CDSS (PREDICT) was collaboratively developed. This rules-based, provider-initiated system with audit, feedback and referral functionalities was fully integrated with general practice electronic medical records in a number of primary health organisations (PHOs). The evidence-based content was derived from national CVD and diabetes guidelines. When clinicians used PREDICT at the time of a consultation, treatment recommendations tailored to the patient's CVD and diabetes risk profile were delivered to support decision-making within seconds. Simultaneously, the patients' CVD risk profiles were securely stored on a central server. With PHO permission, anonymised patient data were linked via encrypted patient National Health Index numbers to national death and hospitalisation data. Three analytical studies using these data are described in this thesis. The first evaluated changes in GP risk assessment practice following implementation of PREDICT; the second investigated patterns of use of the CDSS by GPs and practice nurses; and the third describes the emerging PREDICT cohort and a preliminary validation of risk prediction scores.
Results: Given the rapid development of organised primary care since the 1990s, the high degree of general practice computerisation and the New Zealand policy (health, informatics, privacy) environment, the introduction of a CDSS into the primary care setting was deemed feasible. The evidence for the impact of CDSS in general has been moderately favourable in terms of improving desired practice. Of the randomised trials of CDSS for assessing or managing CVD risk, about two-thirds reported improvements in provider processes and two-fifths reported some improvements in intermediate patient outcomes. No adverse effects were reported. Since 2002, the PREDICT CDSS has been implemented progressively in PHOs within Northland and the three Auckland regional District Health Board catchments, covering a population of 1.5 million. A before-after audit conducted in three large PHOs showed that CVD risk documentation increased fourfold after the implementation of PREDICT. To date, the PREDICT dataset includes around 63,000 risk assessments conducted on a cohort of over 48,000 people by over 1,000 general practitioners and practice nurses. This cohort has been followed from baseline for a median of 2.12 years. During that time 2,655 people died or were hospitalised with a CVD event. Analyses showed that the original Framingham risk score was reasonably well calibrated overall but underestimated risk in high-risk ethnic groups. Discrimination was only modest (AUC 0.701). An adjusted Framingham score, recommended by the New Zealand Guidelines Group (NZGG), overestimated 5-year event rates by around 4-7%, in effect lowering the threshold for drug therapy to about 10% 5-year predicted CVD risk. The NZGG-adjusted score (AUC 0.676) was less discriminating than the Framingham score and over-adjusted for high-risk ethnic groups. For the cohort aged 30-74 years, the NZGG-recommended CVD risk management strategy identified almost half of the population as eligible for lifestyle management with or without drug therapy, and this group generated 82% of all CVD events. In contrast, the original Framingham score classified less than one-third of the cohort as eligible for individualised management, and this group generated 71% of the events that occurred during follow-up.
Implications: This research project has demonstrated that a CDSS tool can be successfully implemented on a large scale in New Zealand general practice. It has assisted practitioners to improve the assessment and management of CVD at the time of patient consultation. Simultaneously, PREDICT has cost-effectively generated one of the largest cohorts of Māori and non-Māori ever assembled in New Zealand. As the cohort grows, new CVD risk prediction scores will be able to be developed for many New Zealand sub-populations. It will also provide clinicians and policy makers with the information needed to determine the trade-offs between the resources required to manage increasing proportions of the population and the likely impact of management on preventing CVD events.
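The discrimination statistic quoted above (AUC) equals the probability that a randomly chosen person who has an event received a higher predicted risk than a randomly chosen person who did not, and can be computed directly as a rank statistic. A sketch with made-up numbers (not PREDICT data):

    # AUC as the Mann-Whitney statistic over predicted risks and outcomes.
    def auc(risks, events):
        pos = [r for r, e in zip(risks, events) if e == 1]   # had a CVD event
        neg = [r for r, e in zip(risks, events) if e == 0]   # event-free
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    risks  = [0.02, 0.08, 0.15, 0.30, 0.05, 0.22]
    events = [0,    0,    1,    1,    0,    1   ]
    print(auc(risks, events))  # 1.0 here: every event outranks every non-event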
Los estilos APA, Harvard, Vancouver, ISO, etc.