Dissertations / Theses on the topic 'WKB Model'

Consult the top 50 dissertations / theses for your research on the topic 'WKB Model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Campbell, Peter R. M. "An ocean medium pulse propagation model based on linear systems theory and the WKB approximation." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27177.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Negulescu, Claudia. "Asymptotical models and numerical schemes for quantum systems." Toulouse 3, 2005. http://www.theses.fr/2005TOU30221.

Abstract:
This PhD thesis is concerned with the mathematical modelling and the numerical simulation of electron transport in nanoscale semiconductor devices. Different transport models, aimed at describing the various regions of a MOSFET transistor, are introduced and analyzed. We focus particularly on the modelling of the quantum effects taking place in such devices (self-consistent Schrödinger-Poisson system with open boundary conditions).
3

Ozkok, Yusuf Ibrahim. "Web Based Ionospheric Forecasting Using Neural Network And Neurofuzzy Models." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/3/12606031/index.pdf.

Abstract:
This study presents the implementation of Middle East Technical University Neural Network (METU-NN) models for ionospheric forecasting, made available worldwide over the Internet. Furthermore, an attempt is made to include expert information in the Neural Network (NN) model in the form of a neurofuzzy network (NFN). The Middle East Technical University Neurofuzzy Network (METU-NFN) modelling approach is developed, which is the first attempt at using a neurofuzzy model in ionospheric forecasting studies. The Web-based applications developed in this study can be customized so that other NN and NFN models, including METU-NFN, can also be adapted. The NFN models developed in this study are compared with the previously developed and matured METU-NN models. At this very early stage of employing neurofuzzy models in this field, no ambitious objectives are pursued; the applicability of neurofuzzy systems to ionospheric forecasting studies is only demonstrated. Training and operating the METU-NN and METU-NFN models under equal conditions and with the same data sets, the cross correlations between forecast and measured values are 0.9870 and 0.9086, and the root mean square error (RMSE) values are 1.7425 TECU and 4.7987 TECU, respectively. The results obtained with the METU-NFN model are close to those found with the METU-NN model. These results are encouraging enough to motivate further studies on neurofuzzy models that benefit from expert information. The availability of these models, which have already attracted intense international attention, will greatly help the related scientific circles to use them. The models can be architecturally constructed, trained and operated on-line. To the best of our knowledge this is the first application that offers on-line model usage with these features. The applicability of NFN models to ionospheric forecasting is demonstrated.
Having ample flexibility, the constructed model enables further developments and improvements. Other neurofuzzy systems in the literature might also lead to better results.
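The comparison in the abstract above rests on two standard figures of merit, cross correlation and RMSE. A minimal sketch of how such figures are computed (the TEC values below are invented for illustration, not the METU data):

```python
import math

def cross_correlation(observed, predicted):
    """Pearson cross-correlation between measured and forecast series."""
    n = len(observed)
    mo = sum(observed) / n
    mp = sum(predicted) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    return cov / (so * sp)

def rmse(observed, predicted):
    """Root mean square error, here in TEC units (TECU)."""
    n = len(observed)
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)

# Illustrative TEC values only -- not the thesis data.
measured = [10.0, 12.5, 15.2, 14.1, 11.3]
forecast = [10.4, 12.0, 15.6, 13.8, 11.9]
print(cross_correlation(measured, forecast), rmse(measured, forecast))
```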
4

Wan, Bo. "Improved Usage Model for Web Application Reliability Testing." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23135.

Abstract:
Testing the reliability of an application usually requires a good usage model that accurately captures the likely sequences of inputs that the application will receive from the environment. The models used in the literature are mostly based on Markov chains. They are used to generate test cases that are statistically close to what the application is expected to receive when in production. In this thesis, we propose a model for reliability testing that is created directly from the log file of a web application. Our proposed model is also based on Markov chains and has two components: one component, based on a modified tree, captures the most frequent behaviors, while the other component is another Markov chain that captures infrequent behaviors. The result is a statistically correct model that shows clearly what most users do on the site. The thesis also presents an evaluation method for estimating the accuracy of various reliability-testing usage models. The method is based on a comparison between observed user traces and traces inferred from the usage model. Our method gauges the accuracy of the reliability-testing usage model by calculating the sum of the goodness-of-fit values of each trace and scaling the result between 0 and 1. Finally, we present an experimental study on the log of a real web site and discuss how to use the proposed usage model to generate test sequences, as well as the strengths and weaknesses of the model for reliability testing.
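A first-order Markov usage model of the kind the abstract describes can be estimated directly from logged page sessions. A hedged sketch (the page names and sessions are invented for illustration, not taken from the thesis; the thesis's actual model adds a tree component not shown here):

```python
from collections import Counter, defaultdict

def build_usage_model(sessions):
    """Estimate first-order Markov transition probabilities from
    observed page sessions (lists of page identifiers)."""
    counts = defaultdict(Counter)
    for session in sessions:
        pages = ["<start>"] + session + ["<end>"]
        for cur, nxt in zip(pages, pages[1:]):
            counts[cur][nxt] += 1
    model = {}
    for cur, nxts in counts.items():
        total = sum(nxts.values())
        model[cur] = {nxt: c / total for nxt, c in nxts.items()}
    return model

# Hypothetical sessions mined from an access log.
sessions = [["home", "search", "item"], ["home", "item"], ["home", "search"]]
model = build_usage_model(sessions)
print(model["home"])
```

Sampling walks from `<start>` to `<end>` in such a chain yields test sequences that are statistically close to observed traffic.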
5

Renganarayanan, Vidya. "Web agent programming model." [Gainesville, Fla.] : University of Florida, 2001. http://purl.fcla.edu/fcla/etd/UFE0000348.

Abstract:
Thesis (M.S.)--University of Florida, 2001.
Title from title page of source document. Document formatted into pages; contains x, 37 p.; also contains graphics. Includes vita. Includes bibliographical references.
6

Silva, Renata Eleuterio da [UNESP]. "As tecnologias da Web Semântica no domínio bibliográfico." Universidade Estadual Paulista (UNESP), 2013. http://hdl.handle.net/11449/93653.

Abstract:
The proposal of a Semantic Web has emerged as an alternative that would allow the interpretation of information by machines, enabling higher-quality searches and more relevant results for users. Currently, the Semantic Web can be used only in restricted domains, such as e-commerce sites, because of the difficulty of representing the entire Web ontologically. The objective is to examine how the concepts, technologies and metadata architectures used by the Semantic Web can contribute to the construction, modelling and metadata architecture of bibliographic catalogs, based on the concepts defined in the conceptual model developed for the representation of the bibliographic universe, the Functional Requirements for Bibliographic Records (FRBR), and to discuss the use of this conceptual model as an ontological resource. The proposal is grounded in the study of semantic metadata architectures, in order to identify their characteristics, functions and structures, and in the study of the BIBFRAME (Bibliographic Framework) model, which constitutes the most recent initiative for applying Web technologies to the Library and Information Science field. This research is theoretical and exploratory in character and was developed through analysis and review of the literature on these subjects. The results present the main metadata architectures used in the context of the Semantic Web and an overview of ontologies, interoperability in information systems and the modelling of online catalogs, besides the presentation of the BIBFRAME model, based on its importance for cataloging.
7

ALHARTHI, KHALID AYED B. "AN ARABIC SEMANTIC WEB MODEL." Kent State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=kent1367064711.

8

Saleh, Mohamed M. "Characterization of model behavior and its causal foundation /." Online version, 2002. http://bibpurl.oclc.org/web/31241.

9

Kuipers, Johannes Alfonsius Maria. "A two-fluid micro balance model of fluidized beds /." Online version, 1990. http://bibpurl.oclc.org/web/29825.

10

Wang, Xin. "Research of mixture of experts model for time series prediction /." Online version, 2005. http://bibpurl.oclc.org/web/25080.

11

Gueffaz, Mahdi. "ScaleSem : model checking et web sémantique." Phd thesis, Université de Bourgogne, 2012. http://tel.archives-ouvertes.fr/tel-00801730.

Abstract:
The rapid growth of networks, and of the Internet in particular, has considerably widened the gap between heterogeneous information systems. A survey of studies on the interoperability of heterogeneous information systems shows that all work in this area aims at resolving problems of semantic heterogeneity. The W3C (World Wide Web Consortium) proposes standards for representing semantics through ontologies. Ontologies are becoming an indispensable support for the interoperability of information systems, in particular at the semantic level. The structure of an ontology is a combination of concepts, properties and relations; this combination is also called a semantic graph. Several languages have been developed for the Semantic Web, and most of them use the XML (eXtensible Markup Language) syntax. OWL (Ontology Web Language) and RDF (Resource Description Framework), both based on XML, are the most important Semantic Web languages. RDF is the first W3C standard for enriching resources on the Web with detailed descriptions, and it facilitates the automatic processing of Web resources. The descriptions may be characteristics of the resources, such as the author or the content of a website; these descriptions are metadata. Enriching the Web with metadata enables the development of what is called the Semantic Web. RDF is also used to represent semantic graphs corresponding to a specific knowledge model. RDF files are generally stored in a relational database and manipulated using SQL or derived languages such as SPARQL. Unfortunately, this solution, well suited to small RDF graphs, is not well suited to large ones.
These graphs evolve rapidly, and adapting them to change can introduce inconsistencies. Applying changes while maintaining the consistency of semantic graphs is a crucial task that is costly in time and complexity, so an automated process is essential. For these large RDF graphs, we suggest a new approach based on formal verification, namely model checking. Model checking is a verification technique that explores all possible states of a system; in this way, one can show that a model of a given system satisfies a given property. This thesis contributes a new method for verifying and querying semantic graphs. We propose an approach named ScaleSem, which consists in transforming semantic graphs into graphs understandable by a model checker (the verification tool of the model checking method). Software tools are needed to translate a graph described in one formalism into the same graph (or an adaptation of it) described in another formalism.
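The abstract's central move is translating a semantic graph into a form a model checker can explore. The actual ScaleSem translation is not given here, but the general idea can be sketched: treat RDF subjects and objects as states and predicates as labelled transitions (the triples and the output format below are invented for illustration):

```python
# Toy transformation in the spirit of the approach described above
# (not the actual ScaleSem tool): each RDF subject/object becomes a
# state, each predicate a labelled transition, yielding a graph that
# a model checker could explore.

triples = [  # hypothetical RDF statements
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob", "foaf:knows", "ex:carol"),
    ("ex:alice", "rdf:type", "foaf:Person"),
]

states = set()
transitions = []
for subj, pred, obj in triples:
    states.update([subj, obj])
    transitions.append((subj, pred, obj))

# Emit something resembling a transition-system description.
for subj, pred, obj in transitions:
    print(f"{subj} --[{pred}]--> {obj}")
```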
12

Jones, Richard. "Uncertainty analysis in the Model Web." Thesis, Aston University, 2014. http://publications.aston.ac.uk/21397/.

Abstract:
This thesis provides a set of tools for managing uncertainty in Web-based models and workflows. To support the use of these tools, the thesis first provides a framework for exposing models through Web services. An introduction to uncertainty management, Web service interfaces, and workflow standards and technologies is given, with a particular focus on the geospatial domain. An existing specification for exposing geospatial models and processes, the Web Processing Service (WPS), is critically reviewed. A processing service framework is presented as a solution to usability issues with the WPS standard. The framework implements support for the Simple Object Access Protocol (SOAP), the Web Service Description Language (WSDL) and JavaScript Object Notation (JSON), allowing models to be consumed by a variety of tools and software. Strategies for communicating with models from Web service interfaces are discussed, demonstrating the difficulty of exposing existing models on the Web. The thesis then reviews existing mechanisms for uncertainty management, with an emphasis on emulator methods for building efficient statistical surrogate models. A tool is developed to solve accessibility issues with such methods, by providing a Web-based user interface and backend to ease the process of building and integrating emulators. These tools, plus the processing service framework, are applied to a real case study as part of the UncertWeb project. The usability of the framework is demonstrated with the implementation of a Web-based workflow for predicting future crop yields in the UK, which also shows the abilities of the tools for emulator building and integration. Future directions for the development of the tools are discussed.
13

Schivo, Stefano. "Statistical Model Checking of Web Services." Doctoral thesis, Università degli studi di Trento, 2010. https://hdl.handle.net/11572/368768.

Abstract:
In recent years, the increasing interest in the service-oriented paradigm has given rise to a series of supporting tools and languages. In particular, COWS (Calculus for Orchestration of Web Services) has been attracting the attention of part of the scientific community for its peculiar effort in formalising the semantics of the de facto standard Web Services orchestration language WS-BPEL. The purpose of the present work is to provide the tools for representing and evaluating the performance of Web Services modelled through COWS. In order to do this, a stochastic version of COWS is proposed: such a language allows us to describe the performance of the modelled systems and thus to represent Web Services from both the qualitative and quantitative points of view. In particular, we provide COWS with an extension which maintains the polyadic matching mechanism: this way, the language still provides the capability to explicitly model the use of session identifiers. The resulting Scows is then equipped with a software tool which allows us to effectively perform model checking without incurring the problem of state-space explosion, which would otherwise thwart the computation effort even when checking relatively small models. In order to obtain this result, the proposed tool relies on the statistical analysis of simulation traces, which allows us to deal with large state spaces without the need to explore them completely. Such an improvement in model checking performance comes at the price of accuracy in the answers provided: for this reason, users can trade off speed against accuracy by modifying a series of parameters. In order to assess the efficiency of the proposed technique, our tool is compared with a number of existing model checkers.
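The statistical approach the abstract describes replaces exhaustive state-space exploration with Monte Carlo estimation over simulated traces, with the number of runs trading speed against accuracy. A minimal sketch under that idea, using an invented toy service model rather than a COWS specification:

```python
import random

def smc_estimate(simulate, check, runs=10_000, seed=42):
    """Statistical model checking sketch: estimate the probability
    that a random trace satisfies a property, instead of exploring
    the full state space. 'runs' trades speed against accuracy."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(runs) if check(simulate(rng)))
    p = hits / runs
    # Rough 95% confidence half-width for the estimate.
    half_width = 1.96 * (p * (1 - p) / runs) ** 0.5
    return p, half_width

# Toy service model: a request succeeds with probability 0.9,
# and we check the property "the trace ends in success".
def simulate(rng):
    return "success" if rng.random() < 0.9 else "failure"

p, hw = smc_estimate(simulate, lambda t: t == "success")
print(p, hw)
```

Raising `runs` narrows the confidence interval at the cost of simulation time, which is exactly the accuracy/speed trade-off mentioned above.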
14

Schivo, Stefano. "Statistical Model Checking of Web Services." Doctoral thesis, University of Trento, 2010. http://eprints-phd.biblio.unitn.it/231/1/PhD-Thesis.pdf.

15

Hou, Jingyu. "Discovering web page communities for web-based data management." University of Southern Queensland, Faculty of Sciences, 2002. http://eprints.usq.edu.au/archive/00001447/.

Abstract:
The World Wide Web is a rich source of information and continues to expand in size and complexity. Mainly because the data on the web lacks rigid and uniform data models or schemas, effectively and efficiently managing web data and retrieving information is becoming a challenging problem. Discovering web page communities, which capture the features of the web and web-based data and reveal intrinsic relationships among the data, is one of the effective ways to address this problem. A web page community is a set of web pages that has its own logical and semantic structures. In this work, we concentrate on web data in web page format and exploit hyperlink information to discover (construct) web page communities. Three main kinds of web page community are studied: the first consists of hub and authority pages, the second is composed of web pages relevant to a given page (URL), and the last is a community with hierarchical cluster structures. For analysing hyperlinks, we establish a mathematical framework, in particular a matrix-based framework, to model them. Within this framework, hyperlink analysis is placed on a solid mathematical base and the results are reliable. For the community consisting of hub and authority pages, we focus on eliminating noise pages from the page source concerned so as to obtain a better-quality page source and, in turn, improve the quality of the resulting communities. We propose an innovative noise page elimination algorithm based on the hyperlink matrix model and matrix operations, especially the singular value decomposition (SVD). The proposed algorithm exploits hyperlink information among the web pages, reveals page relationships at a deeper level, and numerically defines thresholds for noise page elimination. The experimental results show the effectiveness and feasibility of the algorithm.
This algorithm could also be used on its own in web-based data management systems to filter unnecessary web pages and reduce management cost. In order to construct a web page community consisting of pages relevant to a given page (URL), we propose two hyperlink-based relevant page finding algorithms. The first comes from extended co-citation analysis of web pages; it is intuitive and easy to implement. The second takes advantage of linear algebra to reveal deeper relationships among the web pages and to identify relevant pages more precisely and effectively. The corresponding page source construction for these two algorithms prevents the results from being affected by malicious hyperlinks on the web. The experimental results show the feasibility and effectiveness of the algorithms. The research results could be used to enhance web search by caching the relevant pages for certain searched pages. For the purpose of clustering web pages to construct a community with hierarchical cluster structures, we propose an innovative web page similarity measurement that incorporates hyperlink transitivity and page importance (weight). Based on this similarity measurement, two types of hierarchical web page clustering algorithms are proposed. The first is an improvement of the conventional K-means algorithms. It is effective in improving page clustering but is sensitive to the predefined similarity thresholds. The other type is the matrix-based hierarchical algorithm; two algorithms of this type are proposed in this work, one taking cluster overlapping into consideration and one not. The matrix-based algorithms do not require predefined similarity thresholds for clustering, are independent of the order in which the pages are presented, and produce stable clustering results.
The matrix-based algorithms exploit intrinsic relationships among web pages within a uniform matrix framework, avoid much of the influence of human interference in the clustering procedure, and are easy to implement in applications. The experiments show the effectiveness of the new similarity measurement and of the proposed algorithms in improving web page clustering. To apply the above mathematical algorithms better in practice, we generalize web page discovery as a special case of information retrieval and present a visualization system prototype, together with technical details of the visualization algorithm design, to support information retrieval based on linear algebra. The visualization algorithms can be smoothly applied to web applications. XML is a new standard for data representation and exchange on the Internet. In order to extend our research to cover this important kind of web data, we propose an object representation model (ORM) for XML data. A set of transformation rules and algorithms is established to transform XML data (DTDs and XML documents with or without a DTD) into this model. The model encapsulates elements of XML data and data manipulation methods. A DTD-Tree is also defined to describe the logical structure of a DTD; it can also be used as an application program interface (API) for processing DTDs, such as transforming a DTD document into the ORM. With this data model, the semantic meanings of the tags (elements) in XML data can be used for further research in XML data management and information retrieval, such as community construction for XML data.
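The hyperlink analysis described above works on a matrix model of the link structure. A toy sketch of one such signal, co-citation counts derived from an adjacency matrix (the matrix is invented for illustration; the thesis's actual algorithms go further, using SVD and weighted similarity):

```python
# Toy hyperlink matrix: A[i][j] = 1 if page i links to page j.
# The co-citation count of pages i and j is the number of pages
# that link to both, i.e. entry (i, j) of A^T A -- one of the
# hyperlink signals co-citation analysis exploits.

pages = ["p0", "p1", "p2", "p3"]
A = [
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

n = len(pages)
cocitation = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]

# p1 and p2 are both linked from p0 and from p2, so they co-occur twice.
print(cocitation[1][2])
```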
16

Paesel, Keir. "Development of a Model United Nations website." [Denver, Colo.] : Regis University, 2005. http://165.236.235.140/lib/KPaesel2005.pdf.

17

Prabhakara, Deepak. "Web Applications Security : A security model for client-side web applications." Thesis, Norwegian University of Science and Technology, Department of Telematics, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8962.

Abstract:

The Web has evolved to support sophisticated web applications. These web applications are exposed to a number of attacks and vulnerabilities. The existing security model is unable to cope with these increasing attacks, and there is a need for a new security model that not only provides the required security but also supports recent advances like AJAX and mashups. The attacks on client-side Web Applications can be attributed to four main reasons: 1) lack of a security context for Web Browsers to take decisions on the legitimacy of requests, 2) inadequate JavaScript security, 3) lack of network access control, and 4) lack of security in Cross-Domain Web Applications. This work explores these four reasons and proposes a new security model that attempts to improve overall security for Web Applications. The proposed security model allows developers of Web Applications to define fine-grained security policies, which Web Browsers enforce; this is analogous to a configurable firewall for each Web Application. The Browser disallows all unauthorized requests, thus preventing most common attacks like Cross-Site Script Injection, Cross-Frame Scripting and Cross-Site Tracing. In addition, the security model defines a framework for secure Cross-Domain Communication, thus allowing secure mashups of Web Services. The security model is backward compatible, does not affect the current usability of Web Applications and has cross-platform applicability. The proposed security model was shown to protect against most common attacks by a proof-of-concept implementation that was tested against a comprehensive list of known attacks.

18

Ma, Li. "Web error classification and usage based model for Web reliability improvement." Ann Arbor, Mich. : ProQuest, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3258042.

Abstract:
Thesis (Ph.D. in Computer Science)--S.M.U., 2007.
Title from PDF title page (viewed Mar. 18, 2008). Source: Dissertation Abstracts International, Volume: 68-03, Section: B, page: 1731. Adviser: Jeff Tian. Includes bibliographical references.
19

Zhang, Lu Jansen Bernard J. "A branding model for web search engines." [University Park, Pa.] : Pennsylvania State University, 2009. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-3996/index.html.

20

Sun, Yi. "A location model for web services intermediaries." [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0000984.

21

Kraus, Andreas. "Model Driven Software Engineering for Web Applications." Diss., lmu, 2007. http://nbn-resolving.de/urn:nbn:de:bvb:19-79362.

22

Kurian, Habel. "A Markov model for web request prediction." Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/919.

23

Eleonora, Brtka. "Model adaptivnog web baziranog sistema za učenje." Phd thesis, Univerzitet u Novom Sadu, Tehnički fakultet Mihajlo Pupin u Zrenjaninu, 2015. https://www.cris.uns.ac.rs/record.jsf?recordId=95380&source=NDLTD&language=en.

Abstract:
This dissertation deals with the problem of adaptive Web-based systems in the field of e-learning. A model is defined whose basic components are the student, the teacher and the learning materials. The model is extensible and domain independent. The interaction between the components of the model is examined, especially between students and learning materials. A module is developed for assessing the conformity between the needs of students on the one hand and the content of the learning materials on the other. Distance and similarity measures are used, which achieves partial adaptability of the model. The adaptability of the model is extended by a module that uses If-Then rules generated by a system based on rough set theory. The If-Then rules estimate the impact of learning materials on the student, and the adaptation is performed according to this estimate. The model was implemented, tested and used to carry out experiments on a test set of learning materials and students. It is shown how the adaptation is performed within the system.
APA, Harvard, Vancouver, ISO, and other styles
24

Walters, Lourens O. "A web browsing workload model for simulation." Master's thesis, University of Cape Town, 2004. http://hdl.handle.net/11427/6369.

Full text
Abstract:
Bibliography: p. 163-167.
The simulation of packet switched networks depends on accurate web workload models as input for network models. We derived a workload model for traffic generated by an individual browsing the web, based on packet traces of web traffic collected from individuals browsing the web on a campus network.
APA, Harvard, Vancouver, ISO, and other styles
25

Turanlı, Dehan Aytaç Sıtkı. "A basic web-based distance education model/." [s.l.]: [s.n.], 2005. http://library.iyte.edu.tr/tezler/master/bilgisayaryazilimi/T000337.pdf.

Full text
Abstract:
Thesis (Master)--İzmir Institute Of Technology, İzmir, 2005.
Keywords: Distance education, web based education, model, system approach, questionnaire. Includes bibliographical references (leaves 147).
APA, Harvard, Vancouver, ISO, and other styles
26

Ghosheh, Emad. "A novel model for improving the maintainability of web-based systems." Thesis, University of Westminster, 2010. https://westminsterresearch.westminster.ac.uk/item/905xy/a-novel-model-for-improving-the-maintainability-of-web-based-systems.

Full text
Abstract:
Web applications incorporate important business assets and offer a convenient way for businesses to promote their services through the internet. Many of these web applications have evolved from simple HTML pages to complex applications that have a high maintenance cost. This is due to the inherent characteristics of web applications, to the fast internet evolution and to the pressing market which imposes short development cycles and frequent modifications. In order to control the maintenance cost, quantitative metrics and models for predicting web applications' maintainability must be used. Maintainability metrics and models can be useful for predicting maintenance cost and risky components, and can help in assessing and choosing between different software artifacts. Since web applications are different from traditional software systems, models and metrics for traditional systems cannot be applied with confidence to web applications. Web applications have special features such as hypertext structure, dynamic code generation and heterogeneity that cannot be captured by traditional and object-oriented metrics. This research explores empirically the relationships between new UML design metrics based on Conallen's extension for web applications and maintainability. UML web design metrics are used to gauge whether the maintainability of a system can be improved, by comparing and correlating the results with different measures of maintainability.
We studied the relationship between our UML metrics and the following maintainability measures: Understandability Time (the time spent on understanding the software artifact in order to complete the questionnaire), Modifiability Time (the time spent on identifying places for modification and making those modifications on the software artifact), LOC (absolute net value of the total number of lines added and deleted for components in a class diagram), and nRev (total number of revisions for components in a class diagram). Our results gave an indication that there is a possibility for a relationship to exist between our metrics and modifiability time. However, the results did not show statistical significance for the effect of the metrics on understandability time. Our results showed that there is a relationship between our metrics and LOC (Lines of Code). We found that the metrics NAssoc, NClientScriptsComp, NServerScriptsComp, and CoupEntropy explained the effort measured by LOC, and that the NC and CoupEntropy metrics explained the effort measured by nRev (Number of Revisions). Our results give a first indication of the usefulness of the UML design metrics; they show that there is a reasonable chance that useful prediction models can be built from early UML design metrics.
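The kind of metric-to-effort analysis described in this abstract can be illustrated with a minimal sketch. The numbers below are hypothetical, and a plain Pearson correlation stands in for the thesis's actual statistical apparatus; only the idea of relating an NAssoc-style design metric to LOC churn is taken from the abstract:

```python
# Hypothetical per-diagram measurements: a UML design metric vs. effort (LOC churn).
n_assoc = [2, 4, 5, 7, 9, 12]            # NAssoc metric per class diagram
loc_churn = [30, 55, 60, 90, 110, 150]   # lines added + deleted per diagram

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(n_assoc, loc_churn)
print(r)  # a value near 1 would suggest the metric tracks maintenance effort
```

A strong positive r on real project data is the sort of evidence the thesis uses to argue that early design metrics can feed maintainability prediction models.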
APA, Harvard, Vancouver, ISO, and other styles
27

Khalil, Faten. "Combining web data mining techniques for web page access prediction." University of Southern Queensland, Faculty of Sciences, 2008. http://eprints.usq.edu.au/archive/00004341/.

Full text
Abstract:
[Abstract]: Web page access prediction gained its importance from the ever increasing number of e-commerce Web information systems and e-businesses. Web page prediction, which involves personalising the Web users' browsing experiences, assists Web masters in the improvement of the Web site structure and helps Web users in navigating the site and accessing the information they need. The most widely used approach for this purpose is the pattern discovery process of Web usage mining that entails many techniques like Markov model, association rules and clustering. Implementing pattern discovery techniques as such helps predict the next page to be accessed by the Web user based on the user's previous browsing patterns. However, each of the aforementioned techniques has its own limitations, especially when it comes to accuracy and space complexity. This dissertation achieves better accuracy as well as less state space complexity and fewer generated rules by performing the following combinations. First, we combine low-order Markov model and association rules. Markov model analyses are performed on the data sets. If the Markov model prediction results in a tie or no state, association rules are used for prediction. The outcome of this integration is better accuracy, less Markov model state space complexity and a smaller number of generated rules than using each of the methods individually. Second, we integrate low-order Markov model and clustering. The data sets are clustered and Markov model analyses are performed on each cluster instead of the whole data sets. The outcome of the integration is better accuracy than the first combination with less state space complexity than higher-order Markov model. The last integration model involves combining all three techniques together: clustering, association rules and low-order Markov model. The data sets are clustered and Markov model analyses are performed on each cluster.
If the Markov model prediction results in close accuracies for the same item, association rules are used for prediction. This integration model achieves better Web page access prediction accuracy, less Markov model state space complexity and fewer generated rules than the previous two models.
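A minimal sketch of the first combination described above, with toy sessions and simple co-occurrence counts standing in for real Web logs and mined association rules (not the dissertation's actual implementation):

```python
from collections import defaultdict

# Toy click sessions (hypothetical data standing in for real usage logs).
sessions = [["home", "a", "b"], ["home", "a", "c"],
            ["home", "a", "b"], ["x", "a", "c"]]

# First-order Markov model: counts of page-to-page transitions.
trans = defaultdict(lambda: defaultdict(int))
for s in sessions:
    for cur, nxt in zip(s, s[1:]):
        trans[cur][nxt] += 1

# Simple association "rules": co-occurrence counts of a page with later pages.
assoc = defaultdict(lambda: defaultdict(int))
for s in sessions:
    for i, page in enumerate(s):
        for later in s[i + 1:]:
            assoc[page][later] += 1

def predict(history):
    """Markov prediction; fall back to association support on a tie or no state."""
    counts = trans.get(history[-1], {})
    if not counts:
        return None
    best = max(counts.values())
    tied = [p for p, c in counts.items() if c == best]
    if len(tied) == 1:
        return tied[0]
    # Tie: rank the tied candidates by association support from earlier pages.
    support = {p: sum(assoc[h].get(p, 0) for h in history[:-1]) for p in tied}
    return max(support, key=support.get)

print(predict(["home", "a"]))  # tie between "b" and "c", broken by "home" context
```

Here the Markov model alone cannot separate "b" from "c" after page "a"; the earlier pages in the session break the tie, which is the gain the combination targets.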
APA, Harvard, Vancouver, ISO, and other styles
28

Nassopoulos, Athanasios. "The three-dimensional ray trajectories of the WKB optical fiber modes." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA267167.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Dakela, Sibongiseni. "Web analytics strategy: a model for adopting and implementing advanced Web Analytics." Doctoral thesis, University of Cape Town, 2011. http://hdl.handle.net/11427/10288.

Full text
Abstract:
Includes bibliographical references (leaves 290-306).
Web Analytics (WA) is an evaluative technique originating from and driven by business in its need to get more value out of understanding the usage of its Web sites and strategies therein. It is the measurement, collection, analysis and reporting of Internet data for the purposes of understanding and optimising Web usage for the online visitor, the online customer and the business with a Web site presence. Current WA practice is criticised because it involves mostly raw statistics and therefore tends to be inconsistent and misleading. Using grounded action research, personal observations and a review of online references, the study reviews the current state of WA to propose an appropriate model and guidelines for Web Analytics adoption and implementation in an electronic commerce organisation dealing with online marketing.
APA, Harvard, Vancouver, ISO, and other styles
30

Köhler, Marcus. "Linklets - Formal Function Description and Permission Model." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-83380.

Full text
Abstract:
Linklets are location-independent web services which consume and produce Linked Data resources. These resources form a web of data, the semantic web, that is an abstraction of the web 2.0. However, enterprises are reluctant to provide valuable Linked Data resources due to missing financial stimuli, and operations are not representable in the semantic web. Linklets aim to solve both problems. Previous work developed a prototype; the goal of this thesis is to enhance it with a component model, a formal description and a permission model, and to develop a business model. The thesis follows a bottom-up approach. The formalization of the Linklet concept creates a foundation. Then an improved architecture and its reference implementation are studied and evaluated by tests, showcases and economic considerations. The resulting component system is based on web-service component systems, while a sandbox concept is the core of the permission model. The formal description shows limits of OWL's open-world assumption. A platform-leader strategy is the foundation for the business model. In conclusion, the advantages of the Linklet concept provide a way to enhance and monetize the value of the semantic web. Further research is required; the practical use has to be considered.
APA, Harvard, Vancouver, ISO, and other styles
31

Oliveira, Bruno Takahashi Carvalhas de. "PAWEB - Uma plataforma para desenvolvimento de aplicativos web utilizando o modelo de atores." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-06122012-010811/.

Full text
Abstract:
Existem várias linguagens e plataformas que permitem a programação baseada no modelo de atores, uma solução elegante para a programação concorrente proposta há algumas décadas. Segundo esse modelo, implementa-se o programa na forma de uma série de agentes que são executados em paralelo e se comunicam entre si somente por meio da troca de mensagens, sem a necessidade de memória compartilhada ou estruturas tradicionais de sincronização como semáforos e mutexes. Uma das áreas nas quais esse modelo seria particularmente adequado é a programação de aplicações web, isto é, aplicações cujas lógicas de negócios e de dados residem num servidor e que são acessadas pelo usuário por intermédio de um navegador. Porém, existem muitos obstáculos ao desenvolvimento de aplicações desse tipo, entre eles a falta de linguagens e ferramentas que permitam integrar tanto o servidor quanto o cliente (navegador) no modelo de atores, as dificuldades de conversões de dados que se fazem necessárias quando o servidor e o cliente são desenvolvidos em linguagens diferentes, e a necessidade de contornar as dificuldades inerentes aos detalhes do protocolo de comunicação (HTTP). O PAWEB é uma proposta de uma plataforma para o desenvolvimento e execução de aplicações web que fornece a infraestrutura necessária para que tanto o lado cliente quanto o lado servidor do aplicativo hospedado possam ser escritos numa mesma linguagem (Python), e possam criar e gerenciar atores que trocam mensagens entre si, tanto local quanto remotamente, de maneira transparente e sem a necessidade de implementar conversões de dados ou outros detalhes de baixo nível.
There are several programming languages and platforms that allow the development of systems based on the actor model, an elegant solution for concurrent programming proposed a few decades ago. According to this model, the program is implemented in the form of several agents that run concurrently and only communicate amongst themselves through the exchange of messages, without the need for shared memory or traditional synchronization structures such as semaphores and mutexes. One of the areas where this model would be particularly appropriate would be the development of web applications, that is, applications whose business and database logic reside on the server and are accessed by the user by means of a web browser. However, there are several obstacles to the development of this type of application, amongst them the lack of languages and tools that allow for the integration of both the server and the client (browser) into the actor model, data conversion difficulties arising from using different programming languages on the server and the client, and the need to circumvent the inherent difficulties posed by the details of the communications protocol (HTTP). PAWEB is a proposal for an application development and execution platform that supplies the infrastructure needed so that both the server and client sides of the hosted application can be written in the same language (Python) and so that they may create and manage actors that exchange messages with one another, both locally and remotely, in a transparent manner and without the need to implement data conversions or other low-level mechanisms.
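The message-passing style PAWEB builds on can be sketched in a few lines of Python. This is a toy actor with a thread-backed private mailbox, not the platform's actual implementation; the class and handler names are illustrative:

```python
import queue
import threading

class Actor:
    """Minimal actor: one thread draining a private mailbox; no shared state."""
    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self._handler = handler
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:          # poison pill stops the actor
                return
            self._handler(msg)

    def send(self, msg):
        self.mailbox.put(msg)        # the only way to talk to an actor

replies = queue.Queue()
echo = Actor(lambda msg: replies.put(msg.upper()))

echo.send("hello")
out = replies.get(timeout=2)
echo.send(None)                      # shut the actor down
print(out)
```

In PAWEB the same `send` would work transparently across the client/server boundary; here the reply simply comes back through a local queue, with no semaphores or mutexes in user code.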
APA, Harvard, Vancouver, ISO, and other styles
32

Hohler, Deborah Dorothea. "Evaluation of habitat suitability models for elk and cattle." Thesis, Connect to this title online, 2004. http://bibpurl.oclc.org/web/9208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Dias, José Manuel Gonçalves. "Finite mixture models : review, applications, and computer-intensive methods /." Groningen, the Netherlands : Research School Systems, Organisation and Management, University of Groningen, 2004. http://bibpurl.oclc.org/web/30814.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Gunestas, Murat. "An evidence management model for web services behavior." Fairfax, VA : George Mason University, 2009. http://hdl.handle.net/1920/5631.

Full text
Abstract:
Thesis (Ph.D.)--George Mason University, 2009.
Vita: p. 167. Thesis director: Duminda Wijesekera. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Technology. Title from PDF t.p. (viewed Nov. 11, 2009). Includes bibliographical references (p. 159-166). Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
35

Kara, Ismihan Refika. "Automated Navigation Model Extraction For Web Load Testing." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613992/index.pdf.

Full text
Abstract:
Web pages serve a huge number of internet users in nearly every area. Adequate testing is needed to address the problems of web domains for more efficient and accurate services. We present an automated tool to test web applications against execution errors and the errors that occur when many users connect to the same server concurrently. Our tool, called NaMoX, obtains the clickables of the web pages and creates a model using a depth-first search algorithm. NaMoX simulates a number of users, parses the developed model, and tests the model by branch coverage analysis. We performed experiments on five web sites and reported the response times when a click operation is performed. We found 188 errors in total. Quality metrics are extracted and applied to the case studies.
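The extraction step described here can be sketched as a depth-first search over a map of clickables, with a naive branch-coverage ratio at the end. The site map is hypothetical and hard-coded; NaMoX's real crawler and metrics are of course richer:

```python
# Hypothetical site map: page -> clickable targets, standing in for a live crawl.
site = {
    "index":    ["login", "products"],
    "login":    ["index"],
    "products": ["detail", "index"],
    "detail":   ["products"],
}

def extract_model(start):
    """Depth-first traversal over clickables, recording navigation edges."""
    visited, edges, stack = set(), [], [start]
    while stack:
        page = stack.pop()
        if page in visited:
            continue
        visited.add(page)
        for target in site.get(page, []):
            edges.append((page, target))   # one model edge per clickable
            stack.append(target)
    return edges

model = extract_model("index")

# Branch coverage: fraction of modelled click edges a simulated run exercised.
exercised = {("index", "login"), ("index", "products")}
coverage = len(exercised & set(model)) / len(model)
print(len(model), coverage)
```

Each edge in the model corresponds to a click a simulated user can perform, so the coverage ratio directly measures how much of the navigation model a test run exercised.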
APA, Harvard, Vancouver, ISO, and other styles
36

Atterer, Richard. "Usability Tool Support for Model-Based Web Development." Diss., lmu, 2008. http://nbn-resolving.de/urn:nbn:de:bvb:19-92963.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Wang, Zhiwei. "Riemann space model and similarity-based Web retrieval." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ60214.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Wong, Mabel C. Y. "Using virtual documents to model personal web spaces." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0025/MQ40752.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Chumo, Caroline J. "A model web interface for youth in Tanzania /." Diss., Portal website, 2006. http://www.jeruto.org.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hsieh, Hsiu Ching. "A new model for cross-cultural web design." Thesis, Brunel University, 2008. http://bura.brunel.ac.uk/handle/2438/5388.

Full text
Abstract:
People from different cultures use web interfaces in different ways: they expect different visual representations, navigation, interaction, mental models and layouts, and have different communication patterns and expectations. In the context of globalisation, web localisation becomes a powerful strategy to acquire an audience in a global market. Therefore, web developers and designers have to make adaptations to fit the needs of people from different cultures, and the way cultural factors are integrated into web interface design needs to be improved. Most previous research lacks an appropriate way to apply cultural factors to web development, and no empirical study of the web interface has been carried out to support a cross-cultural web design model. It is noted that no single model can support all cross-cultural web communication, but a new model is needed to bridge the gap and overcome these limitations. Thus the research aim was to build a new model of cross-cultural web design to contribute to effective communication. Following an extensive literature review, a local web audit was conducted, followed by a series of experiments with users to gather and evaluate data and to build and validate the new model. A new model, based on a study of British and Taiwanese users, was formulated and validated, demonstrating that content and message remain the core of web design, but that the performance of the selected users is influenced by cultural dimensions and cultural preferences, and this in turn impacts the effectiveness of the web communication. For the British user sample, ease of using the website was seen to be strongly related to desirability. Taiwanese users showed a preference for visual pleasure but no relationship between efficient performance and desirability. The resultant model contributes to the knowledge of how to design effective web interfaces for British and Taiwanese cultures and is replicable for the purpose of comparing approaches to designing for other cultures.
APA, Harvard, Vancouver, ISO, and other styles
41

Md, Amin Mohd Afandi. "A user acceptance model of web personalization systems." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/98965/1/Mohd%20Afandi%20bin%20Md%20Amin%20Thesis.PDF.

Full text
Abstract:
Research on web personalization techniques for collecting and analysing web data in order to deliver personalized information to users is in an advanced state. Many metrics from the computational intelligence field have been developed to evaluate the algorithmic performance of Web Personalization Systems (WPSs). However, measuring the success of a WPS in terms of user acceptance is difficult until the WPS is deployed in practice. In summary, many techniques exist for delivering personalized information to a user, but a comprehensive measure of the success of WPSs in terms of human interaction and behaviour does not exist. This study aims to develop a framework for measuring user acceptance of WPSs from a user perspective. The proposed framework is based on the unified theory of acceptance and use of technology (UTAUT). The antecedents of user acceptance are described by indicators based on four key constructs, i.e. performance expectancy (PE), effort expectancy (EE), social influence (SI), and facilitating conditions (FC). All these constructs are underpinned by Information Systems (IS) theories that determine the intention to use (BI) and the actual use (USE) of a technology. A user acceptance model was proposed and validated using structural equation modelling (SEM) via partial least squares path modelling (PLS-PM). Four user characteristics (gender, age, skill and experience) were chosen for testing the moderating effects of the four constructs. The relationships between the four constructs, BI and USE were validated through moderating effects, in order to present an overall view of the extent of user acceptance of a WPS. Results from response data analysis show that the acceptance of a WPS is determined through PE, EE, SI, and FC. The gender of a user was found to moderate the relationship between the performance expectancy of a WPS and the behavioural intention to use it.
The effect of behavioural intention on the use of a WPS is higher for females than for males. Furthermore, the proposed model was tested and validated for its explanatory power and effect size. The study concluded that the predictive relevance of intention to use a WPS is greater than that of actual WPS usage, indicating that intention to use has more predictive power for describing user acceptance of a WPS. These measures have useful implications from a computational intelligence point of view when a WPS is implemented. For example, the designer of a WPS should consider personalized design features that enable the delivery of relevant information, sharing with other users, and accessibility across many platforms. Such features create a better web experience and a complete security policy, and can be utilized to obtain a higher attention rate and continued use; such features define user acceptance of a WPS.
APA, Harvard, Vancouver, ISO, and other styles
42

Ribeiro, Rui Jorge Lucena. "O impacto da web 2.0 nas empresas portuguesas." Master's thesis, Instituto Superior de Economia e Gestão, 2010. http://hdl.handle.net/10400.5/1702.

Full text
Abstract:
Mestrado em Gestão de Sistemas de Informação
Esta tese procura analisar os factores que afectam a intenção de adopção de tecnologias de Web 2.0 num contexto empresarial, focando-se no caso particular da adopção de blogues. O modelo de aceitação de tecnologia (TAM) foi escolhido como o modelo base do presente estudo para explicar a aceitação dos utilizadores através das suas intenções em colaborar e comunicar on-line e para racionalizar as suas intenções em termos de atitude, utilidade percebida, facilidade de utilização percebida e normas sociais. A pesquisa foi realizada para recolher os dados, as medidas e as hipóteses foram analisadas usando a técnica dos mínimos quadrados parciais (PLS). Os resultados mostram que utilidade percebida e atitude em relação à utilização de blogues influenciam significativamente a intenção dos utilizadores na adopção de blogues num contexto empresarial. As implicações dos resultados para a teoria e prática são discutidas.
This thesis analyzes the factors affecting the intention to adopt Web 2.0 technologies, focusing on the adoption of blogs. The technology acceptance model (TAM) was chosen as the base model of this study to explain the acceptance of users through their intentions to collaborate and communicate online, and to rationalize their intentions in terms of attitude, perceived usefulness, perceived ease of use and social norms. A survey was conducted to collect data, and the measures and hypotheses were analyzed using the partial least squares (PLS) technique. The results show that perceived usefulness and attitude towards the use of blogs significantly influenced users' intention to adopt blogs in a business context. The implications of the results for theory and practice are discussed.
APA, Harvard, Vancouver, ISO, and other styles
43

Galvão, Rosa Maria Brandão Tavares Marcelino. "Estruturas conceptuais e técnicas de gestão bibliográfica." Doctoral thesis, Universidade de Évora, 2014. http://hdl.handle.net/10174/18181.

Full text
Abstract:
A evolução da infraestrutura tecnológica, aliada à disponibilização dos recursos e serviços bibliográficos na Internet/WWW, fizeram emergir a discussão sobre modelos e paradigmas da função, meios e objetivos dos serviços de informação de biblioteca. O universo normativo da informação torna-se heterogéneo e ultrapassa as fronteiras de influência das bibliotecas, impulsionando um movimento internacional de refundação dos princípios, normas e regras do seu âmbito. Esta tese investiga as novas questões técnicas que se colocam no âmbito da organização e acesso à informação, emergentes do enquadramento teórico trazido pelos Requisitos funcionais dos registos bibliográficos (FRBR). No cerne da investigação estão as estruturas normativas e a sua interação com os sistemas, conteúdos, utilizadores e rede, tendo esta tese limitado o seu âmbito ao estudo dos normativos de estrutura, focando-se nos conceitos, princípios e normas de informação e de dados que estão subjacentes aos sistemas de informação bibliográfica. A primeira questão investigada foi a identificação das características intrínsecas a um catálogo FeRBeRizado, para cuja resposta foi estudado o modelo FRBR, verificadas as suas características fundamentais e a aplicabilidade atual, com a análise de um conjunto de catálogos FeRBeRizados. Verificou-se haver necessidade do redesenhar das estruturas lógicas da informação dos catálogos, conducentes a uma remodelação da organização e colocação dos seus conteúdos que permitam uma melhor definição/consolidação dos objetivos do catálogo, e a restituição de uma estrutura sindética mais rica. O modelo FRBR ao desagregar, decompor e remodelar os dados bibliográficos fornece a formalização lógica para a reestruturação da informação bibliográfica. 
Estes resultados conduziram à segunda e terceira questões sobre se os normativos catalográficos atualmente disponíveis – ISBD e RDA, e os normativos de registo de dados – UNIMARC, são suficientes e adequados para implementar essa estrutura. Analisadas as características fundamentais destes normativos ao nível de potencialidades e limitações de modelação dos dados de acordo com o modelo FRBR, verificou-se que a Descrição bibliográfica Internacional Normalizada (ISBD) apresenta uma tendência de alinhamento conceptual mas não se reestruturou de acordo com o modelo FRBR; já o Resource Description and Access (RDA), transpõe os conceitos teóricos do modelo para a prática catalográfica e apresenta um corte radical com a filosofia, conceitos, terminologia e práticas usadas tradicionalmente na catalogação e na gestão de dados bibliográficos. A estrutura atual do UNIMARC possui os requisitos essenciais para acomodar uma catalogação que implemente o modelo FRBR. No entanto, tal como os outros formatos MARC, é uma norma demasiado extensa, complexa e sem um modelo de dados claro, tendo evoluído de forma incremental, sem possuir as características adequadas aos atuais requisitos de gestão e exploração de dados. Conclui-se pela necessidade inequívoca de os normativos MARC virem a ser, no médio ou longo prazo, substituídos por normas de estrutura de dados melhor alinhadas com o contexto conceptual e tecnológico atual, de que a web semântica é parte fundamental.

CONCEPTUAL STRUCTURES AND BIBLIOGRAPHIC MANAGEMENT TECHNIQUES: New issues and perspectives. Rosa Maria Brandão Tavares Marcelino Galvão.
ABSTRACT: The evolution of technological infrastructures, in association with the availability of resources and bibliographic services on the Internet/World Wide Web, has given rise to the discussion of models and paradigms of function, as well as means and goals of library information services.
The normative universe of information is heterogeneous and transcends the boundaries of libraries, calling for an international movement to recast the principles, standards and rules of its field. This thesis investigates the new technical issues that arise from the means of accessing and organizing information, in accordance with the theoretical framework introduced by the Functional Requirements of Bibliographic Records (FRBR). The normative structures and their interaction with systems, contents, users and networks are at the core of this study. This research limited its scope to the study of normative structures, focusing on the concepts, principles and standards of information and data that underlie the bibliographic information systems. After verifying and validating the fundamental characteristics and applicability of the FRBR model, we analyzed a set of FRBR-based catalogs. We found it to be necessary to redesign the logical structure of catalog information. Remodelling of the organization and placement of contents would therefore be of critical importance in better elucidating and consolidating catalog objectives, and restoring a richer syndetic structure. By disrupting and reshaping bibliographic data, the FRBR model seems to provide a logical and suitable standard for restructuring bibliographic information. These results led to the subsequent question of whether the cataloging standards currently available – International Standard Bibliographic Description (ISBD), Resource Description and Access (RDA), and data record format UNIMARC –, are sufficient and appropriate to implement this new structure. After careful analysis of the strengths and constraints of these protocols for FRBR data modeling we observed that ISBD has a tendency for conceptual alignment, though it does not restructure data according to the FRBR model. On the other hand, RDA transposes the theoretical concepts from the model to the cataloguing practice. 
It presents a radical break from the philosophy, concepts, terminology and practices traditionally used in cataloguing and managing bibliographic data. The current structure of UNIMARC features all the core requirements for a cataloguing method that implements the FRBR model. However, like other MARC formats, it is too extensive, complex, and lacks a clear data model. It has evolved incrementally without meeting all the necessary requirements for current data management. We concluded that, given the current challenges, there is a need for MARC protocols to be replaced by new normative data structures. Potential new protocols must be in agreement with the current conceptual and technological contexts, of which the semantic Web is part.
APA, Harvard, Vancouver, ISO, and other styles
44

Spencer, Matthew. "The effects of habitat size on food web structure." Thesis, University of Sheffield, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.481753.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Ponge, Julien. "Model based analysis of Time-aware Web Services Interactions." Clermont-Ferrand 2, 2008. http://www.theses.fr/2008CLF21840.

Full text
Abstract:
Les services web gagnent de l'importance en tant que cadre facilitant l'intégration d'applications au sein et en dehors des frontières des entreprises. Il est accepté que la description d'un service ne devrait pas seulement inclure l'interface, mais aussi le protocole métier supporté par le service. Dans le cadre de ce travail, nous avons formalisé la catégorie des protocoles incluant des contraintes de temps (appelés protocoles temporisés) et étudié l'impact du temps sur l'analyse de compatibilité et de remplaçabilité. Nous avons formalisé les contraintes suivantes : les contraintes CInvoke définissent des fenêtres de disponibilités tandis que les contraintes MInvoke définissent des délais d'expiration. Nous avons étendu les techniques pour l'analyse de compatibilité et de remplaçabilité entre protocoles temporisés à l'aide d'un mapping préservant la sémantique entre les protocoles temporisés et les automates temporisés, ce qui a défini la classe des automates temporisés de protocoles (PTA). Les PTA possèdent des transitions silencieuses qui ne peuvent pas être supprimées en général, et pourtant ils sont fermés par calcul du complément, ce qui rend décidable les différents types d'analyse de compatibilité et de remplaçabilité. Enfin, nous avons mis en oeuvre notre approche dans le cadre du projet ServiceMosaic, une plate-forme pour la gestion du cycle de vie des services web.
Web services are gaining importance as a framework facilitating application integration within and across enterprise boundaries. It is accepted that the description of a service should include not only its interface but also the business protocol the service supports. In this work, we formalized the class of protocols that include time constraints (called timed protocols) and studied the impact of time on compatibility and replaceability analysis. We formalized the following constraints: CInvoke constraints define availability windows, while MInvoke constraints define expiration deadlines. We extended the techniques for compatibility and replaceability analysis between timed protocols using a semantics-preserving mapping from timed protocols to timed automata, which defined the class of protocol timed automata (PTA). PTA have silent transitions that cannot be removed in general, yet they are closed under complementation, which makes the various kinds of compatibility and replaceability analysis decidable. Finally, we implemented our approach within the ServiceMosaic project, a platform for managing the life cycle of web services.
APA, Harvard, Vancouver, ISO, and other styles
46

Демська, А. І. "Determining the productivity of UI web systems in the context of use." Thesis, ХНУРЕ, 2019. http://openarchive.nure.ua/handle/document/10042.

Full text
Abstract:
This paper considers the importance of web systems in modern business processes. The question of a site's competitiveness becomes more pressing as the number of Internet resources grows. It is argued that the most important aspect of developing a site attractive to users is usability: a characteristic describing how effectively the user can interact with the product. Achieving real usability goals requires specific assessment technologies and methods, which a growing number of analysts and scientists are developing.
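As a small sketch of the quantitative side of such assessments (not from the thesis): two standard usability measures in the sense of ISO 9241-11, effectiveness (task completion rate) and time-based efficiency, computed over recorded user sessions. The function names and sample figures are illustrative.

```python
# Illustrative sketch of two common quantitative usability measures.

def effectiveness(completed: int, attempted: int) -> float:
    """Share of attempted tasks that users completed successfully."""
    return completed / attempted

def time_based_efficiency(successes_per_user: list, time_per_user: list) -> float:
    """Average number of successful tasks per unit of time, across users."""
    rates = [s / t for s, t in zip(successes_per_user, time_per_user)]
    return sum(rates) / len(rates)

# Hypothetical test session: 18 of 20 tasks completed; two users finished
# 3 and 4 tasks in 60 s and 50 s respectively.
print(effectiveness(18, 20))                        # 0.9
print(time_based_efficiency([3, 4], [60.0, 50.0]))  # 0.065 tasks per second
```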
APA, Harvard, Vancouver, ISO, and other styles
47

Fatolahi, Ali. "An Abstract Meta-model for Model Driven Development of Web Applications Targeting Multiple Platforms." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23262.

Full text
Abstract:
In this thesis, we present an abstract meta-model for model driven development of web applications targeting multiple platforms. We review the existing technologies and the related work in order to obtain a list of requirements for such an abstract model. The abstract model is built by extending an existing UML-based model for web applications. We demonstrate that it is possible to map this abstract model to more than one specific development platform by providing transformations for these mappings. We also lay out the general outline of a model-driven process based on the proposed abstract model. The abstract model and the model-driven process are supported by a set of tools, case studies and a visual modeling notation. Model-driven techniques have been used in the area of web development to a great extent. Most of the existing approaches are tuned toward specific platforms or develop only certain parts of web applications. These approaches generally use meta-models adapted to their targeted platforms. In order to flexibly target multiple platforms, the level of abstraction of the meta-model must be raised. Such a meta-model must allow the description of relevant features of web applications independently from the specificities of specific platforms. Additionally, transformations mapping from abstract to specific web descriptions must be expressible in a flexible way. In this thesis, we propose such an abstract meta-model. Mappings that transform abstract models to specific platforms are also presented. Different benefits can be foreseen from this approach. By relieving developers from low-level platform-specific related design, the approach has the potential to shift the development task to issues related to business needs. Another benefit is shortened development time. This could help web developers to overcome the problem of schedule delays, which is recognized as one of the top five most-cited problems with large-scale web systems. 
The approach is specifically suitable for information-intensive web-based systems. These applications typically involve large data stores accessed through a web interface. A distinctive aspect of this approach is its use of a specification of the data mapping as part of its high-level input. More importantly, the common features required to process data and communicate data objects between different layers and components are targeted.
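The core idea of the abstract, one platform-independent model mapped to several platform-specific artifacts, can be sketched as follows. This is not the thesis's UML-based meta-model: the `AbstractPage` structure, the field names, and the two target "platforms" are invented here purely to illustrate the transformation pattern.

```python
# Illustrative sketch: an abstract web-page model plus two platform-specific
# transformations, in the model-driven style the abstract describes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AbstractPage:
    name: str
    fields: List[str] = field(default_factory=list)  # data fields the page presents
    action: str = ""                                 # operation the page submits to

def to_jsp(page: AbstractPage) -> str:
    """Map the abstract page onto a JSP-flavoured artifact."""
    inputs = "\n".join(f'  <input name="{f}"/>' for f in page.fields)
    return f'<form action="{page.action}.jsp">\n{inputs}\n</form>'

def to_php(page: AbstractPage) -> str:
    """Map the same abstract page onto a PHP-flavoured artifact."""
    inputs = "\n".join(f'  <input name="{f}"/>' for f in page.fields)
    return f'<form action="{page.action}.php">\n{inputs}\n</form>'

# One abstract model, two platform-specific outputs.
login = AbstractPage("Login", ["user", "password"], "authenticate")
print(to_jsp(login))
print(to_php(login))
```

The benefit claimed in the abstract follows the same shape: the abstract model is written once, and each additional target platform only needs its own transformation.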
APA, Harvard, Vancouver, ISO, and other styles
48

Ed-douibi, Hamza. "Model-driven round-trip engineering of REST APIs." Doctoral thesis, Universitat Oberta de Catalunya, 2019. http://hdl.handle.net/10803/667111.

Full text
Abstract:
Web APIs have become an increasingly key asset for businesses, and their implementation and integration in companies' daily activities has thus been on the rise. In practice, most of these Web APIs are "REST-like", meaning that they adhere partially to the Representational State Transfer (REST) architectural style. In fact, REST is a design paradigm and does not propose any standard, so developing and consuming REST APIs end up being challenging and time-consuming tasks for API providers and clients. Therefore, the aim of this thesis is to facilitate the design, implementation, composition and consumption of REST APIs by relying on Model-Driven Engineering (MDE). Likewise, it offers the following contributions: EMF-REST, APIDiscoverer, APITester, APIGenerator and APIComposer. Together, these contributions make up an ecosystem which advances the state of the art of automated software engineering for REST APIs.
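In the spirit of the example-driven discovery the abstract mentions (tools like APIDiscoverer infer models from API calls), here is a minimal sketch, not the thesis's tooling: inferring a flat structural model from a JSON response. The model format and the sample payload are invented here.

```python
# Illustrative sketch: infer {field: type-name} from one JSON example,
# one level deep, as a stand-in for model discovery from REST responses.
import json

def infer_model(payload: str) -> dict:
    """Map a JSON object example to a simple field-to-type model."""
    data = json.loads(payload)
    return {key: type(value).__name__ for key, value in data.items()}

example = '{"id": 42, "title": "A thesis", "open_access": true}'
print(infer_model(example))   # {'id': 'int', 'title': 'str', 'open_access': 'bool'}
```

A real discoverer would merge models across many calls and handle nesting; this only shows the example-to-model direction.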
APA, Harvard, Vancouver, ISO, and other styles
49

Lee, Changpil. "An Evaluation Model for Application Development Frameworks for Web Applications." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1324663059.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Wu, Hao-cun, and 吳浩存. "A multidimensional data model for monitoring web usage and optimizing website topology." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B29528215.

Full text
APA, Harvard, Vancouver, ISO, and other styles
