Dissertations on the topic "CONTENT ACCESS"

To view other types of publications on this topic, follow the link: CONTENT ACCESS.

Format a source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 dissertations for your research on the topic "CONTENT ACCESS".

Next to each work in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these are available in the record's metadata.

Browse dissertations from a wide range of disciplines and put together your bibliography correctly.

1

Helberger, Natali. "Controlling access to content regulating conditional access in digital broadcasting /." [S.l. : Amsterdam : s.n.] ; Universiteit van Amsterdam [Host], 2005. http://dare.uva.nl/document/78324.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Weiss, Ron 1970. "Content-based access to algebraic video." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/12044.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Fu, Kevin E. (Kevin Edward) 1976. "Integrity and access control in untrusted content distribution networks." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34464.

Full text of the source
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Vita.
Includes bibliographical references (p. 129-142).
A content distribution network (CDN) makes a publisher's content highly available to readers through replication on remote computers. Content stored on untrusted servers is susceptible to attack, but a reader should have confidence that content originated from the publisher and that the content is unmodified. This thesis presents the SFS read-only file system (SFSRO) and key regression in the Chefs file system for secure, efficient content distribution using untrusted servers for public and private content respectively. SFSRO ensures integrity, authenticity, and freshness of single-writer, many-reader content. A publisher creates a digitally-signed database representing the contents of a source file system. Untrusted servers replicate the database for high availability. Chefs extends SFSRO with key regression to support decentralized access control of private content protected by encryption. Key regression allows a client to derive past versions of a key, reducing the number of keys a client must fetch from the publisher. Thus, key regression reduces the bandwidth requirements of the publisher to make keys available to many clients.
Contributions of this thesis include the design and implementation of SFSRO and Chefs; a concrete definition of security, provably-secure constructions, and an implementation of key regression; and a performance evaluation of SFSRO and Chefs confirming that latency for individual clients remains low and that a single server can support many simultaneous clients.
by Kevin E. Fu.
Ph.D.
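The key-regression mechanism summarised above lends itself to a simple illustration: in a hash-chain construction, the publisher derives member states backwards from a random seed, so a member holding the newest state can recompute every older state (and hence every older content key) locally, while computing a future state would require inverting the hash. The sketch below is a minimal Python illustration of that idea only; the function names, the use of SHA-256, and the domain-separation labels are assumptions for illustration, not the construction used in Chefs.

    import hashlib, os

    def make_states(n, seed):
        # Publisher side: hash backwards from a random seed; states are later
        # released one at a time in order states[0], states[1], ...
        states = [None] * n
        states[-1] = hashlib.sha256(b"state:" + seed).digest()
        for i in range(n - 2, -1, -1):
            states[i] = hashlib.sha256(states[i + 1]).digest()
        return states

    def unwind(state, steps):
        # Member side: an older state is just a repeated hash of a newer one.
        for _ in range(steps):
            state = hashlib.sha256(state).digest()
        return state

    def content_key(state):
        # Domain-separated derivation of the content-encryption key for a state.
        return hashlib.sha256(b"key:" + state).digest()

    # A member holding the 10th state recovers the 3rd period's key locally,
    # so the publisher never has to ship older keys to late-joining readers.
    states = make_states(16, os.urandom(32))
    assert content_key(unwind(states[9], 7)) == content_key(states[2])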
APA, Harvard, Vancouver, ISO, and other styles
4

Ragab, Hassen Hani. "Key management for content access control in hierarchical environments." Compiègne, 2007. http://www.theses.fr/2007COMP1718.

Full text of the source
Abstract:
Lots of applications, ranging from interactive online games to business corporations and government departments, and from multi-layered data streaming to database access control, require ensuring that their users respect some access control restrictions. Content access control in hierarchies (CACH) consists in ensuring, using cryptographic techniques, that users access only the application resources to which they are entitled. Content access control is generally ensured by encrypting the system resources and giving the keys used for their encryption to the users authorized to access them. Generating and managing those keys is a crucial requirement for the deployment of content access control systems. Moreover, large scale hierarchies with highly dynamic users present serious scalability issues for key management. In this thesis, we deal with key management for content access control. We start by defining the building blocks of key management for CACH. Then, we study the existing key management solutions and classify them into two categories - namely, the dependent-keys and independent-keys approaches - and propose a key management framework for each category. We further propose a generic model to represent independent-keys key management schemes and use this model to define lower bounds on the key management overhead. Then, we propose a new independent-keys key management scheme and prove that it is optimal by showing that it reaches the overhead lower bounds. The optimality of this scheme constitutes one of the most important results of our thesis. Thereafter, we propose new efficient dependent-keys key management schemes and evaluate them by simulations and Markov process modelling. Finally, we propose a variant of our schemes that allows defining trade-offs on the performance criteria. We show that this variant offers a means to define very interesting overhead trade-offs.
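As a rough illustration of the dependent-keys approach discussed in this abstract, one common pattern is to derive each class key from its parent's key with a one-way function, so members of an ancestor class can recompute every descendant key while the reverse direction is infeasible. The sketch below, with invented function names and labels, shows only that general pattern; it is not one of the schemes proposed in the thesis.

    import hashlib

    def child_key(parent_key, child_label):
        # One-way derivation: knowing a class key yields every descendant class key,
        # but recovering the parent's key would require inverting the hash.
        return hashlib.sha256(parent_key + child_label.encode()).digest()

    def key_for_path(root_key, path):
        # Walk down the hierarchy, e.g. ["hq", "dept-a", "team-3"].
        k = root_key
        for label in path:
            k = child_key(k, label)
        return k

    root = b"\x00" * 32                        # placeholder root secret
    dept = key_for_path(root, ["hq", "dept-a"])
    team = key_for_path(root, ["hq", "dept-a", "team-3"])
    # A 'dept-a' member re-derives the team key locally; a 'team-3' member cannot go back up.
    assert child_key(dept, "team-3") == team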
APA, Harvard, Vancouver, ISO, and other styles
5

He, Kun. "Content privacy and access control in image-sharing platforms." Thesis, CentraleSupélec, 2017. http://www.theses.fr/2017CSUP0007.

Full text of the source
Abstract:
In recent years, more and more users prefer to share their photos through image-sharing platforms. Most platforms allow users to specify who can access the images, which may give users a feeling of safety and privacy. However, privacy is not guaranteed, since at least the platform provider can clearly see the contents of any published image. Existing research therefore suggests encrypting images before publishing them, so that only authorised users can decrypt the encrypted image. In this way, users' privacy can be protected. There are three challenges when proposing an encryption algorithm for the images published on image-sharing platforms: the algorithm has to preserve the image format (e.g. JPEG) after encryption, the algorithm should be secure (i.e. the adversary cannot obtain any information about the plaintext image from the encrypted image), and the algorithm has to be compatible with the basic image processing performed by each platform. In this thesis, our main goal is to propose an encryption algorithm that protects JPEG image privacy on different image-sharing platforms and overcomes these three challenges. We first propose an encryption algorithm that meets the first two requirements. We then implement this algorithm on several widely used image-sharing platforms. However, the results show that it cannot recover the plaintext image with high quality after the image is downloaded from Facebook, Instagram, Weibo and WeChat. Therefore, we add a correcting mechanism to this algorithm, which reduces the loss of image information when the encrypted image is uploaded to each platform and reconstructs the downloaded images with high quality.
APA, Harvard, Vancouver, ISO, and other styles
6

Gopal, Burra 1968. "Integrating content-based access mechanisms with hierarchical file systems." Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/282291.

Full text of the source
Abstract:
We describe a new file system that provides, at the same time, both name and content based access to files. To make this possible, we introduce the concept of a semantic directory. Every semantic directory has a query associated with it. When a user creates a semantic directory, the file system automatically creates a set of pointers to the files in the file system that satisfy the query associated with the directory. This set of pointers is called the query-result of the directory. To access the files that satisfy the query, users just need to de-reference the appropriate pointers. Users can also create files and sub-directories within semantic directories in the usual way. Hence, users can organize files in a hierarchy and access them by specifying path names, and at the same time, retrieve files by asking queries that describe their content. Our file system also provides facilities for query-refinement and customization. When a user creates a new semantic sub-directory within a semantic directory, the file system ensures that the query-result of the sub-directory is a subset of the query-result of its parent. Hence, users can create a hierarchy of semantic directories to refine their queries. Users can also edit the set of pointers in a semantic directory, and thereby modify its query-result without modifying its query or the files in the file system. In this way, users can customize the results of queries according to their personal tastes, and use customized results to refine queries in the future. Our file system has many other features, including semantic mount-points that allow users to access information in other file systems by content. The file system does not depend on the query language used for content-based access. Hence, it is possible to integrate any content-based access mechanism into our file system.
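The semantic-directory idea described above can be pictured with a small data structure: each directory carries a query and a materialised query-result, a sub-directory's result is constrained to be a subset of its parent's, and the pointer set can be edited without touching the query. The toy Python model below (class and method names are invented for illustration) is only a sketch of that behaviour, not the file system itself.

    class SemanticDirectory:
        # Toy model: a directory holds a query and a materialised query-result
        # (pointers to matching files); sub-directories refine the parent's result.
        def __init__(self, name, query, file_system=None, parent=None):
            self.name = name
            self.query = query                       # predicate over file metadata
            self.parent = parent
            self.children = {}
            candidates = parent.result if parent else file_system
            self.result = {f for f in candidates if query(f)}

        def mkdir(self, name, query):
            # A semantic sub-directory: its query-result is a subset of this one's.
            child = SemanticDirectory(name, query, parent=self)
            self.children[name] = child
            return child

        def customise(self, keep):
            # Edit the pointer set by hand without touching the query or the files.
            self.result = {f for f in self.result if keep(f)}

    # Files are (path, keywords) pairs; refinement narrows the result set.
    files = {("a.txt", "music jazz"), ("b.txt", "music rock"), ("c.txt", "sports")}
    music = SemanticDirectory("music", lambda f: "music" in f[1], files)
    rock = music.mkdir("rock", lambda f: "rock" in f[1])
    assert rock.result <= music.result and rock.result == {("b.txt", "music rock")}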
APA, Harvard, Vancouver, ISO, and other styles
7

Pohl, Roland. "Qucosa: Quality Content of Saxony." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-32992.

Full text of the source
Abstract:
Until now, the Saxon university libraries have operated only their own document servers, each limited to its respective institution. Qucosa, which serves the free publication and permanent archiving of electronic diploma theses, dissertations and other publications, opens up new perspectives for the Saxon university libraries and forms one of the elements of a "Digital Library" in Saxony. Several Saxon universities and research institutes already use Qucosa to publish their own research results. In the future, non-academic state institutions will also offer their publications in full text on Qucosa.
APA, Harvard, Vancouver, ISO, and other styles
8

Hermansson, Rickard, and Johan Hellström. "Discretionary Version Control : Access Control for Versionable Documents." Thesis, KTH, Skolan för teknik och hälsa (STH), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-152815.

Full text of the source
Abstract:
A common problem in the workplace is sharing digital documents with coworkers. For some companies the problem extends to wanting the documents kept internally backed up and controlling which people in the company have rights to read and revise certain documents. This paper shows different systems and models for access control, version control, and distribution of the documents that can be used to create a system that solves these problems. One requirement for this system was a user interface where users can upload, download and manage access to their documents. Another requirement was a service that handles version control for the documents, and a way to quickly connect and distribute the documents. The system also needed to be able to handle access control of the versioned documents on document level, referred to as "fine grained access control" in this paper. These models and systems were evaluated based on aspects of the access control models, version control systems, and distribution systems and protocols. After evaluating, appropriate selections were made to create a prototype to test the system as a whole. The prototype ended up meeting the goals that Nordicstation set for the project, but only with basic functionality: retrieving any version from a document's history, controlling access to the documents at document level, and a simple web based user interface for managing the documents.
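A minimal way to picture the document-level ("fine grained") access control described above is an ACL attached to each versioned document and checked on every read or revision. The sketch below uses invented names and is not the Nordicstation prototype; it only illustrates the combination of a version history with per-document rights.

    from dataclasses import dataclass, field

    @dataclass
    class VersionedDocument:
        # The ACL is attached to the document, and every read or revision of any
        # version in the history goes through it.
        owner: str
        acl: dict = field(default_factory=dict)       # user -> set of rights
        versions: list = field(default_factory=list)

        def grant(self, actor, user, rights):
            if actor != self.owner:
                raise PermissionError("only the owner changes the ACL")
            self.acl[user] = set(rights)

        def write(self, user, content):
            if user != self.owner and "write" not in self.acl.get(user, set()):
                raise PermissionError(user + " may not revise this document")
            self.versions.append(content)             # every revision is kept

        def read(self, user, version=-1):
            if user != self.owner and "read" not in self.acl.get(user, set()):
                raise PermissionError(user + " may not read this document")
            return self.versions[version]             # any version from the history

    doc = VersionedDocument(owner="alice")
    doc.write("alice", "draft 1")
    doc.write("alice", "draft 2")
    doc.grant("alice", "bob", {"read"})
    assert doc.read("bob", version=0) == "draft 1"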
APA, Harvard, Vancouver, ISO, and other styles
9

Sambra-Petre, Raluca-Diana. "2D/3D knowledge inference for intelligent access to enriched visual content." Phd thesis, Institut National des Télécommunications, 2013. http://tel.archives-ouvertes.fr/tel-00917972.

Full text of the source
Abstract:
This Ph.D. thesis tackles the issue of still and video object categorization. The objective is to associate semantic labels to 2D objects present in natural images/videos. The principle of the proposed approach consists of exploiting categorized 3D model repositories in order to identify unknown 2D objects based on 2D/3D matching techniques. We propose here an object recognition framework, designed to work for real time applications. The similarity between classified 3D models and unknown 2D content is evaluated with the help of the 2D/3D description. A voting procedure is further employed in order to determine the most probable categories of the 2D object. A representative viewing angle selection strategy and a new contour-based descriptor (so-called AH) are proposed. The experimental evaluation proved that, by employing the intelligent selection of views, the number of projections can be decreased significantly (up to 5 times) while obtaining similar performance. The results have also shown the superiority of AH with respect to other state of the art descriptors. An objective evaluation of the intra- and inter-class variability of the 3D model repositories involved in this work is also proposed, together with a comparative study of the retained indexing approaches. An interactive, scribble-based segmentation approach is also introduced. The proposed method is specifically designed to overcome compression artefacts such as those introduced by JPEG compression. We finally present an indexing/retrieval/classification Web platform, so-called Diana, which integrates the various methodologies employed in this thesis.
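The voting procedure mentioned in this abstract can be pictured schematically: the descriptor of the unknown 2D object is compared with descriptors of projected views of categorised 3D models, and the closest views vote for the most probable categories. The following sketch, with invented toy descriptors and a plain Euclidean distance, only illustrates that general idea; it is not the AH descriptor or the thesis's matching pipeline.

    import numpy as np
    from collections import Counter

    def categorise(query_desc, view_descs, view_labels, k=5):
        # Nearest-neighbour voting: the k projected views of categorised 3D models
        # that are closest to the 2D object's descriptor vote for its category.
        dists = np.linalg.norm(view_descs - query_desc, axis=1)
        nearest = np.argsort(dists)[:k]
        votes = Counter(view_labels[i] for i in nearest)
        return votes.most_common()

    # Invented 2-D toy descriptors standing in for contour descriptors of views.
    views = np.array([[0.10, 0.20], [0.20, 0.10], [0.90, 0.80], [0.80, 0.90], [0.15, 0.15]])
    labels = ["chair", "chair", "car", "car", "chair"]
    print(categorise(np.array([0.12, 0.18]), views, labels, k=3))   # [('chair', 3)]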
APA, Harvard, Vancouver, ISO, and other styles
10

Knox, Ian. "Web based regional newspapers : The role of content : A thesis." Thesis, University of Ballarat, 2002. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/43155.

Full text of the source
Abstract:
The phenomenon and acceptance of electronic publishing has proliferated in the last five years due to the expansion in the use of the World Wide Web in the general community. The initial fears that newspapers would be decimated by the introduction of this technology have been proven groundless, but despite a high web presence by newspapers worldwide, profitable models of cyber papers are elusive. In an online environment traditional relationships between newspaper advertising and editorial may not stand. Despite the considerable body of published literature concerning the movement of print newspapers to an online environment, little was found concerning online content. A need to re-evaluate what content and functions are considered to be desirable by print readers in an online environment was identified as the main objective of this research. Evaluation of the user attitudes to web based newspapers provides a foundation for future research into areas such as developing effective models for profitable online newspapers. To achieve this objective, the research tools used were a content analysis, an online newspaper user survey and newspaper management personal interviews. The study looked at Victorian regional daily newspapers that also had online versions. By focussing on the regional newspapers, meaningful comparisons could be made between content, staff attitudes and readership interests. The content analysis measured the quantum and nature of the content of the print and online versions of the regional dailies during a one-week period. This provided a measure of the type and source of the articles included both in print and online. Newspaper editorial staff interviews contributed a personalised view of content priorities, which was then contrasted with a web based questionnaire which measured user requirements in relation to content and interactivity. It was found from the survey that content alone would not provide a sufficient basis to build a profitable online regional newspaper site. The findings were analysed in relation to the literature, newspaper site content and editorial staff interviews. It was found that, despite regularly accessing online newspaper sites, users are unwilling to pay for the experience. Users indicated a desire for a higher level of interactivity, in addition to the content which is currently provided by online regional newspapers. Evaluation of user attitudes to web based newspapers provides a foundation for future research into the development of effective models for profitable online newspapers.
Master of Business
APA, Harvard, Vancouver, ISO, and other styles
11

De, Villiers Peter. "CBiX a model for content-based billing in XML environments." Thesis, Port Elizabeth Technikon, 2003. http://hdl.handle.net/10948/208.

Full text of the source
Abstract:
The new global economy is based on knowledge and information. Furthermore, the Internet is facilitating new forms of revenue generation, of which one recognized potential source is content delivery over the Internet. One aspect that is critical to ensuring a content-based revenue stream is billing. While there are a number of content-based billing systems commercially available, as far as can be determined these products are not based on a common model that can ensure interoperability and communication between the billing systems. This dissertation addresses the need for a content-based billing model by developing the CBiX (Content-based Billing in XML Environments) model. This model, developed in a phased approach as a family of billing models, incorporates three aspects. The first aspect is access control. The second aspect is pricing, in the form of document, element and inherited element level pricing for content. The third aspect is XML as the platform for information exchange. The nature of the Internet facilitates information interchange, flexible web business models and flexible pricing. These facts, coupled with CBiX being concerned with billing for content over the Internet, lead to a number of decisions regarding the model: The CBiX model has to incorporate flexible pricing. Therefore pricing is evolved through the development of the family of models from document level pricing to element level pricing to inherited element level pricing. The CBiX model has to be based on a platform for information interchange that enables content delivery. XML provides a broad family of standards that is widely supported and is creating the next generation Internet. XML is therefore selected as the environment for information exchange for CBiX. The CBiX model requires a form of access control that can provide access to content based on user properties. Credential-based Access Control is therefore selected as the method of access control for CBiX, whereby authorization is granted based on a set of user credentials. Furthermore, this dissertation reports on the development of a prototype. This serves a dual purpose: firstly, to assist the author in understanding the technologies and principles involved; secondly, to illustrate CBiX0 and therefore present a proof-of-concept of at least the base model. The CBiX model provides a base to guide and assist developers with regard to the issues involved with developing a billing system for XML-based environments.
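One way to picture the inherited element-level pricing described in this abstract is a price attribute on XML elements that, when absent, is inherited from the nearest priced ancestor. The sketch below is a loose illustration under that assumption only; the document structure, attribute names and prices are invented and do not come from the CBiX model.

    import xml.etree.ElementTree as ET

    doc = ET.fromstring(
        "<report price='5.00'>"
        "  <summary price='1.00'/>"
        "  <body>"
        "    <section id='intro'/>"
        "    <section id='results' price='2.50'/>"
        "  </body>"
        "</report>")

    def price_of(root, target):
        # Walk from the document root towards the target element, remembering the
        # most recent explicit price; that is the element's inherited price.
        def walk(node, inherited):
            here = float(node.get("price", inherited))
            if node is target:
                return here
            for child in node:
                found = walk(child, here)
                if found is not None:
                    return found
            return None
        return walk(root, 0.0)

    intro = doc.find(".//section[@id='intro']")
    results = doc.find(".//section[@id='results']")
    print(price_of(doc, intro), price_of(doc, results))   # 5.0 (inherited) and 2.5 (explicit)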
APA, Harvard, Vancouver, ISO, and other styles
12

Knox, Ian. "Web based regional newspapers : the role of content : a thesis." University of Ballarat, 2002. http://archimedes.ballarat.edu.au:8080/vital/access/HandleResolver/1959.17/14587.

Full text of the source
Abstract:
The phenomenon and acceptance of electronic publishing has proliferated in the last five years due to the expansion in the use of the World Wide Web in the general community. The initial fears that newspapers would be decimated by the introduction of this technology have been proven groundless, but despite a high web presence by newspapers worldwide, profitable models of cyber papers are elusive. In an online environment traditional relationships between newspaper advertising and editorial may not stand. Despite the considerable body of published literature concerning the movement of print newspapers to an online environment, little was found concerning online content. A need to re-evaluate what content and functions are considered to be desirable by print readers in an online environment was identified as the main objective of this research. Evaluation of the user attitudes to web based newspapers provides a foundation for future research into areas such as developing effective models for profitable online newspapers. To achieve this objective, the research tools used were a content analysis, an online newspaper user survey and newspaper management personal interviews. The study looked at Victorian regional daily newspapers that also had online versions. By focussing on the regional newspapers, meaningful comparisons could be made between content, staff attitudes and readership interests. The content analysis measured the quantum and nature of the content of the print and online versions of the regional dailies during a one-week period. This provided a measure of the type and source of the articles included both in print and online. Newspaper editorial staff interviews contributed a personalised view of content priorities, which was then contrasted with a web based questionnaire which measured user requirements in relation to content and interactivity. It was found from the survey that content alone would not provide a sufficient basis to build a profitable online regional newspaper site. The findings were analysed in relation to the literature, newspaper site content and editorial staff interviews. It was found that, despite regularly accessing online newspaper sites, users are unwilling to pay for the experience. Users indicated a desire for a higher level of interactivity, in addition to the content which is currently provided by online regional newspapers. Evaluation of user attitudes to web based newspapers provides a foundation for future research into the development of effective models for profitable online newspapers.
Master of Business
APA, Harvard, Vancouver, ISO, and other styles
13

Ma, Bojiang. "Cognitive spectrum access, multimedia content delivery, and full-duplex relaying in wireless networks." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/60167.

Full text of the source
Abstract:
Due to the growing number of wireless communication devices and emerging bandwidth-intensive applications, the demand for data is increasing rapidly. Utilizing various radio access technologies and multiple frequency bands in wireless networks can provide efficient solutions to meet this growing demand. These techniques are promising for fifth generation (5G) wireless communication systems. However, to fully exploit their benefits, spectrum and spatial reuse, power saving, throughput and utility enhancement are crucial issues. In this thesis, we propose different resource allocation algorithms to address the aforementioned issues in wireless communication networks. First, we study the resource allocation problem for a hybrid overlay/underlay cognitive cellular network. We propose a hybrid overlay/underlay spectrum access mechanism to improve spectrum and spatial reuse. We formulate the resource allocation problem as a coalition formation game among femtocell users, and analyze the stability of the coalition structure. We propose an efficient algorithm based on the solution concept of the recursive core. The proposed algorithm achieves a stable and efficient spectrum allocation. Next, we study the resource allocation problem for multimedia content delivery in millimeter wave (mmWave) based home networks. We characterize different usage scenarios of multimedia content delivery. We formulate a joint power and channel allocation problem, which captures the spectrum and spatial reuse of mmWave communications, based on a network utility maximization framework. The problem is a non-convex mixed integer programming (MIP) problem. We reformulate the non-convex MIP problem into a convex MIP problem and propose a resource allocation algorithm based on the outer approximation method. We also develop an efficient heuristic algorithm which has a substantially lower complexity than the outer approximation based algorithm. Finally, we study full-duplex relay-assisted device-to-device (D2D) communication in mmWave based wireless networks. To design an efficient relay selection and power allocation scheme, we formulate a multi-objective combinatorial optimization problem, which balances the trade-off between power consumption and system throughput. The problem is transformed into a weighted bipartite matching problem. We then propose a joint relay selection and power allocation algorithm, which can achieve a Pareto optimal solution in polynomial time.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
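The relay-selection step in this abstract is cast as a weighted bipartite matching problem, and such a matching between D2D pairs and candidate relays can be solved in polynomial time with the Hungarian method. The sketch below is a generic illustration of that formulation only: the throughput and power numbers, the linear trade-off weight, and the use of SciPy's assignment solver are all assumptions, not the thesis's model.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Invented numbers: rows are D2D pairs, columns are candidate full-duplex relays.
    throughput = np.array([[8.0, 5.0, 3.0],
                           [4.0, 7.0, 6.0],
                           [2.0, 3.0, 9.0]])          # e.g. achievable rate
    power_cost = np.array([[0.8, 0.4, 0.3],
                           [0.5, 0.9, 0.4],
                           [0.2, 0.3, 0.7]])          # e.g. transmit power

    alpha = 0.5                                       # invented trade-off weight
    utility = alpha * throughput - (1 - alpha) * power_cost

    # Maximum-weight matching: each D2D pair gets at most one relay and vice versa.
    rows, cols = linear_sum_assignment(utility, maximize=True)
    for pair, relay in zip(rows, cols):
        print("D2D pair", pair, "-> relay", relay, "utility", round(utility[pair, relay], 2))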
APA, Harvard, Vancouver, ISO, and other styles
14

Hirai, Tatsuya. "A Study on Access Control Mechanism in Storage Devices for Audiovisual Contents." 京都大学 (Kyoto University), 2016. http://hdl.handle.net/2433/216162.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
15

Gamlath, Jayantha. "Barley non-starch polysaccharide content and its relationship with kernel hardness and water uptake." Thesis, University of Ballarat, 2009. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/60004.

Full text of the source
Abstract:
Harder kernels in barley are thought to be a factor affecting the modification of the endosperm during malting by restricting water and enzyme movement within the endosperm. The traditional method used in the malting industry to determine barley endosperm vitreousness is by visual assessment. Since this method is subjective, laborious and requires training, an alternative method is needed. Similarly, the causes and factors influencing kernel hardness are uncertain. The prime objectives of this study were: to identify an appropriate method to quantify kernel hardness; investigate the relationship between kernel hardness and endosperm composition; and to investigate the relationship between barley variety and environmental influences on endosperm composition in relation to the kernel hardness of malting barley.
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
16

Wenrich, John Richard. "Content Management on the Internet: A look at K-12 schools access to resources." Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/30755.

Full text of the source
Abstract:
The Internet presents a new phenomenon to educators and students in the K-12 environment. Its ease of use and ready access to material provides an overwhelming resource for use in the K-12 classroom. This study looked at content management of Internet resources in the K-12 school environment. Content management is defined as the methods of organizing access to the information available on the Internet, allowing the teacher to effectively use resources in a classroom setting. Teachers have managed the material, or content, that they present to students for over a decade. Now that resources available on the Internet are also open to K-12 students, teachers must be aware of the need to manage Internet content, just as they would do for any other content being used in their classroom. This study looked at middle school students in 6th and 7th grades. An experimental design was used to determine whether students achieve better results when they are presented with managed Internet resources or when they have open access to Internet resources. Analysis of the results of the study shows that there is a significant difference in both the amount and the quality of material that was identified by the group with managed access to Internet content.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
17

Miotto, Riccardo. "Content-based Music Access: Combining Audio Features and Semantic Information for Music Search Engines." Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3421582.

Full text of the source
Abstract:
During the last decade, the Internet has reinvented the music industry. Physical media have evolved towards online products and services. As a consequence of this transition, online music corpora have reached a massive scale and are constantly being enriched with new documents. At the same time, a great quantity of cultural heritage content remains undisclosed because of the lack of metadata to describe and contextualize it. This has created a need for music retrieval and discovery technologies that allow users to interact with all these music repositories efficiently and effectively. Music Information Retrieval (MIR) is the research field that studies methods and tools for improving such interaction as well as access to music documents. Most of the research works in MIR focuses on content-based approaches, which exploit the analysis of the audio signal of a song to extract significant descriptors of the music content. These content descriptors may be processed and used in different application scenarios, such as retrieval, recommendation, dissemination, musicology analysis, and so on. The thesis explores novel automatic (content-based) methodologies for music retrieval which are based on semantic textual descriptors, acoustic similarity, and a combination of the two; we show empirically how the proposed approaches lead to efficient and competitive solutions with respect to other alternative state-of-the-art strategies. Part of the thesis focuses on music discovery systems, that is search engines where users do not look for a specific song or artist, but may have some general criteria they wish to satisfy. These criteria are commonly expressed in the form of tags, that is short phrases that capture relevant characteristics of the songs, such as genre, instrumentation, emotions, and so on. Because of the scale of current collections, manually assigning tags to songs is becoming an infeasible task; for this reason the automatic tagging of music content is now considered a core challenge in the design of fully functional music retrieval systems. State-of-the-art content-based systems for music annotation (which are usually called auto-taggers) model the acoustic patterns of the songs associated with each tag in a vocabulary through machine learning approaches. Based on these tag models, auto-taggers generate a vector of tag weights when annotating a new song. This vector may be interpreted as a semantic multinomial (SMN), that is a distribution characterizing the relevance of each tag to a song, which can be used for music annotation and retrieval. A first original contribution reported in the thesis aims at improving state-of-the-art auto-taggers by considering tag co-occurrences. While a listener may derive semantic associations for audio clips from direct auditory cues (e.g. hearing “bass guitar”) as well as from context (e.g. inferring “bass guitar” in the context of a “rock” song), auto-taggers ignore this context. Indeed, although contextual relationships correlate tags, many state-of-the-art auto-taggers model tags independently. We present a novel approach for improving automatic music annotation by modeling contextual relationships between tags. A Dirichlet mixture model (DMM) is proposed as a second, additional stage in the modeling process to supplement any auto-tagging system that generates a semantic multinomial over a vocabulary of tags. 
For each tag in the vocabulary, a DMM captures the broader context defined by the tag by modeling tag co-occurrence patterns in the SMNs of songs associated with the tag. When annotating songs, the DMMs refine SMN annotations by leveraging contextual evidence. Experimental results demonstrate the benefits of combining a variety of auto-taggers with this generative context model; it generally outperforms other approaches to context modeling as well. The use of tags alone allows for efficient and effective music retrieval mechanisms; however, automatic tagging strategies may lead to noisy representations that may negatively affect the effectiveness of retrieval algorithms. Yet, search and discovery operations across music collections can also be carried out by matching users' interests or by exploiting acoustic similarity. One major issue in music information retrieval is how to combine such noisy and heterogeneous information sources in order to improve retrieval effectiveness. To this end, the thesis explores a statistical retrieval framework based on combining tags and acoustic similarity through a hidden Markov model. The retrieval mechanism relies on an application of the Viterbi algorithm which highlights the sequence of songs that best represents a user query. The model is presented for improving state-of-the-art music search and discovery engines by delivering more relevant ranking lists. In fact, through an empirical evaluation we show how the proposed model leads to better performance than retrieval approaches which rank songs according to individual information sources alone or which use a combination of them. Additionally, the high generality of the framework makes it suitable for other media as well, such as images and videos. Besides music discovery, the thesis also addresses the problem of music identification, whose goal is to match different recordings of the same song (i.e. finding covers of a given query). To this end, we present two novel music descriptors based on the harmonic content of the audio signals. Their main purpose is to provide a compact representation which is likely to be shared by different performances of the same music score. At the same time, they also aim at reducing the storage requirements of the music representation as well as enabling efficient retrieval over large music corpora. The effectiveness of these two descriptors, combined in a single scalable system, has been tested for classical music identification, which is probably the applicative scenario that most needs automatic strategies for labeling unknown recordings. Scalability is guaranteed by an index-based pre-retrieval step which handles music features as textual words; in addition, precision in the identification is provided by an alignment step carried out through an application of hidden Markov models. Results with a collection of more than ten thousand recordings have been satisfying in terms of both efficiency and effectiveness.
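The retrieval mechanism summarised above combines tag relevance and acoustic similarity in a hidden Markov model and applies the Viterbi algorithm to extract the song sequence that best represents a query. The following toy implementation is a schematic reconstruction of that idea only; the scoring, the invented numbers, and the way emissions and transitions are defined are assumptions, not the thesis's model.

    import numpy as np

    def retrieve_playlist(tag_scores, acoustic_sim, length):
        # Viterbi over an HMM whose states are songs: emissions score how well each
        # song's tags match the query, transitions score acoustic similarity between
        # consecutive songs; the best path is the playlist returned to the user.
        n = len(tag_scores)
        log_e = np.log(tag_scores + 1e-12)
        log_t = np.log(acoustic_sim + 1e-12)
        delta = np.zeros((length, n))
        back = np.zeros((length, n), dtype=int)
        delta[0] = log_e
        for t in range(1, length):
            scores = delta[t - 1][:, None] + log_t    # best predecessor for each song
            back[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + log_e
        path = [int(delta[-1].argmax())]
        for t in range(length - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return path[::-1]

    # Invented toy data: query relevance from an auto-tagger and pairwise similarity.
    tags = np.array([0.7, 0.1, 0.6, 0.2])
    sim = np.array([[0.1, 0.2, 0.6, 0.1],
                    [0.2, 0.1, 0.3, 0.4],
                    [0.6, 0.3, 0.1, 0.2],
                    [0.1, 0.4, 0.2, 0.3]])
    print(retrieve_playlist(tags, sim, length=3))     # [0, 2, 0]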
APA, Harvard, Vancouver, ISO, and other styles
18

Ariyarathna, Tibbotuge L. "The Use of Streaming to Access Digital Content in Australia and Challenges to Copyright Law: An End-User Perspective." Thesis, Griffith University, 2020. http://hdl.handle.net/10072/397645.

Full text of the source
Abstract:
The rising popularity of streaming has resulted in a revolutionary change to how digital content, such as sound recordings, cinematographic films, and radio and television broadcasts, is used on the internet. Superseding the conventional method of downloading, using streaming to access digital content has challenged copyright law, because it is not clear whether end-user acts of streaming constitute copyright infringement. These prevailing grey areas between copyright and streaming often make end-users feel doubtful about accessing digital content through streaming. It is uncertain whether exercising the right of reproduction is appropriately suited for streaming, given the ambiguities of "embodiment" and the scope of "substantial part". Conversely, the fair dealing defence in Australia cannot be used aptly to defend end-users' acts of streaming digital content, because the use of streaming to access digital content rarely falls within the defences specified under fair dealing. When considering a temporary copy exception, end-users are at risk of being held liable for infringement when using streaming to access a website that contains infringing digital content, even if they lack any knowledge about the content's infringing nature. Moreover, the grey areas in circumventing geo-blocking have made end-users hesitant to access websites through streaming because it is not clear whether technological protection measures apply to geo-blocking. End-users have a severe lack of knowledge about whether they can use circumvention methods, such as virtual private networks, to access streaming websites without being held liable for copyright infringement. Despite the intricacies between copyright and access to digital content, the recently implemented website-blocking laws have emboldened copyright owners while suppressing end-users' access to digital content. This is because the principles of proportionality and public interest have been given less attention when determining website-blocking injunctions. This thesis examines the challenges posed to Australian copyright law by streaming, from the end-user perspective. It argues that continuous attempts to adapt traditional copyright principles to streaming, a novel technological advancement, are futile. This thesis compares the Australian position with the European Union and United States to draw lessons from them, regarding how they have dealt with streaming and copyright. By critically examining the technological functionality of streaming and the failure of copyright enforcement against the masses, it argues for strengthening end-user rights. Although it is difficult to reach copyright equilibrium by counterpoising copyright owners' interests with copyright users' interests, this thesis argues that deploying an appropriate balance is pivotal to expand end-user rights. This analysis of the current copyright law regime, from the end-user standpoint with respect to novel technologies such as streaming, opens up new terrain for future research on how copyright law should address new technologies to benefit society.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Dept Account,Finance & Econ
Griffith Business School
Full Text
APA, Harvard, Vancouver, ISO, and other styles
19

Abruzzo, Vincent G. "Content and Contrastive Self-Knowledge." Digital Archive @ GSU, 2012. http://digitalarchive.gsu.edu/philosophy_theses/108.

Full text of the source
Abstract:
It is widely believed that we have immediate, introspective access to the content of our own thoughts. This access is assumed to be privileged in a way that our access to the thought content of others is not. It is also widely believed that, in many cases, thought content is individuated according to properties that are external to the thinker's head. I will refer to these theses as privileged access and content externalism, respectively. Though both are widely held to be true, various arguments have been put forth to the effect that they are incompatible. This charge of incompatibilism has been met with a variety of compatibilist responses, each of which has received its own share of criticism. In this thesis, I will argue that a contrastive account of self-knowledge is a novel compatibilist response that shows significant promise.
APA, Harvard, Vancouver, ISO, and other styles
20

Lucchi, Nicola. "The role of Internet access in enabling individual’s rights and freedoms." Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Redovisning och Rättsvetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-21576.

Full text of the source
Abstract:
The paper discusses the scientific and policy debate as to whether access to the Internet can be considered so fundamental for human interaction as to deserve special legal protection. In particular, it examines the impact of computer-mediated communication on the realization of individuals' rights and freedoms as well as on democratization processes. It then considers how Internet content governance is posing regulatory issues directly related to the growing importance of equitable access to digital information. In this regard, the paper looks at conflicts arising within the systems of rights and obligations attached to communication (and especially content provision) over the Internet. The paper finally concludes by identifying emerging tensions and drawing out the implications for the nature and definitions of rights (e.g. of communication and access, but also of intellectual property ownership) and for regulations and actions taken to protect, promote or qualify those rights. All these points are illustrated by a series of recent examples.
APA, Harvard, Vancouver, ISO, and other styles
21

Campanella, Fabio. "Refractoriness within the semantic system: investigations on the access and the content of semantic memory." Doctoral thesis, SISSA, 2010. http://hdl.handle.net/20.500.11767/4773.

Full text of the source
Abstract:
The starting purpose of this project was to investigate some issues related to the mechanisms underlying the efficient access to concepts within the semantic memory systems. These issues were mainly related to the role of refractoriness in explaining the comprehension deficits underlying semantic access. The insights derived from this first approach were then used to formulate and test hypotheses about the organization of the contents of the semantic system itself. The first part of the thesis presents an investigation of the semantic abilities of an unselected case-series of patients affected by tumours to either the left or right temporal lobes in order to detect possible semantic access difficulties. Semantic access deficits are typically attributed to the semantic system becoming temporarily refractory to repeated activation. Previous investigations on the topic were mainly based on single case reports, mainly on stroke patients. The rare examples of group studies suggested moreover the possibility that the syndrome might not be functionally unitary. The tasks used in the study were two word-to-picture matching tasks aimed to control for the typical variables held to be able to distinguish semantic access from degradation syndromes (consistency of access, semantic relatedness, word frequency, presentation rate and serial position). In the group of tumour patients tested access deficits were consistently found in patients with high grade tumours in the left posterior superior temporal lobe. However, the patients were overall only weakly affected by the typical temporal factors (presentation rate and serial position) characterizing an access syndrome as refractory. The pattern of deficit, together with the localization data, suggested that the deficit described is qualitatively different from typical semantic access syndromes and possibly caused by the disconnection of posterior temporal lexical input areas from the semantic system. In the second study we tried to answer the question whether semantic access deficits are caused by the co-occurrence of two causes (refractoriness and a lexicalsemantic disconnection) or whether the presence of refractoriness in itself is sufficient to induce all the behavioural effects described in access syndromes. A second aim of the study was moreover to investigate the precise locus of refractory behaviour, since refractory effects have also been reported in naming tasks in which the possibility exists that the interference might be located at a post-semantic lexical stage of processing. To address these issues a series of three behavioural experiments on healthy subjects was conducted. The tasks used were speeded versions of the same word-to picture matching tasks used in the previous study. A speeded paradigm was adopted in order to induce a mild refractory state also in healthy participants. The results showed that it was possible to induce, in the group of subjects tested, a performance similar to that of refractory semantic access patients. Since no post-semantic stage of processing is assumed to be necessary to perform these tasks it was argued that refractoriness arises due to interference occurring between representations within the semantic system itself. In the second part of the project, the finding that refractoriness arises due to interference involving semantic representations themselves, was used to investigate issues related to the organization of the content within the semantic memory. 
In particular, a second series of behavioural experiments was performed to investigate whether the way an object is manipulated is indeed a feature that defines manipulable objects at a semantic level. The tasks used were speeded word-to-picture matching tasks similar to those previously described. A significantly greater interference was found in the recognition of objects sharing similar manipulation than in the recognition of objects sharing only visual similarity. Moreover the repeated presentation of objects with similar manipulation created a ‘negative’ serial position effect (with error increasing over presentations), while the repeated presentation of objects sharing only visual similarity created an opposite ‘positive’ serial position effect (learning). The role of manipulability in the semantic representation of manipulable objects was further investigated in the last study of this work. In a second unselected group of brain tumour patients the ability to name living things and artifacts was investigated. Artifacts were manipulable objects, varying in the degree of their manipulability. Results from both behavioural and Voxel-based Lesion Symptom Mapping (VLSM) analyses showed that the only patients showing a selective deficit in naming artifacts (particularly highly manipulable objects) were patients with lesions in the posterior middle and superior portions of the left temporal lobe, an area lying within the basin of those regions involved in processing object-directed actions and previously linked to the processing of manipulable objects in a wide range of studies. The results of these last two studies support ‘property-based networks’ accounts of semantic knowledge rather than ‘undifferentiated network’ accounts. Overall this series of studies represents an attempt to better understand the mechanisms that underlie the access to semantic representations and, indirectly, the structure of representations stored within semantic networks. The insights obtained about the mechanisms of access to stored semantic representations were used as a tool to investigate the structures of the same semantic representations. A combination of different approaches was used (from behavioural speeded interference paradigms on healthy subjects, to neuropsychological case series investigations, as well as Voxel-based Lesion Symptom Mapping technique), to ‘cross-validate’ the results obtained at any level of analysis.
Стилі APA, Harvard, Vancouver, ISO та ін.
22

Zhalehjoo, Negin. "Characterisation of the deformation behaviour of unbound granular materials using repeated load triaxial testing." Thesis, Federation University of Australia, 2018. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/166953.

Повний текст джерела
Анотація:
Unbound Granular Materials (UGMs) are used in the base/subbase layers of flexible pavements for the majority of roads around the world. The deterioration of pavements increases with the increase of traffic loadings. To ensure the long-term performance and serviceability of pavement structures through a realistic design, the precise evaluation and comprehensive characterisation of the resilient and permanent deformation behaviour of pavement materials are essential. The present PhD study aims to investigate the characterisation of the resilient and permanent deformation behaviour of four road base UGMs sourced from quarries in Victoria, Australia, using Repeated Load Triaxial (RLT) testing. The triaxial system used in this study is instrumented with four axial deformation measurement transducers to achieve highly precise measurements and to evaluate the effect of instrumentation on the resilient modulus of UGMs. The resilient Poisson’s ratio of the studied UGMs is also determined using a radial Hall-Effect transducer. Moreover, a series of permanent deformation tests is performed to precisely characterise the axial and radial permanent deformation behaviour of UGMs and investigate the factors that may significantly influence the accumulated axial and radial permanent deformations. Finally, three permanent deformation models incorporated with a time-hardening procedure are employed to predict the magnitude of permanent strain for multiple stress levels of the RLT test. The predictions using the employed models are then compared against the measured values to evaluate the suitability of the models and to identify the model that best predicts the strain accumulation behaviour of the tested UGMs. While this study focuses on the resilient and permanent deformation behaviour of four Victorian UGMs under repeated loading, the knowledge generated from this comprehensive investigation will contribute towards the global development of more reliable methods for evaluating the long-term performance of pavement structures and minimising road maintenance and repair costs.
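Purely as an illustration of the modelling step described above (predicting permanent strain across multiple RLT stress stages with a time-hardening procedure), the short sketch below uses a generic power-law permanent strain model. The model form, parameter values and cycle counts are assumptions for demonstration only; they are not the three models evaluated in the thesis.

```python
# Illustrative sketch only: accumulating permanent strain over several RLT
# stress stages with a time-hardening procedure. A generic power-law model
# eps_p = A * N**b is assumed per stage; the A, b and cycle counts below are
# hypothetical values, not parameters fitted in the thesis.
def accumulate_permanent_strain(stages):
    """stages: list of (A, b, n_cycles) tuples, one per stress stage."""
    eps_p = 0.0        # accumulated permanent strain (e.g. in %)
    history = []
    for A, b, n_cycles in stages:
        # Time-hardening: find the equivalent number of cycles under the
        # current stage parameters that matches the strain already
        # accumulated, then continue loading from that equivalent point.
        n_eq = (eps_p / A) ** (1.0 / b) if eps_p > 0 else 0.0
        eps_p = A * (n_eq + n_cycles) ** b
        history.append(eps_p)
    return history

# Three hypothetical stress stages of increasing severity, 10,000 cycles each.
print(accumulate_permanent_strain([(0.05, 0.20, 10_000),
                                   (0.08, 0.22, 10_000),
                                   (0.12, 0.25, 10_000)]))
```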
Doctor of Philosophy
Стилі APA, Harvard, Vancouver, ISO та ін.
23

Miah, Md Waliur Rahman. "A three tier forensic model for automatic identification of evidence of child exploitation by analysing the content of chat-logs." Thesis, Federation University Australia, 2016. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/101921.

Повний текст джерела
Анотація:
Detection of child exploitation (CE) in Internet chatting by locating evidence in the chat-log is an important issue for the protection of children from prospective online paedophiles. The un-grammatical and informal nature of chat-text makes it difficult for existing formal language processing techniques to handle the problem. The methodology of the current research avoids those difficulties by developing a multi-tier digital forensic model built on new ideas of psychological similarity measures and ways of applying them to chat-texts. The model first uses text classifiers to identify shallow evidence of CE. Locating the particular evidence requires identifying the behavioural pattern of CE chats, consisting of documented CE psychological stages, and associating the perpetrator's posts with them. Similarities among the posts of a chat play an important role in differentiating and identifying these stages. To accomplish this task, a novel similarity measure is constructed, backed by a dictionary of terms associated with each CE stage. Using the new similarity measure in a hierarchical agglomerative algorithm, a new clusterer is built to cluster the posts of a chat-log into the CE stages and so learn whether the chat follows the CE pattern. Inspired by the field of recognition of textual entailment, a new soft entailment technique is developed and implemented to locate the specific posts associated with the CE stages. Those specific posts of the perpetrator are extracted from the chat-log as the particular evidence. It is anticipated that the developed methodology will have many practical implementations in the future. It would assist in the development of forensic tools for digital forensic experts in law enforcement agencies to conveniently locate evidence of online child grooming on a confiscated hard disk drive. Another future implementation would be a parental filter used by parents to protect their children from potential online offenders.
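For illustration only, the sketch below shows one way a dictionary-backed similarity measure and hierarchical agglomerative clustering could be combined to group chat posts into candidate stages, as the abstract describes at a high level. The stage names, dictionary terms, similarity definition and all function names are assumptions made for this example; they are not the measure or clusterer developed in the thesis.

```python
# Illustrative sketch: dictionary-backed similarity between chat posts,
# followed by hierarchical agglomerative clustering into candidate stages.
# Stage terms, similarity definition and thresholds are assumptions only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical dictionary: a few terms loosely associated with each stage.
STAGE_TERMS = {
    "friendship": {"hi", "friend", "school", "hobby"},
    "risk_assessment": {"parents", "alone", "webcam", "secret"},
    "exclusivity": {"trust", "special", "only", "love"},
}

def stage_profile(post: str) -> np.ndarray:
    """Count how many terms of each stage dictionary occur in a post."""
    tokens = set(post.lower().split())
    return np.array([len(tokens & terms) for terms in STAGE_TERMS.values()],
                    dtype=float)

def similarity(p1: str, p2: str) -> float:
    """Toy similarity: cosine between the two posts' stage profiles."""
    a, b = stage_profile(p1), stage_profile(p2)
    if not a.any() or not b.any():
        return 0.0
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster_posts(posts: list[str], n_clusters: int = 3) -> list[int]:
    """Agglomerative clustering of posts using 1 - similarity as distance."""
    n = len(posts)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = 1.0 - similarity(posts[i], posts[j])
    z = linkage(squareform(dist), method="average")
    return list(fcluster(z, t=n_clusters, criterion="maxclust"))

if __name__ == "__main__":
    demo = [
        "hi what school do you go to",
        "do you have a hobby my friend",
        "are your parents home or are you alone",
        "you are special i trust only you",
    ]
    print(cluster_posts(demo, n_clusters=3))
```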
Doctor of Philosophy
Стилі APA, Harvard, Vancouver, ISO та ін.
24

Bernabé, Caro Rocío. "Easy audiovisual content for all: Easy-to-Read as an enabler of easy, multimode access services." Doctoral thesis, TDX (Tesis Doctorals en Xarxa), 2020. http://hdl.handle.net/10803/670406.

Повний текст джерела
Анотація:
The development of access services that provide a way to overcome cognitive barriers in audiovisual communication is gaining momentum. One example is the academic attention that some text simplification methods, such as Easy-to-Read (E2R), have received in the last few years. While it has been shown that E2R has enabled access for persons with reading and learning difficulties in written communication, its realisation in multimodal formats, like audiovisual contexts, is lagging. This PhD thesis aims to develop Easy-to-Read audiovisual content by investigating the following research question: can Easy-to-Read be used to simplify audiovisual content to make it more accessible for people with reading and learning difficulties? The PhD encompasses five peer-reviewed publications. The research conducted has been labelled as applied research within the applied branch of translation studies. In this sense, the thesis considers access services as a whole (content and technology). In other words, it approaches the proposed easy access services by considering that the AV content needs to be Easy-to-Read, as well as the access to the service and its operation as a whole. The methodology has been categorised as applied research (Saldanha & O’Brien, 2014; Williams & Chesterman, 2002). As such, it borrows concepts and outputs from the fields of audiovisual translation and text simplification to test the hypothesis stated: that E2R-simplified AV content is easier to read and understand by persons with reading and learning difficulties. The first three articles draw their conclusions from secondary data. By doing this, the following expectations were set: gaining a deeper understanding of Easy-to-Read as an access service (Article 1), and of the effects of adding a layer of E2R to existing workflows in the case of audio descriptions (Article 2). For its part, Article 3 aimed to classify E2R within the AVT landscape as a form of translation, drawing upon Gottlieb’s (2005) semiotic classification. Conversely, the last two articles are case studies with a single unit of investigation (Williams & Chesterman, 2002). The fact that the case studies were carried out towards the end of the PhD allowed for additional insights. These included how to identify parameters for creating subtitles (Article 4), and how such Easy-to-Read subtitles are received by end-users with reading and learning difficulties (Article 5). Overall, the conclusions indicate that text simplification recommendations taken from Easy-to-Read can be used to generate audiovisual content that is accessible for audiences with reading and learning difficulties. Such new E2R access services may render the message using semantic material that is equivalent to or different from that of their standard counterparts (e.g., subtitles, audio descriptions). Likewise, E2R access services may sometimes differ semiotically from standard access services. Lastly, as digital products, E2R access services ought to be WCAG-compliant. Finally, this thesis has been labelled as an initial contribution to the field of Easy-to-Read audiovisual content. Though the conclusions drawn cannot be regarded as definitive, the procedures and practices described can be transferred to similar cases and, thus, foster and facilitate the development of easy multimode access services.
Стилі APA, Harvard, Vancouver, ISO та ін.
25

Cooke, Louise. "Regulating the Internet : policy and practice with reference to the control of Internet access and content." Thesis, Loughborough University, 2004. https://dspace.lboro.ac.uk/2134/3283.

Повний текст джерела
Анотація:
Organisations, national governments and supranational bodies have all been active in formulating policy measures to regulate access to, and use of, Internet content. The research investigated policy responses formulated and implemented within the European Union, the Council of Europe, the UK government and three UK academic institutions during a five-year period from 1998 to 2003. This investigation took place from a perspective of concern for the potential impact of such policy initiatives on freedom of expression and freedom of enquiry. On a theoretical level, the study aimed to illuminate the process of information policy formulation in this area. Habermas' ideas about the erosion of the public sphere, and the promotion of conditions favourable to an 'ideal speech' situation, were used as an analogy to the issues posed by the regulation of speech on the Internet. The growth in use of the Internet worldwide as an informational, recreational and communications tool has been accompanied by a moral panic about 'unacceptable' Internet content. The effectiveness of a range of responses that have been made to control this 'problematic' medium, including the use of technical, ethical and legal constraints, was examined. Freedom of expression and freedom of access to information were considered, both as fundamental human rights and in the context of a professional ethic for information professionals and academic staff and students. Policy-making by the European Union and the UK government was explored via longitudinal analysis of primary and secondary documentary sources; by the Council of Europe via a combination of documentary analysis of primary and secondary sources and participant observation at a policy-making forum; and at the organisational level via case study research at three UK Higher Education Institutions. This case study research used a combination of documentary analysis and semi-structured interviews with relevant personnel. Findings from the three case studies were triangulated via a questionnaire study carried out with student respondents at each of the Institutions, to explore students' actual use, and misuse, of University computer networks and their attitudes towards attempts to regulate this use. The SPSS computer software package was used to analyse the data collected via the questionnaire study. The re-interpreted policy process model proposed by Rowlands and Turner (1997) and the models of direct and indirect regulation proposed by Lessig (1999) were used as heuristic tools with which to compare the findings of the research. A new model, the reflexive spiral, was designed to illustrate the dynamic, evolving and bi-directional character of the policy formulation processes that were identified. The enquiry was exploratory in nature, allowing theories and explanations to emerge from the data rather than testing a pre-determined set of conclusions. The conclusion is that the democratising potential of the Internet has indeed been constrained by policy measures imposed at a range of levels in an attempt to control the perceived dangers posed by the medium. Regulation of the Internet was found to be a problematic area for organisations, national governments, and international organisations due to its inherently 'resistant' architectural structure and its transborder reach. Despite this, it was found that, at all levels, the Internet is subject to a multi-tiered governance structure that imposes an increasingly wide range of regulatory measures upon it.
The research revealed that of the three re-interpreted policy process models, those of the Garbage Can and the Bureaucratic Imperative were found to be particularly illustrative of the policy formulation process at all levels. The use of Lessig's models of regulation (Ibid) was also found to be applicable to this area, and to be capable of illuminating the many forces impacting on information flow on the Internet. Overall, the measures taken to control and regulate Internet content and access were found to have exerted a negative impact on freedom of expression and freedom of access to information.
Стилі APA, Harvard, Vancouver, ISO та ін.
26

Mewett, John. "Electrokinetic remediation of arsenic contaminated soils." Thesis, University of Ballarat, 2005. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/68354.

Повний текст джерела
Анотація:
"Arsenic is a common soil contaminant in Australia and worldwide. There is a need to find safe, effective and economic methods to deal with this problem. The soils used in this research were collected from central Victoria. They were contaminated with arsenic by historic gold mining activity or by past sheep dipping practices. This research investigated ten different leaching agents for their effects on three different arsenic contaminated soils. [...] Electrokinetic experiments were conducted on three arsenic contaminated soils. [...] The arsenic in these soils appears to be relatively stable and immobile under oxidising conditions. The soils had a high iron content which assists in the stabilisation of arsenic. This is beneficial with respect to the environmental impact of the arsenic contamination, however, it remains an obstacle to low cost electrokinetic remediation."
Masters of Applied Science
Стилі APA, Harvard, Vancouver, ISO та ін.
27

BRANDAO, EDUARDO RANGEL. "USER BEHAVIOUR TO ACCESS TELEVISION RELATED CONTENT ON SMARTPHONES, TABLETS AND COMPUTERS: AN USER CENTERED DESIGN APPROACH." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=25596@1.

Повний текст джерела
Анотація:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE SUPORTE À PÓS-GRADUAÇÃO DE INSTS. DE ENSINO
Smartphones, tablets or computers used to access TV content are not well suited to their contexts of use. Through a document analysis of 57 sites and 17 applications, an online questionnaire with 156 responses and 25 semi-structured interviews, the content formats offered by TV companies on the internet were identified (videos, second screen, extended and thematic). It was also found that the smartphone is with the user all the time (it is already the first screen), but it is not well suited to accessing television content. Use of the tablet is very low, because it has the same functions as the smartphone and is not as portable (used only via WiFi). The computer is only used when people need to focus, usually in activities related to studies and work. People still prefer to access the content directly on the TV set, but are watching less in the traditional way, because the internet offers flexible hours. When watching TV in the traditional way, people use the internet at the same time (but on activities unrelated to the TV). Television boosts internet usage behaviour, but the opposite rarely occurs (although it does occur via social networks). People only associate videos with TV content on the internet; the second screen has not caught on, and extended or thematic content is not seen as TV content. Some suggestions are made to improve the interfaces of smartphones, tablets or computers used to access TV content.
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Vicks, Mary E. "An Examination of Internet Filtering and Safety Policy Trends and Issues in South Carolina's K-12 Public Schools." NSUWorks, 2013. http://nsuworks.nova.edu/gscis_etd/329.

Повний текст джерела
Анотація:
School districts have implemented filtering and safety policies in response to legislative and social mandates to protect students from the proliferation of objectionable online content. Subject related literature suggests these policies are more restrictive than legal mandates require and are adversely affecting information access and instruction. There is limited understanding of how filtering and safety policies are affecting teaching and learning because no comprehensive studies have investigated the issues and trends surrounding filtering and safety policy implementation. In order to improve existing safety policies, policymakers need research-based data identifying end user access issues that limit technology integration in the kindergarten-12th grade (K-12) educational setting. This study sought to examine Internet filtering and safety policy implementation issues in South Carolina's K-12 public schools to determine their influence on information access and instruction. A mixed methods research design, which includes both quantitative and qualitative approaches, was used to investigate the research problem. Quantitative data were collected from information technology (IT) administrators who were surveyed regarding filtering and safety policy implementation, and school library media specialists (SLMS) were surveyed concerning the issues they encounter while facilitating information access in a filtered environment. Qualitative data were collected through interviews with a subset of the SLMS population, thereby providing further insight about Internet access issues and their influence on teaching and learning. School districts' Acceptable Use Policies (AUPs) were analyzed to determine how they addressed recent legislative mandates to educate minors about specific Web 2.0 safety issues. The research results support the conclusions of previous anecdotal studies which show that K-12 Internet access policies are overly restrictive, resulting in inhibited access to online educational resources. The major implication of this study is that existing Internet access policies need to be fine-tuned in order to permit greater access to educational content. The study recommends Internet safety practices that will empower teachers and students to access the Internet's vast educational resources safely and securely while realizing the Internet's potential to enrich teaching and learning.
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Mewett, John University of Ballarat. "Electrokinetic remediation of arsenic contaminated soils." University of Ballarat, 2005. http://archimedes.ballarat.edu.au:8080/vital/access/HandleResolver/1959.17/12797.

Повний текст джерела
Анотація:
"Arsenic is a common soil contaminant in Australia and worldwide. There is a need to find safe, effective and economic methods to deal with this problem. The soils used in this research were collected from central Victoria. They were contaminated with arsenic by historic gold mining activity or by past sheep dipping practices. This research investigated ten different leaching agents for their effects on three different arsenic contaminated soils. [...] Electrokinetic experiments were conducted on three arsenic contaminated soils. [...] The arsenic in these soils appears to be relatively stable and immobile under oxidising conditions. The soils had a high iron content which assists in the stabilisation of arsenic. This is beneficial with respect to the environmental impact of the arsenic contamination, however, it remains an obstacle to low cost electrokinetic remediation."
Masters of Applied Science
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Mewett, John. "Electrokinetic remediation of arsenic contaminated soils." University of Ballarat, 2005. http://archimedes.ballarat.edu.au:8080/vital/access/HandleResolver/1959.17/14633.

Повний текст джерела
Анотація:
"Arsenic is a common soil contaminant in Australia and worldwide. There is a need to find safe, effective and economic methods to deal with this problem. The soils used in this research were collected from central Victoria. They were contaminated with arsenic by historic gold mining activity or by past sheep dipping practices. This research investigated ten different leaching agents for their effects on three different arsenic contaminated soils. [...] Electrokinetic experiments were conducted on three arsenic contaminated soils. [...] The arsenic in these soils appears to be relatively stable and immobile under oxidising conditions. The soils had a high iron content which assists in the stabilisation of arsenic. This is beneficial with respect to the environmental impact of the arsenic contamination, however, it remains an obstacle to low cost electrokinetic remediation."
Masters of Applied Science
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Cako, Stefan. "Politics and ethics of the information retrieving and access in the western modern state: The case of Sweden." Thesis, Stockholms universitet, Statsvetenskapliga institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-176940.

Повний текст джерела
Анотація:
Issues relating to individual legal certainty and security in Sweden, and the tensions between governmental actions to secure the collective's needs and violations of the individual's privacy, are explored. Given these tensions between privacy and securitization, the analysis is conducted through the securitization framework of the Copenhagen school in international relations. A qualitative content analysis using archived files from the Swedish Data Inspection Board for the period 2001 to 2014 is presented. Changes in discourses and divergences between the individual's legal certainty and governmental action are noted. The need to revise old legal texts and rhetoric is highlighted.
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Wunsch-Vincent, Sacha. "Market access for digitally-delivered content products and the resulting challenges to WTO : a US versus EC perspective /." [S.l. : s.n.], 2005. http://www.gbv.de/dms/zbw/485013843.pdf.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Tirouvengadam, Balaaji. "Enhancement of LTE Radio Access Protocols for Efficient Video Streaming." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23260.

Повний текст джерела
Анотація:
A drastic increase in mobile broadband traffic has been seen in the past few years, further accelerated by the growing use of smartphones and their applications. The availability of capable smartphones and better data connectivity is encouraging mobile users to adopt video services. This huge increase in usage poses many challenges to wireless networks. The wireless network has to become content-aware in order to offer enhanced video service quality through efficient utilization of the wireless spectrum. This thesis focuses on improving the Quality of Experience (QoE) for video transmission over Long Term Evolution (LTE) networks by imparting content awareness to the system and providing unequal error protection for critical video packets. Two different schemes for improving the quality of video delivery over LTE networks are presented in this thesis. Using content awareness, the Hybrid Automatic Repeat reQuest (HARQ) retransmission count is changed dynamically so that the most important video frames get a greater number of retransmission attempts, which increases their chance of successful delivery and in turn improves the received video quality. Since Radio Link Control (RLC) is the link layer for the radio interface, the second approach focuses on optimizing this layer for efficient video transmission. As part of this scheme, a new RLC operation mode called Hybrid Mode (HM) is defined. This mode performs retransmission only for the critical video frames, leaving other frames to unacknowledged transmission. The simulation results for both proposed schemes show significant improvement in achieved video quality without affecting system performance.
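For illustration, the sketch below shows the general idea of content-aware unequal error protection described in this abstract: the maximum number of retransmissions is chosen per video frame type, so that critical frames receive more delivery attempts. The frame types, retransmission limits and the random loss model are assumptions made for this example; it is not the thesis's implementation or an LTE protocol stack.

```python
# Minimal sketch (not the thesis implementation): content-aware unequal error
# protection, where the retransmission limit is chosen per video frame type
# so that critical frames (e.g. I-frames) get more delivery attempts.
import random

RETX_LIMIT = {"I": 6, "P": 3, "B": 1}   # hypothetical per-frame-type limits

def transmit(frame_type: str, loss_prob: float = 0.3) -> bool:
    """Try the first transmission plus up to RETX_LIMIT retransmissions."""
    attempts = 1 + RETX_LIMIT[frame_type]
    for _ in range(attempts):
        if random.random() > loss_prob:   # this attempt got through
            return True
    return False

def simulate(gop: str = "IBBPBBPBB", repeats: int = 10_000) -> dict:
    """Estimate delivery ratio per frame type over many GOP transmissions."""
    delivered = {t: 0 for t in "IPB"}
    total = {t: 0 for t in "IPB"}
    for _ in range(repeats):
        for t in gop:
            total[t] += 1
            delivered[t] += transmit(t)
    return {t: delivered[t] / total[t] for t in "IPB"}

if __name__ == "__main__":
    print(simulate())   # I-frames should show the highest delivery ratio
```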
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Wesso, Iona. "Science text: Facilitating access to physiology through cognition-based reading intervention." University of the Western Cape, 1995. http://hdl.handle.net/11394/8485.

Повний текст джерела
Анотація:
Philosophiae Doctor - PhD
Reading and understanding science text is the principal means by which students at tertiary level access scientific information and attain scientific literacy. However, understanding and learning from science texts require cognitive processing abilities which students may or may not have. If students fail to understand scientific text, their acquisition of subject knowledge and expertise will be impeded and they will fail to develop into thinking and independent learners, so crucial for academic progress and achievement. A major assumption in this study is thus that, in order to increase access to science subjects, there is a need to explicitly teach the thinking abilities involved in learning science from text. A review of the literature showed that while reading to learn from scientific text poses special challenges to students faced with this unfamiliar genre, little is known about reading (and thinking) for science learning. A synthesis of current research which describes the neglected interface between science learning, science reading and cognition is given in the literature review of this study. This synthesis highlights, in particular, the parallel developments in research into science learning and reading; the lack of integration of research in these areas; the absence of investigations on science reading located within the cognitive domain; and the absence of research into reading as it affects cognition and cognition as it affects reading in subject-specific areas such as physiology. Possibilities for improving students' cognitive performance in reading to learn through intervention were considered from a cognitive perspective. From this perspective, students' observable intellectual performance can be attributed to their underlying knowledge, behaviour, and thought processes. Accordingly, the mental processes involved in comprehending scientific concepts from text and the cognitive processes which the students bring to the learning situation become highly relevant to efforts to improve cognitive skills for learning science. Key questions which were identified to serve as a basis for intervention included: a) What cognitive abilities are needed for competent reading comprehension as demanded by physiology text? b) How adequate is the cognitive repertoire of students in dealing with physiology text? With regard to these questions, a catalogue of cognitive functions as formulated by Feuerstein et al. (1980) was identified as optimally suited for establishing the cognitive match between reading tasks and students. Micro-analyses of the cognitive demands of students' textbook material and the cognitive make-up of second-year university students revealed a profound mismatch between students and their learning material. Students lacked both comprehension-fostering and comprehension-monitoring abilities appropriate to the demands of the learning task. The explication of the cognitive requirements which physiology text demands served as a basis for systematically designing instruction whereby appropriate intellectual performance for scientific comprehension from text may be attained. Subsequent intervention was based on the explicit teaching of thinking abilities within the context of domain-specific (physiology) knowledge. An instructional framework was developed that integrated cognitive learning theories and instructional prescriptions to achieve an effective learning environment and improve students' cognitive abilities to employ and extend their knowledge.
The objective was that the instructional model and resultant instructional methods would ensure that students learn not only the desired kinds of knowledge by conceptual change, but also the thought processes embedded in and required by reading scientific material for appropriate conceptual change to take place. Micro-analysis of the cognitive processes intrinsic to understanding physiology text illuminated cognitive demands such as, for example, the ability to: transform linearly presented material into structural patterns which illuminate physiological relationships; analyse conceptually dense text rich in "paradoxical jargon"; activate and retrieve extensive amounts of topic-specific and subject-specific prior knowledge; visualise events; and contextualise concepts by establishing an application for them. Within the above instructional setting, the study shows that explicitly teaching the cognitive processes intrinsic to physiology text is possible. By translating the cognitive processes into cognitive strategies such as assessing the situation, planning, processing, organisation, elaboration, monitoring and reflective responses, the heuristic approach effectively served to guide students through the various phases of learning from text. Systematic and deliberate methods of thought that would enhance students' problem-solving and thinking abilities were taught. One very successful strategy for learning from physiology text was to reorganise the linearly presented information into a different text structure by means of the construction of graphic organisers. The latter allowed students to read systematically, establish relationships between concepts, identify important ideas, summarise passages, readily retrieve information from memory, go beyond the given textual information and very effectively monitor and evaluate their understanding. In addition to teaching appropriate cognitive strategies as demanded by physiology text, this programme also facilitated an awareness of expository text conventions, the nature of physiological understanding, the value of active strategic involvement in constructing knowledge and the value of metacognitive awareness. Also, since the intervention was executed within the context of physiology content, the acquisition of content-specific information took place quite readily. This overcame the problem of transfer, so often experienced with "content-free" programmes. In conclusion, this study makes specific recommendations to improve science education. In particular, the notion of teaching the appropriate cognitive behaviour and thought processes demanded by academic tasks such as reading to learn physiology seems to be a particularly fruitful area into which science education research should develop and be encouraged.
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Khan, Asiya. "Video quality prediction for video over wireless access networks (UMTS and WLAN)." Thesis, University of Plymouth, 2011. http://hdl.handle.net/10026.1/893.

Повний текст джерела
Анотація:
Transmission of video content over wireless access networks (in particular, Wireless Local Area Networks (WLAN) and Third Generation Universal Mobile Telecommunication System (3G UMTS)) is growing exponentially and gaining popularity, and is predicted to expose new revenue streams for mobile network operators. However, the success of these video applications over wireless access networks very much depends on meeting the user's Quality of Service (QoS) requirements. Thus, it is highly desirable to be able to predict and, if appropriate, to control video quality to meet the user's QoS requirements. Video quality is affected by distortions caused by the encoder and the wireless access network. The impact of these distortions is content dependent, but this feature has not been widely used in existing video quality prediction models. The main aim of the project is the development of novel and efficient models for video quality prediction in a non-intrusive way for low bitrate and low resolution videos, and the demonstration of their application in QoS-driven adaptation schemes for mobile video streaming applications. This led to five main contributions of the thesis, as follows: (1) A thorough understanding of the relationships between video quality, wireless access network (UMTS and WLAN) parameters (e.g. packet/block loss, mean burst length and link bandwidth), encoder parameters (e.g. sender bitrate, frame rate) and content type is provided. An understanding of the relationships and interactions between them and their impact on video quality is important, as it provides a basis for the development of non-intrusive video quality prediction models. (2) A new content classification method was proposed based on statistical tools, as content type was found to be the most important parameter. (3) Efficient regression-based and artificial neural network-based learning models were developed for video quality prediction over WLAN and UMTS access networks. The models are lightweight (they can be implemented in real-time monitoring) and provide a measure of user-perceived quality without time-consuming subjective tests. The models have potential applications in several other areas, including QoS control and optimization in network planning and content provisioning for network/service providers. (4) The applications of the proposed regression-based models were investigated in (i) the optimization of content provisioning and network resource utilization and (ii) a new fuzzy sender bitrate adaptation scheme at the sender side over WLAN and UMTS access networks. (5) Finally, Internet-based subjective tests that captured distortions caused by the encoder and the wireless access network for different types of content were designed. The database of subjective results has been made available to the research community, as there is a lack of subjective video quality assessment databases.
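As a purely illustrative companion to the regression-based models mentioned in point (3), the sketch below fits a simple linear regression that maps encoder and network parameters, plus a coded content class, to a predicted mean opinion score (MOS). The feature set, the synthetic training data and the linear model form are assumptions for demonstration only; they do not reproduce the models, features or coefficients developed in the thesis.

```python
# Illustrative sketch only: a lightweight regression model mapping encoder and
# network parameters (plus a coded content type) to a predicted MOS.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic training set: [sender_bitrate_kbps, frame_rate, packet_loss_pct, content_class]
X = np.column_stack([
    rng.uniform(32, 512, 200),       # sender bitrate
    rng.choice([7.5, 15, 30], 200),  # frame rate
    rng.uniform(0, 20, 200),         # packet/block loss
    rng.integers(0, 3, 200),         # content class (0=slow, 1=gentle, 2=rapid movement)
])
# Synthetic MOS in [1, 5]: quality rises with bitrate, falls with loss and motion.
y = np.clip(1 + 2.5 * np.log10(1 + X[:, 0] / 64) - 0.12 * X[:, 2] - 0.3 * X[:, 3]
            + rng.normal(0, 0.2, 200), 1, 5)

model = LinearRegression().fit(X, y)
sample = np.array([[256, 30, 5.0, 2]])   # one hypothetical streaming condition
print("predicted MOS:", float(model.predict(sample)[0]))
```

In the thesis itself the models were, of course, trained and validated against real subjective data; the snippet above only illustrates the general shape of such a non-intrusive, parameter-driven quality predictor.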
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Chen, Yi-chun [Verfasser], and Bertram [Akademischer Betreuer] Gerber. "Experimental access to the content of an olfactory memory trace in larval Drosophila / Yi-chun Chen. Betreuer: Bertram Gerber." Würzburg : Universitätsbibliothek der Universität Würzburg, 2013. http://d-nb.info/1044237066/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Njoku, Geoffrey. "The political economy of deregulation and commercialization of radio broadcasting in Nigeria, 1992-2017: An assessment of access, participation, content and peacebuilding." Doctoral thesis, Universitat Autònoma de Barcelona, 2018. http://hdl.handle.net/10803/665925.

Повний текст джерела
Анотація:
This work analyses the effects of the deregulation and commercialisation of the broadcast media in Nigeria since 1992. Concretely, it focuses on radio stations, the nature of their programming decisions and what informs them. It studies the effect of deregulation on three dimensions: a) Production and distribution of programmes. The broadcast industry manufactures and distributes content, so when a deregulation policy is applied to the communication industry, the immediate effect is on content production and on how this content is distributed in order to remain in business and maximize profit. b) The public service function of these programmes in relation to development communication/journalism, education, peacebuilding, the amelioration of hate speech both online and offline, culture and social cohesion. In the early days of radio broadcasting, attempts were made to make it a public service for citizens' enlightenment, entertainment and education. British broadcasting was a pioneer of this tradition, which endured for a long time before the policy of deregulation swept across the world. This work analyses how deregulation and commercialisation may have affected the contribution of radio as Nigeria faces one of its biggest problems today: hate speech, ethnic and religious violence, radicalisation and terrorism. c) Access and participation for a broad range of segments of society: the rich, the poor, marginalized groups, women and others. In what ways have access and participation been constricted or improved for these groups as a result of the deregulation and commercialisation of radio? The work looks at hate speech in Nigeria, analysing its forms, dimensions and magnitude. It also proposes strategies that could be used to ameliorate its impact. While legislation and regulations are potential strategies to consider, it argues that, even in this digital era, radio in Nigeria is still a powerful and popular medium for countering hate speech in the country and, if properly deployed, radio can be a potent tool in countering hate speech offline and online, although it would need to adapt its programming for the new media generation to achieve this goal. Through the convergence of new media forms, radio can contribute to the battle against hate speech. The deregulation and commercialisation of the broadcast media was a demand of the international political economy, imposed through the Structural Adjustment Programme and other economic revival programmes of international financial institutions such as the World Bank and the International Monetary Fund. They were framed as market-driven policies that would reduce state-funded wastage and lead to economic prosperity, diversity and the ascendancy of a market-driven democracy of choice. The conceptual frameworks and models that guide this analysis of the deregulation and commercialization of broadcasting are a number of key ideas and theses from the literature on the political economy of communication, as well as McQuail's Democratic Participant Theory. The analysis focuses on the ways in which political and economic structures and processes, in this case the public policy of deregulation and commercialization of broadcasting, impinge upon the production, dissemination and appropriation of communication by economic forces seeking profit.
The critical political economy of communication sets out to show how different methods of financing and organizing our communicative needs, including ownership, have consequences for the range of discourses and representations within the public domain and for audiences' access to them (Mosco, 2008; 2009; 2015). Under the Democratic Participant Media Theory, the primary role of the media is to ensure individuals' and society's right to access relevant information. Providing a feedback mechanism for the people to answer back as a right, and the right to use the means of the communication process for interaction within and among their communities of interest, are critical issues raised by the theory. One of the key elements of the theory is access and participation for a broad spectrum of society, not on the basis of power, influence and wealth, but treating communication as the right of every citizen. The theory advocates the freedom and rights of persons, especially minority groups, and their right to access media and to have the media serve them according to their dictates and needs (McQuail, 1983: 96-97; Asemah et al., 2017). This study employed the qualitative methods of in-depth interviews and focus group discussions with listeners of four radio stations in the Federal Capital Territory, Abuja, Nigeria. The study selected two radio stations that were established after deregulation and commercialisation and are privately owned: Raypower FM, managed by the Daar Communication Company (later substituted with Vision FM, operated by Vision Company Limited), and Rhythm FM, owned by Silverbird Production. It also selected one old public-interest, public broadcasting station that is government-owned and still partly government-subvented, Kapital FM, and a new-generation FM station established after deregulation but government-owned, ASO FM. The expectation was that such disparate stations would exhibit significant differences in the analysis. The study also employs document analysis of programme schedules and two weeks of audio broadcasts of these four radio stations to triangulate and validate findings from the in-depth interviews and focus group discussions. The in-depth interviews and focus group discussion sessions were transcribed, and the transcripts were uploaded and analysed using NVivo, a qualitative research data management and analysis software package. Thematic coding was done, and major themes relating to the research questions were identified and classified in relation to the basic concepts of the study. Codes identified from the theory and literature, as they relate to access, participation, content, programming, etc., were used to code the interviews and focus group discussions. The codes were also framed using questions that examined the issues of access and participation/content and peacebuilding. For this study, two weeks of audio-recorded broadcasts of the four radio stations were analysed. The hard-copy programme schedules were used for cross-referencing. In analysing the audio texts, the codes created by the researcher were guided by the key research questions of the study. The study found that there is some development and peacebuilding content on radio post-deregulation, but not enough. Only 37 of the 1008 hours broadcast by the four radio stations over two weeks were devoted to development and peacebuilding content; peacebuilding content accounted for a paltry 8 of those 1008 hours.
It also found that there are instances of hate speech on radio post-deregulation, occasioned by the drive for profit, and that the privately-owned radio stations are, due to the quest for profit, more prone to disturb the peace and escalate violence. But for the convergence between radio, cell phones and social media, access and participation would not have increased. There is more access for the poor and marginalized groups, but not enough participation. The situation could have been worse without the emergence of cell phones, despite the multiplicity of radio stations. Higher levels of participation are not available to any group without money: you can only produce your own programmes and broadcast them at the time of your choosing if you pay for them. Despite improved access and participation for the poor, they do not contribute to the weightier issues of national development and governance; on the contrary, their participation is limited to whimsical, trivial and mundane issues like sports, riddles and jokes. Contrary to the concerns of political economy, post-deregulation radio stations are granting access and participation to people in need, for social justice and humanitarian concerns, especially the poor. We have tagged this new genre in radio broadcasting as "human rights radio".
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Starr, Kim L. "Audio description and cognitive diversity : a bespoke approach to facilitating access to the emotional content in multimodal narrative texts for autistic audiences." Thesis, University of Surrey, 2018. http://epubs.surrey.ac.uk/848660/.

Повний текст джерела
Анотація:
Audio description (AD) offers untapped potential for delivering content to new audiences, particularly in the realm of cognitive accessibility. To date, bespoke AD orientations, moving beyond the standard blind and visually impaired modality (BVI-AD), have not been researched. This study explores the application of bespoke AD for emotion recognition purposes, from the perspective of individuals with autism spectrum disorders (ASDs) experiencing comorbid alexithymia (emotion recognition difficulties). It aims to establish the suitability of audio description as a vehicle for delivering emotion-based cues to assist with access to affective markers in film narrative. A study of AD for sight-impaired individuals undertaken by the British Broadcasting Corporation found evidence suggesting AD helped ASD individuals to engage with affective narrative (Fellowes, 2012). Studies of affect with autistic spectrum individuals commonly employ multimodal materials for the purposes of measuring emotion identification (Golan, Baron-Cohen & Golan, 2008), but have not yet incorporated supplementary AD, either as an entertainment or pedagogical resource. Addressing the gap, this project pairs AD remodelling techniques with an intervention study, to test for enhanced affective accessibility in ASD audiences. Applying a functionalist, skopos-based (Nord, 1997; Vermeer, 2012; Reiss & Vermeer, 2014) approach to modelling AD in the first phase of the study (S1), two new emotion recognition difficulties (ERD) modalities were developed, emoto-descriptive (EMO-AD) and emoto-interpretative (CXT-AD). These were subsequently tested, alongside standard (BVI) AD and a ‘zero’ AD modality (Z-AD), in an intervention study with young ASD individuals (S2). Results suggested that BVI-AD might represent a confound for this particular audience. Since ‘ceiling’ effect was observed in the other modalities (EMO-AD, CXT-AD and Z-AD), the efficacy of bespoke AD for emotion recognition applications remains unproven. However, the results indicate that affect-oriented AD, per se, is unlikely to confound ASD audiences. This study represents the first trial of tailor-made AD for audiences with cognitive accessibility needs, representing an interdisciplinary approach bridging the fields of audiovisual translation (Translation Studies) and psychology. As such, it opens up the debate for broader application of AD to aid accessibility in the cognitive arena.
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Harte, David. "Internet content control in Australia : data topology, topography and the data deficit." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2001. https://ro.ecu.edu.au/theses/1073.

Повний текст джерела
Анотація:
The success of the online adult industry has provoked a public policy controversy over the need for Internet censorship, and in recent times there has emerged a desire to protect minors from possibly unsuitable content. On January 1st 2000, the Broadcasting Services Amendment (Online Services) Act (Cwlth, 1999) (BSA) was proclaimed. The Act purports to regulate and control Internet content in Australia. Operating in tandem with the Act is the Internet Industry Association Code of Practice, giving Australia a co-regulatory approach to Internet content control. This study sets out to examine the Internet content control problem in the Australian context. The political issues surrounding the topic of Internet censorship and the lack of reliable operational statistics revealed the difficulty of estimating the effectiveness of the current control regime. Pivotal questions for the study concerned the scope and scale of content control in the Australian context and trends in hosting. This study used website typology, as defined by data topology and data topography, to examine the scope and scale of the content control task, and the implications for the effectiveness of the BSA. It was expected that, if the BSA were to have an impact, a discernible change in user download behaviour should ensue. This study used information provided by the adult Internet Content Provider (ICP) industry to gauge the BSA's impact on user download behaviour as a measure of the control regime's effectiveness. It was suggested by some observers that the so-called 'data deficit' between Australia and the US would be exacerbated by the new content control regime, with possible negative implications for the conduct of e-commerce in Australia generally. A study of Australian adult website hosting arrangements and data topography was conducted to examine the implications of the control regime for the 'data deficit'. This study suggests that most Australian online adult content is in fact hosted in the US. The reasons for offshore hosting are almost totally financial and pre-date the introduction of the Broadcasting Services Amendment (Online Services) Act 1999. The study also suggests that any effect on the 'data deficit' should be minimal, and that the typology of adult content websites is such that the current co-regulatory regime may prove ineffective in controlling access to adult content.
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Setterstig, Amalia. "Heden : Att förstå det offentliga rummet genom en samhällsbyggnadsdebatt." Thesis, Uppsala universitet, Kulturgeografiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-298106.

Full text of the source
Abstract:
The study aims to explore different aspects of urban public space. It does so through a case study of the media debate concerning the planned redevelopment of Heden, an open, centrally located, publicly owned area in Gothenburg, Sweden. The case study revealed the highly contested meanings of Heden, as well as different understandings of public space. The study also points to the dilemma of making urban public space readable and convivial while keeping it inclusive and open to everyone. The media debate centres on the recently published redevelopment plan for Heden, in which the local government proposes adding more activities and functions to Heden in order to attract new target groups. This proposal has met with some approval in the media debate, but also with harsh criticism. Some critics raise the question of for whom public space is being redeveloped. Other critics want to see a more extensive redevelopment of Heden, covering it with “inner city”. Still others wish for a future Heden with a more explicit focus on sports. The study examines these differing opinions and their possible consequences for the “publicness” of urban public space.
APA, Harvard, Vancouver, ISO, and other styles
41

de, Jager Gerdi. "Opportunities for the development of understanding in Grade 8 mathematics classrooms." Diss., University of Pretoria, 2016. http://hdl.handle.net/2263/60992.

Full text of the source
Abstract:
Learner performance in South Africa is poor in comparison with other countries as a result of poor teaching. At the core of the concern about learners' performance in mathematics in South Africa lies a controversy regarding how mathematics should be taught. The purpose of this study was to explore Grade 8 mathematics teachers' creation and utilisation of opportunities for learners to develop mathematical understanding in their classrooms. To accomplish this, an explorative case study was conducted to explore three mathematics teachers' instructional practices using Schoenfeld et al.'s (2014) five dimensions of the Teaching for the Robust Understanding of Mathematics (TRU Math) scheme, namely the mathematics; cognitive demand; access to mathematical content; mathematical agency, authority and identity; and uses of assessment. The three participants were conveniently selected from three private schools in Mpumalanga. The data collected consist of a document analysis, two lesson observations and a post-observation interview per teacher. This study revealed that only one of the three teachers applied all of Schoenfeld et al.'s (2014) TRU Math dimensions. The dimension the teachers applied most in their classrooms was the mathematics. The dimensions in which teachers still lacked skills were cognitive demand; access to mathematical content; agency, authority and identity; and uses of assessment. This study revealed that the content of most tasks and lessons was focused and coherent, and built meaningful connections. However, the content did not engage learners in important mathematical content or provide opportunities for learners to apply the content to solve real-life problems. Due to the small sample used, the results from this study cannot be generalised. However, I hope that the findings will contribute to student-teacher training and in-service teacher training in both government and private schools. Future research could possibly build on this study by examining the learners and how they learn with understanding by using the TRU Math dimensions.
Dissertation (MEd)--University of Pretoria, 2016.
Science, Mathematics and Technology Education
MEd
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
42

Weller, Michael, and Elena Di Rosa. "Lizenzierungsformen." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-114810.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
43

Underhill, Les, and Dave Bradfield. "INTROSTAT (Statistics textbook)." Thesis, University of Cape Town, 2013. https://vula.uct.ac.za/access/content/group/23066897-bf3d-4a8d-9637-049c04424e24/IntroStat-%20Dr%20Underhill/.

Full text of the source
Abstract:
IntroStat was designed to meet the needs of students, primarily those in business, commerce and management, for a course in applied statistics. IntroSTAT is designed as a lecture-book. One of the aims is to maximize the time spent in explaining concepts and doing examples. The book is commonly used as part of first-year courses in Statistics.
APA, Harvard, Vancouver, ISO, and other styles
44

Chisita, Collence Takaingenhamo. "Library consortia and Zimbabwe's national development agenda : Librarians’ views on constructing a suitable model." Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/62248.

Full text of the source
Abstract:
The development of library consortia in Zimbabwe was necessitated by the need to reduce subscription costs and to widen access to electronic resources, as well as to implement new technologies among academic libraries. The development of the Zimbabwe University Library Consortium (ZULC) and the College and Research Library Consortium (CARLC) enabled libraries to cooperate and collaborate in building capacity to support teaching, learning and research through access to quality scholarly information. The trajectory of consortia development in Zimbabwe since 2002 has, however, been characterised by a focus on the academic sector to the exclusion of other types of libraries. The future development of library consortia in Zimbabwe can be better envisioned when correlated with the country’s national development agenda. While not made explicit, this agenda is underpinned by the idea of access to information. This study investigated how the benefits of the existing library consortia can be harnessed to promote the achievement of Zimbabwe’s national development goals. More specifically, it examined the ways in which the development paths of ZULC and CARLC can be transformed to support the country’s national development agenda and programmes. This culminated in a model that will accelerate and guide the future development of the country's library consortia to facilitate a supporting developmental role. The novel aspect of this study is that it seeks to integrate library consortia into the national development plans of a developing country and to extend their benefits as widely as possible. An extensive literature review of the characteristics, models, and development of consortia in selected countries was complemented by an empirical mixed-method component that generated data through interviews, questionnaires, observation, and the analysis of key documents. A special feature of the study is a detailed analysis of the successes and challenges of library consortia in other countries and in the Southern African region to supplement the empirical data that informs the proposed model. The main finding is that a model with a multi-type structure and a National Coordinating Committee is best suited to transform the development paths of Zimbabwe’s academic library consortia to support the country’s national development agenda. The model’s key elements are finance, structure, governance, functions, and special features.
Thesis (PhD)--University of Pretoria, 2017.
Information Science
PHD
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
45

Elsinga, Danka. "The effects of the European Copyright Directive on Generation Z's news consumption : An explorative study on the effects of the link tax, concerning the access and consumption of news content by Generation Z in Europe." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-265531.

Full text of the source
Abstract:
After eighteen years filled with technical evaluation, the European Parliament agreed on a new online copyright directive in 2019. The aim of this directive is to modernize the rules, last adjusted in 2001, in order to create a better balance between content providers and online platforms. After the European Commission introduced a proposal for a copyright directive in 2016, it caught the attention of many, mainly due to two articles: Articles 11 and 13. The reason for this attention was the practical way in which these articles contributed to reaching the general aim of the new copyright directive. This research focuses on Article 11, which also became known as the ‘link tax’. According to this article, content creators should be rewarded for their work by other online parties. Those born within this eighteen-year timeframe, known as Generation Z, grew up in a world where technology is everywhere, all the time. This generation was born in the 1990s, grew up in the 2000s, and has been shaped by the technology through which it communicates, interacts, and consumes knowledge. Because of that, this study presents the effects of the implementation of the new European copyright directive on Generation Z in Europe.
APA, Harvard, Vancouver, ISO, and other styles
46

Govindaraj, Rekha. "Emerging Non-Volatile Memory Technologies for Computing and Security." Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7674.

Full text of the source
Abstract:
With CMOS technology scaling reaching its limits, rigorous research into alternative, capable technologies is paramount to push the boundaries of computing. Spintronic and resistive memories have proven to be effective alternatives to CMOS in terms of area, power and performance because of their non-volatility, capability for logic computing and easy integration with CMOS. However, deeper investigation to understand their physical phenomena and improve properties such as writability, stability, reliability, endurance and uniformity, with minimal device-to-device variation, is necessary before deployment as memories in commercial applications. Applications of these technologies beyond memory and logic are investigated in this thesis, i.e. for the security of integrated circuits and systems and for special-purpose memories. We propose a spintronic-based special-purpose memory for search applications and present design analysis and techniques to improve its performance for word lengths up to 256 bits. Salient characteristics of RRAM are studied and exploited in the design of widely accepted hardware security primitives such as Physically Unclonable Functions (PUFs) and True Random Number Generators (TRNGs). The vulnerability of these circuits to adversarial attacks is studied and countermeasures are proposed. The proposed PUF can be implemented within a conventional 1T-1R memory architecture, which offers area advantages compared with RRAM memory and crossbar-array PUFs, together with a huge number of challenge-response pairs. A potential application of the proposed strong arbiter PUF in the Internet of Things is presented, and its performance is evaluated theoretically under valid assumptions about the maturity of RRAM technology. The proposed TRNG effectively utilizes the random telegraph noise in the RRAM current to generate a random bit stream, and it is evaluated for sufficient randomness in the generated bit stream. Its vulnerability to adversarial attacks and the corresponding countermeasures are also studied. In summary, this thesis investigates and extends the application of emerging non-volatile memory technologies for search and security in integrated circuits and systems.
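The abstract above notes that the proposed TRNG harvests random telegraph noise (RTN) in the RRAM read current to produce a random bit stream. Purely as an illustrative sketch of that general idea, and not of the thesis's actual circuit, the short Python program below simulates a two-level RTN-like current trace, thresholds it into raw bits and applies von Neumann debiasing; the signal model, parameter values and function names are assumptions introduced for this example.

import random

def simulate_rtn_trace(n_samples: int, p_switch: float = 0.3,
                       low: float = 1.0, high: float = 1.4,
                       noise: float = 0.05) -> list:
    """Generate a toy two-level RTN-like current trace (arbitrary units)."""
    level = low
    trace = []
    for _ in range(n_samples):
        # With probability p_switch the trapped-charge state flips, toggling the current level.
        if random.random() < p_switch:
            level = high if level == low else low
        # Superimpose Gaussian read noise on the current level.
        trace.append(level + random.gauss(0.0, noise))
    return trace

def threshold_to_bits(trace: list) -> list:
    """Map each current sample to a raw bit by comparing it with the trace mean."""
    mean = sum(trace) / len(trace)
    return [1 if sample > mean else 0 for sample in trace]

def von_neumann_debias(bits: list) -> list:
    """Von Neumann extractor: the pair 01 yields 0, 10 yields 1, 00 and 11 are discarded."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

if __name__ == "__main__":
    raw = threshold_to_bits(simulate_rtn_trace(10_000))
    debiased = von_neumann_debias(raw)
    ones = sum(debiased) / max(len(debiased), 1)
    print(f"kept {len(debiased)} of {len(raw)} raw bits, fraction of ones = {ones:.3f}")

In hardware, the thresholding and debiasing steps would typically be performed by a comparator and simple digital logic; the software version here only serves to make the bit-extraction idea concrete.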
APA, Harvard, Vancouver, ISO, and other styles
47

Bellafkih, Said. "Exploitation de l'effet électro-calorique pour la réfrigération : optimisation des propriétés des matériaux et des processus associés." Thesis, Littoral, 2020. https://documents.univ-littoral.fr/access/content/group/50b76a52-4e4b-4ade-a198-f84bc4e1bc3c/BULCO/Th%C3%A8ses/UDSMM/These_BELLAFKIH_Said_Definitif.pdf.

Full text of the source
Abstract:
To address environmental issues (climate change, the ozone layer) and the growing demand for refrigeration (health, food and comfort applications), electrocaloric refrigeration is a promising alternative: it avoids the environmentally harmful refrigerant fluids used in conventional refrigerators. The electrocaloric effect is the change in temperature of a polar material when an electric field is applied or removed. This thesis had two objectives: the first was the elaboration and characterisation of an original electrocaloric material, and the second was to design an electrocaloric refrigeration demonstrator. A new electrocaloric material based on barium titanate doped with samarium at different concentrations was elaborated, and its microstructural, thermal, dielectric and ferroelectric properties were characterised. The effect of the samarium concentration on the A sites of the BaTiO3 perovskite structure on the ferroelectric/paraelectric transition temperature was demonstrated. The electrocaloric properties were studied as a function of temperature and electric field intensity by direct measurement of the electrocaloric heat flux generated or absorbed upon application or removal of the electric field, using an adiabatic calorimeter developed in the laboratory and adapted to the study of the electrocaloric effect. This preliminary study revealed an electrocaloric effect even at low fields and showed the influence of temperature and samarium concentration on the magnitude of the effect. In addition, an electrocaloric refrigeration demonstrator was developed; the first results made it possible to evaluate and discuss the different technical solutions proposed. We were thus able to show the feasibility of using the electrocaloric effect for refrigeration, validating the idea of exploiting it as an alternative to conventional refrigeration techniques. By its design, this demonstrator can also be considered as a test bench for optimising material properties and associated processes for applications of the electrocaloric effect in refrigeration.
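For readers unfamiliar with the quantity being measured, the standard indirect (Maxwell-relation) estimates of the electrocaloric isothermal entropy change and adiabatic temperature change can be written in LaTeX as follows; these are generic textbook relations given for orientation only, not the thesis's own method, which relies on direct heat-flux calorimetry:

\Delta S = \int_{E_1}^{E_2} \left( \frac{\partial P}{\partial T} \right)_{E} \mathrm{d}E ,
\qquad
\Delta T \approx - \int_{E_1}^{E_2} \frac{T}{\rho \, c_E} \left( \frac{\partial P}{\partial T} \right)_{E} \mathrm{d}E

where P is the polarisation, E the applied electric field, \rho the density and c_E the specific heat at constant field.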
APA, Harvard, Vancouver, ISO, and other styles
48

Méausoone, Clémence. "Etude en Interface Air-Liquide de la toxicité des Composés Organiques Volatils lors d’expositions répétées : Cas du toluène, de ses homologues et des émissions issues de son traitement catalytique." Thesis, Littoral, 2019. https://documents.univ-littoral.fr/access/content/group/50b76a52-4e4b-4ade-a198-f84bc4e1bc3c/BULCO/Th%C3%A8ses/Toxicologie/these_Meausoone_Clemence.pdf.

Full text of the source
Abstract:
Toluene is a solvent widely used in manufacturing industries. It belongs to the family of volatile organic compounds (VOCs), many of which have adverse effects on human health and are now classified as carcinogenic, mutagenic and/or toxic for reproduction. To reduce the presence of harmful compounds such as toluene in the air, it is essential to consider substituting them in industrial processes with less toxic compounds and/or to reduce their emissions at the source as much as possible. In this context, the first objective of this research was to study the acute and repeated-exposure toxicity of toluene, of its higher homologues, which can be used as substitution compounds, and of its lower homologue, on human bronchial epithelial cells using an air-liquid interface exposure device. The second objective was to assess the toxicity of gaseous effluents resulting from the degradation of toluene by catalytic oxidation. For this purpose, BEAS-2B cells were exposed for 1 hour per day for 1, 3 or 5 days to benzene, toluene, xylene or mesitylene, as well as to the gaseous effluents obtained after catalytic treatment of toluene. Toxic effects were evaluated through cytotoxicity, the inflammatory response and the gene expression of xenobiotic-metabolising enzymes (XMEs). Exposure of BEAS-2B cells to toluene and its homologues revealed the involvement of metabolic pathways specific to each compound. A significant increase in inflammatory markers was also observed, with higher concentrations after exposure to benzene and xylene compared with the other molecules. Regarding exposure to the gaseous effluents from the catalytic oxidation of toluene, the late expression of genes involved in the metabolism of aromatic organic xenobiotics is consistent with the presence of by-products such as benzene or polycyclic aromatic hydrocarbons. In conclusion, the results obtained in this project show the value of conducting repeated in vitro exposures to detect potential late effects, and the relevance of toxicological validation of catalytic systems before their scale-up to industrial pilots.
APA, Harvard, Vancouver, ISO, and other styles
49

Dib, Hadi. "Traitement catalytique des émissions issues de la combustion de la biomasse." Thesis, Littoral, 2019. https://documents.univ-littoral.fr/access/content/group/50b76a52-4e4b-4ade-a198-f84bc4e1bc3c/BULCO/Th%C3%A8ses/Toxicologie/These_DIB_Hadi.pdf.

Full text of the source
Abstract:
Biomass burning, in particular wood burning, is an attractive alternative to the use of fossil fuels for energy supply, as it is renewable and does not contribute any additional CO₂ emission to the atmosphere. However, it is known that heating appliances using biomass generate large amounts of Volatile Organic Compounds (VOCs) and carbon monoxide (CO) during the combustion cycle. Catalytic post-treatment is one of the most promising technologies to limit the emissions of these pollutants. This project aims to develop active and selective catalytic materials with enhanced redox properties in order to achieve the total oxidation of VOCs and CO at low temperature. Noble-metal-based catalysts are considered good candidates for such oxidation reactions; however, they are very expensive for adaptation to domestic heating devices. The objective of our work is therefore the synthesis and development of innovative and cheaper catalytic materials composed of transition metal oxides that could be used as alternatives to noble metal catalysts. In order to obtain efficient oxides, the hydrotalcite route was chosen for the synthesis of the catalysts. The beneficial effect of adding cerium to MgAl-O and CuAl-O oxides on the oxidation of toluene and/or CO was demonstrated, and a relationship between the reducibility and the activity of these solids for these reactions was identified. For the MgAlCe-O catalysts, a beneficial effect on the conversion of toluene in the presence of CO was observed: toluene oxidation was shifted to lower temperatures in the presence of CO. In contrast, no effect on toluene conversion was observed for the CuAlCe-O materials; however, a significant effect on the conversion of CO in the presence of toluene was revealed. In brief, a CuAlCe-O type oxide with high activity and stability was synthesised for the destruction of VOC and CO mixtures. In addition, the advantage of using the hydrotalcite route to synthesise these CuAlCe-O oxides was verified by comparison with other synthesis routes. The high activity of the CuAlCe-O catalyst can be attributed to a synergistic effect between the copper and cerium elements.
APA, Harvard, Vancouver, ISO, and other styles
50

Fayad, Layal. "Caractérisation de la nouvelle chambre de simulation atmosphérique CHARME et étude de la réaction d’ozonolyse d’un COV biogénique, le γ-terpinène". Thesis, Littoral, 2019. https://documents.univ-littoral.fr/access/content/group/50b76a52-4e4b-4ade-a198-f84bc4e1bc3c/BULCO/Th%C3%A8ses/LPCA/These_Fayad_Layal.pdf.

Full text of the source
Abstract:
The study of atmospheric processes is among the central topics of current environmental research. The most direct and meaningful way to investigate the transformation of pollutants and the formation of aerosols in the atmosphere is to simulate these processes under controlled and simplified conditions. In this regard, a new simulation chamber, CHARME (CHamber for the Atmospheric Reactivity and the Metrology of the Environment), has been designed at the Laboratory of Physico-Chemistry of the Atmosphere (LPCA) of the University of Littoral Côte d’Opale (ULCO). CHARME is also dedicated to the development and validation of new spectroscopic approaches for the metrology of atmospheric species, including gases, particles and radicals. The first aim of this research was to characterise all the technical, physical and chemical parameters of this new chamber and to optimise the methods for studying the atmospheric reactivity of volatile organic compounds (VOCs) and simulating the formation of secondary organic aerosols (SOA). The results of numerous experiments and tests show that CHARME is a suitable tool to reproduce chemical reactions occurring in the troposphere. The second objective was to investigate the reaction of the biogenic VOC γ-terpinene with ozone. The rate coefficient at (294 ± 2) K and atmospheric pressure was determined and the gas-phase oxidation products were identified. The physical state and hygroscopicity of the secondary organic aerosols were also studied. To our knowledge, this work represents the first study of SOA formation from the ozonolysis of γ-terpinene.
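As background to the kinetic measurement mentioned above, chamber studies commonly obtain such a rate coefficient from the second-order rate law and, with ozone in large excess, its pseudo-first-order integrated form; the LaTeX relations below are the generic textbook expressions and are not necessarily the exact method used in this thesis:

-\frac{\mathrm{d}[\mathrm{VOC}]}{\mathrm{d}t} = k \, [\mathrm{O_3}] \, [\mathrm{VOC}] ,
\qquad
\ln \frac{[\mathrm{VOC}]_0}{[\mathrm{VOC}]_t} = k \, [\mathrm{O_3}] \, t \quad \text{(for } [\mathrm{O_3}] \gg [\mathrm{VOC}] \text{ and approximately constant)}

where [VOC] here stands for the γ-terpinene concentration.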
APA, Harvard, Vancouver, ISO, and other styles