Journal articles on the topic 'Copyright – Electronic information resources – Copyright and electronic data processing'

To see the other types of publications on this topic, follow the link: Copyright – Electronic information resources – Copyright and electronic data processing.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 30 journal articles for your research on the topic 'Copyright – Electronic information resources – Copyright and electronic data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Shuman, Jill M. "Copyright and open access in the life sciences: a researcher's guide to sharing and publishing scientific information." Emerging Topics in Life Sciences 2, no. 6 (December 21, 2018): 779–84. http://dx.doi.org/10.1042/etls20180154.

Full text
Abstract:
Copyright provides the creator of an original work, such as a journal article or a scientific poster, with exclusive rights to authorize the reproduction and sharing of copies of the work. Issues regarding copyright have become more prominent as digital technologies have made copying and distributing information easier. In a research environment, there is ample opportunity to share print and electronic resources among colleagues, which may represent noncompliance with copyright law. The desire to remove the paywall from the published literature has led to several versions of open access (OA), differentiated by the fees charged to the author as article processing charges, where the article is stored, and when the published article becomes freely available as OA. A number of government agencies and major research funders in the U.S.A. and the EU have implemented specific guidelines as to where and how their funded research can be published. Although OA publications can be read for free, they are still subject to various license limitations regarding sharing and reuse.
2

Chen, Yueh-Peng, Tzuo-Yau Fan, and Her-Chang Chao. "WMNet: A Lossless Watermarking Technique Using Deep Learning for Medical Image Authentication." Electronics 10, no. 8 (April 14, 2021): 932. http://dx.doi.org/10.3390/electronics10080932.

Full text
Abstract:
Traditional watermarking techniques extract the watermark from a suspected image, allowing the copyright information regarding the image owner to be identified by the naked eye or by similarity estimation methods such as bit error rate and normalized correlation. However, this process should be more objective. In this paper, we implemented a deep learning model, called WMNet, that can accurately identify watermark copyright information. In the past, establishing deep learning models required collecting a large amount of training data. When constructing WMNet, we implemented a simulated process to generate a large number of distorted watermarks, and then collected them to form a training dataset. However, not all watermarks in the training dataset could properly provide copyright information. Therefore, according to the set restrictions, we divided the watermarks in the training dataset into two categories; consequently, WMNet could learn and identify the copyright information that the watermarks contained, so as to assist in the copyright verification process. Even if the retrieved watermark information was incomplete, the copyright information it contained could still be interpreted objectively and accurately. The results show that the method proposed by this study is relatively effective.
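The bit error rate and normalized correlation mentioned above are standard similarity estimates for comparing an extracted watermark with the original. The following minimal Python sketch (not the WMNet model itself, and using a hypothetical 4x4 binary logo) illustrates how the two measures are typically computed:

```python
import numpy as np

def bit_error_rate(original, extracted):
    """Fraction of watermark bits that differ between two binary arrays."""
    a = np.asarray(original, dtype=np.uint8).ravel()
    b = np.asarray(extracted, dtype=np.uint8).ravel()
    return float(np.mean(a != b))

def normalized_correlation(original, extracted):
    """Normalized correlation between two watermark arrays (1.0 = identical)."""
    a = np.asarray(original, dtype=np.float64).ravel()
    b = np.asarray(extracted, dtype=np.float64).ravel()
    return float(np.sum(a * b) / (np.sqrt(np.sum(a * a)) * np.sqrt(np.sum(b * b))))

# Hypothetical 4x4 binary logo and a slightly distorted extraction of it.
logo = np.array([[1, 0, 1, 0],
                 [0, 1, 1, 1],
                 [1, 1, 0, 0],
                 [0, 0, 1, 1]], dtype=np.uint8)
noisy = logo.copy()
noisy[0, 0] ^= 1  # one flipped bit
print(bit_error_rate(logo, noisy), normalized_correlation(logo, noisy))
```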
3

Sridhar, Balakrishnan, and Vadlamudi Syambabu. "Analysis of watermarking techniques in multimedia communications." Serbian Journal of Electrical Engineering 18, no. 3 (2021): 321–32. http://dx.doi.org/10.2298/sjee2103321s.

Full text
Abstract:
Multimedia information is critical to examine: it is information perceived and interpreted by the human brain, in which 33% of the cortical area concentrates solely on visual information processing. Digital watermarking technology is being adopted to guarantee and support the authentication, security, and copyright protection of such digital data. These algorithms allow different techniques to be combined and extended to prevent copyright problems during transmission. This paper presents a detailed, point-by-point investigation of the definition of watermarking and of the different watermarking applications and strategies used to improve information security.
4

Rishek Hoshi, Alaa, Nasharuddin Zainal, Mahamod Ismail, Abd Al-Razak T. Rahem, and Salim Muhsin Wadi. "A robust watermark algorithm for copyright protection by using 5-level DWT and two logos." Indonesian Journal of Electrical Engineering and Computer Science 22, no. 2 (May 1, 2021): 842. http://dx.doi.org/10.11591/ijeecs.v22.i2.pp842-856.

Full text
Abstract:
Recent growth and development of internet and multimedia technologies have made it easy to upload data; however, in this situation, the protection of intellectual property rights has become a critical issue. Digital media, including videos, audio, and images, are readily distributed, reproduced, and manipulated over these networks, so copyright can easily be lost. The development of various data manipulation tools such as PDF converters and Photoshop editors has also led to digital data copyright problems. Digital watermarking has therefore emerged as an efficient technique for protecting intellectual property rights by providing copyright authentication and protection for digital data. In this technique, a watermark document is integrated into electronic data to prevent unauthorized access. In this paper, a robust watermark algorithm based on a 5-level DWT and two logos is proposed to enhance the copyright protection of images in unsecured media. Our laboratory results validate that the proposed scheme is robust against several sets of attacks and that a high-quality watermarked image is achieved; the algorithm was assessed by computing several evaluation metrics such as PSNR, SNR, MAE, and RMSE.
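The paper's own algorithm uses a 5-level DWT and two logos; the sketch below only illustrates the general idea of additive watermark embedding in the coarsest DWT approximation band. It assumes the PyWavelets library is available, and the strength factor alpha and the stand-in images are hypothetical:

```python
import numpy as np
import pywt  # PyWavelets

def embed_logo_dwt(cover, logo, alpha=0.05, wavelet="haar", level=5):
    """Additively embed a small logo into the coarsest DWT approximation band."""
    coeffs = pywt.wavedec2(cover.astype(float), wavelet, level=level)
    approx = coeffs[0]
    h = min(logo.shape[0], approx.shape[0])
    w = min(logo.shape[1], approx.shape[1])
    marked_approx = approx.copy()
    marked_approx[:h, :w] += alpha * logo[:h, :w]   # scaled additive embedding
    coeffs[0] = marked_approx
    return pywt.waverec2(coeffs, wavelet)

cover = np.random.rand(256, 256) * 255.0          # stand-in for a cover image
logo = np.random.randint(0, 2, (8, 8)) * 255.0    # stand-in for a binary logo
watermarked = embed_logo_dwt(cover, logo)
print(watermarked.shape)
```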
5

Bogdanović, Dragana. "Otvoreni pristup i autorsko pravo = Open Access and Copyright (Author’s Rights)." Bosniaca 21, no. 21 (December 2016): 15–18. http://dx.doi.org/10.37083/bosn.2016.15.

Full text
Abstract:
Digital technologies have made it possible to create archives of electronic works, and these scientific repositories have become publicly available under the open access regime. Open access includes free and universal access to, and use of, the information content of databases and other digital resources on the internet. To protect the interests of the copyright holders of works available in open access, many innovations had to be introduced into copyright law. The best-known open access licence is the Creative Commons Licence (CCL). The first international declaration on open access was the Budapest Open Access Initiative (2002). In 2007, the basic guidelines for applying open access in libraries were established by IFLA (International Federation of Library Associations and Institutions), eIFL (electronic Information for Libraries) and the LCA (Library Copyright Alliance, which comprises the five largest library associations in the USA).
6

Omeluzor, Saturday U., Ugochi Esther Molokwu, Sunday Ikhimeakhu Dika, and Onah Evelyn Anene. "Factors Affecting the Development of E-Library in Universities in Nigeria." Information Impact: Journal of Information and Knowledge Management 13, no. 2 (January 31, 2023): 26–46. http://dx.doi.org/10.4314/iijikm.v13i2.3.

Full text
Abstract:
This study investigated the factors affecting the development of electronic libraries in university libraries in Southern Nigeria. The study adopted a descriptive research design with a population of 107, comprising all the systems librarians, electronic librarians and digital librarians in the federal, state and private universities in Southern Nigeria. An online questionnaire using Google Forms was the main tool for data collection, and a total of 107 librarians responded appropriately. The study revealed that ICT tools, information resources and facilities were used for the development of e-libraries in universities. The findings also showed that there was a general consensus among the respondents concerning the ICT tools and resources used for the development of e-libraries, such as CD-ROMs, wireless networks, interactive boards, office and electrical equipment, information resources (e-books, e-journals, e-newspapers) and subscriptions to databases. The findings further revealed that funding, authentication, the digital preservation process, copyright issues, training, and ease of access were challenges affecting the development of e-libraries in Nigeria. The researchers therefore recommended that universities, colleges of education and polytechnics in Nigeria should endeavor to develop their e-libraries by considering these findings in order to achieve their mandate of delivering quality information services to library patrons.
7

Moiseienko, Stefani M. "The Organizational Principles of the Control over Settlements in Electronic Commerce." Business Inform 9, no. 536 (2022): 67–73. http://dx.doi.org/10.32983/2222-4459-2022-9-67-73.

Full text
Abstract:
The purpose of the article is to improve the organization of the control over settlements in e-commerce by developing an organizational and information model. Being a promising and quite attractive form of doing business, e-commerce is characterized by specific risks: errors in the operation of the applied software, illegal withdrawal of funds, copyright infringement, leakage of corporate information, cybercrime and fraud with financial transactions, violation of personal data protection and their use for personal purposes, etc., which has an impact on the organization of control over settlements in this sphere. It is identified that the improvement of the organizational principles of control over settlements in e-commerce will contribute to a better control measure and will positively affect the overall efficiency of the system of management of an e-commerce entity. As a result of the study, an organizational and information model is developed and substantiated, which takes into account the peculiarities of this sphere of economic activity and consists of six blocks: the purpose and objectives of control over settlements in e-commerce; objects and subjects of control over settlements in e-commerce; the system of economic indicators of control over settlements in e-commerce; the information base of control over settlements in e-commerce; methodical techniques for processing primary (incoming) information of control over settlements in e-commerce; and methodical techniques for generalizing and implementing the results of control over settlements in e-commerce. The use of the presented development will contribute to better preparation of the control measure and the optimization of control in terms of both time and cost parameters. Prospects for further research in this area may be the development of a risk-oriented methodology of control in e-commerce.
8

Michaels, Tamela D., and John D. Lea-Cox. "NurseryWeb—An Information and Communication Page for the Nursery Industry." HortScience 32, no. 3 (June 1997): 540D—540. http://dx.doi.org/10.21273/hortsci.32.3.540d.

Full text
Abstract:
Electronic information systems that take advantage of new technological developments on the Web are key to fulfilling the mission of the extension educator; i.e., to help individuals, families and communities put research-based knowledge to work in improving their lives. Webpages are one key to achieving this goal, but vertical searches using search engines are tedious and inefficient. There is a need for a) rapid and easy access to verifiable information databases and b) the coordination of good information resources that are already available on the Web in a horizontal format. NurseryWeb was developed as an open information resource within a frames environment that enables users to gather information about a variety of nursery-related material; e.g., cultural information, diagnostic criteria for disease and pest identification, data on integrated pest management and marketing data. In addition, a password-protected communication resource within the page provides nurserymen with conferencing and direct email connections to nursery extension specialists through WebChat™, as well as providing time-sensitive data, alerts, and links to professional organizations. A number of critical issues remain unresolved—e.g., the integrity of information links, data and picture copyright issues, and software support. Nonetheless, the ease of use, availability of information in remote areas at relatively low cost, and 24-hr access assure that this type of information provision will become dominant in the future.
9

Malipatil, Manikamma, and D. C. Shubhangi. "An efficient data masking method for encrypted 3D mesh model." Indonesian Journal of Electrical Engineering and Computer Science 24, no. 2 (November 1, 2021): 957. http://dx.doi.org/10.11591/ijeecs.v24.i2.pp957-964.

Full text
Abstract:
The industrial 3D mesh model (3DMM) plays a significant part in the engineering and computer-aided design fields. Thus, protecting the copyright of 3DMMs is one of the major research problems that require significant attention. Further, industries have started outsourcing their 3DMMs to the cloud computing (CC) environment. To preserve privacy, the 3DMMs are encrypted and stored in the cloud computing environment. Thus, building efficient data masking for encrypted 3DMMs is considered an effective solution for masking the information of 3DMMs. First, using a secret key, the original 3DMM is encrypted. Second, without requiring any prior information about the original 3DMM, it is possible to mask information in the encrypted 3D mesh models. Third, the original 3DMMs are reconstructed by extracting the masked information. The existing masking methods are not efficient in providing high information-masking capacity in a reversible manner and are not robust. To overcome these research issues, this work models an efficient data masking (EDM) method that is reversible in nature. Experimental outcomes show that the EDM for 3DMMs attains better performance in terms of peak signal-to-noise ratio (PSNR) and root mean squared error (RMSE) than existing data masking methods. Thus, the EDM model provides a good trade-off between achieving high data-masking capacity and good reconstruction quality of the 3DMM.
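PSNR and RMSE, the metrics used to evaluate the EDM method, are standard quality measures. The short sketch below shows how the two relate; the random arrays and the peak value of 255 (appropriate for 8-bit data, not necessarily for 3D mesh vertex coordinates) are assumptions for illustration:

```python
import numpy as np

def rmse(original, processed):
    """Root mean squared error between two arrays of the same shape."""
    diff = np.asarray(original, dtype=np.float64) - np.asarray(processed, dtype=np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(original, processed, peak=255.0):
    """Peak signal-to-noise ratio in dB, derived directly from the RMSE."""
    e = rmse(original, processed)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

a = np.random.randint(0, 256, (64, 64))
b = np.clip(a + np.random.normal(0, 2, a.shape), 0, 255)
print(rmse(a, b), psnr(a, b))
```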
10

Tang, Ke, Zheng Shan, Chunyan Zhang, Lianqiu Xu, Meng Qiao, and Fudong Liu. "DFSGraph: Data Flow Semantic Model for Intermediate Representation Programs Based on Graph Network." Electronics 11, no. 19 (October 8, 2022): 3230. http://dx.doi.org/10.3390/electronics11193230.

Full text
Abstract:
With the improvement of software copyright protection awareness, code obfuscation technology plays a crucial role in protecting key code segments. As obfuscation technology becomes more complex and diverse, it has spawned a large number of malware variants, which can easily evade detection by anti-virus software. Malicious code detection mainly depends on binary code similarity analysis. However, existing software analysis technologies struggle to deal with increasingly complex obfuscation techniques. To solve this problem, this paper proposes a new obfuscation-resilient program analysis method, which is based on the data flow transformation relationships of the intermediate representation and a graph network model. In our approach, we first construct the data transformation graph (DTG) based on LLVM IR. Then, we design a novel intermediate language representation model based on graph networks, named DFSGraph, to learn the data flow semantics from the DTG. DFSGraph can detect the similarity of obfuscated code by extracting the semantic information of program data flow without deobfuscation. Extensive experiments prove that our approach is more accurate than existing deobfuscation tools when searching for similar functions in obfuscated code.
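The abstract describes building a data transformation graph from LLVM IR and learning data-flow semantics from it. As a rough illustration only (toy three-address instructions rather than real LLVM IR, and not the paper's DTG construction), def-use edges can be collected into a directed graph as follows, assuming the networkx library:

```python
import networkx as nx

# Toy three-address instructions: (result, operation, operands).
# This is NOT LLVM IR and not the paper's DTG; it only illustrates
# collecting def-use (data flow) edges into a directed graph.
instructions = [
    ("%1", "load", ["%a"]),
    ("%2", "load", ["%b"]),
    ("%3", "add",  ["%1", "%2"]),
    ("%4", "mul",  ["%3", "%1"]),
]

dfg = nx.DiGraph()
for result, op, operands in instructions:
    dfg.add_node(result, op=op)
    for operand in operands:
        dfg.add_edge(operand, result)  # data flows from operand into the result

print(sorted(dfg.edges()))
```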
11

Pallaw, Vijay Krishna, Kamred Udham Singh, Ankit Kumar, Tikam Singh, Chetan Swarup, and Anjali Goswami. "A Robust Medical Image Watermarking Scheme Based on Nature-Inspired Optimization for Telemedicine Applications." Electronics 12, no. 2 (January 9, 2023): 334. http://dx.doi.org/10.3390/electronics12020334.

Full text
Abstract:
Medical images and patient information are routinely transmitted to a remote radiologist to assist in diagnosis. It is critical in e-healthcare systems to ensure that data are accurately transmitted. Medical images of a person's body can be misused against them in many ways once transmitted. Copyright and intellectual property laws prohibit the unauthorized use of medical images. Digital watermarking is used to prove the authenticity of medical images before diagnosis. In this paper, we propose a hybrid watermarking scheme using the Slantlet transform, randomized singular value decomposition, and a nature-inspired optimization technique (the Firefly algorithm). The watermark image is encrypted using the XOR encryption technique. Extensive testing reveals that our approach outperforms existing methods based on the NC, SSIM, and PSNR. The SSIM and NC values of the watermarked image and the extracted watermark are close to or equal to 1 at a scaling factor of 0.06, and the PSNR of the proposed scheme lies between 58 dB and 59 dB, which shows the better performance of the scheme.
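The abstract states that the watermark image is encrypted with an XOR technique before embedding. A generic, reversible XOR encryption of a binary watermark might look like the sketch below; the seeded pseudo-random key stream is a hypothetical detail for illustration, not taken from the paper:

```python
import numpy as np

def xor_with_keystream(bits, seed=42):
    """XOR a binary watermark with a key stream; applying it twice restores the input."""
    rng = np.random.default_rng(seed)          # hypothetical key derivation: a seeded PRNG
    key = rng.integers(0, 2, size=bits.shape, dtype=np.uint8)
    return np.bitwise_xor(bits, key)

watermark = np.random.randint(0, 2, (32, 32)).astype(np.uint8)
encrypted = xor_with_keystream(watermark)
decrypted = xor_with_keystream(encrypted)      # same key stream, so XOR undoes itself
print(np.array_equal(watermark, decrypted))    # True
```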
12

Wang, Chunpeng, Yushuo Liu, Zhiqiu Xia, Qi Li, Jian Li, Xiaoyu Wang, and Bin Ma. "CWAN: Covert Watermarking Attack Network." Electronics 12, no. 2 (January 6, 2023): 303. http://dx.doi.org/10.3390/electronics12020303.

Full text
Abstract:
Digital watermarking technology is widely used in today's copyright protection, data monitoring, and data tracking. Digital watermarking attack techniques are designed to corrupt the watermark information contained in the watermarked image (WMI) so that the watermark information cannot be extracted effectively or correctly. Traditional digital watermarking attack technology is mature and capable of attacking the watermark information embedded in the WMI; however, it also causes considerable damage to the image's visual quality, which is detrimental to the protection of the original carrier and defeats the purpose of a covert attack on the WMI. To advance watermarking attack technology, we propose a new covert watermarking attack network (CWAN) based on a convolutional neural network (CNN) for removing low-frequency watermark information from the WMI while minimizing the damage caused to the WMI through the use of deep learning. We import the preprocessed WMI into the CWAN, obtain the residual feature images (RFI), and subtract the RFI from the WMI to attack the image watermark. At this point, the WMI's watermark information is effectively removed, allowing for an attack on the watermark information while retaining the greatest possible degree of image detail and other features. The experimental results indicate that the attack method is capable of effectively removing the watermark information while retaining the original image's texture and details, and that its ability to attack the watermark information is superior to that of most traditional watermarking attack methods. Compared with neural network watermarking attack methods, it has better performance, and the attack performance metrics are improved by tens to hundreds of percent in varying degrees, indicating that it is a new covert watermarking attack method.
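The described attack pipeline feeds the preprocessed WMI into the network, predicts residual feature images (RFI), and subtracts them from the WMI. The untrained PyTorch toy network below only mirrors that data flow; it is not the CWAN architecture, whose layers are not given in the abstract, and the input tensor is hypothetical:

```python
import torch
import torch.nn as nn

class TinyResidualAttack(nn.Module):
    """Toy stand-in for a residual-predicting CNN; NOT the CWAN architecture."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, wmi):
        rfi = self.body(wmi)   # predicted residual feature image
        return wmi - rfi       # attacked image = watermarked image minus residual

net = TinyResidualAttack()
wmi = torch.rand(1, 1, 64, 64)   # hypothetical grayscale watermarked image batch
attacked = net(wmi)
print(attacked.shape)            # torch.Size([1, 1, 64, 64])
```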
13

Mavodza, Judith. "The impact of cloud computing on the future of academic library practices and services." New Library World 114, no. 3/4 (March 22, 2013): 132–41. http://dx.doi.org/10.1108/03074801311304041.

Full text
Abstract:
Purpose – The purpose of this paper is to discuss issues involved in navigating the modern information environment where the relevance of cloud computing is unavoidable. This is a way of shifting from the hardware and software demands of storing and organizing data to information access concerns. That is because, with the exponential growth in information sources and all accompanying complexities, the limited capacity of libraries to host their own resources in their entirety necessitates opting for alternatives in the cloud. Design/methodology/approach – A review of current literature about the topic was performed. Findings – The literature reveals that libraries are currently using the cloud for putting together user resources, i.e. using Software as a Service (SaaS), as in library catalogues, WorldCat, Googledocs, and aggregated subject gateways like SUMMON, and others; the web Platform as a Service (PaaS), as in the use of GoogleApp Engine; or Infrastructure as a Service (IaaS), as in the use of D-Space, FEDORA, and others. The cloud is confirmed as a facilitator in storing and accessing information, in addition to providing a unified web presence with reduced local storage capacity challenges. Originality/value – The value of these findings is to remind librarians of the shift in focus towards which devices provide the easiest access to data and applications. This is one of the reasons why, in many instances, they are currently having to address issues relating to the use of electronic media tools such as smartphones, iPads, e-book readers, and other handheld devices. The largely borderless information resources also bring to the forefront considerations about digital rights management, fair use, information security, ownership and control of data, privacy, scholarly publishing, copyright guidance, and licensing that the librarian has to be knowledgeable about. It has become necessary for librarians who make use of commercial cloud services to be conversant with the implications for institutional data. To avert the ever-present dangers and risks involving cyber-security, it is usually practical for institutions to keep policies, procedures, fiscal, and personnel data in private clouds that have carefully crafted access permissions. Being aware of these implications enables thoughtful, adaptive planning strategies for the future of library practice and service.
14

Daoui, Achraf, Mohamed Yamni, Hicham Karmouni, Mhamed Sayyouri, Hassan Qjidaa, Saad Motahhir, Ouazzani Jamil, et al. "Efficient Biomedical Signal Security Algorithm for Smart Internet of Medical Things (IoMTs) Applications." Electronics 11, no. 23 (November 23, 2022): 3867. http://dx.doi.org/10.3390/electronics11233867.

Full text
Abstract:
Due to the rapid development of information and emerging communication technologies, developing and implementing solutions in the Internet of Medical Things (IoMTs) field have become relevant. This work developed a novel data security algorithm for deployment in emerging wireless biomedical sensor network (WBSN) and IoMTs applications while exchanging electronic patient folders (EPFs) over unsecured communication channels. These EPF data are collected using wireless biomedical sensors implemented in WBSN and IoMTs applications. Our algorithm is designed to ensure a high level of security for confidential patient information and verify the copyrights of bio-signal records included in the EPFs. The proposed scheme involves the use of Hahn’s discrete orthogonal moments for bio-signal feature vector extraction. Next, confidential patient information with the extracted feature vectors is converted into a QR code. The latter is then encrypted based on a proposed two-dimensional version of the modified chaotic logistic map. To demonstrate the feasibility of our scheme in IoMTs, it was implemented on a low-cost hardware board, namely Raspberry Pi, where the quad-core processors of this board are exploited using parallel computing. The conducted numerical experiments showed, on the one hand, that our scheme is highly secure and provides excellent robustness against common signal-processing attacks (noise, filtering, geometric transformations, compression, etc.). On the other hand, the obtained results demonstrated the fast running of our scheme when it is implemented on the Raspberry Pi board based on parallel computing. Furthermore, the results of the conducted comparisons reflect the superiority of our algorithm in terms of robustness when compared to recent bio-signal copyright protection schemes.
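The scheme encrypts confidential patient information and feature vectors as a QR code using a proposed two-dimensional modified chaotic logistic map. For orientation only, the sketch below shows the classic one-dimensional logistic map used as a byte key stream; the parameters x0 and r and the stand-in payload are hypothetical, and the paper's 2-D modified map is not reproduced here:

```python
import numpy as np

def logistic_keystream(length, x0=0.3741, r=3.99):
    """Byte key stream from the classic 1-D logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    x, stream = x0, []
    for _ in range(length):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) % 256)
    return np.array(stream, dtype=np.uint8)

payload = np.frombuffer(b"patient-id:0001;hr:72", dtype=np.uint8)  # stand-in for QR code bytes
key = logistic_keystream(len(payload))
cipher = np.bitwise_xor(payload, key)
plain = np.bitwise_xor(cipher, key)     # XOR with the same key stream decrypts
print(plain.tobytes())
```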
15

Sullo, Elaine. "Engineering Faculty Indicate High Levels of Awareness and Use of the Library but Tend to Consult Google and Google Scholar First for Research Resources." Evidence Based Library and Information Practice 11, no. 3 (September 26, 2016): 102. http://dx.doi.org/10.18438/b84k98.

Full text
Abstract:
A Review of: Zhang, L. (2015). Use of library services by engineering faculty at Mississippi State University, a large land grant institution. Science & Technology Libraries, 34(3), 272-286. http://dx.doi.org/10.1080/0194262X.2015.1090941 Objective – To investigate the engineering faculty’s information-seeking behaviour, experiences, awareness, and use of the university library. Design – Web-based survey questionnaire. Setting – The main campus of a state university in the United States of America. Subjects – 119 faculty members within 8 engineering departments. Methods – An email invitation to participate in a 16-item electronic survey questionnaire, with questions related to library use, was sent in the spring of 2015 to 119 engineering faculty members. Faculty were given 24 days to complete the survey, and a reminder email was sent 10 days after the original survey invitation. Main Results – Thirty-eight faculty members responded to the survey, representing a response rate of 32%. Overall, faculty had a high level of use and awareness of both online and physical library resources and services, although their awareness of certain scholarly communication services, such as data archiving and copyright advisory, was significantly lower. Faculty tend to turn to Google and Google Scholar when searching for information rather than turning to library databases. Faculty do not use social media to keep up with library news and updates. The library website, as well as liaison librarians, were cited as the primary sources for this type of information. Conclusions – The researcher concludes that librarians need to do a better job of marketing library resources, such as discipline-specific databases, as well as other library search tools. Because faculty use web search engines as a significant source of information, the author proposes further research on this behaviour, and suggests more action to educate faculty on different search tools, their limitations, and effective use. As faculty indicated a general lack of interest in integrating information literacy into their classes, the researcher notes that librarians need to find ways to persuade faculty that this type of integrated instruction is beneficial for students’ learning and research needs. Faculty were aware of the library liaison program, so this baseline relationship between faculty and librarian can serve as an opportunity to build upon current liaison services and responsibilities.
16

Johnston, Nicole, and Rupert Williams. "Skills and knowledge needs assessment of current and future library professionals in the state of Qatar." Library Management 36, no. 1/2 (January 12, 2015): 86–98. http://dx.doi.org/10.1108/lm-10-2014-0120.

Full text
Abstract:
Purpose – The purpose of this paper is to investigate and document the skills and knowledge needs of future library professionals in Qatar and to use the outcomes of this research to help develop or refine focused library and information studies course curricula that meet the needs of the local workforce and also guide or improve national or local professional development programmes. Design/methodology/approach – A skills and knowledge needs assessment survey was sent to library professionals, LIS students and library managers in Qatar. A total of 109 respondents completed the survey, a representation of around 25 per cent of the current LIS workforce in Qatar. Findings – Findings indicated that respondents felt that the most needed future job roles included more client focused positions such as research librarians, information services librarians and subject librarians, as well as technical roles such as Arabic cataloguers, electronic resources librarians and system librarians. The largest amount of needed positions was also felt to be in school libraries. Respondents to the survey also felt that there was a lack of opportunities for professional development in Qatar and that the most needed area of skills training was information literacy, followed by copyright training and technical skills including RDA and Arabic cataloguing. One further finding identified from the survey was the concern felt by respondents about the lack of a professional body in Qatar that represented LIS professionals. Practical implications – This paper provides data on future roles, skills and knowledge needed by library professionals working in international and culturally diverse workforces. It also provides findings that can be used to develop LIS curriculum and professional development programmes in international LIS environments. Originality/value – A detailed needs assessment of this kind has not previously been undertaken in Qatar. The library and information sector in Qatar is an emerging field with a largely international workforce. This situation provides a distinct perspective on the needs of an emerging library sector that is a blend of different cultures, workplace practices and differing expectations and understandings of the role and skills needed to be a LIS professional.
17

Istomin, V. G. "Antimonopoly regulation of the activities of digital companies and the operation of Internet platforms in Russia and in the European Union." Law Enforcement Review 6, no. 2 (June 22, 2022): 120–33. http://dx.doi.org/10.52468/2542-1514.2022.6(2).120-133.

Full text
Abstract:
The subject. The article examines the antimonopoly regulation of relations arising in the course of the activities of modern companies that ensure the operation of certain digital online platforms. The development of digital information technologies has led to the emergence of various new forms of economic and social communications. These forms include, among other things, digital technological platforms operating on the Internet and representing a kind of platform within which the information interaction of various subjects takes place, related to the implementation of their professional activities or interpersonal communication. In this regard, the law faces the task of ensuring effective regulation of relations that are formed in the context of the development of electronic market systems and digital services. An important role in this should be assigned to antimonopoly legislation, since the possession of large data sets and the latest information technologies can lead to companies trying to use their resources to violate the rights of other subjects. The aim of the study is to determine the legal essence of the Internet platform and to identify possible features and limits of antimonopoly regulation of the activities of companies that ensure their work, taking into account current Russian and foreign legislation and law enforcement practice in this area. Research methods are formal-logical interpretation, the systemic method and comparative analysis. The main results, scope of application. A digital technological platform is a complex phenomenon that includes various results of intellectual activity, both subject to and not subject to legal protection, including computer programs and databases, as well as the technical means ensuring the functioning of the digital platform. In addition, the analysis of Russian antitrust legislation and the theory of civil law led to the conclusion that the existing exemptions from the scope of the rules on the prohibition of monopolistic activities, established for holders of exclusive intellectual rights, could significantly complicate the application of antitrust rules to digital companies that hold the copyright to results of intellectual activity forming part of an Internet platform. At the same time, the currently established law enforcement practice actually follows the path of limiting these antimonopoly immunities, despite their legislative consolidation, which is hardly justified. On the other hand, the existence of broad antitrust immunities is also unfounded. In order to bring antimonopoly legislation in line with the needs of relations emerging from digitalization, antitrust immunities should be subject to limitations. Conclusions. There are new criteria for determining the dominant position of digital companies in the relevant markets, which include network effects, large volumes of user data and significant barriers to entry into the market.
18

Schulte, Stephanie J. "Information Professional Job Advertisements in the U.K. Indicate Professional Experience is the Most Required Skill." Evidence Based Library and Information Practice 4, no. 2 (June 14, 2009): 158. http://dx.doi.org/10.18438/b8ts51.

Full text
Abstract:
A Review of: Orme, Verity. “You will be…: A Study of Job Advertisements to Determine Employers’ Requirements for LIS Professionals in the UK in 2007.” Library Review 57.8 (2008): 619-33. Objective –To determine what skills employers in the United Kingdom (U.K.) want from information professionals as revealed through their job advertisements. Design – Content analysis, combining elements of both quantitative and qualitative content analysis. Orme describes it as “a descriptive non-experimental approach of content analysis” (62). Setting – Data for this study were obtained from job advertisements in the Chartered Institute of Library and Information Professional’s (CILIP) Library and Information Gazette published from June 2006 through May 2007. Subjects – A total of 180 job advertisements. Methods – Job advertisements were selected using a random number generator, purposely selecting only 15 advertisements per first issue of each month of the Library and Information Gazette (published every two weeks). The author used several sources to create an initial list of skills required by information professionals, using such sources as prior studies that examined this topic, the Library and Information Science Abstracts (LISA) database thesaurus, and personal knowledge. Synonyms for the skills were then added to the framework for coding. Skills that were coded had to be noted in such a way that the employer plainly stated the employee would be a certain skill or attribute or they were seeking a skill or a particular skill was essential or desirable. Skills that were stated in synonymous ways within the same advertisement were counted as two incidences of that skill. Duties for the position were not counted unless they were listed as a specific skill. Data were all coded by hand and then tallied. The author claims to have triangulated the results of this study with the literature review, the synonym ring used to prepare the coding framework, and a few notable studies. Main Results – A wide variety of job titles was observed, including “Copyright Clearance Officer,” “Electronic Resources and Training Librarian,” and “Assistant Information Advisor.” Employers represented private, school, and university libraries, as well as legal firms and prisons. Fifty-nine skills were found a total of 1,021 times across all of the advertisements. Each advertisement averaged 5.67 requirements. These skills were classified in four categories: professional, generic, personal, and experience. The most highly noted requirement was professional experience, noted 129 times, followed by interpersonal/communication skills (94), general computing skills (63), enthusiasm (48), and team-working skills (39). Professional skills were noted just slightly more than generic and personal skills in the top twenty skills found. Other professional skills that were highly noted were customer service skills (34), chartership (30), cataloguing/classification/metadata skills (25), and information retrieval skills (20). Some notable skills that occurred rarely included Web design and development skills (6), application of information technology in the library (5), and knowledge management skills (3). Conclusion – Professional, generic, and personal qualities were all important to employers in the U.K.; however, without experience, possessing these qualities may not be enough for new professionals in the field.
19

Ricci, Gabriela, and Braian Veloso. "Periódicos científicos: sua inserção em materiais didáticos no curso de especialização em Educação e Tecnologias da UFSCar (Scientific journals: its insertion in didactics materials in the specialization course on Educational Technologies from UFSCar)." Revista Eletrônica de Educação 14 (March 3, 2020): 3708067. http://dx.doi.org/10.14244/198271993708.

Full text
Abstract:
This paper analyzes the use of electronic scientific papers and journals in the didactic material of the specialization course on Educational Technologies at UFSCar. The bibliographical analysis focused on the development and legitimacy of scientific journals. The purpose of the documentary analysis of the didactic material, composed of 48 volumes, was to verify the kinds of material cited and suggested in the multimedia study guides. The data show that books are the most cited material, whereas other electronic materials are the most suggested kind of complementary material. The international scientific community considers papers published in scientific journals the main and most legitimate means of sharing scientific knowledge, yet they are only the third most cited and suggested kind of material by the author-lecturers. This result indicates a dissociation between the course's didactic material and its innovative proposal of hybrid, flexible and integrated education, as well as from international trends in the legitimation of scientific knowledge. Such dissociation, commonly found in the human sciences, may jeopardize the quality of Brazilian science and its integration into the international scientific community. Keywords: Scientific journals, Legitimacy, Didactics materials, Educational technology.
20

Taljard, Elsabé, Danie Prinsloo, and Michelle Goosen. "Creating electronic resources for African languages through digitisation: a technical report." Journal of the Digital Humanities Association of Southern Africa (DHASA) 4, no. 01 (January 26, 2023). http://dx.doi.org/10.55492/dhasa.v4i01.4441.

Full text
Abstract:
The need for electronic resources for (under-resourced) African languages is an often stated one. These resources are needed for language research in general, and more specifically for the development of Human Language Technology (HLT) applications such as machine translation, speech recognition, electronic dictionaries, spelling and grammar checkers, and optical character recognition. These technologies rely on large quantities of high-quality electronic data. Digitisation is one of the strategies that can be used to collect such data. For the purpose of this paper, digitisation is understood as the conversion of analogue text, audio and video data into digital form, as well as the provision of born digital data that is currently not available in a format that enables downstream processing. There is a general perception that the African languages are under-resourced with regard to sufficient digitisation tools to function effectively in the modern digital world. Our paper is presented as a technical report, detailing the tools, procedures, best practices and standards that are utilised by the UP digitisation node to digitise text, audio and audio-visual material for the African languages. The digitisation effort is part of the South African Digital Languages Resources (SADiLaR) project (https://www.sadilar.org/index.php/en/), funded by the Department of Science and Innovation. Our report is based on a best practices document, developed through the course of our digitisation project and forms part of the deliverables as per contractual agreement between the UP digitisation node and the SADiLaR Hub. The workflow as explained in this document was designed with this specific project in mind; software and hardware utilised were also selected based on the constraints with regard to capacity and available technical skills in mind. We motivate our choice of Optical Character Recognition (OCR) software by referring to an earlier experiment in which we evaluated three commercially available OCR programmes. We did not attempt a full-scale evaluation of all available OCR software, but rather focused on selecting one that renders high quality outputs. We also reflect on one of the challenges specific to our project, i.e. copyright clearance. This is particularly relevant with regard to published material. In the absence of newspapers for specifically the African Languages (isiZulu being a notable exception), the biggest portion of textual material available for digitisation consists of printed material such as textbooks, novels, dramas, short stories and other literary genres. The digitisation process is driven by the availability of material for the different languages. Furthermore, obtaining copyright clearance from publishers is a prerequisite for digitisation and especially for the release of any digitised text data for further use and/or processing. Having information on a relatively small-scale digitisation workflow and best practices readily available will enable other interested parties to participate in the digitisation effort, thus contributing to the collection of electronic data for the African languages.
21

Nechaeva, T. V. "International conference "Plagiarism Detection": does an academic journal need plagiarism checking?" Почвы и окружающая среда, 2022. http://dx.doi.org/10.31251/pos.v5i1.177.

Full text
Abstract:
This article presents an overview of the Vth and VIth annual International Scientific and Practical Conference "Plagiarism Detection" (hereinafter – the conference; website: ozconf.ru), held in October 2020 and 2021. The main aim of this major professional platform is to create an expert environment for discussing plagiarism in the educational and scientific communities in the CIS countries, to inform about novel technologies for text processing, text mining and originality assessment, and to discuss the use of electronic resources in education and science to facilitate the networking of specialists. The article reviews some definitions and presents examples of unethical behaviour in science, such as plagiarism, self-plagiarism, duplication (i.e. multiple publication), fraud and data fabrication. By using such practices, some unscrupulous authors claim credit for other scientists' achievements and results, present outdated results as new, perform data hacking and manipulate the research process to obtain an anticipated result. Altogether such practices build the illusion of advancing knowledge and publication activity, providing the authors with access to financial support, salary increases, career promotion, and bogus authority and standing in the scientific environment. The article also presents some legal aspects concerning authorship and plagiarism in the Russian academic community, reiterating that plagiarism is a criminal law category and can be regarded as a crime if it causes substantial damage to authors or copyright holders. Revealing errors, typos, plagiarism and other infringements of ethical norms and regulations in published works erodes the credibility and authority of journals and publishers, and necessitates special checks, errata corrections and the possible retraction of publications. The situation, however, looks different if the victimized scientists seek compensation for the violation of their copyright and property ownership rights. The article lists examples of and the main reasons for retracting fraudulent publications. The aim of retraction is to correct the published information and ensure its validity, rather than to punish the authors. In the Russian research publishing environment, publication retraction has so far been rather scarce, yet globally the practice of retraction is rather widespread. The article also summarizes the main features distinguishing unethical publishers, focusing on similarities and differences between journal-clones and journal-predators. It is estimated that there is currently information about hundreds of hijacked journals and more than one hundred thousand authors of publications in journal-clones. The article also presents some information pertaining to the history of the "Antiplagiat" company and the development of its services, drawing attention to different points of view about the need to use the "Antiplagiat" system or other software for checking educational and research publications for plagiarism. It is concluded that no search engine is ideal or can substitute for humans in crucial decision-making about a publication. Only an expert in the field, who can judge adequately the exact substantive nature of text duplications and assess the scale of borrowing, should make such decisions. Shifting responsibility from a human being to the "Antiplagiat" system by citing the rate of borrowings provided by the system is an unethical practice as well.
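The article does not disclose how the "Antiplagiat" system measures borrowing, but the general idea behind such automated checks can be illustrated with word n-gram ("shingle") overlap between two texts. A toy Python sketch with hypothetical example strings follows; a real system would add normalization, indexing and far more sophisticated matching:

```python
def shingles(text, n=5):
    """Set of overlapping word n-grams ('shingles') from a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard_overlap(a, b, n=5):
    """Share of n-gram shingles two texts have in common (0 = none, 1 = identical)."""
    sa, sb = shingles(a, n), shingles(b, n)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

doc1 = "plagiarism is the reuse of another author's text without attribution"
doc2 = "self plagiarism is the reuse of another author's text in a new paper"
print(round(jaccard_overlap(doc1, doc2, n=3), 3))
```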
22

Terentiev, Serhii. "Electronic resources in the communications system of public libraries in Ukraine." Scientific journal “Library Science. Record Studies. Informology”, no. 2 (September 6, 2022). http://dx.doi.org/10.32461/2409-9805.2.2022.263807.

Full text
Abstract:
The purpose of the work is to systematize and synthesize data concerning the value of electronic resources in the system of social communications of public libraries in Ukraine. The methodological basis of this research comprises critical, chronological, structural and systemic approaches to the analysis of the chosen subject. The scientific novelty consists in a comprehensive study of the value of electronic resources for the effective development of a modern library institution under quarantine restrictions. Conclusions. The considerable popularity of the electronic resources of public libraries is explained by the fact that the electronic format provides easy, quick and continuous access to a large array of specialized data. At the same time, there are also considerable problems with the regulatory framework, in particular those connected with copyright compliance in the text of a scientific electronic publication. The electronic resources of a public library have to optimize quick access for scholars and other users to relevant domestic scientific literature. Electronic editions in the abstract database of public libraries can be presented as a collection of electronic documents in reviewed form. The place of electronic resources in the library's data system is defined by the functions of these editions, which include general, special and specific functions. Among the characteristics inherent in the electronic resources of public libraries are multifunctionality, targeting and high quality. In the longer term, it would be desirable to give electronic resources international status, to increase their citation index internationally, and so on. An important component of a public library's electronic resources is the library's website, which should provide the following sections: general information; curricula vitae of outstanding librarians and of the library's management; the structure of the library, its achievements, awards and work prospects; a news feed; resources; methodical support; electronic delivery of documents; and sections for readers, librarians and the website administration. Thus, the website of a public library should be oriented towards all participants in the information and communication relations created within a particular library institution. Keywords: electronic resources, public libraries, library institutions, library science, document science, normative legal act, functions.
23

Basha, Shaik Hedayath, and Jaison B. "A novel secured Euclidean space points algorithm for blind spatial image watermarking." EURASIP Journal on Image and Video Processing 2022, no. 1 (September 23, 2022). http://dx.doi.org/10.1186/s13640-022-00590-w.

Full text
Abstract:
Digital raw images obtained from the data sets of various organizations require authentication, copyright protection, and security with simple processing. A new Euclidean space points algorithm is proposed to authenticate the images by embedding binary logos in the digital images in the spatial domain. The Diffie–Hellman key exchange protocol is implemented along with the Euclidean space axioms to maintain security for the proposed work. The proposed watermarking methodology is tested on a standard set of raw grayscale and RGB color images. The watermarked images are sent via email, WhatsApp, and Facebook and analyzed. Standard watermarking attacks are also applied to the watermarked images and analyzed. The findings show that there are no image distortions in the communication media of email and WhatsApp. However, on the Facebook platform, raw images undergo compression, and exponential noise is observed on the digital images. Authentication and copyright protection are tested on the processed Facebook images, and it is found that the embedded logo can be recovered and seen with added noise distortions, so the proposed method offers authentication and security under compression attacks. Similarly, it is found that the proposed methodology is robust to JPEG compression and image tampering attacks such as the collage attack, image cropping, rotation, salt-and-pepper noise, and sharpening filters; semi-robust to Gaussian filtering and image resizing; and fragile to other geometrical attacks. The receiver operating characteristic (ROC) curve is drawn, and it is found that the area under the curve is approximately equal to unity, with restoration accuracy of 67 to 100% for various attacks.
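The scheme combines the Diffie–Hellman key exchange protocol with Euclidean space axioms for spatial-domain embedding. The sketch below covers only the key exchange step, with deliberately tiny illustrative parameters (real deployments use large primes or elliptic curves); how the shared secret would drive the embedding is an assumption noted in the comments, not the paper's stated design:

```python
import secrets

# Toy Diffie-Hellman over a small prime; p, g and the use of the secret are illustrative only.
p, g = 2087, 5

a = secrets.randbelow(p - 2) + 1    # sender's private key
b = secrets.randbelow(p - 2) + 1    # receiver's private key
A = pow(g, a, p)                    # public values exchanged over the open channel
B = pow(g, b, p)

shared_sender = pow(B, a, p)        # both sides derive the same shared secret
shared_receiver = pow(A, b, p)
assert shared_sender == shared_receiver

# Assumption for illustration: the shared secret could seed the choice of
# spatial-domain pixel positions used to embed the binary logo.
print(shared_sender)
```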
APA, Harvard, Vancouver, ISO, and other styles
24

Eminağaoğlu, Mete, and Yılmaz Gökşen. "A New Similarity Measure for Document Classification and Text Mining." KnE Social Sciences, January 12, 2020. http://dx.doi.org/10.18502/kss.v4i1.5999.

Full text
Abstract:
Accurate, efficient, and fast processing of textual data and classification of electronic documents have become a key factor in knowledge management and related businesses in today's world. Text mining, information retrieval, and document classification systems have a strong positive impact on digital libraries and electronic content management, e-marketing, electronic archives, customer relationship management, decision support systems, copyright infringement detection, and plagiarism detection, which directly affect economies, businesses, and organizations. In this study, we propose a new similarity measure that can be used with the k-nearest neighbors (k-NN) and Rocchio algorithms, which are among the well-known algorithms for document classification, information retrieval, and other text mining purposes. We tested our novel similarity measure on structured textual data sets and compared the results with standard distance metrics and similarity measures such as cosine similarity, Euclidean distance, and the Pearson correlation coefficient. We obtained promising results, which show that the proposed similarity measure could be used as an alternative within suitable algorithms, methods, and models for text mining, document classification, and related knowledge management systems. Keywords: text mining, document classification, similarity measures, k-NN, Rocchio algorithm
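The study benchmarks its new measure against standard metrics such as cosine similarity inside k-NN and Rocchio classifiers. The abstract does not specify the proposed measure itself, so the sketch below only illustrates the baseline setup it compares against: a k-NN document classifier using cosine similarity over TF-IDF vectors. The toy corpus, labels, and the value of k are invented for illustration.

```python
# Minimal sketch of k-NN document classification with cosine similarity,
# one of the baseline similarity measures the study compares against.
# The corpus, labels, and k are illustrative, not data from the study.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train_docs = [
    "copyright law protects digital content and electronic archives",
    "plagiarism detection compares documents for textual similarity",
    "stock markets reacted sharply to the interest rate announcement",
    "the central bank raised interest rates to curb inflation",
]
train_labels = ["legal", "legal", "finance", "finance"]

vectorizer = TfidfVectorizer()
train_vecs = vectorizer.fit_transform(train_docs)    # TF-IDF document vectors

def knn_classify(text: str, k: int = 3) -> str:
    """Label a new document by majority vote among its k most similar neighbours."""
    query_vec = vectorizer.transform([text])
    sims = cosine_similarity(query_vec, train_vecs).ravel()
    nearest = np.argsort(sims)[::-1][:k]              # indices of the top-k neighbours
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

print(knn_classify("detecting copyright infringement in electronic document collections"))
```

Swapping a different similarity function in at the `cosine_similarity` call is all that is needed to compare alternative measures, which is essentially the experimental setup the abstract describes.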
APA, Harvard, Vancouver, ISO, and other styles
25

Noh, Younghee. "A study on researcher use behavior changes since Covid-19." Journal of Librarianship and Information Science, August 3, 2021, 096100062110338. http://dx.doi.org/10.1177/09610006211033893.

Full text
Abstract:
In December 2019, the Covid-19 virus spread from Wuhan, China, to the rest of the world and had a profound impact on politics, the economy, society, and culture, as well as a significant effect on libraries and information utilization. This study aimed to propose future directions for library services by analyzing the information use behavior of researchers and deriving their requirements through a survey method. First, it was identified that the data types and search paths used by researchers have shifted toward electronic or online resources. Second, difficulties were identified in accessing and acquiring data available only offline, in searching for and selecting data suited to the purpose of the research, and in grasping the material, along with technical problems concerning usage methods, search, and the information use environment. Third, since Covid-19 there has been a change in the services people demand from libraries, including expanded budget support for securing digital content, free provision of e-books and e-journals for a limited time by publishers, free provision of paid educational content for a certain period, and temporary copyright agreements for works; libraries were thus required to make efforts to secure information resources available online in response to Covid-19.
APA, Harvard, Vancouver, ISO, and other styles
26

Moore, Christopher Luke. "Digital Games Distribution: The Presence of the Past and the Future of Obsolescence." M/C Journal 12, no. 3 (July 15, 2009). http://dx.doi.org/10.5204/mcj.166.

Full text
Abstract:
A common criticism of the rhythm video games genre, including series like Guitar Hero and Rock Band, is that playing musical simulation games is a waste of time when you could be playing an actual guitar and learning a real skill. A more serious criticism of games cultures draws attention to the degree of e-waste they produce. E-waste or electronic waste includes mobile phones, computers, televisions and other electronic devices, containing toxic chemicals and metals whose landfill, recycling and salvaging all produce distinct environmental and social problems. The e-waste produced by games like Guitar Hero is obvious in the regular flow of merchandise transforming computer and video games stores into simulation music stores, filled with replica guitars, drum kits, microphones and other products whose half-lives are short and whose obsolescence is anticipated in the annual cycles of consumption and disposal. This paper explores the connection between e-waste and obsolescence in the games industry, and argues for the further consideration of consumers as part of the solution to the problem of e-waste. It uses a case study of the PC digital distribution software platform, Steam, to suggest that the digital distribution of games may offer an alternative model to market-driven software and hardware obsolescence, and more generally, that such software platforms might be a place to support cultures of consumption that delay rather than promote hardware obsolescence and its inevitability as e-waste. The question is whether there exists a potential for digital distribution to be a means not only of eliminating the need to physically transport commodities (its current 'green' benefit), but also of supporting consumer practices that further reduce e-waste. The games industry relies on a rapid production and innovation cycle, one that actively enforces hardware obsolescence. Current video game consoles, including the PlayStation 3, the Xbox 360 and Nintendo Wii, are the seventh generation of home gaming consoles to appear within forty years, and each generation is accompanied by an immense international transportation of games hardware, software (in various storage formats) and peripherals. Obsolescence also occurs at the software or content level and is significant because the games industry as a creative industry is dependent on the extensive management of multiple intellectual properties. The computing and video games software industry operates in close partnership with the hardware industry, and as such, software obsolescence directly contributes to hardware obsolescence. The obsolescence of content and the redundancy of the methods of policing its scarcity in the marketplace have been accelerated and altered by the processes of disintermediation with a range of outcomes (Flew). The music industry is perhaps the most advanced in terms of disintermediation, with digital distribution at the center of the conflict between legitimate and unauthorised access to intellectual property. This points to one issue with the hypothesis that digital distribution can lead to a reduction in hardware obsolescence, as the marketplace leader and key online distributor of music, Apple, is also the major producer of new media technologies and devices that are the paragon of stylistic obsolescence. Stylistic obsolescence, in which fashion changes products across seasons of consumption, has long been observed as the dominant form of scaled industrial innovation (Slade).
Stylistic obsolescence is differentiated from mechanical or technological obsolescence as the deliberate supersedence of products by more advanced designs, better production techniques and other minor innovations. The line between stylistic and technological obsolescence is not always clear, especially as reduced durability has become a powerful market strategy (Fitzpatrick). This occurs where the design of technologies is subsumed within the discourses of manufacturing, consumption and the logic of planned obsolescence, in which the product or parts are intended to fail, degrade or underperform over time. It is especially the case with signature new media technologies such as laptop computers, mobile phones and portable games devices. Gamers are as guilty as other consumer groups in contributing to e-waste as participants in the industry's cycles of planned obsolescence, but some of them complicate discussions over the future of obsolescence and e-waste. Many gamers actively work to forestall the obsolescence of their games: they invest time in the play of older games (“retrogaming”); they donate labor and creative energy to the production of user-generated content as a means of sustaining involvement in gaming communities; and they produce entirely new game experiences for other users, based on existing software and hardware modifications known as 'mods'. With Guitar Hero and other 'rhythm' games it would be easy to argue that the hardware components of this genre have only one future: as waste. Alternatively, we could consider the actual lifespan of these objects (including their impact as e-waste) and the roles they play in the performances and practices of communities of gamers. For example, the Elmo Guitar Hero controller mod, the Tesla coil Guitar Hero controller interface, the Rock Band Speak n' Spellbinder mashup, the multiple and almost sacrilegious Fender guitar hero mods, the Guitar Hero Portable Turntable Mod and MAKE magazine's Trumpet Hero all indicate a significant diversity of user innovation, community formation and individual investment in the post-retail life of computer and video game hardware. Obsolescence is not just a problem for the games industry but for the computing and electronics industries more broadly as direct contributors to the social and environmental cost of electrical waste and obsolete electrical equipment. Planned obsolescence has long been the experience of gamers and computer users, as the basis of a utopian mythology of upgrades (Dovey and Kennedy). For PC users the upgrade pathway is traversed by the consumption of further hardware and software after the initial purchase, in a cycle of endless consumption, acquisition and waste (as older parts are replaced and eventually discarded). The accumulation and disposal of these cultural artefacts do not devalue or accrue in space or time at the same rate (Straw), and many users will persist for years, gradually upgrading, delaying obsolescence and even perpetuating the circulation of older cultural commodities. Flea markets and secondhand fairs are popular sites for the purchase of new, recent, old, and recycled computer hardware and peripherals. Such practices and parallel markets support the strategies of 'making do' described by De Certeau, but they also continue the cycle of upgrade and obsolescence, and they are still consumed as part of the promise of the 'new', and the desire for a purchase that will finally 'fix' the users' computer in a state of completion (29).
The planned obsolescence of new media technologies is common, but its success is mixed; for example, support for Microsoft's operating system Windows XP was officially withdrawn in April 2009 (Robinson), but due to the popularity of low-cost PC 'netbooks' outfitted with an optimised XP operating system and a less-than-enthusiastic response to the 'next generation' Windows Vista, XP continues to be popular. Digital Distribution: A Solution? Gamers may be able to reduce the accumulation of e-waste by supporting the disintermediation of the games retail sector by means of online distribution. Disintermediation is the establishment of a direct relationship between the creators of content and their consumers through products and services offered by content producers (Flew 201). The move to digital distribution has already begun to reduce the need to physically handle commodities, but this currently signals only further support of planned, stylistic and technological obsolescence, increasing the rate at which the commodities for recording, storing, distributing and exhibiting digital content become e-waste. Digital distribution is sometimes overlooked as a potential means for promoting communities of user practice dedicated to e-waste reduction; at the same time, it is actively employed to reduce the potential for the unregulated appropriation of content and to restrict post-purchase sales through Digital Rights Management (DRM) technologies. Distributors like Amazon.com continue to pursue commercial opportunities in linking the user to digital distribution of content via exclusive hardware and software technologies. The Amazon e-book reader, the Kindle, operates via a proprietary mobile network using a commercially run version of the wireless 3G protocols. The e-book reader is heavily encrypted with DRM technologies and exclusive digital book formats designed to enforce current copyright restrictions and eliminate second-hand sales, lending, and further post-purchase distribution. The success of this mode of distribution is connected to Amazon's ability to tap both the mainstream market and the consumer demand for the less-than-popular: those books, movies, music and television series that may not have been 'hits' at the time of release. The desire to revisit forgotten niches, such as B-sides, comics, books, and older video games, is, Chris Anderson suggests, linked with so-called “long tail” economics. Recently Webb has queried the economic impact of the Long Tail as a business strategy, but does not deny the underlying dynamics, which suggest that content does not obsolesce in any straightforward way. Niche markets for older content are nourished by participatory cultures and Web 2.0-style online services. A good example of the Long Tail phenomenon is the recent case of the 1971 book A Lion Called Christian, by Anthony Burke and John Rendall, republished after the authors' film of a visit to a resettled Christian in Africa was popularised on YouTube in 2008. Anderson's Long Tail theory suggests that over time a large number of items, each with unique rather than mass histories, will be subsumed as part of a larger community of consumers, including fans, collectors and everyday users with a long-term interest in their use and preservation.
If digital distribution platforms can reduce e-waste, they can perhaps be fostered by ensuring not only that digital consumers are able to make morally and ethically aware consumer decisions, but also that they enjoy traditional consumer freedoms, such as the right to sell on and change or modify their property. For it is not only the fixation on the 'next generation' that contributes to obsolescence, but also technologies like DRM systems that discourage second-hand sales and restrict modification. The legislative upgrades, patches and amendments to copyright law that have attempted to maintain the law's effectiveness in competing with peer-to-peer networks have supported DRM and other intellectual property enforcement technologies, despite the difficulties that owners of intellectual property have encountered with the effectiveness of DRM systems (Moore, Creative). The games industry continues to experiment with DRM; however, this industry also stands out as one of the few to have significantly incorporated the user within the official modes of production (Moore, Commonising). Is the games industry capable of supporting (or willing to support) a digital delivery system that attempts to minimise or even reverse software and hardware obsolescence? We can try to answer this question by looking in detail at the biggest digital distributor of PC games, Steam. Steam [Figure 1: The Steam application user interface, retail section] Steam is a digital distribution system designed for the Microsoft Windows operating system and operated by the American video game development company and publisher Valve Corporation. Steam combines online games retail, DRM technologies and internet-based distribution services with social networking and multiplayer features (in-game voice and text chat, user profiles, etc.) and direct support for major games publishers, independent producers, and communities of user-contributors (modders). Steam, like the iTunes games store, Xbox Live and other digital distributors, provides consumers with direct digital downloads of new, recent and classic titles that can be accessed remotely by the user from any (internet-equipped) location. Steam was first packaged with the physical distribution of Half Life 2 in 2004, and the platform's eventual popularity is tied to the success of that game franchise. Steam was not an optional component of the game's installation and many gamers protested in various online forums, while the platform was treated with suspicion by the global PC games press. It did not help that Steam was at launch everything that gamers take exception to: a persistent and initially 'buggy' piece of software that sits in the PC's operating system and occupies limited memory resources at the cost of hardware performance. Regular updates to the Steam software platform introduced social network features just as mainstream sites like MySpace and Facebook were emerging, and its popularity has undergone rapid subsequent growth. Steam now eclipses competitors with more than 20 million user accounts (Leahy) and Valve Corporation makes it publicly known that Steam collects large amounts of data about its users. This information is available via the public player profile in the community section of the Steam application. It includes the average number of hours the user plays per week, and can even indicate the difficulty the user has in navigating game obstacles.
Valve reports on the number of users on Steam every two hours via its web site, with a population on average between one and two million simultaneous users (Valve, Steam). We know these users’ hardware profiles because Valve Corporation makes the results of its surveillance public knowledge via the Steam Hardware Survey. Valve’s hardware survey itself conceptualises obsolescence in two ways. First, it uses the results to define the 'cutting edge' of PC technologies and to publish the standards of its own high-end production hardware on the company's blog. Second, the effect of the Survey is to subsequently define obsolescent hardware: for example, in the Survey results for April 2009, we can see that a slight majority of users maintained computers with two central processing units while a significant proportion (almost one third) of users still maintained much older PCs with a single CPU. Both effects of the Survey appear to be well understood by Valve: the Steam Hardware Survey automatically collects information about the community's computer hardware configurations and presents an aggregate picture of the stats on our web site. The survey helps us make better engineering and gameplay decisions, because it makes sure we're targeting machines our customers actually use, rather than measuring only against the hardware we've got in the office. We often get asked about the configuration of the machines we build around the office to do both game and Steam development. We also tend to turn over machines in the office pretty rapidly, at roughly every 18 months. (Valve, Team Fortress) Valve’s support of older hardware might counter perceptions that older PCs have no use and begins to reverse decades of opinion regarding planned and stylistic obsolescence in the PC hardware and software industries. Equally significant to the extension of the lives of older PCs is Steam's support for mods and its promotion of user-generated content. By providing software for mod creation and distribution, Steam maximises what Postigo calls the development potential of fan-programmers. One of the 'payoffs' in the information/access exchange for the user with Steam is the degree to which Valve's End-User Licence Agreement (EULA) permits individuals and communities of 'modders' to appropriate its proprietary game content for use in the creation of new games and games materials for redistribution via Steam. These mods extend the play of the older games, by requiring their purchase via Steam in order for the individual user to participate in the modded experience. If Steam is able to encourage this kind of appropriation and community support for older content, then the potential exists for it to support cultures of consumption and practices of use that collaboratively maintain, extend, and prolong the life and use of games. Further, Steam incorporates the insights of “long tail” economics in a purely digital distribution model, in which the obsolescence of 'non-hit' game titles can be dramatically overturned. Published in November 2007, Unreal Tournament 3 (UT3) by Epic Games was unappreciated in a market saturated with games in the first-person shooter genre. Epic republished UT3 on Steam 18 months later, making the game available to play for free for one weekend, followed by discounted access to new content.
The 2000 per cent increase in players over the game's 'free' trial weekend has translated into enough sales of the game for Epic to no longer consider the release a commercial failure: It’s an incredible precedent to set: making a game a success almost 18 months after a poor launch. It’s something that could only have happened now, and with a system like Steam...Something that silently updates a purchase with patches and extra content automatically, so you don’t have to make the decision to seek out some exciting new feature: it’s just there anyway. Something that, if you don’t already own it, advertises that game to you at an agreeably reduced price whenever it loads. Something that enjoys a vast community who are in turn plugged into a sea of smaller relevant communities. It’s incredibly sinister. It’s also incredibly exciting... (Meer) Clearly concerns exist about Steam's user privacy policy, but this also invites us to think about the economic relationship between gamers and games companies as it is reconfigured through the private contractual relationship established by the EULA which accompanies the digital distribution model. The games industry has established contractual and licensing arrangements with its consumer base in order to support and reincorporate emerging trends in user-generated cultures and other cultural formations within its official modes of production (Moore, "Commonising"). When we consider that Valve gets to tax sales of its virtual goods and can further sell the information farmed from its users to hardware manufacturers, it is reasonable to consider the relationship between the corporation and its gamers as exploitative. Gabe Newell, the Valve co-founder and managing director, conversely believes that people are willing to give up personal information if they feel it is being used to get better services (Leahy). If that sentiment is correct then consumers may be willing to further trade for services that can reduce obsolescence and begin to address the problems of e-waste from the ground up. Conclusion Clearly, there is a potential for digital distribution to be a means not only of eliminating the need to physically transport commodities but also of supporting consumer practices that further reduce e-waste. For an industry where only a small proportion of the games made break even, the successful relaunch of older games content indicates Steam's capacity to ameliorate software obsolescence. Digital distribution extends the use of commercially released games by providing disintermediated access to older and user-generated content. For Valve, this occurs within a network of exchange as access to user-generated content, social networking services, and support for the organisation and coordination of communities of gamers is traded for user information and repeat business. Evidence for whether this will actively translate to an equivalent decrease in the obsolescence of game hardware might be observed with indicators like the Steam Hardware Survey in the future. The degree of potential offered by digital distribution is disrupted by a range of technical, commercial and legal hurdles, chief among which is the deployment of DRM as part of a range of techniques designed to limit consumer behaviour post purchase.
While intervention in the form of legislation and radical change to the insidious nature of electronics production is crucial in order to achieve long-term reduction in e-waste, the user is currently considered only in terms of 'ethical' consumption and ultimately divested of responsibility through participation in corporate, state and civil recycling and e-waste management operations. The message is either 'careful what you purchase' or 'careful how you throw it away' and, like DRM, ignores the connections between product, producer and user and the consumer support for environmentally, ethically and socially positive production, distribution, disposal and recycling. This article has adopted a different strategy, one that sees digital distribution platforms like Steam as capable, if not currently active, of supporting community practices that should be seriously considered in conjunction with a range of approaches to the challenge of obsolescence and e-waste. References Anderson, Chris. "The Long Tail." Wired Magazine 12.10 (2004). 20 Apr. 2009 ‹http://www.wired.com/wired/archive/12.10/tail.html›. De Certeau, Michel. The Practice of Everyday Life. Berkeley: U of California P, 1984. Dovey, Jon, and Helen Kennedy. Game Cultures: Computer Games as New Media. London: Open University Press, 2006. Fitzpatrick, Kathleen. The Anxiety of Obsolescence. Nashville: Vanderbilt UP, 2008. Flew, Terry. New Media: An Introduction. South Melbourne: Oxford UP, 2008. Leahy, Brian. "Live Blog: DICE 2009 Keynote - Gabe Newell, Valve Software." The Feed. G4TV 18 Feb. 2009. 16 Apr. 2009 ‹http://g4tv.com/thefeed/blog/post/693342/Live-Blog-DICE-2009-Keynote-–-Gabe-Newell-Valve-Software.html›. Meer, Alec. "Unreal Tournament 3 and the New Lazarus Effect." Rock, Paper, Shotgun 16 Mar. 2009. 24 Apr. 2009 ‹http://www.rockpapershotgun.com/2009/03/16/unreal-tournament-3-and-the-new-lazarus-effect/›. Moore, Christopher. "Commonising the Enclosure: Online Games and Reforming Intellectual Property Regimes." Australian Journal of Emerging Technologies and Society 3.2 (2005). 12 Apr. 2009 ‹http://www.swin.edu.au/sbs/ajets/journal/issue5-V3N2/abstract_moore.htm›. Moore, Christopher. "Creative Choices: Changes to Australian Copyright Law and the Future of the Public Domain." Media International Australia 114 (Feb. 2005): 71–83. Postigo, Hector. "Of Mods and Modders: Chasing Down the Value of Fan-Based Digital Game Modification." Games and Culture 2 (2007): 300-13. Robinson, Daniel. "Windows XP Support Runs Out Next Week." PC Business Authority 8 Apr. 2009. 16 Apr. 2009 ‹http://www.pcauthority.com.au/News/142013,windows-xp-support-runs-out-next-week.aspx›. Straw, Will. "Exhausted Commodities: The Material Culture of Music." Canadian Journal of Communication 25.1 (2000): 175. Slade, Giles. Made to Break: Technology and Obsolescence in America. Cambridge: Harvard UP, 2006. Valve. "Steam and Game Stats." 26 Apr. 2009 ‹http://store.steampowered.com/stats/›. Valve. "Team Fortress 2: The Scout Update." Steam Marketing Message 20 Feb. 2009. 12 Apr. 2009 ‹http://storefront.steampowered.com/Steam/Marketing/message/2269/›. Webb, Richard. "Online Shopping and the Harry Potter Effect." New Scientist 2687 (2008): 52-55. 16 Apr. 2009 ‹http://www.newscientist.com/article/mg20026873.300-online-shopping-and-the-harry-potter-effect.html?page=2›. With thanks to Dr Nicola Evans and Dr Frances Steel for their feedback and comments on drafts of this paper.
APA, Harvard, Vancouver, ISO, and other styles
27

Mallan, Kerry Margaret, and Annette Patterson. "Present and Active: Digital Publishing in a Post-print Age." M/C Journal 11, no. 4 (June 24, 2008). http://dx.doi.org/10.5204/mcj.40.

Full text
Abstract:
At one point in Victor Hugo’s novel, The Hunchback of Notre Dame, the archdeacon, Claude Frollo, looked up from a book on his table to the edifice of the gothic cathedral, visible from his canon’s cell in the cloister of Notre Dame: “Alas!” he said, “this will kill that” (146). Frollo’s lament, that the book would destroy the edifice, captures the medieval cleric’s anxiety about the way in which Gutenberg’s print technology would become the new universal means for recording and communicating humanity’s ideas and artistic expression, replacing the grand monuments of architecture, human engineering, and craftsmanship. For Hugo, architecture was “the great handwriting of humankind” (149). The cathedral as the material outcome of human technology was being replaced by the first great machine—the printing press. At this point in the third millennium, some people undoubtedly have similar anxieties to Frollo: is it now the book’s turn to be destroyed by yet another great machine? The inclusion of “post print” in our title is not intended to sound the death knell of the book. Rather, we contend that despite the enduring value of print, digital publishing is “present and active” and is changing the way in which research, particularly in the humanities, is being undertaken. Our approach has three related parts. First, we consider how digital technologies are changing the way in which content is constructed, customised, modified, disseminated, and accessed within a global, distributed network. This section argues that the transition from print to electronic or digital publishing means both losses and gains, particularly with respect to shifts in our approaches to textuality, information, and innovative publishing. Second, we discuss the Children’s Literature Digital Resources (CLDR) project, with which we are involved. This case study of a digitising initiative opens out the transformative possibilities and challenges of digital publishing and e-scholarship for research communities. Third, we reflect on technology’s capacity to bring about major changes in the light of the theoretical and practical issues that have arisen from our discussion. I. Digitising in a “post-print age” We are living in an era that is commonly referred to as “the late age of print” (see Kho) or the “post-print age” (see Gunkel). According to Aarseth, we have reached a point whereby nearly all of our public and personal media have become more or less digital (37). As Kho notes, web newspapers are not only becoming increasingly more popular, but they are also making rather than losing money, and paper-based newspapers are finding it difficult to recruit new readers from the younger generations (37). Not only can such online-only publications update format, content, and structure more economically than print-based publications, but their wide distribution network, speed, and flexibility attract advertising revenue. Hype and hyperbole aside, publishers are not so much discarding their legacy of print, but recognising the folly of not embracing innovative technologies that can add value by presenting information in ways that satisfy users’ needs for content to-go or for edutainment. As Kho notes: “no longer able to satisfy customer demand by producing print-only products, or even by enabling online access to semi-static content, established publishers are embracing new models for publishing, web-style” (42). 
Advocates of online publishing contend that the major benefits of online publishing over print technology are that it is faster, more economical, and more interactive. However, as Hovav and Gray caution, "e-publishing also involves risks, hidden costs, and trade-offs" (79). The specific focus for these authors is e-journal publishing and they contend that while cost reduction is in editing, production and distribution, if the journal is not open access, then costs relating to storage and bandwidth will be transferred to the user. If we put economics aside for the moment, the transition from print to electronic text (e-text), especially with electronic literary works, brings additional considerations, particularly in their ability to make available different reading strategies from those of print, such as "animation, rollovers, screen design, navigation strategies, and so on" (Hayles 38). Transition from print to e-text In his book, Writing Space, David Bolter follows Victor Hugo's lead, but does not ask if print technology will be destroyed. Rather, he argues that "the idea and ideal of the book will change: print will no longer define the organization and presentation of knowledge, as it has for the past five centuries" (2). As Hayles noted above, one significant indicator of this change, which is a consequence of the shift from analogue to digital, is the addition of graphical, audio, visual, sonic, and kinetic elements to the written word. A significant consequence of this transition is the reinvention of the book in a networked environment. Unlike the printed book, the networked book is not bound by space and time. Rather, it is an evolving entity within an ecology of readers, authors, and texts. The Web 2.0 platform has enabled more experimentation with blending of digital technology and traditional writing, particularly in the use of blogs, which have spawned blogwriting and the wikinovel. Siva Vaidhyanathan's The Googlization of Everything: How One Company is Disrupting Culture, Commerce and Community … and Why We Should Worry is a wikinovel or blog book that was produced over a series of weeks with contributions from other bloggers (see: http://www.sivacracy.net/). Penguin Books, in collaboration with a media company, "Six Stories to Start," have developed six stories, "We Tell Stories," which involve different forms of interactivity from users through blog entries, Twitter text messages, an interactive Google map, and other features. For example, the story titled "Fairy Tales" allows users to customise the story using their own choice of names for characters and descriptions of character traits. Each story is loosely based on a classic story and links take users to synopses of these original stories and their authors and to online purchase of the texts through the Penguin Books sales website. These examples of digital stories are a small part of the digital environment, which exploits computer and online technologies' capacity to be interactive and immersive. As Janet Murray notes, the interactive qualities of digital environments are characterised by their procedural and participatory abilities, while their immersive qualities are characterised by their spatial and encyclopedic dimensions (71–89). These immersive and interactive qualities highlight different ways of reading texts, which entail different embodied and cognitive functions from those that reading print texts requires.
As Hayles argues: the advent of electronic textuality presents us with an unparalleled opportunity to reformulate fundamental ideas about texts and, in the process, to see print as well as electronic texts with fresh eyes (89–90). The transition to e-text also highlights how digitality is changing all aspects of everyday life both inside and outside the academy. Online teaching and e-research Another aspect of the commercial arm of publishing that is impacting on academe and other organisations is the digitising and indexing of print content for niche distribution. Kho offers the example of the Mark Logic Corporation, which uses its XML content platform to repurpose content, create new content, and distribute this content through multiple portals. As the promotional website video for Mark Logic explains, academics can use this service to customise their own textbooks for students by including only articles and book chapters that are relevant to their subject. These are then organised, bound, and distributed by Mark Logic for sale to students at a cost that is generally cheaper than most textbooks. A further example of how print and digital materials can form an integrated, customised source for teachers and students is eFictions (Trimmer, Jennings, & Patterson). eFictions was one of the first print and online short story anthologies that teachers of literature could customise to their own needs. Produced as both a print text collection and a website, eFictions offers popular short stories in English by well-known traditional and contemporary writers from the US, Australia, New Zealand, UK, and Europe, with summaries, notes on literary features, author biographies, and, in one instance, a YouTube movie of the story. In using the eFictions website, teachers can build a customised anthology of traditional and innovative stories to suit their teaching preferences. These examples provide useful indicators of how content is constructed, customised, modified, disseminated, and accessed within a distributed network. However, the question remains as to how to measure their impact and outcomes within teaching and learning communities. As Harley suggests in her study on the use and users of digital resources in the humanities and social sciences, several factors warrant attention, such as personal teaching style, philosophy, and specific disciplinary requirements. However, in terms of understanding the benefits of digital resources for teaching and learning, Harley notes that few providers in her sample had developed any plans to evaluate use and users in a systematic way. In addition to the problems raised in Harley’s study, another relates to how researchers can be supported to take full advantage of digital technologies for e-research. The transformation brought about by information and communication technologies extends and broadens the impact of research, by making its outputs more discoverable and usable by other researchers, and its benefits more available to industry, governments, and the wider community. Traditional repositories of knowledge and information, such as libraries, are juggling the space demands of books and computer hardware alongside increasing reader demand for anywhere, anytime, anyplace access to information. 
Researchers’ expectations about online access to journals, eprints, bibliographic data, and the views of others through wikis, blogs, and associated social and information networking sites such as YouTube compete with the traditional expectations of the institutions that fund libraries for paper-based archives and book repositories. While university libraries are finding it increasingly difficult to purchase all hardcover books relevant to numerous and varied disciplines, a significant proportion of their budgets goes towards digital repositories (e.g., STORS), indexes, and other resources, such as full-text electronic specialised and multidisciplinary journal databases (e.g., Project Muse and Proquest); electronic serials; e-books; and specialised information sources through fast (online) document delivery services. An area that is becoming increasingly significant for those working in the humanities is the digitising of historical and cultural texts. II. Bringing back the dead: The CLDR project The CLDR project is led by researchers and librarians at the Queensland University of Technology, in collaboration with Deakin University, University of Sydney, and members of the AustLit team at The University of Queensland. The CLDR project is a “Research Community” of the electronic bibliographic database AustLit: The Australian Literature Resource, which is working towards the goal of providing a complete bibliographic record of the nation’s literature. AustLit offers users a single entry point to enhanced scholarly resources on Australian writers, their works, and other aspects of Australian literary culture and activities. AustLit and its Research Communities are supported by grants from the Australian Research Council and financial and in-kind contributions from a consortium of Australian universities, and by other external funding sources such as the National Collaborative Research Infrastructure Strategy. Like other more extensive digitisation projects, such as Project Gutenberg and the Rosetta Project, the CLDR project aims to provide a centralised access point for digital surrogates of early published works of Australian children’s literature, with access pathways to existing resources. The first stage of the CLDR project is to provide access to digitised, full-text, out-of-copyright Australian children’s literature from European settlement to 1945, with selected digitised critical works relevant to the field. Texts comprise a range of genres, including poetry, drama, and narrative for young readers and picture books, songs, and rhymes for infants. Currently, a selection of 75 e-texts and digital scans of original texts from Project Gutenberg and Internet Archive have been linked to the Children’s Literature Research Community. By the end of 2009, the CLDR will have digitised approximately 1000 literary texts and a significant number of critical works. Stage II and subsequent development will involve digitisation of selected texts from 1945 onwards. A precursor to the CLDR project has been undertaken by Deakin University in collaboration with the State Library of Victoria, whereby a digital bibliographic index comprising Victorian School Readers has been completed with plans for full-text digital surrogates of a selection of these texts. These texts provide valuable insights into citizenship, identity, and values formation from the 1930s onwards. At the time of writing, the CLDR is at an early stage of development.
An extensive survey of out-of-copyright texts has been completed and the digitisation of these resources is about to commence. The project plans to make rich content searchable, allowing scholars from children’s literature studies and education to benefit from the many advantages of online scholarship. What digital publishing and associated digital archives, electronic texts, hypermedia, and so forth foreground is the fact that writers, readers, publishers, programmers, designers, critics, booksellers, teachers, and copyright laws operate within a context that is highly mediated by technology. In his article on large-scale digitisation projects carried out by Cornell and University of Michigan with the Making of America collection of 19th-century American serials and monographs, Hirtle notes that when special collections’ materials are available via the Web, with appropriate metadata and software, then they can “increase use of the material, contribute to new forms of research, and attract new users to the material” (44). Furthermore, Hirtle contends that despite the poor ergonomics associated with most electronic displays and e-book readers, “people will, when given the opportunity, consult an electronic text over the print original” (46). If this preference is universally accurate, especially for researchers and students, then it follows that not only will the preference for electronic surrogates of original material increase, but preference for other kinds of electronic texts will also increase. It is with this preference for electronic resources in mind that we approached the field of children’s literature in Australia and asked questions about how future generations of researchers would prefer to work. If electronic texts become the reference of choice for primary as well as secondary sources, then it seems sensible to assume that researchers would prefer to sit at the end of the keyboard rather than travel considerable distances at considerable cost to access paper-based print texts in distant libraries and archives. We considered the best means for providing access to digitised primary and secondary, full-text material, and digital pathways to existing online resources, particularly an extensive indexing and bibliographic database. Prior to the commencement of the CLDR project, AustLit had already indexed an extensive body of children’s literature. Challenges and dilemmas The CLDR project, even in its early stages of development, has encountered a number of challenges and dilemmas that centre on access, copyright, economic capital, and practical aspects of digitisation, and sustainability. These issues have relevance for digital publishing and e-research. A decision is yet to be made as to whether the digital texts in CLDR will be available on open or closed/tolled access. The preference is for open access. As Hayles argues, copyright is more than a legal basis for intellectual property, as it also entails ideas about authorship, creativity, and the work as an “immaterial mental construct” that goes “beyond the paper, binding, or ink” (144). Seeking copyright permission is therefore only part of the issue. Determining how the item will be accessed is a further matter, particularly as future technologies may impact upon how a digital item is used. In the case of e-journals, copyright payment structures are evolving towards a collective licensing system, pay-per-view, and other combinations of print and electronic subscription (see Hovav and Gray).
For research purposes, digitisation of items for CLDR is not simply a scan and deliver process. Rather it is one that needs to ensure that the best quality is provided and that the item is both accessible and usable by researchers, and sustainable for future researchers. Sustainability is an important consideration and provides a challenge for institutions that host projects such as CLDR. Therefore, items need to be scanned to a high quality and this requires an expensive scanner and incurs personnel costs. Files need to be in a variety of formats for preservation purposes and so that they may be manipulated to be useable in different technologies (for example, Archival Tiff, Tiff, Jpeg, PDF, HTML). Hovav and Gray warn that when technology becomes obsolete, content becomes unreadable unless backward integration is maintained. The CLDR items will be annotatable given AustLit’s NeAt-funded project, Aus-e-Lit. The Aus-e-Lit project will extend and enhance the existing AustLit web portal with data integration and search services, empirical reporting services, collaborative annotation services, and compound object authoring, editing, and publishing services. For users to be able to get the most out of a digital item, it needs to be searchable, either through double keying or OCR (optical character recognition). The value of CLDR’s contribution The value of the CLDR project lies in its goal to provide a comprehensive, searchable body of texts (fictional and critical) to researchers across the humanities and social sciences. Other projects seem to be intent on putting up as many items as possible to be considered as a first resort for online texts. CLDR is more specific and is not interested in simply generating a presence on the Web. Rather, it is research-driven both in its design and implementation, and in its focussed outcomes of assisting academics and students primarily in their e-research endeavours. To this end, we have concentrated on the following: an extensive survey of appropriate texts; best models for file location, distribution, and use; and high standards of digitising protocols. These issues that relate to data storage, digitisation, collections, management, and end-users of data are aligned with the “Development of an Australian Research Data Strategy” outlined in An Australian e-Research Strategy and Implementation Framework (2006). CLDR is not designed to simply replicate resources, as it has a distinct focus, audience, and research potential. In addition, it looks at resources that may be forgotten or are no longer available in reproduction by current publishing companies. Thus, the aim of CLDR is to preserve both the time and a period of Australian history and literary culture. It will also provide users with an accessible repository of rare and early texts written for children. III. Future directions It is now commonplace to recognize that the Web’s role as information provider has changed over the past decade. New forms of “collective intelligence” or “distributed cognition” (Oblinger and Lombardi) are emerging within and outside formal research communities. Technology’s capacity to initiate major cultural, social, educational, economic, political and commercial shifts has conditioned us to expect the “next big thing.” We have learnt to adapt swiftly to the many challenges that online technologies have presented, and we have reaped the benefits.
As the examples in this discussion have highlighted, the changes in online publishing and digitisation have provided many material, network, pedagogical, and research possibilities: we teach online units providing students with access to e-journals, e-books, and customized archives of digitised materials; we communicate via various online technologies; we attend virtual conferences; and we participate in e-research through a global, digital network. In other words, technology is deeply engrained in our everyday lives. In returning to Frollo’s concern that the book would destroy architecture, Umberto Eco offers a placatory note: “in the history of culture it has never happened that something has simply killed something else. Something has profoundly changed something else” (n. pag.). Eco’s point has relevance to our discussion of digital publishing. The transition from print to digital necessitates a profound change that impacts on the ways we read, write, and research. As we have illustrated with our case study of the CLDR project, the move to creating digitised texts of print literature needs to be considered within a dynamic network of multiple causalities, emergent technological processes, and complex negotiations through which digital texts are created, stored, disseminated, and used. Technological changes in just the past five years have, in many ways, created an expectation in the minds of people that the future is no longer some distant time from the present. Rather, as our title suggests, the future is both present and active. References Aarseth, Espen. “How we became Postdigital: From Cyberstudies to Game Studies.” Critical Cyber-culture Studies. Ed. David Silver and Adrienne Massanari. New York: New York UP, 2006. 37–46. An Australian e-Research Strategy and Implementation Framework: Final Report of the e-Research Coordinating Committee. Commonwealth of Australia, 2006. Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, NJ: Erlbaum, 1991. Eco, Umberto. “The Future of the Book.” 1994. 3 June 2008 ‹http://www.themodernword.com/eco/eco_future_of_book.html>. Gunkel, David. J. “What's the Matter with Books?” Configurations 11.3 (2003): 277–303. Harley, Diane. “Use and Users of Digital Resources: A Focus on Undergraduate Education in the Humanities and Social Sciences.” Research and Occasional Papers Series. Berkeley: University of California. Centre for Studies in Higher Education. 12 June 2008 ‹http://www.themodernword.com/eco/eco_future_of_book.html>. Hayles, N. Katherine. My Mother was a Computer: Digital Subjects and Literary Texts. Chicago: U of Chicago P, 2005. Hirtle, Peter B. “The Impact of Digitization on Special Collections in Libraries.” Libraries & Culture 37.1 (2002): 42–52. Hovav, Anat and Paul Gray. “Managing Academic E-journals.” Communications of the ACM 47.4 (2004): 79–82. Hugo, Victor. The Hunchback of Notre Dame (Notre-Dame de Paris). Ware, Hertfordshire: Wordsworth editions, 1993. Kho, Nancy D. “The Medium Gets the Message: Post-Print Publishing Models.” EContent 30.6 (2007): 42–48. Oblinger, Diana and Marilyn Lombardi. “Common Knowledge: Openness in Higher Education.” Opening up Education: The Collective Advancement of Education Through Open Technology, Open Content and Open Knowledge. Ed. Toru Liyoshi and M. S. Vijay Kumar. Cambridge, MA: MIT Press, 2007. 389–400. Murray, Janet H. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, MA: MIT Press, 2001. 
Trimmer, Joseph F., Wade Jennings, and Annette Patterson. eFictions. New York: Harcourt, 2001.
APA, Harvard, Vancouver, ISO, and other styles
28

Dieter, Michael. "Amazon Noir." M/C Journal 10, no. 5 (October 1, 2007). http://dx.doi.org/10.5204/mcj.2709.

Full text
Abstract:
There is no diagram that does not also include, besides the points it connects up, certain relatively free or unbounded points, points of creativity, change and resistance, and it is perhaps with these that we ought to begin in order to understand the whole picture. (Deleuze, “Foucault” 37) Monty Cantsin: Why do we use a pervert software robot to exploit our collective consensual mind? Letitia: Because we want the thief to be a digital entity. Monty Cantsin: But isn’t this really blasphemic? Letitia: Yes, but god – in our case a meta-cocktail of authorship and copyright – can not be trusted anymore. (Amazon Noir, “Dialogue”) In 2006, some 3,000 digital copies of books were silently “stolen” from online retailer Amazon.com by targeting vulnerabilities in the “Search inside the Book” feature from the company’s website. Over several weeks, between July and October, a specially designed software program bombarded the Search Inside!™ interface with multiple requests, assembling full versions of texts and distributing them across peer-to-peer networks (P2P). Rather than a purely malicious and anonymous hack, however, the “heist” was publicised as a tactical media performance, Amazon Noir, produced by self-proclaimed super-villains Paolo Cirio, Alessandro Ludovico, and Ubermorgen.com. While controversially directed at highlighting the infrastructures that materially enforce property rights and access to knowledge online, the exploit additionally interrogated its own interventionist status as theoretically and politically ambiguous. That the “thief” was represented as a digital entity or machinic process (operating on the very terrain where exchange is differentiated) and the emergent act of “piracy” was fictionalised through the genre of noir conveys something of the indeterminacy or immensurability of the event. In this short article, I discuss some political aspects of intellectual property in relation to the complexities of Amazon Noir, particularly in the context of control, technological action, and discourses of freedom. Software, Piracy As a force of distribution, the Internet is continually subject to controversies concerning flows and permutations of agency. While often directed by discourses cast in terms of either radical autonomy or control, the technical constitution of these digital systems is more regularly a case of establishing structures of operation, codified rules, or conditions of possibility; that is, of guiding social processes and relations (McKenzie, “Cutting Code” 1-19). Software, as a medium through which such communication unfolds and becomes organised, is difficult to conceptualise as a result of being so event-orientated. There lies a complicated logic of contingency and calculation at its centre, a dimension exacerbated by the global scale of informational networks, where the inability to comprehend an environment that exceeds the limits of individual experience is frequently expressed through desires, anxieties, paranoia. Unsurprisingly, cautionary accounts and moral panics on identity theft, email fraud, pornography, surveillance, hackers, and computer viruses are as commonplace as those narratives advocating user interactivity. 
When analysing digital systems, cultural theory often struggles to describe forces that dictate movement and relations between disparate entities composed by code, an aspect heightened by the intensive movement of informational networks where differences are worked out through the constant exposure to unpredictability and chance (Terranova, “Communication beyond Meaning”). Such volatility partially explains the recent turn to distribution in media theory, as once durable networks for constructing economic difference – organising information in space and time (“at a distance”), accelerating or delaying its delivery – appear contingent, unstable, or consistently irregular (Cubitt 194). Attributing actions to users, programmers, or the software itself is a difficult task when faced with these states of co-emergence, especially in the context of sharing knowledge and distributing media content. Exchanges between corporate entities, mainstream media, popular cultural producers, and legal institutions over P2P networks represent an ongoing controversy in this respect, with numerous stakeholders competing between investments in property, innovation, piracy, and publics. Beginning to understand this problematic landscape is an urgent task, especially in relation to the technological dynamics that organised and propel such antagonisms. In the influential fragment, “Postscript on the Societies of Control,” Gilles Deleuze describes the historical passage from modern forms of organised enclosure (the prison, clinic, factory) to the contemporary arrangement of relational apparatuses and open systems as being materially provoked by – but not limited to – the mass deployment of networked digital technologies. In his analysis, the disciplinary mode most famously described by Foucault is spatially extended to informational systems based on code and flexibility. According to Deleuze, these cybernetic machines are connected into apparatuses that aim for intrusive monitoring: “in a control-based system nothing’s left alone for long” (“Control and Becoming” 175). Such a constant networking of behaviour is described as a shift from “molds” to “modulation,” where controls become “a self-transmuting molding changing from one moment to the next, or like a sieve whose mesh varies from one point to another” (“Postscript” 179). Accordingly, the crisis underpinning civil institutions is consistent with the generalisation of disciplinary logics across social space, forming an intensive modulation of everyday life, but one ambiguously associated with socio-technical ensembles. The precise dynamics of this epistemic shift are significant in terms of political agency: while control implies an arrangement capable of absorbing massive contingency, a series of complex instabilities actually mark its operation. Noise, viral contamination, and piracy are identified as key points of discontinuity; they appear as divisions or “errors” that force change by promoting indeterminacies in a system that would otherwise appear infinitely calculable, programmable, and predictable. The rendering of piracy as a tactic of resistance, a technique capable of levelling out the uneven economic field of global capitalism, has become a predictable catch-cry for political activists. 
In their analysis of multitude, for instance, Antonio Negri and Michael Hardt describe the contradictions of post-Fordist production as conjuring forth a tendency for labour to “become common.” That is, as productivity depends on flexibility, communication, and cognitive skills, directed by the cultivation of an ideal entrepreneurial or flexible subject, the greater the possibilities for self-organised forms of living that significantly challenge its operation. In this case, intellectual property exemplifies such a spiralling paradoxical logic, since “the infinite reproducibility central to these immaterial forms of property directly undermines any such construction of scarcity” (Hardt and Negri 180). The implications of the filesharing program Napster, accordingly, are read as not merely directed toward theft, but in relation to the private character of the property itself; a kind of social piracy is perpetuated that is viewed as radically recomposing social resources and relations. Ravi Sundaram, a co-founder of the Sarai new media initiative in Delhi, has meanwhile drawn attention to the existence of “pirate modernities” capable of being actualised when individuals or local groups gain illegitimate access to distributive media technologies; these are worlds of “innovation and non-legality,” of electronic survival strategies that partake in cultures of dispersal and escape simple classification (94). Meanwhile, pirate entrepreneurs Magnus Eriksson and Rasmus Fleische – associated with the notorious Piratbyrån – have promoted the bleeding away of Hollywood profits through fully deployed P2P networks, with the intention of pushing filesharing dynamics to an extreme in order to radicalise the potential for social change (“Copies and Context”). From an aesthetic perspective, such activist theories are complemented by the affective register of appropriation art, a movement broadly conceived in terms of antagonistically liberating knowledge from the confines of intellectual property: “those who pirate and hijack owned material, attempting to free information, art, film, and music – the rhetoric of our cultural life – from what they see as the prison of private ownership” (Harold 114). These “unruly” escape attempts are pursued through various modes of engagement, from experimental performances with legislative infrastructures (e.g. Kembrew McLeod’s trademarking of the phrase “freedom of expression”) to musical remix projects, such as the work of Negativland, John Oswald, RTMark, Detritus, Illegal Art, and the Evolution Control Committee. Amazon Noir, while similarly engaging with questions of ownership, is distinguished by specifically targeting information communication systems and finding “niches” or gaps between overlapping networks of control and economic governance. Hans Bernhard and Lizvlx from Ubermorgen.com (meaning ‘Day after Tomorrow,’ or ‘Super-Tomorrow’) actually describe their work as “research-based”: “we not are opportunistic, money-driven or success-driven, our central motivation is to gain as much information as possible as fast as possible as chaotic as possible and to redistribute this information via digital channels” (“Interview with Ubermorgen”). This has led to experiments like Google Will Eat Itself (2005) and the construction of the automated software thief against Amazon.com, as process-based explorations of technological action.
Agency, Distribution Deleuze’s “postscript” on control has proven massively influential for new media art by introducing a series of key questions on power (or desire) and digital networks. As a social diagram, however, control should be understood as a partial rather than totalising map of relations, referring to the augmentation of disciplinary power in specific technological settings. While control is a conceptual regime that refers to open-ended terrains beyond the architectural locales of enclosure, implying a move toward informational networks, data solicitation, and cybernetic feedback, there remains a peculiar contingent dimension to its limits. For example, software code is typically designed to remain cycling until user input is provided. There is a specifically immanent and localised quality to its actions that might be taken as exemplary of control as a continuously modulating affective materialism. The outcome is a heightened sense of bounded emergencies that are either flattened out or absorbed through reconstitution; however, these are never linear gestures of containment. As Tiziana Terranova observes, control operates through multilayered mechanisms of order and organisation: “messy local assemblages and compositions, subjective and machinic, characterised by different types of psychic investments, that cannot be the subject of normative, pre-made political judgments, but which need to be thought anew again and again, each time, in specific dynamic compositions” (“Of Sense and Sensibility” 34). This event-orientated vitality accounts for the political ambitions of tactical media as opening out communication channels through selective “transversal” targeting. Amazon Noir, for that reason, is pitched specifically against the material processes of communication. The system used to harvest the content from “Search inside the Book” is described as “robot-perversion-technology,” based on a network of four servers around the globe, each with a specific function: one located in the United States that retrieved (or “sucked”) the books from the site, one in Russia that injected the assembled documents onto P2P networks and two in Europe that coordinated the action via intelligent automated programs (see “The Diagram”). According to the “villains,” the main goal was to steal all 150,000 books from Search Inside!™ then use the same technology to steal books from the “Google Print Service” (the exploit was limited only by the amount of technological resources financially available, but there are apparent plans to improve the technique by reinvesting the money received through the settlement with Amazon.com not to publicise the hack). In terms of informational culture, this system resembles a machinic process directed at redistributing copyright content; “The Diagram” visualises key processes that define digital piracy as an emergent phenomenon within an open-ended and responsive milieu. That is, the static image foregrounds something of the activity of copying being a technological action that complicates any analysis focusing purely on copyright as content. In this respect, intellectual property rights are revealed as being entangled within information architectures as communication management and cultural recombination – dissipated and enforced by a measured interplay between openness and obstruction, resonance and emergence (Terranova, “Communication beyond Meaning” 52). 
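The artists have never published the program itself, so any reconstruction of its workings is necessarily speculative. Purely as an illustrative sketch of the division of labour described in “The Diagram” – every node name, role, and function below is hypothetical rather than drawn from the actual software – the choreography of the four servers might be modelled along the following lines:

from dataclasses import dataclass, field

@dataclass
class Node:
    location: str
    role: str                     # "harvest", "seed", or "coordinate"
    tasks: list = field(default_factory=list)

def plan_heist(book_ids):
    """Assign each stage of the (hypothetical) exploit to a node according to its role."""
    nodes = [
        Node("United States", "harvest"),   # queries the search interface and retrieves pages
        Node("Russia", "seed"),             # injects assembled texts into P2P networks
        Node("Europe-1", "coordinate"),     # automated programs scheduling the other nodes
        Node("Europe-2", "coordinate"),
    ]
    coordinators = [n for n in nodes if n.role == "coordinate"]
    harvester = next(n for n in nodes if n.role == "harvest")
    seeder = next(n for n in nodes if n.role == "seed")

    for i, book in enumerate(book_ids):
        # coordinators alternate, so scheduling is not tied to a single machine
        coordinators[i % len(coordinators)].tasks.append(f"schedule {book}")
        harvester.tasks.append(f"retrieve pages of {book}")
        seeder.tasks.append(f"assemble and seed {book}")
    return nodes

if __name__ == "__main__":
    for node in plan_heist(["book-001", "book-002", "book-003"]):
        print(node.location, node.role, len(node.tasks), "tasks")

The sketch deliberately stops short of any harvesting or distribution logic; its only purpose is to make visible the kind of distributed, role-based coordination that “The Diagram” gestures toward.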
To understand data distribution requires an acknowledgement of these underlying nonhuman relations that allow for such informational exchanges. It requires an understanding of the permutations of agency carried along by digital entities. According to Lawrence Lessig’s influential argument, code is not merely an object of governance, but has an overt legislative function itself. Within the informational environments of software, “a law is defined, not through a statute, but through the code that governs the space” (20). These points of symmetry are understood as concretised social values: they are material standards that regulate flow. Similarly, Alexander Galloway describes computer protocols as non-institutional “etiquette for autonomous agents,” or “conventional rules that govern the set of possible behavior patterns within a heterogeneous system” (7). In his analysis, these agreed-upon standardised actions operate as a style of management fostered by contradiction: progressive though reactionary, encouraging diversity by striving for the universal, synonymous with possibility but completely predetermined, and so on (243-244). Needless to say, political uncertainties arise from a paradigm that generates internal material obscurities through a constant twinning of freedom and control. For Wendy Hui Kyong Chun, these Cold War systems subvert the possibilities for any actual experience of autonomy by generalising paranoia through constant intrusion and reducing social problems to questions of technological optimisation (1-30). In confrontation with these seemingly ubiquitous regulatory structures, cultural theory requires a critical vocabulary differentiated from computer engineering to account for the sociality that permeates through and concatenates technological realities. In his recent work on “mundane” devices, software and code, Adrian McKenzie introduces a relevant analytic approach in the concept of technological action as something that both abstracts and concretises relations in a diffusion of collective-individual forces. Drawing on the thought of French philosopher Gilbert Simondon, he uses the term “transduction” to identify a key characteristic of technology in the relational process of becoming, or ontogenesis. This is described as bringing together disparate things into composites of relations that evolve and propagate a structure throughout a domain, or “overflow existing modalities of perception and movement on many scales” (“Impersonal and Personal Forces in Technological Action” 201). Most importantly, these innovative diffusions or contagions occur by bridging states of difference or incompatibilities. Technological action, therefore, arises from a particular type of disjunctive relation between an entity and something external to itself: “in making this relation, technical action changes not only the ensemble, but also the form of life of its agent. Abstraction comes into being and begins to subsume or reconfigure existing relations between the inside and outside” (203). Here, reciprocal interactions between two states or dimensions actualise disparate potentials through metastability: an equilibrium that proliferates, unfolds, and drives individuation. While drawing on cybernetics and dealing with specific technological platforms, McKenzie’s work can be extended to describe the significance of informational devices throughout control societies as a whole, particularly as a predictive and future-orientated force that thrives on staged conflicts.
Moreover, being a non-deterministic technical theory, it additionally speaks to new tendencies in regimes of production that harness cognition and cooperation through specially designed infrastructures to enact persistent innovation without any end-point, final goal or natural target (Thrift 283-295). Here, the interface between intellectual property and reproduction can be seen as a site of variation that weaves together disparate objects and entities by imbrication in social life itself. These are specific acts of interference that propel relations toward unforeseen conclusions by drawing on memories, attention spans, material-technical traits, and so on. The focus lies on performance, context, and design “as a continual process of tuning arrived at by distributed aspiration” (Thrift 295). This later point is demonstrated in recent scholarly treatments of filesharing networks as media ecologies. Kate Crawford, for instance, describes the movement of P2P as processual or adaptive, comparable to technological action, marked by key transitions from partially decentralised architectures such as Napster, to the fully distributed systems of Gnutella and seeded swarm-based networks like BitTorrent (30-39). Each of these technologies can be understood as a response to various legal incursions, producing radically dissimilar socio-technological dynamics and emergent trends for how agency is modulated by informational exchanges. Indeed, even these aberrant formations are characterised by modes of commodification that continually spillover and feedback on themselves, repositioning markets and commodities in doing so, from MP3s to iPods, P2P to broadband subscription rates. However, one key limitation of this ontological approach is apparent when dealing with the sheer scale of activity involved, where mass participation elicits certain degrees of obscurity and relative safety in numbers. This represents an obvious problem for analysis, as dynamics can easily be identified in the broadest conceptual sense, without any understanding of the specific contexts of usage, political impacts, and economic effects for participants in their everyday consumptive habits. Large-scale distributed ensembles are “problematic” in their technological constitution, as a result. They are sites of expansive overflow that provoke an equivalent individuation of thought, as the Recording Industry Association of America observes on their educational website: “because of the nature of the theft, the damage is not always easy to calculate but not hard to envision” (“Piracy”). The politics of the filesharing debate, in this sense, depends on the command of imaginaries; that is, being able to conceptualise an overarching structural consistency to a persistent and adaptive ecology. As a mode of tactical intervention, Amazon Noir dramatises these ambiguities by framing technological action through the fictional sensibilities of narrative genre. Ambiguity, Control The extensive use of imagery and iconography from “noir” can be understood as an explicit reference to the increasing criminalisation of copyright violation through digital technologies. However, the term also refers to the indistinct or uncertain effects produced by this tactical intervention: who are the “bad guys” or the “good guys”? Are positions like ‘good’ and ‘evil’ (something like freedom or tyranny) so easily identified and distinguished? 
As Paolo Cirio explains, this political disposition is deliberately kept obscure in the project: “it’s a representation of the actual ambiguity about copyright issues, where every case seems to lack a moral or ethical basis” (“Amazon Noir Interview”). While user communications made available on the site clearly identify culprits (describing the project as jeopardising arts funding, as both irresponsible and arrogant), the self-description of the artists as political “failures” highlights the uncertainty regarding the project’s qualities as a force of long-term social renewal: Lizvlx from Ubermorgen.com had daily shootouts with the global mass-media, Cirio continuously pushed the boundaries of copyright (books are just pixels on a screen or just ink on paper), Ludovico and Bernhard resisted kickback-bribes from powerful Amazon.com until they finally gave in and sold the technology for an undisclosed sum to Amazon. Betrayal, blasphemy and pessimism finally split the gang of bad guys. (“Press Release”) Here, the adaptive and flexible qualities of informatic commodities and computational systems of distribution are knowingly posited as critical limits; in a certain sense, the project fails technologically in order to succeed conceptually. From a cynical perspective, this might be interpreted as guaranteeing authenticity by insisting on the useless or non-instrumental quality of art. However, through this process, Amazon Noir illustrates how forces confined as exterior to control (virality, piracy, noncommunication) regularly operate as points of distinction to generate change and innovation. Just as hackers are legitimately employed to challenge the durability of network exchanges, malfunctions are relied upon as potential sources of future information. Indeed, the notion of demonstrating ‘autonomy’ by illustrating the shortcomings of software is entirely consistent with the logic of control as a modulating organisational diagram. These so-called “circuit breakers” are positioned as points of bifurcation that open up new systems and encompass a more general “abstract machine” or tendency governing contemporary capitalism (Parikka 300). As a consequence, the ambiguities of Amazon Noir emerge not just from the contrary articulation of intellectual property and digital technology, but additionally through the concept of thinking “resistance” simultaneously with regimes of control. This tension is apparent in Galloway’s analysis of the cybernetic machines that are synonymous with the operation of Deleuzian control societies – i.e. “computerised information management” – where tactical media are posited as potential modes of contestation against the tyranny of code, “able to exploit flaws in protocological and proprietary command and control, not to destroy technology, but to sculpt protocol and make it better suited to people’s real desires” (176). While pushing a system into a state of hypertrophy to reform digital architectures might represent a possible technique that produces a space through which to imagine something like “our” freedom, it still leaves unexamined the desire for reformation itself as nurtured by and produced through the coupling of cybernetics, information theory, and distributed networking. This draws into focus the significance of McKenzie’s Simondon-inspired cybernetic perspective on socio-technological ensembles as being always-already predetermined by and driven through asymmetries or difference. 
As Chun observes, consequently, there is no paradox between resistance and capture since “control and freedom are not opposites, but different sides of the same coin: just as discipline served as a grid on which liberty was established, control is the matrix that enables freedom as openness” (71). Why “openness” should be so readily equated with a state of being free represents a major unexamined presumption of digital culture, and leads to the associated predicament of attempting to think of how this freedom has become something one cannot not desire. If Amazon Noir has political currency in this context, however, it emerges from a capacity to recognise how informational networks channel desire, memories, and imaginative visions rather than just cultivated antagonisms and counterintuitive economics. As a final point, it is worth observing that the project was initiated without publicity until the settlement with Amazon.com. There is, as a consequence, nothing to suggest that this subversive “event” might have actually occurred, a feeling heightened by the abstractions of software entities. To the extent that we believe in “the big book heist,” that such an act is even possible, is a gauge through which the paranoia of control societies is illuminated as a longing or desire for autonomy. As Hakim Bey observes in his conceptualisation of “pirate utopias,” such fleeting encounters with the imaginaries of freedom flow back into the experience of the everyday as political instantiations of utopian hope. Amazon Noir, with all its underlying ethical ambiguities, presents us with a challenge to rethink these affective investments by considering our profound weaknesses to master the complexities and constant intrusions of control. It provides an opportunity to conceive of a future that begins with limits and limitations as immanently central, even foundational, to our deep interconnection with socio-technological ensembles. References “Amazon Noir – The Big Book Crime.” http://www.amazon-noir.com/>. Bey, Hakim. T.A.Z.: The Temporary Autonomous Zone, Ontological Anarchy, Poetic Terrorism. New York: Autonomedia, 1991. Chun, Wendy Hui Kyong. Control and Freedom: Power and Paranoia in the Age of Fibre Optics. Cambridge, MA: MIT Press, 2006. Crawford, Kate. “Adaptation: Tracking the Ecologies of Music and Peer-to-Peer Networks.” Media International Australia 114 (2005): 30-39. Cubitt, Sean. “Distribution and Media Flows.” Cultural Politics 1.2 (2005): 193-214. Deleuze, Gilles. Foucault. Trans. Seán Hand. Minneapolis: U of Minnesota P, 1986. ———. “Control and Becoming.” Negotiations 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995. 169-176. ———. “Postscript on the Societies of Control.” Negotiations 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995. 177-182. Eriksson, Magnus, and Rasmus Fleische. “Copies and Context in the Age of Cultural Abundance.” Online posting. 5 June 2007. Nettime 25 Aug 2007. Galloway, Alexander. Protocol: How Control Exists after Decentralization. Cambridge, MA: MIT Press, 2004. Hardt, Michael, and Antonio Negri. Multitude: War and Democracy in the Age of Empire. New York: Penguin Press, 2004. Harold, Christine. OurSpace: Resisting the Corporate Control of Culture. Minneapolis: U of Minnesota P, 2007. Lessig, Lawrence. Code and Other Laws of Cyberspace. New York: Basic Books, 1999. McKenzie, Adrian. Cutting Code: Software and Sociality. New York: Peter Lang, 2006. ———. 
“The Strange Meshing of Impersonal and Personal Forces in Technological Action.” Culture, Theory and Critique 47.2 (2006): 197-212. Parikka, Jussi. “Contagion and Repetition: On the Viral Logic of Network Culture.” Ephemera: Theory & Politics in Organization 7.2 (2007): 287-308. “Piracy Online.” Recording Industry Association of America. 28 Aug 2007. <http://www.riaa.com/physicalpiracy.php>. Sundaram, Ravi. “Recycling Modernity: Pirate Electronic Cultures in India.” Sarai Reader 2001: The Public Domain. Delhi: Sarai Media Lab, 2001. 93-99. <http://www.sarai.net>. Terranova, Tiziana. “Communication beyond Meaning: On the Cultural Politics of Information.” Social Text 22.3 (2004): 51-73. ———. “Of Sense and Sensibility: Immaterial Labour in Open Systems.” DATA Browser 03 – Curating Immateriality: The Work of the Curator in the Age of Network Systems. Ed. Joasia Krysa. New York: Autonomedia, 2006. 27-38. Thrift, Nigel. “Re-inventing Invention: New Tendencies in Capitalist Commodification.” Economy and Society 35.2 (2006): 279-306. Citation reference for this article MLA Style Dieter, Michael. "Amazon Noir: Piracy, Distribution, Control." M/C Journal 10.5 (2007). [your date of access] <http://journal.media-culture.org.au/0710/07-dieter.php>. APA Style Dieter, M. (Oct. 2007) "Amazon Noir: Piracy, Distribution, Control," M/C Journal, 10(5). Retrieved [your date of access] from <http://journal.media-culture.org.au/0710/07-dieter.php>.
APA, Harvard, Vancouver, ISO, and other styles
29

Deck, Andy. "Treadmill Culture." M/C Journal 6, no. 2 (April 1, 2003). http://dx.doi.org/10.5204/mcj.2157.

Full text
Abstract:
Since the first days of the World Wide Web, artists like myself have been exploring the new possibilities of network interactivity. Some good tools and languages have been developed and made available free for the public to use. This has empowered individuals to participate in the media in ways that are quite remarkable. Nonetheless, the future of independent media is clouded by legal, regulatory, and organisational challenges that need to be addressed. It is not clear to what extent independent content producers will be able to build upon the successes of the 90s – it is yet to be seen whether their efforts will be largely nullified by the anticyclones of a hostile media market. Not so long ago, American news magazines were covering the Browser War. Several real wars later, the terms of surrender are becoming clearer. Now both of the major Internet browsers are owned by huge media corporations, and most of the states (and Reagan-appointed judges) that were demanding the break-up of Microsoft have given up. A curious about-face occurred in U.S. Justice Department policy when John Ashcroft decided to drop the federal case. Maybe Microsoft's value as a partner in covert activity appealed to Ashcroft more than free competition. Regardless, Microsoft is now turning its wrath on new competitors, people who are doing something very, very bad: sharing the products of their own labour. This practice of sharing source code and building free software infrastructure is epitomised by the continuing development of Linux. Everything in the Linux kernel is free, publicly accessible information. As a rule, the people building this "open source" operating system software believe that maintaining transparency is important. But U.S. courts are not doing much to help. In a case brought by the Motion Picture Association of America against Eric Corley, a federal district court blocked the distribution of source code that enables these systems to play DVDs. In addition to censoring Corley's journal, the court ruled that any programmer who writes a program that plays a DVD must comply with a host of license restrictions. In short, an established and popular media format (the DVD) cannot be used under open source operating systems without sacrificing the principle that software source code should remain in the public domain. Should the contents of operating systems be tightly guarded secrets, or subject to public review? If there are capable programmers willing to create good, free operating systems, should the law stand in their way? The question concerning what type of software infrastructure will dominate personal computers in the future is being answered as much by disappointing legal decisions as it is by consumer choice. Rather than ensuring the necessary conditions for innovation and cooperation, the courts permit a monopoly to continue. Rather than endorsing transparency, secrecy prevails. Rather than aiming to preserve a balance between the commercial economy and the gift-economy, sharing is being undermined by the law. Part of the mystery of the Internet for a lot of newcomers must be that it seems to disprove the old adage that you can't get something for nothing. Free games, free music, free pornography, free art. Media corporations are doing their best to change this situation. The FBI and trade groups have blitzed the American news media with alarmist reports about how children don't understand that sharing digital information is a crime. 
Teacher Gail Chmura, the star of one such media campaign, says of her students, "It's always been interesting that they don't see a connection between the two. They just don't get it" (Hopper). Perhaps the confusion arises because the kids do understand that digital duplication lets two people have the same thing. Theft is at best a metaphor for the copying of data, because the original is not stolen in the same sense as a material object. In the effort to liken all copying to theft, legal provisions for the fair use of intellectual property are neglected. Teachers could just as easily emphasise the importance of sharing and the development of an electronic commons that is free for all to use. The values advanced by the trade groups are not beyond question and are not historical constants. According to Donald Krueckeberg, Rutgers University Professor of Urban Planning, native Americans tied the concept of property not to ownership but to use. "One used it, one moved on, and use was shared with others" (qtd. in Batt). Perhaps it is necessary for individuals to have dominion over some private data. But who owns the land, wind, sun, and sky of the Internet – the infrastructure? Given that publicly-funded research and free software have been as important to the development of the Internet as have business and commercial software, it is not surprising that some ambiguity remains about the property status of the dataverse. For many the Internet is as much a medium for expression and the interplay of languages as it is a framework for monetary transaction. In the case involving DVD software mentioned previously, there emerged a grass-roots campaign in opposition to censorship. Dozens of philosophical programmers and computer scientists asserted the expressive and linguistic bases of software by creating variations on the algorithm needed to play DVDs. The forbidden lines of symbols were printed on T-shirts, translated into different computer languages, translated into legal rhetoric, and even embedded into DNA and pictures of MPAA president Jack Valenti (see e.g. Touretzky). These efforts were inspired by a shared conviction that important liberties were at stake. Supporting the MPAA's position would do more than protect movies from piracy. The use of the algorithm was not clearly linked to an intent to pirate movies. Many felt that outlawing the DVD algorithm, which had been experimentally developed by a Norwegian teenager, represented a suppression of gumption and ingenuity. The court's decision rejected established principles of fair use, denied the established legality of reverse engineering software to achieve compatibility, and asserted that journalists and scientists had no right to publish a bit of code if it might be misused. In a similar case in April 2000, a U.S. court of appeals found that First Amendment protections did apply to software (Junger). Noting that source code has both an expressive feature and a functional feature, this court held that First Amendment protection is not reserved only for purely expressive communication. Yet in the DVD case, the court opposed this view and enforced the inflexible demands of the Digital Millennium Copyright Act. Notwithstanding Ted Nelson's characterisation of computers as literary machines, the decision meant that the linguistic and expressive aspects of software would be subordinated to other concerns. A simple series of symbols were thereby cast under a veil of legal secrecy. 
Although these symbols were easy to discover, and capable of being committed to memory or translated to other languages, fair use and other intuitive freedoms were deemed expendable. These sorts of legal obstacles are serious challenges to the continued viability of free software like Linux. The central value proposition of Linux-based operating systems – free, open source code – is threatening to commercial competitors. Some corporations are intent on stifling further development of free alternatives. Patents offer another vulnerability. The writing of free software has become a minefield of potential patent lawsuits. Corporations have repeatedly chosen to pursue patent litigation years after the alleged infringements have been incorporated into widely used free software. For example, although it was designed to avoid patent problems by an array of international experts, the image file format known as JPEG (Joint Photographic Experts Group) has recently been dogged by patent infringement charges. Despite good intentions, low-budget initiatives and ad hoc organisations are ill equipped to fight profiteering patent lawsuits. One wonders whether software innovation is directed more by lawyers or computer scientists. The present copyright and patent regimes may serve the needs of the larger corporations, but it is doubtful that they are the best means of fostering software innovation and quality. Orwell wrote in his Homage to Catalonia: "There was a new rule that censored portions of the newspaper must not be left blank but filled up with other matter; as a result it was often impossible to tell when something had been cut out." The development of the Internet has a similar character: new diversions spring up to replace what might have been, so that the lost potential is hardly felt. The process of retrofitting Internet software to suit ideological and commercial agendas is already well underway. For example, Microsoft has recently announced that it will discontinue support for the Java language in 2004. The problem with Java, from Microsoft's perspective, is that it provides portable programming tools that work under all operating systems, not just Windows. With Java, programmers can develop software for the large number of Windows users, while simultaneously offering software to users of other operating systems. Java is an important piece of the software infrastructure for Internet content developers. Yet, in the interest of coercing people to use only their operating systems, Microsoft is willing to undermine thousands of existing Java-language projects. Their marketing hype calls this progress. The software industry relies on sales to survive, so if it means laying waste to good products and millions of hours of work in order to sell something new, well, that's business. The consequent infrastructure instability keeps software developers, and other creative people, on a treadmill. [Image: From Progressive Load by Andy Deck, artcontext.org/progload] As an Internet content producer, one does not appeal directly to the hearts and minds of the public; one appeals through the medium of software and hardware. Since most people are understandably reluctant to modify the software running on their computers, the software installed initially is a critical determinant of what is possible. Unconventional, independent, and artistic uses of the Internet are diminished when the media infrastructure is effectively established by decree.
Unaccountable corporate control over infrastructure software tilts the playing field against smaller content producers who have neither the advance warning of industrial machinations, nor the employees and resources necessary to keep up with a regime of strategic, cyclical obsolescence. It seems that independent content producers must conform to the distribution technologies and content formats favoured by the entertainment and marketing sectors, or else resign themselves to occupying the margins of media activity. It is no secret that highly diversified media corporations can leverage their assets to favour their own media offerings and confound their competitors. Yet when media giants AOL and Time-Warner announced their plans to merge in 2000, the claim of CEOs Steve Case and Gerald Levin that the merged companies would "operate in the public interest" was hardly challenged by American journalists. Time-Warner has since fought to end all ownership limits in the cable industry; and Case, who formerly championed third-party access to cable broadband markets, changed his tune abruptly after the merger. Now that Case has been ousted, it is unclear whether he still favours oligopoly. According to Levin, "global media will be and is fast becoming the predominant business of the 21st century ... more important than government. It's more important than educational institutions and non-profits. We're going to need to have these corporations redefined as instruments of public service, and that may be a more efficient way to deal with society's problems than bureaucratic governments. Corporate dominance is going to be forced anyhow because when you have a system that is instantly available everywhere in the world immediately, then the old-fashioned regulatory system has to give way" (Levin). It doesn't require a lot of insight to understand that this "redefinition," this sleight of hand, does not protect the public from abuses of power: the dissolution of the "old-fashioned regulatory system" does not serve the public interest. [Image: From Lexicon by Andy Deck, artcontext.org/lexicon] As an artist who has adopted telecommunications networks and software as my medium, I am disappointed that a mercenary vision of electronic media's future seems to be the prevailing blueprint. The giantism of media corporations, and the ongoing deregulation of media consolidation (Ahrens), underscore the critical need for independent media sources. If it were just a matter of which cola to drink, it would not be of much concern, but media corporations control content. In this hyper-mediated age, content – whether produced by artists or journalists – crucially affects what people think about and how they understand the world. Content is not impervious to the software, protocols, and chicanery that surround its delivery. It is about time that people interested in independent voices stop believing that laissez faire capitalism is building a better media infrastructure. The German writer Hans Magnus Enzensberger reminds us that the media tyrannies that affect us are social products. The media industry relies on thousands of people to make the compromises necessary to maintain its course. The rapid development of the mind industry, its rise to a key position in modern society, has profoundly changed the role of the intellectual. He finds himself confronted with new threats and new opportunities.
Whether he knows it or not, whether he likes it or not, he has become the accomplice of a huge industrial complex which depends for its survival on him, as he depends on it for his own. He must try, at any cost, to use it for his own purposes, which are incompatible with the purposes of the mind machine. What it upholds he must subvert. He may play it crooked or straight, he may win or lose the game; but he would do well to remember that there is more at stake than his own fortune (Enzensberger 18). Some cultural leaders have recognised the important role that free software already plays in the infrastructure of the Internet. Among intellectuals there is undoubtedly a genuine concern about the emerging contours of corporate, global media. But more effective solidarity is needed. Interest in open source has tended to remain superficial, leading to trendy, cosmetic, and symbolic uses of terms like "open source" rather than to a deeper commitment to an open, public information infrastructure. Too much attention is focussed on what's "cool" and not enough on the road ahead. Various media specialists – designers, programmers, artists, and technical directors – make important decisions that affect the continuing development of electronic media. Many developers have failed to recognise (or care) that their decisions regarding media formats can have long reaching consequences. Web sites that use media formats which are unworkable for open source operating systems should be actively discouraged. Comparable technologies are usually available to solve compatibility problems. Going with the market flow is not really giving people what they want: it often opposes the work of thousands of activists who are trying to develop open source alternatives (see e.g. Greene). Average Internet users can contribute to a more innovative, free, open, and independent media – and being conscientious is not always difficult or unpleasant. One project worthy of support is the Internet browser Mozilla. Currently, many content developers create their Websites so that they will look good only in Microsoft's Internet Explorer. While somewhat understandable given the market dominance of Internet Explorer, this disregard for interoperability undercuts attempts to popularise standards-compliant alternatives. Mozilla, written by a loose-knit group of activists and programmers (some of whom are paid by AOL/Time-Warner), can be used as an alternative to Microsoft's browser. If more people use Mozilla, it will be harder for content providers to ignore the way their Web pages appear in standards-compliant browsers. The Mozilla browser, which is an open source initiative, can be downloaded from http://www.mozilla.org/. While there are many people working to create real and lasting alternatives to the monopolistic and technocratic dynamics that are emerging, it takes a great deal of cooperation to resist the media titans, the FCC, and the courts. Oddly enough, corporate interests sometimes overlap with those of the public. Some industrial players, such as IBM, now support open source software. For them it is mostly a business decision. Frustrated by the coercive control of Microsoft, they support efforts to develop another operating system platform. For others, including this writer, the open source movement is interesting for the potential it holds to foster a more heterogeneous and less authoritarian communications infrastructure. 
Many people can find common cause in this resistance to globalised uniformity and consolidated media ownership. The biggest challenge may be to get people to believe that their choices really matter, that by endorsing certain products and operating systems and not others, they can actually make a difference. But it's unlikely that this idea will flourish if artists and intellectuals don't view their own actions as consequential. There is a troubling tendency for people to see themselves as powerless in the face of the market. This paralysing habit of mind must be abandoned before the media will be free. Works Cited Ahrens, Frank. "Policy Watch." Washington Post (23 June 2002): H03. 30 March 2003 <http://www.washingtonpost.com/ac2/wp-dyn/A27015-2002Jun22?language=printer>. Batt, William. "How Our Towns Got That Way." 7 Oct. 1996. 31 March 2003 <http://www.esb.utexas.edu/drnrm/WhatIs/LandValue.htm>. Chester, Jeff. "Gerald Levin's Negative Legacy." Alternet.org 6 Dec. 2001. 5 March 2003 <http://www.democraticmedia.org/resources/editorials/levin.php>. Enzensberger, Hans Magnus. "The Industrialisation of the Mind." Raids and Reconstructions. London: Pluto Press, 1975. 18. Greene, Thomas C. "MS to Eradicate GPL, Hence Linux." 25 June 2002. 5 March 2003 <http://www.theregus.com/content/4/25378.php>. Hopper, D. Ian. "FBI Pushes for Cyber Ethics Education." Associated Press 10 Oct. 2000. 29 March 2003 <http://www.billingsgazette.com/computing/20001010_cethics.php>. Junger v. Daley. U.S. Court of Appeals for 6th Circuit. 00a0117p.06. 2000. 31 March 2003 <http://pacer.ca6.uscourts.gov/cgi-bin/getopn.pl?OPINION=00a0117p.06>. Levin, Gerald. "Millennium 2000 Special." CNN 2 Jan. 2000. Touretzky, D. S. "Gallery of CSS Descramblers." 2000. 29 March 2003 <http://www.cs.cmu.edu/~dst/DeCSS/Gallery>. Links http://artcontext.org/lexicon/ http://artcontext.org/progload http://pacer.ca6.uscourts.gov/cgi-bin/getopn.pl?OPINION=00a0117p.06 http://www.billingsgazette.com/computing/20001010_cethics.html http://www.cs.cmu.edu/~dst/DeCSS/Gallery http://www.democraticmedia.org/resources/editorials/levin.html http://www.esb.utexas.edu/drnrm/WhatIs/LandValue.htm http://www.mozilla.org/ http://www.theregus.com/content/4/25378.html http://www.washingtonpost.com/ac2/wp-dyn/A27015-2002Jun22?language=printer Citation reference for this article Substitute your date of access for Dn Month Year etc... MLA Style Deck, Andy. "Treadmill Culture." M/C: A Journal of Media and Culture 6.2 (2003). <http://www.media-culture.org.au/0304/04-treadmillculture.php>. APA Style Deck, A. (2003, Apr. 23). Treadmill Culture. M/C: A Journal of Media and Culture, 6(2). <http://www.media-culture.org.au/0304/04-treadmillculture.php>.
APA, Harvard, Vancouver, ISO, and other styles
30

Chen, Peter. "Community without Flesh." M/C Journal 2, no. 3 (May 1, 1999). http://dx.doi.org/10.5204/mcj.1750.

Full text
Abstract:
On Wednesday 21 April the Minister for Communications, Information Technology and the Arts introduced a piece of legislation into the Australian Senate to regulate the way Australians use the Internet. This legislation is presented within Australia's existing system of content regulation, a scheme that the Minister describes is not censorship, but merely regulation (Alston 55). Underlying Senator Alston's rhetoric about the protection of children from snuff film makers, paedophiles, drug pushers and other criminals, this long anticipated bill is aimed at reducing the amount of pornographic materials available via computer networks, a censorship regime in an age when regulation and classification are the words we prefer to use when society draws the line under material we want to see, but dare not allow ourselves access to. Regardless of any noble aspirations expressed by free-speech organisations such as Electronic Frontiers Australia relating to the defence of personal liberty and freedom of expression, this legislation is about porn. Under the Bill, Australia would proscribe our citizens from accessing: explicit depictions of sexual acts between consenting adults; mild non-violent fetishes; depictions of sexual violence, coercion or non-consent of any kind; depictions of child sexual abuse, bestiality, sexual acts accompanied by offensive fetishes, or exploitative incest fantasies; unduly detailed and/or relished acts of extreme violence or cruelty; explicit or unjustifiable depictions of sexual violence against non-consenting persons; and detailed instruction or encouragement in matters of crime or violence or the abuse of proscribed drugs. (OFLC) The Australian public, as a whole, favour the availability of sexually explicit materials in some form, with OFLC data indicating a relatively high degree of public support for X rated videos, the "high end" of the porn market (Paterson et al.). In Australia strict regulation of X rated materials in conventional media has resulted in a larger illegal market for these materials than the legalised sex industries of the ACT and Northern Territory (while 1.2 million X rated videos are legally sold out of the territories, 2 million are sold illegally in other jurisdictions, according to Patten). In Australia, censorship of media content has traditionally been based on the principles of the protection of society from moral harm and individual degradation, with specific emphasis on the protection of innocents from material they are not old enough for, or mentally capable of dealing with (Joint Select Committee on Video Material). Even when governments distanced themselves from direct personal censorship (such as Don Chipp's approach to the censorship of films and books in the late 1960s and early 1970s) and shifted the rationale behind censorship from prohibition to classification, the publicly stated aims of these decisions have been the support of existing community standards, rather than the imposition of strict legalistic moral values upon an unwilling society. In the debates surrounding censorship, and especially the level of censorship applied (rather than censorship as a whole), the question "what is the community we are talking about here?" has been a recurring theme. The standards that are applied to the regulation of media content, both online and off, are often the focus of community debate (a pluralistic community that obviously lacks "standards" by definition of the word). 
In essence the problem of maintaining a single set of moral and ethical values for the treatment of media content is a true political dilemma: a problem that lacks any form of solution acceptable to all participants. Since the introduction of the Internet as a "mass" medium (or more appropriately, a "popular" one), government indecision about how best to treat this new technology has precluded any form of content regulation other than the ad hoc use of existing non-technologically specific law to deal with areas of criminal or legally sanctionable intent (such as the use of copyright law, or the powers under the Crimes Act relating to the improper use of telecommunications services). However, indecision in political life is often associated with political weakness, and in the face of pressure to act decisively (motivated again by "community concern"), the Federal government has decided to extend the role of the Australian Broadcasting Authority to regulate and impose a censorship regime on Australian access to morally harmful materials. It is important to note the government's intention to censor access to, rather than the content of, the Internet. While material hosted in Australia (ignoring, of course, the "cyberspace" definitions of non-territorial existence of information stored in networks) will be censored (removed from Australian computers), the government, lacking extraterritorial powers to compel the owners of machines located offshore, intends to introduce some form of refused access list for materials located in other nations. What is interesting to consider in this context is the way that slight shifts of definitional paradigm alter the way this legislation can be considered. If information flows (upon which late capitalism is becoming more dependent) were to be located within the context of international law governing the flow of waterways, does the decision to prevent travel of morally dubious material through Australia's informational waterways impinge upon the riparian rights of other nations (the doctrine of fair usage without impeding flow; Godana 50)? Similarly, if we take Smith's extended definition of community within electronic transactional spaces (the maintenance of members' commitment to the group, monitoring and sanctioning behaviour, and the production and distribution of resources), then the current Bill proposes the regulation of the activities of one community by another (granted, a larger community that incorporates the former). Seen in this context, this legislation is a direct intervention in an established social order by a larger and less homogeneous group. It may be trite to quote the Prime Minister's view of community in this context, where he states "... It is free individuals, strong communities and the rule of law which are the best defence against the intrusive power of the state and against those who think they know what is best for everyone else" (Howard 21), possibly because the paradigm in which this new legislation is situated does not classify those Australians online (who number up to 3 million) as a community in their own right. In a way the Internet users of Australia have never identified themselves as a community, nor been asked to act in a communitarian manner. While discussions about the value of community models when applied to the Internet are still divided, there are those who argue that their use of networked services can be seen in this light (Worthington).
What this new legislation does, however, is preclude the establishment of public communities in order to meet the desires of government for some limits to be placed on Internet content. The Bill does allow for the development of "restricted access systems" that would allow pluralistic communities to develop and engage in a limited amount of self-regulation. These systems include privately accessible Intranets, or sites that restrict access through passwords or some other form of age verification technique. Thus, ignoring the minimum standards that will be required for these communities to qualify for some measure of self-regulatory freedom, what is unspoken here is that specific subsections of the Internet population may exist, provided they keep well away from the public gaze. A ghetto without physical walls. Under the Bill, a co-regulatory approach is endorsed by the government, favouring the establishment of industry codes of practice by ISPs and (or) the establishment of a single code of practice by the content hosting industry (content developers are relegated to yet undetermined complementary state legislation). However, this section of the Bill, in mandating a range of minimum requirements for these codes of practice, and denying plurality to the content providers, places an administrative imperative above any communitarian spirit. That is, that the Internet should have no more than one community, it should be an entity bound by a single guiding set of principles and be therefore easier to administer by Australian censors. This administrative imperative re-encapsulates the dilemma faced by governments dealing with the Internet: that at heart, the broadcast and print press paradigms of existing censorship regimes face massive administrative problems when presented with a communications technology that allows for wholesale publication of materials by individuals. Whereas the limited numbers of broadcasters and publishers have allowed the development of Australia's system of classification of materials (on a sliding scale from G to RC classifications or the equivalent print press version), the new legislation introduced into the Senate uses the classification scheme simply as a censorship mechanism: Internet content is either "ok" or "not ok". From a public administration perspective, this allows government to drastically reduce the amount of work required by regulators and eases the burden of compliance costs by ISPs, by directing clear and unambiguous statements about the acceptability of existing materials placed online. However, as we have seen in other areas of social policy (such as the rationalisation of Social Security services or Health), administrative expedience is often antipathetic to small communities that have special needs, or cultural sensitivities outside of mainstream society. While it is not appropriate to argue that public administration creates negative social impacts through expedience, what can be presented is that, where expedience is a core aim of legislation, poor administration may result. For many Australian purveyors of pornography, my comments will be entirely unhelpful as they endeavour to find effective ways to spoof offshore hosts or bone up (no pun intended) on tunnelling techniques. Given the easy way in which material can be reconstituted and relocated on the Internet, it seems likely that some form of regulatory avoidance will occur by users determined not to have their content removed or blocked. 
For those regulators given the unenviable task of censoring Internet access it may be worthwhile quoting from Sexing the Cherry, in which Jeanette Winterson describes the town: whose inhabitants are so cunning that to escape the insistence of creditors they knock down their houses in a single night and rebuild them elsewhere. So the number of buildings in the city is always constant but they are never in the same place from one day to the next. (43) Thus, while Winterson saw this game as a "most fulfilling pastime", it is likely to present real administrative headaches to ABA regulators when attempting to enforce the Bill's anti-avoidance clauses. The Australian government, in adapting existing regulatory paradigms to the Internet, has overlooked the informal communities who live, work and play within the virtual world of cyberspace. In attempting to meet a perceived social need for regulation with political and administrative expedience, it has ignored the potentially cohesive role of government in developing self-regulating communities who need little government intervention to produce socially beneficial outcomes. In proscribing activity externally to the realm in which these communities reside, what we may see is a new type of community, one whose desire for a feast of flesh leads them to evade the activities of regulators who operate in the "meat" world. What this may show us is that in a virtual environment, the regulators' net is no match for a world wide web. References Alston, Richard. "Regulation is Not Censorship." The Australian 13 April 1999: 55. Paterson, K., et. al. Classification Issues: Film, Video and Television. Sydney: The Office of Film and Literature Classification, 1993. Patten, F. Personal interview. 9 Feb. 1999. Godana, B.A. Africa's Shared Water Resources: Legal and Institutional Aspects of the Nile, Niger and Senegal River Systems. London: Frances Pinter, 1985. Howard, John. The Australia I Believe In: The Values, Directions and Policy Priorities of a Coalition Government Outlined in 1995. Canberra: Liberal Party, 1995. Joint Select Committee On Video Material. Report of the Joint Select Committee On Video Material. Canberra: APGS, 1988. Office of Film and Literature Classification. Cinema & Video Ratings Guide. 1999. 1 May 1999 <http://www.oflc.gov.au/classinfo.php>. Smith, Marc A. "Voices from the WELL: The Logic of the Virtual Commons." 1998. 2 Mar. 1999 <http://www.sscnet.ucla.edu/soc/csoc/papers/voices/Voices.htm>. Winterson, Jeanette. Sexing the Cherry. New York: Vintage Books. 1991. Worthington, T. Testimony before the Senate Select Committee on Information Technologies. Unpublished, 1999. Citation reference for this article MLA style: Peter Chen. "Community without Flesh: First Thoughts on the New Broadcasting Services Amendment (Online Services) Bill 1999." M/C: A Journal of Media and Culture 2.3 (1999). [your date of access] <http://www.uq.edu.au/mc/9905/bill.php>. Chicago style: Peter Chen, "Community without Flesh: First Thoughts on the New Broadcasting Services Amendment (Online Services) Bill 1999," M/C: A Journal of Media and Culture 2, no. 3 (1999), <http://www.uq.edu.au/mc/9905/bill.php> ([your date of access]). APA style: Author. (1999) Community without flesh: first thoughts on the new broadcasting services amendment (online services) bill 1999. M/C: A Journal of Media and Culture 2(3). <http://www.uq.edu.au/mc/9905/bill.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
