To see the other types of publications on this topic, follow the link: Algorithmes non centralisés.

Journal articles on the topic 'Algorithmes non centralisés'

Consult the top 15 journal articles for your research on the topic 'Algorithmes non centralisés.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Perez-Diaz, Alvaro, Enrico Harm Gerding, and Frank McGroarty. "Catching Cheats: Detecting Strategic Manipulation in Distributed Optimisation of Electric Vehicle Aggregators." Journal of Artificial Intelligence Research 67 (March 5, 2020): 437–70. http://dx.doi.org/10.1613/jair.1.11573.

Abstract:
Given the rapid rise of electric vehicles (EVs) worldwide, and the ambitious targets set for the near future, the management of large EV fleets must be seen as a priority. Specifically, we study a scenario where EV charging is managed through self-interested EV aggregators who compete in the day-ahead market in order to purchase the electricity needed to meet their clients' requirements. With the aim of reducing electricity costs and lowering the impact on electricity markets, a centralised bidding coordination framework employing a coordinator has been proposed in the literature. In order to improve privacy and limit the need for the coordinator, we propose a reformulation of the coordination framework as a decentralised algorithm, employing the Alternating Direction Method of Multipliers (ADMM). However, given the self-interested nature of the aggregators, they can deviate from the algorithm in order to reduce their energy costs. Hence, we study the strategic manipulation of the ADMM algorithm and, in doing so, describe and analyse different possible attack vectors and propose a mathematical framework to quantify and detect manipulation. Importantly, this detection framework is not limited to the considered EV scenario and can be applied to general ADMM algorithms. Finally, we test the proposed decentralised coordination and manipulation detection algorithms in realistic scenarios using real market and driver data from Spain. Our empirical results show that the decentralised algorithm's convergence to the optimal solution can be effectively disrupted by manipulative attacks, which achieve convergence to a different, non-optimal solution that benefits the attacker. With respect to the detection algorithm, results indicate that it achieves very high accuracy and significantly outperforms a naive benchmark.
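The decentralised reformulation described in this abstract is built on consensus ADMM. The update pattern of that method can be sketched in a few lines; the quadratic local costs and all numbers below are illustrative stand-ins, not the paper's day-ahead cost model:

```python
import numpy as np

def consensus_admm(a, rho=1.0, iters=200):
    """Consensus ADMM for min_z sum_i 0.5 * (z - a_i)**2.

    Each agent i keeps its cost parameter a_i private and maintains a
    local copy x_i; agreement with the shared variable z is enforced
    through scaled duals u_i, so no agent reveals a_i to the others."""
    a = np.asarray(a, dtype=float)
    x = np.zeros_like(a)        # local primal variables
    u = np.zeros_like(a)        # scaled dual variables
    z = 0.0                     # consensus variable (the coordinator's role)
    for _ in range(iters):
        # local x-update: closed form for the quadratic local cost
        x = (a + rho * (z - u)) / (1.0 + rho)
        # z-update: average of x_i + u_i (a simple averaging step)
        z = float(np.mean(x + u))
        # dual update: ascent on the consensus constraint x_i = z
        u = u + x - z
    return z

z_star = consensus_admm([1.0, 2.0, 6.0])
# z_star approaches the centralised optimum, here the mean of the a_i
```

A manipulative agent of the kind the paper studies would deviate from these updates (for example, by reporting a biased x_i), dragging z toward a solution it prefers.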
2

Daccò, Edoardo, Davide Falabretti, Valentin Ilea, Marco Merlo, Riccardo Nebuloni, and Matteo Spiller. "Decentralised Voltage Regulation through Optimal Reactive Power Flow in Distribution Networks with Dispersed Generation." Electricity 5, no. 1 (March 12, 2024): 134–53. http://dx.doi.org/10.3390/electricity5010008.

Abstract:
The global capacity for renewable electricity generation has surged, with distributed photovoltaic generation being the primary driver. The increasing penetration of non-programmable renewable Distributed Energy Resources (DERs) presents challenges for properly managing distribution networks, requiring advanced voltage regulation techniques. This paper proposes an innovative decentralised voltage strategy that treats DERs, particularly inverter-based ones, as autonomous regulators in compliance with state-of-the-art European technical standards and grid codes. The proposed method uses an optimal reactive power flow (ORPF) that minimises voltage deviations along all the medium-voltage nodes. To assess the algorithm's performance, it has been applied to a small-scale test network and to a real Italian medium-voltage distribution network, and compared with a fully centralised ORPF. The results show that the proposed decentralised autonomous strategy effectively improves voltage profiles in both case studies, reducing voltage deviation by a few percentage points; these results are further confirmed through an analysis conducted over several days to observe how the seasons affect the results.
3

Murphy, DC, and DB Saleh. "Artificial Intelligence in plastic surgery: What is it? Where are we now? What is on the horizon?" Annals of The Royal College of Surgeons of England 102, no. 8 (October 2020): 577–80. http://dx.doi.org/10.1308/rcsann.2020.0158.

Abstract:
Introduction An increasing quantity of data is required to guide precision medicine and advance future healthcare practices, but current analytical methods are often overwhelmed. Artificial intelligence (AI) provides a promising solution. Plastic surgery is an innovative surgical specialty expected to implement AI into current and future practices. It is important for all plastic surgeons to understand how AI may affect current and future practice, and to recognise its potential limitations. Methods Peer-reviewed published literature and online content were comprehensively reviewed. We report current applications of AI in plastic surgery and possible future applications based on published literature and continuing scientific studies, and detail its potential limitations and ethical considerations. Findings Current machine learning models using convolutional neural networks can evaluate breast mammography and differentiate benign and malignant tumours as accurately as specialist doctors, and motion-sensor surgical instruments can collate real-time data to advise intraoperative technical adjustments. Centralised big data portals are expected to collate large datasets to accelerate understanding of disease pathogeneses and best practices. Information obtained using computer vision could guide intraoperative surgical decisions in unprecedented detail, and semi-autonomous surgical systems guided by AI algorithms may enable improved surgical outcomes in low- and middle-income countries. Surgeons must collaborate with computer scientists to ensure that AI algorithms inform clinically relevant health objectives and are interpretable. Ethical concerns, such as systematic biases causing non-representative conclusions for under-represented patient groups, patient confidentiality, and the limitation of AI by the quality of the data input, suggest that AI will accompany the plastic surgeon rather than replace them.
4

Ye, Qiming, Yuxiang Feng, Eduardo Candela, Jose Escribano Macias, Marc Stettler, and Panagiotis Angeloudis. "Spatial-Temporal Flows-Adaptive Street Layout Control Using Reinforcement Learning." Sustainability 14, no. 1 (December 23, 2021): 107. http://dx.doi.org/10.3390/su14010107.

Abstract:
The complete streets scheme makes seminal contributions to securing the basic public right-of-way (ROW), improving road safety, and maintaining high traffic efficiency for all modes of commute. However, such a popular street design paradigm also faces endogenous pressures, such as the appeal for a more balanced ROW for non-vehicular users. In addition, the deployment of Autonomous Vehicle (AV) mobility is likely to challenge the conventional use of street space as well as this scheme. Previous studies have devised automated control techniques for specific road management issues, such as traffic light control and lane management, whereas models and algorithms that dynamically calibrate the ROW of road space in response to travel demands and place-making requirements still represent a research gap. This study proposes a novel optimal control method that decides the ROW of road space assigned to driveways and sidewalks in real time. To solve this optimal control task, a reinforcement learning method is introduced that employs a microscopic traffic simulator, namely SUMO, as its environment. The model was trained for 150 episodes using a four-legged intersection and a day's joint AV-pedestrian travel demand. Results evidenced the effectiveness of the model in both symmetric and asymmetric road settings. After 150 training episodes, our proposed model significantly increased its comprehensive reward, reflecting both pedestrian and vehicular traffic efficiency, and increased the sidewalk ratio by 10.39%. Decisions on the balanced ROW are optimised, as 90.16% of the edges decrease the driveway supply and raise sidewalk shares by approximately 9%. Moreover, during 18.22% of the tested time slots, a lane-width equivalent of space is shifted from driveways to sidewalks, minimising the travel costs for both an AV fleet and pedestrians. Our study primarily contributes to the modelling architecture and algorithms for centralised, real-time ROW management. Prospective applications of this method are likely to facilitate AV mobility-oriented road management and pedestrian-friendly street space design in the near future.
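The control task in this abstract pairs a reinforcement learning agent with a traffic simulator. As a heavily simplified stand-in (the study uses SUMO and joint AV-pedestrian demand; the discretised sidewalk shares, the reward peaking at an assumed target share of 0.4, and all hyperparameters below are illustrative assumptions, not the paper's setup), tabular Q-learning over the ROW allocation looks like:

```python
import random

# Toy stand-in for a simulator: state is the sidewalk share of the
# right-of-way (discretised); reward penalises distance from a
# demand-dependent target share (0.4 here, an illustrative assumption).
SHARES = [0.2, 0.3, 0.4, 0.5, 0.6]
ACTIONS = [-1, 0, +1]          # shrink, keep, or grow the sidewalk one step
TARGET = 0.4

def step(s_idx, a):
    s_idx = min(max(s_idx + a, 0), len(SHARES) - 1)
    reward = -abs(SHARES[s_idx] - TARGET)   # proxy for joint ped/vehicle efficiency
    return s_idx, reward

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(len(SHARES)) for a in range(len(ACTIONS))}
    for _ in range(episodes):
        s = rng.randrange(len(SHARES))      # random initial layout
        for _ in range(20):                 # steps per episode
            # epsilon-greedy action selection
            a = rng.randrange(len(ACTIONS)) if rng.random() < eps else \
                max(range(len(ACTIONS)), key=lambda a_: Q[(s, a_)])
            s2, r = step(s, ACTIONS[a])
            # standard Q-learning temporal-difference update
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, a_)] for a_ in range(len(ACTIONS))) - Q[(s, a)])
            s = s2
    return Q

Q = train()
# the greedy policy moves the sidewalk share toward TARGET from either side
```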
5

Pauletto, Christian. "Gestion publique, agilité et innovation : l’expérience suisse du dispositif de crédits COVID-19." Revue Internationale des Sciences Administratives Vol. 90, no. 1 (April 2, 2024): 109–25. http://dx.doi.org/10.3917/risa.901.0109.

Abstract:
In March 2020, the Swiss administration designed and launched, in only ten days, a guaranteed-loan programme for businesses. The implementation phase was also short: less than five months. This article examines how this was possible given the complexity of the institutional framework and the innovative nature of the scheme, particularly with regard to information technology, including major advances in Swiss e-government practice: the scheme used algorithms to check companies' applications, a unique business identification number (IDE) was deployed at scale, Swiss banks were involved in designing and implementing the project, and some of their clients' operations were centralised on a government online platform. We present the essential features of the process through an analysis of how operations unfolded over this ten-day period. We also describe the circumstances and context that led to radically new forms of public governance. Finally, we analyse the outcome to highlight the innovative characteristics of the deliverable. The case studied was short-lived and unplanned, so no data or observations could be collected before or during its course. This study is therefore based mainly on ex-post investigation. The project participants developed an informal organisational system without clearly defined mandates, structures, or roles. The fact that the deliverable was well defined acted as a driving force in the process. 
Several features of the project, such as effective networks, a real-time flow of information, flexible roles, horizontal management, and fast iterative sub-processes, resemble those of 'agile organisations'. Tasks were executed in parallel rather than sequentially. Points for practitioners It is striking that few academic studies have so far been published on the lessons learned from the unique experience of the emergency support packages deployed during the pandemic, including at the intra-organisational level. Research could be conducted on the replicability of these measures, both with a view to future crises and to adjusting standard public management practices. Our contribution aims to inform this discussion and to inspire practitioners in public administrations and government entities. It focuses on the relationship between government crisis management and the digital transformation of administrative procedures using IT tools.
6

Waldman, Deane. "Replace government healthcare with patient-controlled health care." Health Economics and Management Review 5, no. 1 (March 31, 2024): 80–89. http://dx.doi.org/10.61093/hem.2024.1-06.

Abstract:
The purpose of the article is to analyse the shortcomings of state-run healthcare systems and to substantiate the need for and feasibility of a transition to a patient-controlled model. It is shown that patient-controlled health care, free from centralised domination, can provide timely, high-quality, compassionate medical care at a price affordable for both individuals and the nation. It significantly expands the patient's rights and opportunities to choose a doctor according to their own preferences and financial capabilities. The patient pays for the medical service directly to the doctor, who no longer faces restrictions on the choice of treatment protocols or the prescription of medicines. The analysis in the article is based mainly on the example of the United States, where federal control over residents is both direct (194 million Americans are covered by Medicaid, Medicare, Tricare or EMTALA) and indirect (138 million Americans have private insurance). In addition, aspects of the analysis also apply to single-payer countries (Canada, the United Kingdom, France and Spain). The article examines the shortcomings of the current US healthcare model in terms of its compliance with the Constitution. It is noted that, according to the Tenth Amendment to the US Constitution, healthcare powers are not among the 18 powers delegated to the federal government. Non-compliance with the law is also observed: government control or administration of state Medicaid programmes is contrary to US law, as is the erosion of medical autonomy, i.e. the patient's ability to make personal medical decisions without undue influence from the state. Another disadvantage of a state-run healthcare system is that the state-controlled payment structure violates the fiduciary relationship between doctor and patient, as doctors' authority to make medical decisions is limited. 
It also calls into question whether the United States observes the citizen's "right" to receive medical care, interpreted as a personal service by a professional caregiver in which a patient can demand the desired care and the provider cannot refuse. The article emphasises that state-run healthcare systems create a conflict between the efficient use of financial resources and the effective provision of medical care. This issue is considered through the prism of the interests of the main stakeholders: shareholders of companies operating in the sector, politicians, patients, healthcare providers and administrators. As evidence of the inefficiency of the existing US healthcare system compared with other countries, comparative data on life expectancy and the incidence of a number of diseases are provided. The author also discusses the problem of rationing, i.e. limited access to medical care for patients with public health insurance owing to a shortage of healthcare professionals accepting new Medicaid patients. This is caused by low reimbursement rates, overly bureaucratic payment-verification procedures, overregulation of doctor-patient relations and of procedures for reviewing medical errors, the need to comply with population-based clinical algorithms, and so on. It leads to a decline in the quality of medical care, an increase in patient deaths while waiting for care, a risk of complications due to delays in diagnosis and timely treatment, neglect of the needs of unique, individual patients, and a greater likelihood of medical errors. All of the above disadvantages of state-run health care are obviated when the patient is in charge, under patient-controlled health care.
7

Zhang, Zhixun, Keke Zhang, Leizheng Shu, Zhencai Zhu, and Meijiang Zhou. "Distributed angle‐only orbit determination algorithm for non‐cooperative spacecraft based on factor graph." IET Radar, Sonar & Navigation, May 22, 2024. http://dx.doi.org/10.1049/rsn2.12580.

Abstract:
Bayesian filtering provides an effective approach to the orbit determination of a non-cooperative target using angle measurements from multiple CubeSats. However, existing methods face challenges such as low reliability and limited estimation accuracy. Two distributed filtering algorithms based on factor graphs, employed in the sub-parent and distributed cluster spacecraft architectures, are proposed. Two factor graphs representing the different cluster spacecraft structures are designed, and distributed Bayesian filtering is implemented within these models. The Gaussian messages transmitted between nodes and the probability distributions of the variable nodes are calculated using the derived non-linear Gaussian belief propagation algorithm. In the sub-parent spacecraft architecture, Gaussian messages propagate from the deputy spacecraft to the chief spacecraft, and the estimation accuracy is shown to converge to that of the centralised extended Kalman filter (EKF). Simulation results indicate that the algorithm enhances system robustness to observation-node failures without compromising accuracy. In the distributed spacecraft architecture, neighbouring spacecraft iteratively exchange Gaussian messages. The accuracy of the algorithm rapidly approaches that of the centralised EKF, benefiting from the efficient and unbiased transmission of observational information. Compared with existing distributed consensus filtering algorithms, the proposed algorithm improves estimation accuracy and reduces the number of iterations needed to achieve consensus.
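The Gaussian messages this abstract refers to are conveniently handled in information (canonical) form, where fusing independent messages reduces to summation; this is the basic operation that lets a distributed scheme match a centralised estimate. A one-dimensional sketch with illustrative numbers (not the paper's orbit-determination model):

```python
# Gaussian messages in information form: (Lambda, eta) with
# Lambda = 1/variance (precision) and eta = precision * mean.
# Fusing independent messages is just summation, which is why belief
# propagation over a factor graph can recover the centralised estimate.

def to_info(mu, var):
    """Convert a Gaussian (mean, variance) to information form."""
    return 1.0 / var, mu / var

def fuse(messages):
    """Sum information-form messages; return the fused (mean, variance)."""
    lam = sum(m[0] for m in messages)
    eta = sum(m[1] for m in messages)
    return eta / lam, 1.0 / lam

# three observers of the same scalar state, with different noise levels
msgs = [to_info(1.2, 0.5), to_info(0.9, 0.25), to_info(1.1, 1.0)]
mean, var = fuse(msgs)
# identical to the centralised minimum-variance combination of the
# three measurements; precise observers dominate the fused estimate
```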
8

Nagaraja G, Chandan K J, Amrutha S Dukandar, Akash N, and Charitha Reddy. "FINE: A Framework for Distributed Learning on Incomplete Observations for Heterogeneous Crowdsensing Networks." International Journal of Advanced Research in Science, Communication and Technology, May 5, 2023, 23–29. http://dx.doi.org/10.48175/ijarsct-9775.

Abstract:
Numerous crowdsensing applications have been developed recently in mobile social networks and vehicle networks. How to implement an accurate distributed learning process to estimate the parameters of an unknown model in crowdsensing is a significant issue, because centralised learning methods entail unreliable data gathering, expensive central servers, and privacy concerns. For this reason, we propose FINE, a distributed learning framework for imperfect data and non-smooth estimation, along with its design, analysis, and assessment. Our design, which is focused on creating a workable framework for learning parameters in crowdsensing networks accurately and efficiently, generalises earlier learning techniques by supporting heterogeneous dimensions of the data records observed by various nodes, as well as minimisation based on non-smooth error functions. In particular, FINE makes use of a distributed dual averaging technique that efficiently minimises non-smooth error functions, and a novel distributed record completion algorithm that enables each node to reach the global consensus through effective communication with its neighbours. All of these algorithms converge, as shown by our analysis, and their convergence rates are also obtained to support their efficacy. Through experiments on synthetic and real networks, we assess how well our framework performs.
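The distributed dual averaging technique named in this abstract can be illustrated on a toy non-smooth problem; the objective, gossip matrix, and step sizes below are illustrative assumptions rather than FINE's actual formulation:

```python
import numpy as np

# Distributed dual averaging on a toy non-smooth problem: three nodes
# jointly minimise f(x) = sum_i |x - a_i| while each node only takes
# subgradients of its own term and gossips dual variables through a
# doubly stochastic weight matrix P. All numbers are illustrative.
a = np.array([0.0, 1.0, 5.0])      # local data; the global minimiser is median(a) = 1
P = np.full((3, 3), 1.0 / 3.0)     # complete-graph averaging weights

z = np.zeros(3)                    # dual variables (accumulated subgradients)
x = np.zeros(3)                    # primal iterates, one per node
for t in range(1, 20001):
    g = np.sign(x - a)             # subgradient of the local term |x_i - a_i|
    z = P @ z + g                  # mix neighbours' duals, add the local subgradient
    x = -z / np.sqrt(t)            # projection step with psi(x) = x^2/2, alpha_t = 1/sqrt(t)
# every node's iterate approaches the global minimiser, the median of a
```

The gossip step is what removes the central server: each node only ever sees its neighbours' dual variables, never the raw data a_i of other nodes.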
9

Jain, Sambhav, and Reshma Rastogi. "Multi-label Minimax Probability Machine with Multi-manifold Regularisation." Research Reports on Computer Science, December 30, 2021, 44–63. http://dx.doi.org/10.37256/rrcs.1120211193.

Abstract:
Semi-supervised learning, i.e., learning from a large amount of unlabelled data while exploiting a small percentage of labelled data, has attracted considerable attention in recent years. The semi-supervised problem is handled mainly using graph-based Laplacian and Hessian regularisation methods. However, neither the Laplacian method, which leads to poor generalisation, nor the Hessian energy can properly forecast data points beyond the range of the domain. Thus, in this paper, a combined Laplacian-Hessian semi-supervised method is proposed, which can both predict the data points and enhance the stability of the Hessian regulariser. Specifically, we propose a Laplacian-Hessian Multi-label Minimax Probability Machine, a multi-manifold regularisation framework. The proposed classifier requires only mean and covariance information; therefore, assumptions about the class-conditional distributions are not required; rather, an upper bound on the misclassification probability of future data is obtained explicitly. Furthermore, the proposed model can effectively utilise geometric information via a combination of Hessian-Laplacian manifold regularisation. We also show that the proposed method can be kernelised on the basis of a theorem similar to the representer theorem for handling non-linear cases. Extensive experimental comparisons of our proposed method with related multi-label algorithms on well-known multi-label datasets demonstrate the validity and comparable performance of our proposed approach.
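The Laplacian half of the manifold regularisation this abstract describes can be sketched directly: labels are smoothed over a similarity graph by penalising f^T L f. The data, similarity kernel, and regularisation weight below are illustrative, and the Hessian term and minimax machinery of the paper are omitted:

```python
import numpy as np

# Graph-based manifold regularisation on a toy 1-D dataset: solve
#   min_f ||J (f - y)||^2 + gamma * f^T L f
# whose optimality condition is the linear system (J + gamma*L) f = J y.
X = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])   # two tight clusters
W = np.exp(-np.subtract.outer(X, X) ** 2)       # Gaussian similarity graph
L = np.diag(W.sum(axis=1)) - W                  # unnormalised graph Laplacian

y = np.array([1.0, 0, 0, -1.0, 0, 0])           # one labelled point per cluster
J = np.diag([1.0, 0, 0, 1.0, 0, 0])             # selects the labelled points
gamma = 0.1
f = np.linalg.solve(J + gamma * L, J @ y)
# the unlabelled points inherit the label of their cluster: f[1], f[2]
# come out near +1 and f[4], f[5] near -1
```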
10

Jain, Sambhav, and Reshma Rastogi. "Multi-label Minimax Probability Machine with Multi-manifold Regularisation." Research Reports on Computer Science, December 30, 2021, 44–63. http://dx.doi.org/10.37256/rrcs.1120221193.

Abstract:
Semi-supervised learning, i.e., learning from a large amount of unlabelled data while exploiting a small percentage of labelled data, has attracted considerable attention in recent years. The semi-supervised problem is handled mainly using graph-based Laplacian and Hessian regularisation methods. However, neither the Laplacian method, which leads to poor generalisation, nor the Hessian energy can properly forecast data points beyond the range of the domain. Thus, in this paper, a combined Laplacian-Hessian semi-supervised method is proposed, which can both predict the data points and enhance the stability of the Hessian regulariser. Specifically, we propose a Laplacian-Hessian Multi-label Minimax Probability Machine, a multi-manifold regularisation framework. The proposed classifier requires only mean and covariance information; therefore, assumptions about the class-conditional distributions are not required; rather, an upper bound on the misclassification probability of future data is obtained explicitly. Furthermore, the proposed model can effectively utilise geometric information via a combination of Hessian-Laplacian manifold regularisation. We also show that the proposed method can be kernelised on the basis of a theorem similar to the representer theorem for handling non-linear cases. Extensive experimental comparisons of our proposed method with related multi-label algorithms on well-known multi-label datasets demonstrate the validity and comparable performance of our proposed approach.
11

Decerf, Benoit, Gilles Grandjean, and Tom Truyts. "Numéro 138 - mai 2018." Regards économiques, October 12, 2018. http://dx.doi.org/10.14428/regardseco.v1i0.12493.

Abstract:
This issue of Regards économiques analyses the decree governing enrolment in the first year of secondary school in the Wallonia-Brussels Federation. The enrolment decree requires parents to submit a list of up to ten schools in which they would like to enrol their child, ranked in order of preference. In parallel, the decree sets the criteria used to determine which children receive priority when demand for a school exceeds the number of places available. The allocation of available places is carried out by an algorithm based on the preferences submitted by parents and on the priority criteria. As far as possible, pupils would like to be able to enrol in the schools they consider best suited to them. Does the enrolment decree achieve this objective? The 2018 report of the Commission Interréseaux des Inscriptions reveals that, as of 11 April 2018, 91.13% of pupils in the Wallonia-Brussels Federation (FWB) were assured of a place in "the school of their first preference". In Brussels, the figure was 77.85%. While these figures seem encouraging, they should be put into perspective, because they measure the percentage of pupils able to enrol in the school they ranked at the top of their list. But the current decree gives parents an incentive not to rank schools in the order of their true preferences, because it allocates 80% of a school's available places on the basis of first choices and because a pupil's priority increases at a school they have ranked highly. By ranking schools strategically, pupils can sometimes obtain a better assignment than by ranking them in order of preference. These figures therefore do not really tell us what proportion of pupils obtained their preferred school. 
We explain in this article that parents' strategic behaviour entails a series of drawbacks. The decree complicates parents' task; it favours the best-informed pupils at the expense of others; it is a source of stress when drawing up the ranking and of regret once the results are known; it may lead school management to influence the ranking chosen by parents; it can create situations in which swapping schools would allow pupils to improve their situation; it does not guarantee a pupil a place in a school where they have higher priority than another pupil able to enrol there; and it favours pupils with outside options at the expense of those without. On the other hand, it may induce an allocation in which pupils with a strong preference for a sought-after school can enrol there because they take the risk of ranking it first, while others opt for safe strategies. How widespread is this strategic behaviour and, above all, what are its effects? The answer depends in particular on the tension between supply and demand at the schools pupils target. If pupils were guaranteed a place in a school they like, even if it is not their favourite, the enrolment decree and the strategic rankings it induces would pose no problems. Conversely, when all the schools in which parents would like to enrol their child are in high demand, each school's position in the ranking is crucial. In that case, pupils may end up without any school because they ranked schools one way rather than another. 
Strategic rankings therefore matter most in areas that are densely populated relative to the places available (the north-east of Brussels) or for parents determined to enrol their children in a school with a good reputation. Bear in mind, however, that these difficulties are not the result of using an algorithm, but rather stem from insufficient supply relative to demand. If the perverse effects of strategic rankings were judged too significant, the enrolment decree should be replaced by a non-manipulable procedure. Note that such a change would not require modifying the criteria determining the priorities assigned to pupils. However, as explained in Maniquet (2009), changing the algorithm will alter the procedure's performance in terms of efficiency and respect for the priorities set by the decree, whether improving or worsening it. The choice of another procedure can be informed by the results of the many scientific studies that have proposed and evaluated various centralised enrolment procedures.
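The non-manipulable procedures referred to in this abstract are, in the school-choice literature, typically variants of student-proposing deferred acceptance (Gale-Shapley), under which ranking schools truthfully is a dominant strategy for pupils. A minimal sketch with hypothetical students, schools, priorities, and capacities:

```python
def deferred_acceptance(prefs, priority, capacity):
    """Student-proposing deferred acceptance.

    prefs: student -> list of schools in (truthful) preference order
    priority: school -> list of students, highest priority first
    capacity: school -> number of available places"""
    rank = {s: {st: i for i, st in enumerate(order)} for s, order in priority.items()}
    next_choice = {st: 0 for st in prefs}   # next school each student will try
    held = {s: [] for s in priority}        # tentatively admitted students
    free = list(prefs)                      # students currently without a tentative seat
    while free:
        st = free.pop()
        if next_choice[st] >= len(prefs[st]):
            continue                        # list exhausted: student stays unassigned
        school = prefs[st][next_choice[st]]
        next_choice[st] += 1
        held[school].append(st)
        held[school].sort(key=lambda x: rank[school][x])
        if len(held[school]) > capacity[school]:
            free.append(held[school].pop())  # displace the lowest-priority holder
    return held

prefs = {"ana": ["A", "B"], "bob": ["A", "B"], "eve": ["A", "B"]}
priority = {"A": ["bob", "ana", "eve"], "B": ["ana", "bob", "eve"]}
capacity = {"A": 1, "B": 2}
match = deferred_acceptance(prefs, priority, capacity)
# bob (highest priority at A) gets the single place at A; ana and eve get B
```

Because rejections are only tentative until the process stabilises, no pupil can gain by misreporting their ranking, which removes the strategic incentives the article describes.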
12

Decerf, Benoit, Gilles Grandjean, and Tom Truyts. "Numéro 138 - mai 2018." Regards économiques, October 12, 2018. http://dx.doi.org/10.14428/regardseco2018.05.01.

Abstract:
This issue of Regards économiques analyses the decree organising enrolment in the first year of secondary school in the Fédération Wallonie-Bruxelles. The enrolment decree requires parents to submit a list of up to ten schools in which they would like to enrol their child, ranked in order of preference. In parallel, the decree sets the criteria used to determine which children receive priority when demand for a school exceeds the number of places available. The allocation of available places is carried out by an algorithm based on the preferences submitted by parents and the priority criteria. As far as possible, pupils would like to be able to enrol in the schools they consider suit them best. Does the enrolment decree achieve this objective? The 2018 report of the Commission Interréseaux des Inscriptions reveals that, as of 11 April 2018, 91.13% of pupils in the Fédération Wallonie-Bruxelles (FWB) were assured of a place in “the school of their first preference”. In Brussels, the figure was 77.85%. While these figures seem encouraging, they should be put into perspective, because they measure the percentage of pupils able to enrol in the school they ranked at the top of their list. Yet the current decree gives parents an incentive not to rank schools in the order of their true preferences, because it allocates 80% of the places available in a school on the basis of first choices, and because a pupil’s priority in a school increases when it is ranked highly. By ranking schools strategically, pupils can sometimes obtain a better assignment than by ranking them according to their true preferences. These figures therefore do not really tell us what proportion of pupils obtained their preferred school. 
We explain in this article that parents’ strategic behaviour brings a series of drawbacks. The decree complicates parents’ task; it favours the best-informed pupils at the expense of the others; it is a source of stress when drawing up the ranking and of regret once the results are known; it can lead school heads to influence the ranking chosen by parents; it can generate situations in which swapping schools would make pupils better off; it does not guarantee a pupil a place in a school where he or she has a higher priority than another pupil who is able to enrol there; and it favours pupils who have outside options at the expense of those who do not. On the other hand, it can induce an allocation in which pupils with a strong preference for a sought-after school are able to enrol there because they take the risk of ranking it first, while others opt for safe strategies. How widespread is this strategic behaviour, and above all what are its effects? The answer depends in particular on the tension between supply and demand at the schools pupils target. If pupils were guaranteed a place in a school they like, even if it is not their favourite, the enrolment decree and the strategic rankings it induces would pose no problem. Conversely, when all the schools in which parents would like to enrol their child are in high demand, each school’s position in the ranking is crucial. In that case, pupils can end up without a school simply because they ranked schools one way rather than another. 
Strategic rankings therefore matter most in areas that are densely populated relative to the places available (the north-east of Brussels) or for parents determined to enrol their children in a school with a good reputation. Bear in mind, however, that these difficulties are not the result of using an algorithm; they stem instead from supply falling short of demand. If the perverse effects of strategic rankings were judged too great, the enrolment decree should be replaced by a non-manipulable procedure. Note that such a change would not require modifying the criteria determining pupils’ priorities. On the other hand, as explained in Maniquet (2009), changing the algorithm will alter the procedure’s performance in terms of efficiency and respect for the priorities set by the decree, whether by improving or worsening it. The choice of another procedure can be informed by the results of the many scientific studies that have proposed and evaluated various centralised enrolment procedures.
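The non-manipulable procedures mentioned above are well studied in the school-choice literature; the standard example is student-proposing deferred acceptance (Gale-Shapley), under which no pupil can gain by ranking schools untruthfully. The abstract does not specify an algorithm, so the following is only an illustrative sketch with hypothetical names and data:

```python
def deferred_acceptance(student_prefs, school_priorities, capacities):
    """Student-proposing deferred acceptance: strategy-proof for students."""
    next_choice = {s: 0 for s in student_prefs}    # next school to apply to
    tentative = {school: [] for school in school_priorities}
    unmatched = set(student_prefs)

    while unmatched:
        s = unmatched.pop()
        prefs = student_prefs[s]
        if next_choice[s] >= len(prefs):
            continue  # list exhausted; pupil remains unassigned
        school = prefs[next_choice[s]]
        next_choice[s] += 1
        tentative[school].append(s)
        # keep only the highest-priority applicants, up to capacity
        rank = {st: i for i, st in enumerate(school_priorities[school])}
        tentative[school].sort(key=lambda st: rank[st])
        while len(tentative[school]) > capacities[school]:
            unmatched.add(tentative[school].pop())  # reject lowest priority

    return {st: sc for sc, sts in tentative.items() for st in sts}

# Hypothetical example: two schools, three pupils.
students = {"ana": ["A", "B"], "ben": ["A", "B"], "eva": ["B", "A"]}
priorities = {"A": ["ben", "ana", "eva"], "B": ["ana", "eva", "ben"]}
capacities = {"A": 1, "B": 2}
print(deferred_acceptance(students, priorities, capacities))
```

Because assignments are only tentative until everyone is placed, a pupil rejected late can still claim a seat at a lower-ranked school, which is precisely why truthful ranking is a dominant strategy here, unlike under the decree's first-choice rule.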
APA, Harvard, Vancouver, ISO, and other styles
13

Smith, Naomi, Alexia Maddox, Clare Southerton, and Stephanie Alice Baker. "Conspiracy." M/C Journal 25, no. 1 (March 17, 2022). http://dx.doi.org/10.5204/mcj.2892.

Full text
Abstract:
Conspiracies have been a cultural mainstay for decades (Melley). While often framed as an American problem (Melley), social media has contributed to their global reach (Gerts et al.). Bruns, Harrington, and Hurcombe have traced the contemporary movement of conspiracy theories into the cultural mainstream from fringe conspiracist groups on social media platforms such as Facebook through their greater uptake in more diverse communities and to substantial amplification by celebrities, sports stars, and media outlets. Consequently, conspiracy theories that were once the product of subcultural groups have increasingly mixed into popular and authoritative media (Marwick and Lewis) and entertainment (Hyzen and van den Bulck; van den Bulck and Hyzen). Over the past five years conspiracy theories, whether they be anti-vaccination, politically motivated, or pop-cultural artefacts, have found their way into mainstream cultural discourse. Increasingly, conspiracy theories, once regarded as the domain of largely harmless eccentrics, are having real, material effects. These real-world harms are evident across a number of domains of social life, from the storming of the US Capitol on 6 January 2021 (Moskalenko and McCauley) to the effects of vaccine refusal and resistance which continue to stymie attempts to control the global COVID-19 pandemic (Baker, Wade, and Walsh). Digital spaces and communities have made conspiracy theories more accessible and transmissible. Conspiracies are persistent, resistant, and pervasive. The illusion of neat segmentation between the sites of conspiracy theorising and mainstream media content generation has vanished. However, our understanding of what motivates those engaging with and disseminating conspiracy theories is still partial and incomplete. While there is a large corpus of social psychological research into conspiracies, much of this research is focused on deficits in logic, reasoning, and/or personality traits. 
The focus on the ‘deficits’ of those drawn to conspiracy theories is also reflected in popular discourse, where those believing in conspiracy theories are described with a variety of synonyms for the word ‘stupid’ (Chu, Yuan, and Liu). In this issue, we approach the topic of conspiracy from a different standpoint, exploring the sociological conditions that enable conspiracies to flourish. We have assembled a variety of articles, both empirical and conceptual, from which a more complex social picture of conspiracy emerges. To begin examining the complex social life of conspiracy theories, our feature article by Bronwyn Fredericks, Abraham Bradfield, Sue McAvoy, James Ward, Shea Spierings, Troy Combo, and Agnes Toth-Peter cuts through the conspiracy frame to a very real world example of the consequences of conspiracy. They examine the specific social contexts and media ecologies through which COVID-19 conspiracies have flourished in some (not all) Indigenous communities in Australia. Their analysis highlights the detrimental impacts of unresolved elements of settler colonialism that propagate conspiracist thinking within these communities. Through research conducted with stakeholder participants from the Indigenous health sector (both Indigenous and non-Indigenous) they outline a series of recommendations for how we can constructively address the demonstrated impact of circulating misinformation upon Indigenous communities in Australia. In their recommendations they reinforce the need to centralise Indigenous voices and expertise in our social and political life. Other articles in the issue explore how to theorise conspiracism, present examples of contemporary conspiracism in digital media, unpack methods for how to conduct research in this socially contentious space, and highlight the consequences of conspiracies. They draw examples of communities entangled with conspiracy theories and media environments across the world. 
Absence and presence (of evidence) are both important elements in conspiracy theorising. In contrast to scholarship that focusses on the spread of conspiracy-style misinformation, Tyler Easterbrook’s examination of dead links or ‘link rot’ online demonstrates how the absence and removal of information can be a powerful motivator of conspiracy rhetoric. Easterbrook’s work demonstrates the potential complexities of moderation models that emphasise the removal of conspiratorial content. The absence of content can be as powerful as its presence. Scott DeJong’s and Alex Bustamante’s article uses novel methods to interrogate the analogies we frequently use when discussing the spread of conspiracy theories online. In designing their own board system to model how conspiracy theories might spread, they speak to a growing body of work that likens conspiracy theories to game systems. DeJong’s and Bustamante’s article highlights the powerful capacity of creative methods to speak to social problems. Echoing Easterbrook’s warning about the power of content removal to fuel conspiracy theorising, in their simulation DeJong and Bustamante found that there is an “interplay between the removal of content and its spread” and argue that “removing conspiracy is a band-aid solution to a larger problem”. With current attention focussed on the problem of moderating conspiracy and misinformation in digital ecologies, these articles offer important reflections on the likely success of such a strategy. In their commentary examining so-called COVID-19 ‘cures’, Stephanie Alice Baker and Alexia Maddox explore how hydroxychloroquine and ivermectin shifted from potential COVID treatments to objects embroiled in conspiracy during the pandemic. Baker and Maddox highlight the interwoven nature of the conspiracy landscape, illustrating the roles that public figures and influencers played in amplifying conspiratorial discourse and knowledge about these drugs. 
Importantly, as with DeJong and Bustamante, and as also highlighted by Easterbrook, they highlight how tackling conspiracy theories is not as simple as providing “accurate” facts to counter false and misleading information. Baker and Maddox argue that, paradoxically, the process of debunking which included mockery and derision “reinforces the audience segmentation that occurs in the current media ecology by virtue of alternative media with mockery and ridicule strengthening in and out group dynamics”. When debunking succumbs to ridicule, they suggest that critics may be strengthening people’s commitment to conspiratorial narratives and alternative influence networks. Tresa LeClerc’s article explores the increasing entanglement of health and wellness with alternative right (or alt-right) conspiracies, focussing on underlying themes of white nationalism within these communities. LeClerc’s piece compellingly traces the ideological underpinnings of purity within the paleo diet that already blend pseudoscience and conspiracy, highlighting the ways wellness spaces have cultivated modes of thinking that are conducive to alt-right conspiracies. Also delving into the intersections of wellness and conspiracy, Marie Heřmanová explores conspirituality and the politicisation of spiritual influencers during the COVID-19 pandemic, focusing on the case of prominent Czech lifestyle Instagrammer Helena Houdová who became an outspoken anti-vaxxer and COVID denialist. In a rich case study, Heřmanová examines the ways Helena blends her feminine aesthetic and aspirational and individualistic take on spirituality with conspiracy messages informed by QAnon and political messaging that speaks to both national history and global trends. Heřmanová astutely observes that the rise of conspirituality reveals the capacity of these influencers to bridge the gap between the everyday and personal, and the collective narratives of conspiracies such as QAnon. 
Continuing to explore how conspiracy theories intersect with embodied and digital environments, in her article on ‘Coronaconspiracies’ Merlyna Lim examines the role algorithms and users play in facilitating conspiracy theories during the pandemic. Lim contends that social media provides a fertile environment for conspiracies to flourish, while maintaining that “social media algorithms do not have an absolute hegemony in translating the high visibility or even the virality of conspiracy theories into the beliefs in them”. As Lim explains, human users retain their agency online; it is their “choices” and “preferences” that are informed by the algorithmic dynamics of these technologies. Extending research into the relationship between conspiracy and algorithms, the impacts of labelling are foregrounded in the work of Ahmed Al-Rawi, Carmen Celestini, Nicole Stewart, and Nathan Worku. Their article presents a reverse-engineering approach to understanding how Google’s autocomplete feature assigns subtitles to widely known conspiracists. Google’s algorithmic approach to labelling actors is proprietary knowledge, which blackboxes this process to researchers and the wider public. This article provides a technical peek into how this may work, but also raises the concern that these labels do not reflect what is publicly known about these actors. Their work provides an insight into the ways that the Google autocomplete subtitling feature may further contribute to the negative real-world impacts that these conspiracists, and other such toxic actors, have. Stijn Peeters and Tom Willaert take us into the fringes of the online ecosystem to explore ways to research conspiracist communities on Telegram. They extrapolate on Richard Rogers’s edict to repurpose the methods of the medium and take us through a case-based examination of how to conduct a structural analysis of forwarded messages to identify conspiracy communities. 
In weighing up the results of applying this technique to Dutch-speaking conspiracist narratives and communities on Telegram, they highlight the methodological gains of such a technique and the ethical considerations that this style of data gathering and analysis can raise. Moving away from the fringes, Naomi Smith and Clare Southerton take us into the belly of popular culture with their examination of the #FreeBritney movement and raise the proposition of conspiracy as a site of pleasure. They turn on its head the assumption that conspiracy thinking stems from a deficient and deviant understanding, and point to the appeal and pleasure of engaging in the chase of partial threads and leads found in social media that could be woven into an explanation, or conspiracy. Drawing from fan studies, they highlight that pleasure is not a new site of motivation and that a lot can be learned by applying it as an explanatory frame for why people engage with conspiracies. The diverse body of scholarship assembled in this special issue illustrates the complex nature of contemporary conspiracies as they find expression in digital spaces and media. There are a variety of approaches to understanding this phenomenon that highlight how strategies of control and technological intervention may not be straightforwardly successful. The contributions to this issue demonstrate, from a range of perspectives, the importance of understanding how and why conspiracy theories matter to the communities that embrace them if we are to address their social consequences. References Baker, Stephanie Alice, Matthew Wade, and Michael James Walsh. "The Challenges of Responding to Misinformation during a Pandemic: Content Moderation and the Limitations of the Concept of Harm." Media International Australia 177 (2020): 103-07. Bruns, Axel, Stephen Harrington, and Edward Hurcombe. “‘Corona? 5G? Or Both?’: The Dynamics of COVID-19/5G Conspiracy Theories on Facebook." 
Media International Australia 177 (2020): 12-29. Chu, Haoran, Shupei Yuan, and Sixiao Liu. "Call Them Covidiots: Exploring the Effects of Aggressive Communication Style and Psychological Distance in the Communication of Covid-19." Public Understanding of Science 30.3 (2021): 240-57. Gerts, Dax, et al. “‘Thought I’d Share First’ and Other Conspiracy Theory Tweets from the Covid-19 Infodemic: Exploratory Study." JMIR Public Health Surveill 7.4 (2021): e26527. Hyzen, Aaron, and Hilde van den Bulck. "Conspiracies, Ideological Entrepreneurs, and Digital Popular Culture." Media and Communication 9 (2021): 179–88. Marwick, Alice, and Rebecca Lewis. "Media Manipulation and Disinformation Online." New York: Data & Society Research Institute, 2017. 7-19. Melley, Timothy. Empire of Conspiracy: The Culture of Paranoia in Postwar America. Cornell University Press, 2016. Moskalenko, Sophia, and Clark McCauley. "QAnon: Radical Opinion Versus Radical Action." Perspectives on Terrorism 15.2 (2021): 142-46. Van den Bulck, Hilde, and Aaron Hyzen. "Of Lizards and Ideological Entrepreneurs: Alex Jones and Infowars in the Relationship between Populist Nationalism and the Post-Global Media Ecology." International Communication Gazette 82.1 (2020): 42-59.
APA, Harvard, Vancouver, ISO, and other styles
14

Sampson, Tony. "A Virus in Info-Space." M/C Journal 7, no. 3 (July 1, 2004). http://dx.doi.org/10.5204/mcj.2368.

Full text
Abstract:
‘We are faced today with an entire system of communication technology which is the perfect medium to host and transfer the very programs designed to destroy the functionality of the system.’ (IBM Researcher: Sarah Gordon, 1995) Despite renewed interest in open source code, the openness of the information space is nothing new in terms of the free flow of information. The transitive and nonlinear configuration of data flow has ceaselessly facilitated the sharing of code. The openness of the info-space encourages a free distribution model, which has become central to numerous developments through the abundant supply of freeware, shareware and source code. Key moments in open source history include the release in 1998 of Netscape’s Communicator source code, a clear attempt to stimulate browser development. More recently in February 2004 the ‘partial leaking’ of Microsoft Windows 2000 and NT 4.0 source code demonstrated the often-hostile disposition of open culture and the potential threat it poses to existing corporate business models. However, the leading exponents of the open source ethic predate these events by more than a decade. As an extension of the hacker, the virus writer has managed, since the 1980s, to bend the shape of info-space beyond recognition. By freely spreading viruses, worms and hacker programs across the globe, virus writers have provided researchers with a remarkable set of digital footprints to follow. The virus has, as IBM researcher Sarah Gordon points out, exposed the info-space as a ‘perfect medium’ rife for malicious viral infection. This paper argues that viral technologies can hold info-space hostage to the uncertain undercurrents of information itself. As such, despite mercantile efforts to capture the spirit of openness, the info-space finds itself frequently in a state far-from-equilibrium. It is open to often-unmanageable viral fluctuations, which produce levels of spontaneity, uncertainty and emergent order. 
So while corporations look to capture the perpetual, flexible and friction-free income streams from centralised information flows, viral code acts as an anarchic, acentred Deleuzian rhizome. It thrives on the openness of info-space, producing a paradoxical counterpoint to a corporatised information society and its attempt to steer the info-machine. The Virus in the Open System Fred Cohen’s 1984 doctoral thesis on the computer virus locates three key features of openness that make viral propagation possible (see Louw and Duffy, 1992 pp. 13-14) and predicts a condition common to everyday user experience of info-space. Firstly, the virus flourishes because of the computer’s capacity for information sharing: transitive flows of code between nodes via discs, connected media, network links, user input and software use. In the process of information transfer the ‘witting and unwitting’ cooperation of users and computers is a necessary determinant of viral infection. Secondly, information flow must be interpreted. Before execution computers interpret incoming information as a series of instructions (strings of bits). However, before execution, there is no fundamental distinction between items of information received, and as such, information has no meaning until it has been executed. Thus, the interpretation of information does not differentiate between a program and a virus. Thirdly, the alterability or manipulability of the information process allows the virus to modify information. For example, advanced polymorphic viruses avoid detection by using non-significant, or redundant code, to randomly encrypt and decrypt themselves. Cohen concludes that the only defence available to combat viral spread is the ‘limited transitivity of information flow’. However, a reduction in flow is contrary to the needs of the system and leads ultimately to the unacceptable limitation of sharing (Cohen, 1991). 
As Cohen states ‘To be perfectly secure against viral attacks, a system must protect against incoming information flow, while to be secure against leakage of information a system must protect against outgoing information flow. In order for systems to allow sharing, there must be some information flow. It is therefore the major conclusion of this paper that the goals of sharing in a general purpose multilevel security system may be in such direct opposition to the goals of viral security as to make their reconciliation and coexistence impossible.’ Cohen’s research does not simply end with the eradication of the virus via the limitation of openness, but instead leads to a contentious idea concerning the benevolent properties of viral computing and the potential legitimacy of ‘friendly contagion’. Cohen looks beyond the malevolent enemy of the open network to a benevolent solution. The viral ecosystem is an alternative to Turing-von Neumann capability. Key to this system is a benevolent virus, which epitomises the ethic of open culture. Drawing upon a biological analogy, benevolent viral computing reproduces in order to accomplish its goals; the computing environment evolving rather than being ‘designed every step of the way’ (see Zetter, 2000). The viral ecosystem demonstrates how the spread of viruses can purposely evolve through the computational space using the shared processing power of all host machines. Information enters the host machine via infection and a translator program alerts the user. The benevolent virus passes through the host machine with any additional modifications made by the infected user. The End of Empirical Virus Research? Cohen claims that his research into ‘friendly contagion’ has been thwarted by network administrators and policy makers (see Levy, 1992 in Spiller, 2002) whose ‘apparent fear reaction’ to early experiments resulted in trying to solve technical problems with policy solutions. 
However, following a significant increase in malicious viral attacks, with estimated costs to the IT industry of $13 billion in 2001 (Pipkin, 2003 p. 41), research into legitimate viruses has not surprisingly shifted from the centre to the fringes of the computer science community (see Dibbell, 1995). Current reputable and subsequently funded research tends to focus on efforts by the anti-virus community to develop computer hygiene. Nevertheless, malevolent or benevolent viral technology provides researchers with a valuable recourse. The virus draws analysis towards specific questions concerning the nature of information and the culture of openness. What follows is a delineation of a range of approaches, which endeavour to provide some answers. Virus as a Cultural Metaphor Sean Cubitt (in Dovey, 1996 pp. 31-58) positions the virus as a contradictory cultural element, lodged between the effective management of info-space and the potential for spontaneous transformation. However, distinct from Cohen’s aspectual analogy, Cubitt’s often-frivolous viral metaphor overflows with political meaning. He replaces the concept of information with a space of representation, which elevates the virus from empirical experience to a linguistic construct of reality. The invasive and contagious properties of the biological parasite are metaphorically transferred to viral technology; the computer virus is thus imbued with an alien otherness. Cubitt’s cultural discourse typically reflects humanist fears of being subjected to increasing levels of technological autonomy. The openness of info-space is determined by a managed society aiming to ‘provide the grounds for mutation’ (p. 46) necessary for profitable production. Yet the virus, as a possible consequence of that desire, becomes a potential opposition to ‘ideological formations’. Like Cohen, Cubitt concludes that the virus will always exist if the paths of sharing remain open to information flow. 
‘Somehow’, Cubitt argues, ‘the net must be managed in such a way as to be both open and closed. Therefore, openness is obligatory and although, from the point of view of the administrator, it is a recipe for ‘anarchy, for chaos, for breakdown, for abjection’, the ‘closure’ of the network, despite eradicating the virus, ‘means that no benefits can accrue’ (p.55). Virus as a Bodily Extension From a virus writing perspective it is, arguably, the potential for free movement in the openness of info-space that motivates the spread of viruses. As one writer infamously stated it is ‘the idea of making a program that would travel on its own, and go to places its creator could never go’ that inspires the spreading of viruses (see Gordon, 1993). In a defiant stand against the physical limitations of bodily movement from Eastern Europe to the US, the Bulgarian virus writer, the Dark Avenger, contended that ‘the American government can stop me from going to the US, but they can’t stop my virus’. This McLuhanesque conception of the virus, as a bodily extension (see McLuhan, 1964), is picked up on by Baudrillard in Cool Memories (1990). He considers the computer virus as an ‘ultra-modern form of communication which does not distinguish, according to McLuhan, between the information itself and its carrier.’ To Baudrillard the prosperous proliferation of the virus is the result of its ability to be both the medium and the message. As such the virus is a pure form of information. The Virus as Information Like Cohen, Claude Shannon looks to the biological analogy, but argues that we have the potential to learn more about information transmission in artificial and natural systems by looking at difference rather than resemblance (see Campbell, 1982). One of the key aspects of this approach is the concept of redundancy. 
The theory of information argues that the patterns produced by the transmission of information are likely to travel in an entropic mode, from the unmixed to the mixed – from information to noise. Shannon’s concept of redundancy ensures that noise is diminished in a system of communication. Redundancy encodes information so that the receiver can successfully decode the message, holding back the entropic tide. Shannon considers the transmission of messages in the brain as highly redundant since it manages to obtain ‘overall reliability using unreliable components’ (in Campbell, 1982 p. 191). While computing uses redundancy to encode messages, compared to transmissions of biological information, it is fairly primitive. Unlike the brain, Turing-von Neumann computation is inflexible and literal minded. In the brain information transmission relies not only on deterministic external input, but also self-directed spontaneity and uncertain electro-chemical pulses. Nevertheless, while Shannon’s binary code is constrained to a finite set of syntactic rules, it can produce an infinite number of possibilities. Indeed, the virus makes good use of redundancy to ensure its successful propagation. The polymorphic virus is not simply a chaotic, delinquent noise, but a decidedly redundant form of communication, which uses non-significant code to randomly flip itself over to avoid detection. Viral code thrives on the infinite potential of algorithmic computing; the open, flexible and undecidable grammar of the algorithm allows the virus to spread, infect and evolve. The polymorphic virus can encrypt and decrypt itself so as to avoid anti-viral scanners checking for known viral signatures from the phylum of code known to anti-virus researchers. As such, it is a raw form of Artificial Intelligence, relying on redundant, inflexible code programmed to act randomly, ignore or even forget information. 
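Shannon's redundancy, invoked above, can be stated concretely: it is the fraction by which a message's per-symbol entropy falls short of the maximum possible for its alphabet. A brief illustrative sketch (an aside, not part of Sampson's argument):

```python
import math
from collections import Counter

def entropy_bits(msg):
    """Shannon entropy per symbol, in bits, from observed frequencies."""
    counts = Counter(msg)
    n = len(msg)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def redundancy(msg):
    """1 - H/H_max: how far the message falls short of maximum entropy."""
    h_max = math.log2(len(set(msg)))  # uniform use of the observed alphabet
    return 1 - entropy_bits(msg) / h_max if h_max > 0 else 1.0

print(redundancy("abababab"))  # 0.0 — both symbols equally likely
print(redundancy("aaaaaaab"))  # ≈ 0.456 — heavily skewed, highly redundant
```

A zero-redundancy message is maximally "surprising"; any skew towards favoured symbols is spare capacity that, as the passage notes, a communication system (or a polymorphic virus) can exploit.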
Towards a Concept of Rhizomatic Viral Computation Using the concept of the rhizome Deleuze and Guattari (1987 p. 79) challenge the relation between noise and pattern established in information theory. They suggest that redundancy is not merely a ‘limitative condition’, but is key to the transmission of the message itself. Measuring up the efficiency of a highly redundant viral transmission against the ‘splendour’ of the short-term memory of a rhizomatic message, it is possible to draw some conclusions from their intervention. On the surface, the entropic tendency appears to be towards the mixed and the running down of the system’s energy. However, entropy is not the answer since information is not energy; it cannot be conserved, it can be created and destroyed. By definition information is something new, something that adds to existing information (see Campbell, 1982 p. 231), yet efficient information transmission creates invariance in a variant environment. In this sense, the pseudo-randomness of viral code, which pre-programs elements of uncertainty and free action into its propagation, challenges the efforts to make information centralised, structured and ordered. It does this by placing redundant noise within its message pattern. The virus readily ruptures the patterned symmetry of info-space and in terms of information produces something new. Viral transmission is pure information as its objective is to replicate itself throughout info-space; it mutates the space as well as itself. In a rhizomatic mode the anarchic virus is without a central agency; it is a profound rejection of all Generals and power centres. Viral infection, like the rhizomatic network, is made up of ‘finite networks of automata in which communication runs from any neighbour to any other’. Viral spread flows along non-pre-existent ‘channels of communication’ (1987 p. 17). 
Furthermore, while efforts are made to striate the virus using anti-viral techniques, there is growing evidence that viral information not only wants to be free, but is free to do as it likes. About the Author Tony Sampson is a Senior Lecturer and Course Tutor in Multimedia & Digital Culture, School of Cultural and Innovation Studies at the University of East London, UK. Email: t.d.sampson@uel.ac.uk
APA, Harvard, Vancouver, ISO, and other styles
15

Binns, Daniel. "No Free Tickets." M/C Journal 25, no. 2 (April 25, 2022). http://dx.doi.org/10.5204/mcj.2882.

Full text
Abstract:
Introduction 2021 was the year that NFTs got big—not just in value but also in terms of the cultural consciousness. When digital artist Beeple sold the portfolio of his 5,000 daily images at Christie’s for US$69 million, the art world was left intrigued, confused, and outraged in equal measure. Depending on who you asked, non-fungible tokens (NFTs) seemed to be either a quick cash-grab or the future of the art market (Bowden and Jones; Smee). Following the Beeple sale, articles started to appear indicating that the film industry was abuzz for NFTs. Independent filmmaker Kevin Smith was quick to announce that he planned to release his horror film Killroy Was Here as an NFT (Alexander); in September 2021 the James Bond film No Time to Die also unveiled a series of collectibles to coincide with the film’s much-delayed theatrical release (Natalee); the distribution and collectible platforms Vuele, NFT Studios, and Mogul Productions all emerged, and the industry rumour mill suggests more start-ups are en route (CurrencyWorks; NFT Studios; NewsBTC). Blockchain disciples say that the technology will solve all the problems of the Internet (Tewari; Norton; European Business Review); critics say it will only perpetuate existing accessibility and equality issues (Davis and Flatow; Klein). Those more circumspect will doubtless sit back until the dust settles, waiting to see what parts of so-called web3 will be genuinely integrated into the architecture of the Internet. Pamela Hutchinson puts it neatly in terms of the arts sector: “the NFT may revolutionise the art market, film funding and distribution. Or it might be an ecological disaster and a financial bubble, in which few actual movies change hands, and fraudsters get rich from other people’s intellectual property” (Hutchinson). 
There is an uptick in the literature around NFTs and blockchain (see Quiniou; Gayvoronskaya & Meinel); however, the technology remains unregulated and unstandardised (Yeung 212-14; Dimitropoulos 112-13). Similarly, the sheer amount of funding being put into fundamental technical, data, and security-related issues speaks volumes to the nascency of the space (Ossinger; Livni; Gayvoronskaya & Meinel 52-6). Put very briefly, NFTs are part of a given blockchain system; think of them, like cryptocurrency coins, as “units of value” within that system (Roose). NFTs were initially rolled out on Ethereum, though several other blockchains have now implemented their own NFT frameworks. NFTs are usually not the artwork itself, but rather a unique, un-copyable (hence, non-fungible) piece of code that is attached, linked, or connected to another digital file, be that an image, video, text, or something else entirely. NFTs are often referred to as a digital artwork’s “certificate of authenticity” (Roose). At the time of writing, it remains to be seen how widely blockchain and NFT technology will be implemented across the entertainment industries. However, this article aims to outline the current state of implementation in the film trade specifically, and to attempt to sort true potential from the hype. Beginning with an overview of the core issues around blockchain and NFTs as they apply to film properties and adjacent products, current implementations of the technology are outlined, before finishing with a hesitant glimpse into the potential future applications. The Issues and Conversation At the core of current conversations around blockchain are three topics: intellectual property and ownership, concentrations of power and control, and environmental impact. To this I would like to add a consideration of social capital, which I begin with briefly here. 
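The definition rehearsed above — a unique token that references, rather than contains, a digital file, with a public record of ownership — can be sketched as a data structure. This is a deliberately simplified, hypothetical illustration, not the token format of Ethereum or any real chain:

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Token:
    """A toy NFT record: the token points at a file's hash, not the file."""
    token_id: int
    content_hash: str               # fingerprint of the linked artwork
    owner: str
    history: list = field(default_factory=list)  # public provenance trail

    def transfer(self, new_owner: str):
        self.history.append((self.owner, new_owner))
        self.owner = new_owner

artwork = b"5000 days of images"    # stand-in for the digital file itself
nft = Token(1, hashlib.sha256(artwork).hexdigest(), owner="artist")
nft.transfer("collector")
print(nft.owner, nft.history)       # the file itself remains freely copyable
```

The sketch makes the "certificate of authenticity" point tangible: anyone can copy the bytes of `artwork`, but only one `Token` record, with its provenance trail, designates an owner.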
Both the film industry and “crypto” — if we take the latter to encompass the various facets of so-called ‘web3’ — are engines of social capital. In the case of cinema, its products are commodified and passed through a model that begins with exclusivity (theatrical release) before progressing to mass availability (home media, streaming). The cinematic object, i.e., an individual copy of a film, is, by virtue of its origins as a mass product of the twentieth century, fungible. The film is captured, copied, stored, distributed, and shared. The film-industrial model has always relied on social phenomena, word of mouth, critical discourse, and latterly on buzz across digital social media platforms. This is perhaps as distinct from fine art, where — at least for dealers — the content of the piece does not necessarily matter so much as verification of ownership and provenance. Similarly, web3, with its decentralised and often-anonymised processes, relies on a kind of social activity, or at least a recorded interaction wherein the chain is stamped and each iteration is updated across the system. Even without the current hype, web3 still relies a great deal on discourse, sharing, and community, particularly as it flattens the existing hierarchies of the Internet that linger from Web 2.0. In terms of NFTs, blockchain systems attach scarcity and uniqueness to digital objects. For now, that scarcity and uniqueness is resulting in financial value, though as Jonathan Beller argues the notion of value could — or perhaps should — be reconsidered as blockchain technology, and especially cryptocurrencies, evolve (Beller 217). Regardless, NFT advocates maintain that this is the future of all online activity. To questions of copyright, the structures of blockchain do permit some level of certainty around where a given piece of intellectual property emerged. 
This is particularly useful where there are transnational differences in recognition of copyright law, as in France, for instance (Quiniou 112-13). The Berne Convention stipulates that “the subsistence of copyright does not rest on the compliance with formal requirements: rights will exist if the work meets the requirements for protection set out by national law and treaties” (Guadamuz 1373). However, there are still no legal structures underpinning even the most transparent of transactions, when an originator goes out of their way to transfer rights to the buyer of the accompanying NFT. The minimum requirement — even courtesy — for the assignment of rights is the identification of the work itself; as Guadamuz notes, this is tricky for NFTs as they are written in code (1374). The blockchain’s openness and transparency are its key benefits, but until the code can explicitly include (or concretely and permanently reference) the ‘content’ of an NFT, its utility as a system of ownership is questionable. Decentralisation, too, is raised consistently as a key positive characteristic of blockchain technology. Despite the energy required for this decentralisation (addressed shortly), it is true that, at least in its base code, blockchain is a technology with no centralised source of truth or verification. Instead, such verification is performed by every node on the chain. On the surface, for the film industry, this might mean modes of financing, rights management, and distribution chains that are not beholden to multinational media conglomerates, streamers like Netflix, niche intermediaries, or legacy studios. The result here would be a flattening of the terrain: breaking down studio and corporate gatekeeping in favour of a more democratised creative landscape. Creators and creative teams would work peer-to-peer, paying, contracting, servicing, and distributing via the blockchain, with iron-clad, publicly accessible tracking of transactions and ownership. 
The alternative, though, is that the same imbalances persist, just in a different form: this is outlined in the next section. As Hunter Vaughan writes, the film industry’s environmental impact has long been under-examined. Its practices are diverse, distributed, and hard to quantify. Cinematic images, Vaughan writes, “do not come from nothing, and they do not vanish into the air: they have always been generated by the earth and sun, by fossil fuels and chemical reactions, and our enjoyment of them has material consequences” (3). We believe that by watching a “green” film like Avatar we are doing good, but it implicates us in the dirty secret, an issue of “ignorance and of voluntary psychosis” where “we do not see who we are harming or how these practices are affecting the environment, and we routinely agree to accept the virtual as real” (5). Beyond questions of implication and eco-material conceptualisation, however, there are stark facts. In the 1920s, the Kodak Park Plant in New York drew 12 million gallons of water from Lake Ontario each day to produce film stock. As the twentieth century came to a close, this amount — for a single film plant — had grown to 35-53 million gallons per day. The waste water was perfunctorily “cleaned” and then dumped into surrounding rivers (72-3). This was just one plant, and one part of the filmmaking process. With the shift to digital, this cost might now be calculated in the extraction of precious metals used to make contemporary cameras, computers, or storage devices. Regardless, extrapolate outwards to a global film industry and one quickly realises the impact is almost beyond comprehension. Considering — let alone calculating — the carbon footprint of blockchain requires outlining some fundamentals of the technology. The two primary architectures of blockchain are Proof of Work (PoW) and Proof of Stake (PoS), both of which denote methods of adding and verifying new blocks to a chain. 
PoW was the first model, employed by Bitcoin and the first iteration of Ethereum. In a PoW model, each new block has a specific cryptographic hash. To confirm the new block, crypto miners use their systems to generate a target hash that is less than or equal to that of the block. The systems process these calculations quickly, as the goal is to be “the first miner with the target hash because that miner is the one who can update the blockchain and receive crypto rewards” (Daly). The race for block confirmation necessitates huge amounts of processing power to make these quick calculations. The PoS model differs in that miners are replaced by validators (or staking services where participants pool validation power). Rather than investing in computer power, validators invest in the blockchain’s coins, staking those coins (tokens) in a smart contract (think of this contract like a bank account or vault). When a new block is proposed, an algorithm chooses a validator based on the size of their stake; if the block is verified, the validator receives further cryptocurrency as a reward (Castor). Given the ubiquity and exponential growth of blockchain technology and its users, an accurate quantification of its carbon footprint is difficult. For some precedent, though, one might consider the impact of the Bitcoin blockchain, which runs on a PoW model. As the New York Times so succinctly puts it: “the process of creating Bitcoin to spend or trade consumes around 91 terawatt-hours of electricity annually, more than is used by Finland, a nation of about 5.5 million” (Huang, O’Neill and Tabuchi). The current Ethereum system (at time of writing), where the majority of NFT transactions take place, also runs on PoW, and it is estimated that a single Ethereum transaction is equivalent to nearly nine days of power consumption by an average US household (Digiconomist). 
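The PoW hashing race and the PoS stake-weighted selection described above can be illustrated with a deliberately simplified sketch. This is not any real blockchain's implementation: actual chains compare hashes against a numeric difficulty target (Bitcoin uses double SHA-256) and use far more elaborate validator-selection schemes, but the sketch shows where PoW's energy cost comes from and how PoS avoids it.

```python
import hashlib
import random

def mine_block(block_data: str, difficulty: int = 4):
    """Proof of Work (simplified): brute-force a nonce until the block's
    hash meets the target, here reduced to 'starts with N zero hex digits'.
    Every failed guess is discarded computation, which is why PoW mining
    at scale consumes so much electricity."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest  # the winning guess and its qualifying hash
        nonce += 1

def choose_validator(stakes: dict, rng: random.Random) -> str:
    """Proof of Stake (simplified): select a validator with probability
    proportional to the coins they have staked. No hashing race occurs,
    so confirming a block costs only a single random draw."""
    names = list(stakes)
    return rng.choices(names, weights=[stakes[n] for n in names], k=1)[0]

# A miner must grind through thousands of hashes to confirm one block...
nonce, digest = mine_block("example block", difficulty=4)
# ...whereas a PoS validator is simply drawn in proportion to their stake.
validator = choose_validator({"big_staker": 90, "small_staker": 10},
                             random.Random(42))
```

The names, difficulty value, and stake figures here are illustrative inventions, not drawn from any deployed system; the point is only the contrast between repeated wasted hashing (PoW) and a single weighted selection (PoS).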
Ethereum always intended to operate on a PoS system, and the transition to this new model is currently underway (Castor). Proof of Stake transactions use significantly less energy — the new Ethereum will supposedly be approximately 2,000 times more energy efficient (Beekhuizen). However, newer systems such as Solana have been explicit about their efficiency goals, stating that a single Solana transaction uses less energy (1,837 Joules, to be precise) than keeping an LED light on for one hour (36,000 J); one Ethereum transaction, for comparison, uses over 692 million J (Solana). In addition to energy usage, however, there is also the question of e-waste as a result of mining and general blockchain operations which, at the time of writing, for Bitcoin sits at around 32 kilotons per year, around the same as the consumer IT wastage of the Netherlands (de Vries and Stoll). How the growth in NFT awareness and adoption amplifies this impact remains to be seen, but depending on which blockchain they use, they may be wasting energy and resources by design. If using a PoW model, the more valuable the cryptocurrency used to make the purchase, the more energy (“gas”) required to authenticate the purchase across the chain. Images abound online of jerry-rigged crypto data centres of varying quality, efficiency, and safety. With each NFT minted, sold, or traded, these centres draw — and thus waste, for gas — more and more energy. With increased public attention and scrutiny, cryptocurrencies are slowly realising that things could be better. As sustainable alternatives become more desirable and mainstream, it is safe to predict that many NFT marketplaces may migrate to Cardano, Solana, or other more efficient blockchain bases. For now, though, this article considers the existing implementations of NFTs and blockchain technology within the film industry.
Current Implementations
The current applications of NFTs in film centre around financing and distribution. 
In terms of the former, NFTs are saleable items that can raise capital for production, distribution, or marketing. As previously mentioned, director Kevin Smith launched Jay & Silent Bob’s Crypto Studio in order to finish and release Killroy Was Here. Smith released over 600 limited edition tokens, including one of the film itself (Moore). In October 2021, renowned Hong Kong director Wong Kar-wai sold an NFT with unreleased footage from his film In the Mood for Love at Sotheby’s for US$550,000 (Raybaud). Quentin Tarantino entered the arena in January 2022, auctioning uncut scenes from his 1994 film Pulp Fiction, despite the threat of legal action from the film’s original distributor Miramax (Dailey). In Australia, an early adopter of the technology is director Michael Beets, who works in virtual production and immersive experiences. His immersive 14-minute VR film Nezunoban (2020) was split into seven different chapters, and each chapter was sold as an NFT. Beets also works with artists to develop entry tickets that are their own piece of generative art; with these tickets and the chapters selling for hundreds of dollars at a time, Beets seems to have achieved the impossible: turning a profit on a short film (Fletcher). Another Australian writer-producer, Samuel Wilson, now based in Canada, suggests that the technology does encourage filmmakers to think differently about what they create: At the moment, I’m making NFTs from extra footage of my feature film Miles Away, which will be released early next year. In one way, it’s like a new age of behind-the-scenes/bonus features. I have 14 hours of DV tapes that I’m cutting into a short film which I will then sell in chapters over the coming months. One chapter will feature the dashing KJ Apa (Songbird, Riverdale) without his shirt on. So, hopefully that can turn some heads. (Wilson, in Fletcher) In addition to individual directors, a number of startup companies are also seeking to get in on the action. 
One of these is Vuele, which is best understood as a blockchain-based streaming service: an NFT Netflix, if you like. In addition to films themselves, the service will offer extra content as NFTs, including “behind the scenes content, bonus features, exclusive Q&As, and memorabilia” (CurrencyWorks). Vuele’s launch title is Zero Contact, directed by Rick Dugdale and starring Anthony Hopkins. The film is marketed as “the World’s First NFT Feature Film” (as at the time of writing, though, both Vuele and its flagship film have yet to launch). Also launching is NFT Studios, a blockchain-based production company that distributes the executive producer role to those buying into the project. NFT Studios is a decentralised autonomous organisation (DAO), guided by tech experts, producers, and film industry intermediaries. NFT Studios is launching with A Wing and a Prayer, a biopic of aeronaut Brian Milton (NFT Studios), and will announce their full slate across festivals in 2022. In Australia, Culture Vault states that its aim is to demystify crypto and champion Australian artists’ rights and access to the space. Co-founder and CEO Michelle Grey is well aware of the aforementioned current social capital of NFTs, but is also acutely aware of the space’s opacity and the ubiquity of often machine-generated tat. “The early NFT space was in its infancy, there was a lot of crap around, but don’t forget there’s a lot of garbage in the traditional art world too,” she says (cited in Miller). Grey and her company effectively act like art dealers; intermediaries between the tech and art worlds. These new companies claim to be adhering to the principles of web3, often selling themselves as collectives, DAOs, or distributed administrative systems. But the entrenched tendencies of the film industry — particularly the persistent Hollywood system — are not so easily broken down. Vuele is a joint venture between CurrencyWorks and Enderby Entertainment. 
The former is a financial technology company setting up blockchain systems for businesses, including the establishment of branded digital currencies such as the controversial FreedomCoin (Memoria); the latter, Enderby, is a production company founded by Canadian film producer (and former investor relations expert in the oil and uranium sectors) Rick Dugdale (Wiesner). Similarly, NFT Studios is partnered with consulting and marketing agencies and blockchain venture capitalists (NFT Investments PLC). Depending on how charitable or cynical one is feeling, these start-ups are either helpful intermediaries to facilitate legacy media moving into NFT technology, or the first bricks in the capitalist wall to bar entry to other players.
The Future Is… Buffering
Marketplaces like Mintable, OpenSea, and Rarible do indeed make the minting and selling of NFTs fairly straightforward — if you’ve ever listed an item for sale on eBay or Facebook, you can probably mint an NFT. Despite this, the current major barrier for average punters to the NFT space remains technical knowledge. The principles of blockchain remain fairly opaque — even this author, who has been on a deep dive for this article, remains sceptical that widespread adoption across multiple applications and industries is feasible. Even so, as Rennie notes, “the unknown is not what blockchain technology is, or even what it is for (there are countless ‘use cases’), but how it structures the actions of those who use it” (235). At the time of writing, a great many commentators and a small handful of scholars are speculating about the role of the metaverse in the creative space. If the endgame of the metaverse is realised, i.e., a virtual, interactive space where users can interact, trade, and consume entertainment, the role of creators, dealers, distributors, and other brokers and players will be up-ended, and have to re-settle once again. 
Film industry practitioners might look to the games space to see what the road might look like, but then again, in an industry that is — at its best — somewhat resistant to change, this may simply be a fad that blows over. Blockchain’s current employment as a get-rich-quick mechanism for the algorithmic literati and as a computational extension of existing power structures suggests nothing more than another techno-bubble primed to burst (Patrickson 591-2; Klein). Despite the aspirational commentary surrounding distributed administrative systems and organisations, the current implementations are restricted, for now, to startups like NFT Studios. In terms of cinema, it does remain to be seen whether the deployment of NFTs will move beyond a kind of “Netflix with tchotchkes” model, or a variant of crowdfunding with perks. Once Vuele and NFT Studios launch properly, we may have a sense of how this all will play out, particularly alongside less corporate-driven, more artistically-minded initiatives like that of Michael Beets and Culture Vault. It is possible, too, that blockchain technology may streamline the mechanics of the industry in terms of automating or simplifying parts of the production process, particularly around contracts, financing, and licensing. This would obviously remove some of the associated labour and fees, but would also de-couple long-established parts and personnel of the industry — would Hollywood and similar industrial-entertainment complexes let this happen? As with any of the many revolutions that have threatened to kill or resurrect the (allegedly) long-suffering cinematic object, we just have to wait, and watch.
References
Alexander, Bryan. “Kevin Smith Reveals Why He’s Auctioning Off His New Film ‘Killroy Was Here’ as an NFT.” USA TODAY, 15 Apr. 2021. <https://www.usatoday.com/story/entertainment/movies/2021/04/15/kevin-smith-auctioning-new-film-nft-killroy-here/7244602002/>. Beekhuizen, Carl. 
“Ethereum’s Energy Usage Will Soon Decrease by ~99.95%.” Ethereum Foundation Blog, 18 May 2021. <https://blog.ethereum.org/2021/05/18/country-power-no-more/>. Beller, Jonathan. “Economic Media: Crypto and the Myth of Total Liquidity.” Australian Humanities Review 66 (2020): 215-225. Beller, Jonathan. The Cinematic Mode of Production: Attention Economy and the Society of the Spectacle. Hanover, NH: Dartmouth College P, 2006. Bowden, James, and Edward Thomas Jones. “NFTs Are Much Bigger than an Art Fad – Here’s How They Could Change the World.” The Conversation, 26 Apr. 2021. <http://theconversation.com/nfts-are-much-bigger-than-an-art-fad-heres-how-they-could-change-the-world-159563>. Cardano. “Cardano, Ouroboros.” 14 Feb. 2022 <https://cardano.org/ouroboros/>. Castor, Amy. “Why Ethereum Is Switching to Proof of Stake and How It Will Work.” MIT Technology Review, 4 Mar. 2022. <https://www.technologyreview.com/2022/03/04/1046636/ethereum-blockchain-proof-of-stake/>. CurrencyWorks. “Vuele - CurrencyWorks™.” 3 Feb. 2022 <https://currencyworks.io/project/vuele/>. Dailey, Natasha. “Quentin Tarantino Will Sell His ‘Pulp Fiction’ NFTs This Month despite a Lawsuit from the Film’s Producer Miramax.” Business Insider, 5 Jan. 2022. <https://www.businessinsider.com.au/quentin-tarantino-to-sell-pulp-fiction-nft-despite-miramax-lawsuit-2022-1>. Daly, Lyle. “What Is Proof of Work (PoW) in Crypto?” The Motley Fool, 27 Sep. 2021. <https://www.fool.com/investing/stock-market/market-sectors/financials/cryptocurrency-stocks/proof-of-work/>. Davis, Kathleen, and Ira Flatow. “Will Blockchain Really Change the Way the Internet Runs?” Science Friday, 23 July 2021. <https://www.sciencefriday.com/segments/blockchain-internet/>. De Vries, Alex, and Christian Stoll. “Bitcoin’s Growing E-Waste Problem.” Resources, Conservation & Recycling 175 (2021): 1-11. Dimitropoulos, Georgios. 
“Global Currencies and Domestic Regulation: Embedding through Enabling?” In Regulating Blockchain: Techno-Social and Legal Challenges. Eds. Philipp Hacker et al. Oxford: Oxford UP, 2019. 112–139. Edelman, Gilad. “What Is Web3, Anyway?” Wired, Nov. 2021. <https://www.wired.com/story/web3-gavin-wood-interview/>. European Business Review. “Future of Blockchain: How Will It Revolutionize the World in 2022 & Beyond!” The European Business Review, 1 Nov. 2021. <https://www.europeanbusinessreview.com/future-of-blockchain-how-will-it-revolutionize-the-world-in-2022-beyond/>. Fletcher, James. “How I Learned to Stop Worrying and Love the NFT!” FilmInk, 2 Oct. 2021. <https://www.filmink.com.au/how-i-learned-to-stop-worrying-and-love-the-nft/>. Gayvoronskaya, Tatiana, and Christoph Meinel. Blockchain: Hype or Innovation. Cham: Springer. Guadamuz, Andres. “The Treachery of Images: Non-Fungible Tokens and Copyright.” Journal of Intellectual Property Law & Practice 16.12 (2021): 1367–1385. Huang, Jon, Claire O’Neill, and Hiroko Tabuchi. “Bitcoin Uses More Electricity than Many Countries. How Is That Possible?” The New York Times, 3 Sep. 2021. <http://www.nytimes.com/interactive/2021/09/03/climate/bitcoin-carbon-footprint-electricity.html>. Hutchinson, Pamela. “Believe the Hype? What NFTs Mean for Film.” BFI, 22 July 2021. <https://www.bfi.org.uk/sight-and-sound/features/nfts-non-fungible-tokens-blockchain-film-funding-revolution-hype>. Klein, Ezra. “A Viral Case against Crypto, Explored.” The Ezra Klein Show, n.d. 7 Apr. 2022 <https://www.nytimes.com/2022/04/05/opinion/ezra-klein-podcast-dan-olson.html>. Livni, Ephrat. “Venture Capital Funding for Crypto Companies Is Surging.” The New York Times, 1 Dec. 2021. <https://www.nytimes.com/2021/12/01/business/dealbook/crypto-venture-capital.html>. Memoria, Francisco. “Popular Firearms Marketplace GunBroker to Launch ‘FreedomCoin’ Stablecoin.” CryptoGlobe, 30 Jan. 2019. 
<https://www.cryptoglobe.com/latest/2019/01/popular-firearm-marketplace-gunbroker-to-launch-freedomcoin-stablecoin/>. Miller, Nick. “Australian Start-Up Aims to Make the Weird World of NFT Art ‘Less Crap’.” Sydney Morning Herald, 19 Jan. 2022. <https://www.smh.com.au/culture/art-and-design/australian-startup-aims-to-make-the-weird-world-of-nft-art-less-crap-20220119-p59pev.html>. Moore, Kevin. “Kevin Smith Drops an NFT Project Packed with Utility.” One37pm, 27 Apr. 2021. <https://www.one37pm.com/nft/art/kevin-smith-jay-and-silent-bob-nft-killroy-was-here>. Nano. “Press Kit.” 14 Feb. 2022 <https://content.nano.org/Nano-Press-Kit.pdf>. Natalee. “James Bond No Time to Die VeVe NFTs Launch.” NFT Culture, 22 Sep. 2021. <https://www.nftculture.com/nft-marketplaces/4147/>. NewsBTC. “Mogul Productions to Conduct the First Ever Blockchain-Based Voting for Film Financing.” NewsBTC, 22 July 2021. <https://www.newsbtc.com/news/company/mogul-productions-to-conduct-the-first-ever-blockchain-based-voting-for-film-financing/>. NFT Investments PLC. “Approach.” 21 Jan. 2022 <https://www.nftinvest.pro/approach>. NFT Studios. “Projects.” 9 Feb. 2022 <https://nftstudios.dev/projects>. Norton, Robert. “NFTs Have Changed the Art of the Possible.” Wired UK, 14 Feb. 2022. <https://www.wired.co.uk/article/nft-art-world>. Ossinger, Joanna. “Crypto World Hits $3 Trillion Market Cap as Ether, Bitcoin Gain.” Bloomberg.com, 8 Nov. 2021. <https://www.bloomberg.com/news/articles/2021-11-08/crypto-world-hits-3-trillion-market-cap-as-ether-bitcoin-gain>. Patrickson, Bronwin. “What Do Blockchain Technologies Imply for Digital Creative Industries?” Creativity and Innovation Management 30.3 (2021): 585–595. Quiniou, Matthieu. Blockchain: The Advent of Disintermediation, New York: John Wiley, 2019. Raybaud, Sebastien. “First Asian Film NFT Sold, Wong Kar-Wai’s ‘In the Mood for Love’ Fetches US$550k in Sotheby’s Evening Sale, Auctions News.” TheValue.Com, 10 Oct. 2021. 
<https://en.thevalue.com/articles/sothebys-auction-wong-kar-wai-in-the-mood-for-love-nft>. Rennie, Ellie. “The Challenges of Distributed Administrative Systems.” Australian Humanities Review 66 (2020): 233-239. Roose, Kevin. “What are NFTs?” The New York Times, 18 Mar. 2022. <https://www.nytimes.com/interactive/2022/03/18/technology/nft-guide.html>. Smee, Sebastian. “Will NFTs Transform the Art World? Are They Even Art?” Washington Post, 18 Dec. 2021. <https://www.washingtonpost.com/arts-entertainment/2021/12/18/nft-art-faq/>. Solana. “Solana’s Energy Use Report: November 2021.” Solana, 24 Nov. 2021. <https://solana.com/news/solana-energy-usage-report-november-2021>. Tewari, Hitesh. “Four Ways Blockchain Could Make the Internet Safer, Fairer and More Creative.” The Conversation, 12 July 2019. <http://theconversation.com/four-ways-blockchain-could-make-the-internet-safer-fairer-and-more-creative-118706>. Vaughan, Hunter. Hollywood’s Dirtiest Secret: The Hidden Environmental Costs of the Movies. New York: Columbia UP, 2019. Vision and Value. “CurrencyWorks (CWRK): Under-the-Radar, Crypto-Agnostic, Blockchain Pick-and-Shovel Play.” Seeking Alpha, 1 Dec. 2021. <https://seekingalpha.com/article/4472715-currencyworks-under-the-radar-crypto-agnostic-blockchain-pick-and-shovel-play>. Wiesner, Darren. “Exclusive – BC Producer – Rick Dugdale Becomes a Heavyweight.” Hollywood North Magazine, 29 Aug. 2017. <https://hnmag.ca/interview/exclusive-bc-producer-rick-dugdale-becomes-a-heavyweight/>. Yeung, Karen. “Regulation by Blockchain: The Emerging Battle for Supremacy between the Code of Law and Code as Law.” The Modern Law Review 82.2 (2019): 207–239.